andrew_gelman_stats andrew_gelman_stats-2013 andrew_gelman_stats-2013-2099 knowledge-graph by maker-knowledge-mining

2099 andrew gelman stats-2013-11-13-“What are some situations in which the classical approach (or a naive implementation of it, based on cookbook recipes) gives worse results than a Bayesian approach, results that actually impeded the science?”


meta info for this blog

Source: html

Introduction: Phil Nelson writes in the context of a biostatistics textbook he is writing, “Physical models of living systems”: There are a number of classic statistical problems that arise every day in the lab, and which are discussed in any book: 1. In a control group, M untreated rats out of 20 got a form of cancer. In a test group, N treated rats out of 20 got that cancer. Is this a significant difference? 2. In a control group of 20 untreated rats, their body weights at 2 weeks were w_1,…, w_20. In a test group of 20 treated rats, their body weights at 2 weeks were w’_1,…, w’_20. Are the means significantly different? 3. In a group of 20 rats, each given dose d_i of a drug, their body weights at 2 weeks were w_i. Is there a significant correlation between d and w? I would like to ask: What are some situations in which the classical approach (or a naive implementation of it, based on cookbook recipes) gives worse results than a Bayesian approach, results that actually impeded the science?
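Nelson's three problems are the bread and butter of a cookbook statistics course. As a point of reference for the discussion below, here is a minimal sketch of how the classical versions are typically run, assuming SciPy is available; the counts, weights, and doses are made-up placeholders, not data from the post.

# Classical "cookbook" analyses for Nelson's three lab problems (SciPy).
# All numbers are hypothetical placeholders, not data from the post.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 1. M of 20 untreated vs. N of 20 treated rats with cancer: Fisher's exact test.
M, N = 12, 5
_, p1 = stats.fisher_exact([[M, 20 - M], [N, 20 - N]])

# 2. Body weights at 2 weeks in two groups of 20: two-sample t-test.
w_control = rng.normal(250, 20, size=20)   # grams, hypothetical
w_treated = rng.normal(240, 20, size=20)
_, p2 = stats.ttest_ind(w_control, w_treated)

# 3. Dose d_i vs. weight w_i in 20 rats: Pearson correlation.
d = np.linspace(0, 10, 20)
w = 250 - 1.5 * d + rng.normal(0, 10, size=20)
r, p3 = stats.pearsonr(d, w)

print(f"1. Fisher exact p = {p1:.3f}")
print(f"2. t-test p = {p2:.3f}")
print(f"3. correlation r = {r:.2f}, p = {p3:.3f}")

With only 20 rats per group these p-values are noisy, which is exactly the regime where the cookbook and Bayesian answers can diverge (as the post notes, both approaches agree once 20 rats become 20000).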


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Phil Nelson writes in the context of a biostatistics textbook he is writing, “Physical models of living systems”: There are a number of classic statistical problems that arise every day in the lab, and which are discussed in any book: 1. [sent-1, score-0.169]

2 In a control group, M untreated rats out of 20 got a form of cancer. [sent-2, score-0.704]

3 In a test group, N treated rats out of 20 got that cancer. [sent-3, score-0.627]

4 In a control group of 20 untreated rats, their body weights at 2 weeks were w_1,…, w_20. [sent-6, score-1.002]

5 In a test group of 20 treated rats, their body weights at 2 weeks were w’_1,…, w’_20. [sent-7, score-0.925]

6 In a group of 20 rats, each given dose d_i of a drug, their body weights at 2 weeks were w_i. [sent-10, score-0.82]

7 Is there a significant correlation between d and w? [sent-11, score-0.078]

8 I would like to ask: What are some situations in which the classical approach (or a naive implementation of it, based on cookbook recipes) gives worse results than a Bayesian approach, results that actually impeded the science? [sent-12, score-0.561]

9 (No doubt both approaches agree if the 20 rats are replaced by 20000.) [sent-13, score-0.435]

10 That is, there must be cautionary case studies in which the assumptions of classical statistics were proved not useful for some real experiment. [sent-14, score-0.231]

11 Such case studies in my opinion are invaluable for focusing students’ attention, particularly if they have already been subjected to a cookbook statistics course. [sent-15, score-0.52]

12 It is a study with n=3000, looking at the attractiveness of parents and the sexes of their children. [sent-18, score-0.278]

13 The published analysis compared the proportion of girl births among the parents who were labeled “very attractive” to the proportion of girl births of the other parents. [sent-19, score-1.226]

14 However, there is a lot of prior information on this topic. [sent-23, score-0.105]

15 It would be implausible for the true difference in the population to be as large as 0. [sent-24, score-0.22]

16 A reasonable prior distribution might have a mean of 0 and a standard deviation of 0. [sent-26, score-0.325]

17 Under such a prior, the Bayesian inference is that the population difference is very close to 0. [sent-28, score-0.299]

18 Thus, in the Bayesian analysis, the result is not anything close to statistically significant. [sent-34, score-0.236]

19 In this case, the bad, non-Bayesian, answer impeded the science, at least in the sense that it resulted in a wrong result being published in a reputable journal (Journal of Theoretical Biology, impact factor 3) and also used as the basis of a pop-science book. [sent-35, score-0.509]

20 A wonderful illustration of the principle that God is in every leaf of every tree. [sent-38, score-0.353]
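The Bayesian calculation sketched in sentences 14-18 is simple enough to write down. Below is a minimal sketch of a normal-approximation version, assuming NumPy; the group sizes, the observed difference in the proportion of girl births, and the prior standard deviation (which is truncated to "0." in the excerpt above) are placeholder assumptions, not the study's actual numbers.

# Normal-approximation Bayesian analysis of the difference in Pr(girl birth)
# between "very attractive" parents and the rest. All numbers here are
# hypothetical placeholders, including the prior sd, which is truncated
# in the excerpt above.
import numpy as np

n1, n2 = 300, 2700          # hypothetical group sizes out of n = 3000
diff_hat = 0.08             # hypothetical observed difference in proportions
p0 = 0.485                  # approximate baseline Pr(girl birth)
se = np.sqrt(p0 * (1 - p0) * (1 / n1 + 1 / n2))   # sampling sd of the difference

prior_mean, prior_sd = 0.0, 0.003   # assumed tight prior centered at 0

# Conjugate normal-normal update: precision-weighted average of prior and data.
post_var = 1 / (1 / prior_sd**2 + 1 / se**2)
post_mean = post_var * (prior_mean / prior_sd**2 + diff_hat / se**2)

print(f"likelihood se  = {se:.3f}")
print(f"posterior mean = {post_mean:.4f}, posterior sd = {np.sqrt(post_var):.4f}")

Because the assumed prior standard deviation is an order of magnitude smaller than the sampling standard error, the posterior for the population difference sits essentially at zero, which is the sense in which the Bayesian analysis finds nothing close to statistically significant.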


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('rats', 0.435), ('girl', 0.211), ('impeded', 0.206), ('cookbook', 0.194), ('weights', 0.194), ('body', 0.194), ('group', 0.186), ('untreated', 0.186), ('weeks', 0.159), ('births', 0.156), ('difference', 0.142), ('deviation', 0.123), ('treated', 0.116), ('parents', 0.11), ('proportion', 0.11), ('prior', 0.105), ('invaluable', 0.103), ('standard', 0.097), ('every', 0.096), ('classical', 0.095), ('recipes', 0.093), ('subjected', 0.087), ('dose', 0.087), ('reputable', 0.087), ('statistically', 0.085), ('leaf', 0.085), ('sexes', 0.085), ('posterior', 0.085), ('attractiveness', 0.083), ('control', 0.083), ('compared', 0.081), ('bayesian', 0.08), ('close', 0.079), ('resulted', 0.078), ('significant', 0.078), ('population', 0.078), ('test', 0.076), ('illustration', 0.076), ('nelson', 0.076), ('biostatistics', 0.073), ('result', 0.072), ('god', 0.071), ('studies', 0.07), ('thus', 0.07), ('pr', 0.067), ('case', 0.066), ('attractive', 0.066), ('journal', 0.066), ('physicist', 0.066), ('approach', 0.066)]
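The similarity lists that follow are built from weights like the ones above. A minimal sketch of the idea, assuming scikit-learn and a toy three-post corpus (the texts below are placeholders, not the real posts): vectorize each post with tf-idf and rank the others by cosine similarity to this one.

# Toy tf-idf similarity ranking; corpus texts are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    2099: "classical approach cookbook recipes bayesian prior rats ...",
    791:  "censoring outliers rats tumors untreated treated drug ...",
    1941: "priors parameters prior predictive density utility ...",
}
ids = list(posts.keys())

X = TfidfVectorizer().fit_transform(posts.values())
sims = cosine_similarity(X[0], X).ravel()       # similarity of post 2099 to each post

for i in sims.argsort()[::-1]:                  # analogous to the simValue ranking below
    print(f"{sims[i]:.3f}  blog {ids[i]}")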

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 2099 andrew gelman stats-2013-11-13-“What are some situations in which the classical approach (or a naive implementation of it, based on cookbook recipes) gives worse results than a Bayesian approach, results that actually impeded the science?”


2 0.2400026 791 andrew gelman stats-2011-07-08-Censoring on one end, “outliers” on the other, what can we do with the middle?

Introduction: This post was written by Phil. A medical company is testing a cancer drug. They get 16 genetically identical (or nearly identical) rats that all have the same kind of tumor, give 8 of them the drug and leave 8 untreated…or maybe they give them a placebo, I don’t know; is there a placebo effect in rats? Anyway, after a while the rats are killed and examined. If the tumors in the treated rats are smaller than the tumors in the untreated rats, then all of the rats have their blood tested for dozens of different proteins that are known to be associated with tumor growth or suppression. If there is a “significant” difference in one of the protein levels, then the working assumption is that the drug increases or decreases levels of that protein and that may be the mechanism by which the drug affects cancer. All of the above is done on many different cancer types and possibly several different types of rats. It’s just the initial screening: if things look promising, many more tests an

3 0.19145679 1941 andrew gelman stats-2013-07-16-Priors

Introduction: Nick Firoozye writes: While I am absolutely sympathetic to the Bayesian agenda I am often troubled by the requirement of having priors. We must have priors on the parameters of an infinite number of models we have never seen before and I find this troubling. There is a similarly troubling problem in economics of utility theory. Utility is on consumables. To be complete a consumer must assign utility to all sorts of things they never would have encountered. More recent versions of utility theory instead make consumption goods a portfolio of attributes. Cadillacs are x many units of luxury, y of transport, etc. And we can automatically have personal utilities to all these attributes. I don’t ever see parameters. Some models have few and some have hundreds. Instead, I see data. So I don’t know how to have an opinion on parameters themselves. Rather I think it far more natural to have opinions on the behavior of models. The prior predictive density is a good and sensible notion. Also

4 0.15706362 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

Introduction: The journal Rationality, Markets and Morals has finally posted all the articles in their special issue on the philosophy of Bayesian statistics. My contribution is called Induction and Deduction in Bayesian Data Analysis. I’ll also post my reactions to the other articles. I wrote these notes a few weeks ago and could post them all at once, but I think it will be easier if I post my reactions to each article separately. To start with my best material, here’s my reaction to David Cox and Deborah Mayo, “A Statistical Scientist Meets a Philosopher of Science.” I recommend you read all the way through my long note below; there’s good stuff throughout: 1. Cox: “[Philosophy] forces us to say what it is that we really want to know when we analyze a situation statistically.” This reminds me of a standard question that Don Rubin (who, unlike me, has little use for philosophy in his research) asks in virtually any situation: “What would you do if you had all the data?” For

5 0.14749654 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

Introduction: Robert Bell pointed me to this post by Brad De Long on Bayesian statistics, and then I also noticed this from Noah Smith, who wrote: My impression is that although the Bayesian/Frequentist debate is interesting and intellectually fun, there’s really not much “there” there… despite being so-hip-right-now, Bayesian is not the Statistical Jesus. I’m happy to see the discussion going in this direction. Twenty-five years ago or so, when I got into this biz, there were some serious anti-Bayesian attitudes floating around in mainstream statistics. Discussions in the journals sometimes devolved into debates of the form, “Bayesians: knaves or fools?”. You’d get all sorts of free-floating skepticism about any prior distribution at all, even while people were accepting without question (and doing theory on) logistic regressions, proportional hazards models, and all sorts of strong strong models. (In the subfield of survey sampling, various prominent researchers would refuse to mode

6 0.12827235 2351 andrew gelman stats-2014-05-28-Bayesian nonparametric weighted sampling inference

7 0.12483525 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

8 0.11303157 1072 andrew gelman stats-2011-12-19-“The difference between . . .”: It’s not just p=.05 vs. p=.06

9 0.11157303 2115 andrew gelman stats-2013-11-27-Three unblinded mice

10 0.1109603 784 andrew gelman stats-2011-07-01-Weighting and prediction in sample surveys

11 0.11075306 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things

12 0.10712566 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

13 0.10543303 1981 andrew gelman stats-2013-08-14-The robust beauty of improper linear models in decision making

14 0.10250342 184 andrew gelman stats-2010-08-04-That half-Cauchy prior

15 0.10210666 1155 andrew gelman stats-2012-02-05-What is a prior distribution?

16 0.10001005 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

17 0.099682704 972 andrew gelman stats-2011-10-25-How do you interpret standard errors from a regression fit to the entire population?

18 0.099589057 2333 andrew gelman stats-2014-05-13-Personally, I’d rather go with Teragram

19 0.09798383 2042 andrew gelman stats-2013-09-28-Difficulties of using statistical significance (or lack thereof) to sift through and compare research hypotheses

20 0.097786479 2041 andrew gelman stats-2013-09-27-Setting up Jitts online


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.201), (1, 0.092), (2, 0.021), (3, -0.083), (4, -0.037), (5, -0.014), (6, 0.031), (7, 0.054), (8, -0.066), (9, -0.038), (10, 0.003), (11, -0.023), (12, 0.039), (13, 0.002), (14, 0.036), (15, -0.001), (16, 0.003), (17, 0.012), (18, 0.048), (19, -0.006), (20, 0.01), (21, 0.02), (22, -0.008), (23, 0.002), (24, 0.016), (25, 0.02), (26, -0.013), (27, -0.012), (28, -0.014), (29, -0.001), (30, 0.001), (31, -0.022), (32, -0.029), (33, 0.05), (34, 0.024), (35, 0.061), (36, -0.01), (37, 0.051), (38, -0.021), (39, 0.003), (40, -0.026), (41, -0.04), (42, 0.01), (43, -0.012), (44, 0.021), (45, -0.005), (46, 0.008), (47, 0.027), (48, 0.043), (49, -0.009)]
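The LSI weights above come from projecting tf-idf vectors onto a small set of latent topics. A minimal sketch, assuming scikit-learn and the same style of toy corpus as in the tf-idf sketch; the number of components and the texts are placeholders, not the pipeline that generated the list below.

# Toy latent semantic indexing: truncated SVD on tf-idf vectors.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["classical cookbook bayesian prior rats significance",
        "rats tumors untreated treated drug protein",
        "priors parameters prior predictive density utility"]

X = TfidfVectorizer().fit_transform(docs)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

print(Z)                                        # per-post topic weights, like the (topicId, topicWeight) rows
print(cosine_similarity(Z[:1], Z).ravel())      # similarities in the reduced topic space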

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97981352 2099 andrew gelman stats-2013-11-13-“What are some situations in which the classical approach (or a naive implementation of it, based on cookbook recipes) gives worse results than a Bayesian approach, results that actually impeded the science?”


2 0.76452965 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates

Introduction: A few months ago I reacted (see further discussion in comments here) to a recent study on early childhood intervention, in which researchers Paul Gertler, James Heckman, Rodrigo Pinto, Arianna Zanolini, Christel Vermeerch, Susan Walker, Susan M. Chang, and Sally Grantham-McGregor estimated that a particular intervention on young children had raised incomes of young adults by 42%. I wrote: Major decisions on education policy can turn on the statistical interpretation of small, idiosyncratic data sets — in this case, a study of 129 Jamaican children. . . . Overall, I have no reason to doubt the direction of the effect, namely, that psychosocial stimulation should be good. But I’m skeptical of the claim that income differed by 42%, due to the reason of the statistical significance filter. In section 2.3, the authors are doing lots of hypothesizing based on some comparisons being statistically significant and others being non-significant. There’s nothing wrong with speculation, b

3 0.75460619 1206 andrew gelman stats-2012-03-10-95% intervals that I don’t believe, because they’re from a flat prior I don’t believe

Introduction: Arnaud Trolle (no relation) writes: I have a question about the interpretation of (non-)overlapping of 95% credibility intervals. In a Bayesian ANOVA (a within-subjects one), I computed 95% credibility intervals for the main effects of a factor. I’d like to compare the main effects two by two across the different conditions of the factor. Can I directly interpret the (non-)overlapping of these credibility intervals and make the following statements: “As the 95% credibility intervals do not overlap, both conditions have significantly different main effects” or conversely “As the 95% credibility intervals overlap, the main effects of both conditions are not significantly different, i.e. equivalent”? I heard that, in the case of classical confidence intervals, the second statement is false, but what happens when working within a Bayesian framework? My reply: I think it makes more sense to directly look at inference for the difference. Also, your statements about equivalence

4 0.7481454 2180 andrew gelman stats-2014-01-21-Everything I need to know about Bayesian statistics, I learned in eight schools.

Introduction: This post is by Phil. I’m aware that there are some people who use a Bayesian approach largely because it allows them to provide a highly informative prior distribution based on subjective judgment, but that is not the appeal of Bayesian methods for a lot of us practitioners. It’s disappointing and surprising, twenty years after my initial experiences, to still hear highly informed professional statisticians who think that what distinguishes Bayesian statistics from Frequentist statistics is “subjectivity” (as seen in a recent blog post and its comments). My first encounter with Bayesian statistics was just over 20 years ago. I was a postdoc at Lawrence Berkeley National Laboratory, with a new PhD in theoretical atomic physics but working on various problems related to the geographical and statistical distribution of indoor radon (a naturally occurring radioactive gas that can be dangerous if present at high concentrations). One of the issues I ran into right at the start was th

5 0.7431832 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things

Introduction: Dan Lakeland writes: I have some questions about some basic statistical ideas and would like your opinion on them: 1) Parameters that manifestly DON’T exist: It makes good sense to me to think about Bayesian statistics as narrowing in on the value of parameters based on a model and some data. But there are cases where “the parameter” simply doesn’t make sense as an actual thing. Yet, it’s not really a complete fiction like unicorns either; it’s some kind of “effective” thing, maybe. Here’s an example of what I mean. I did a simple toy experiment where we dropped crumpled up balls of paper and timed their fall times (see here: http://models.street-artists.org/?s=falling+ball ). It was pretty instructive actually, and I did it to figure out how to in a practical way use an ODE to get a likelihood in MCMC procedures. One of the parameters in the model is the radius of the spherical ball of paper. But the ball of paper isn’t a sphere, not even approximately. There’s no single valu

6 0.74266046 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

7 0.73920864 1209 andrew gelman stats-2012-03-12-As a Bayesian I want scientists to report their data non-Bayesianly

8 0.7365104 368 andrew gelman stats-2010-10-25-Is instrumental variables analysis particularly susceptible to Type M errors?

9 0.73158276 1941 andrew gelman stats-2013-07-16-Priors

10 0.722808 1150 andrew gelman stats-2012-02-02-The inevitable problems with statistical significance and 95% intervals

11 0.72273588 804 andrew gelman stats-2011-07-15-Static sensitivity analysis

12 0.71717393 2201 andrew gelman stats-2014-02-06-Bootstrap averaging: Examples where it works and where it doesn’t work

13 0.7064932 314 andrew gelman stats-2010-10-03-Disconnect between drug and medical device approval

14 0.706357 1114 andrew gelman stats-2012-01-12-Controversy about average personality differences between men and women

15 0.7050491 2248 andrew gelman stats-2014-03-15-Problematic interpretations of confidence intervals

16 0.70412654 1926 andrew gelman stats-2013-07-05-More plain old everyday Bayesianism

17 0.70210582 2115 andrew gelman stats-2013-11-27-Three unblinded mice

18 0.70122737 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

19 0.69693667 2159 andrew gelman stats-2014-01-04-“Dogs are sensitive to small variations of the Earth’s magnetic field”

20 0.69375485 2140 andrew gelman stats-2013-12-19-Revised evidence for statistical standards


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.077), (15, 0.047), (16, 0.033), (24, 0.232), (27, 0.015), (41, 0.021), (45, 0.062), (51, 0.032), (59, 0.02), (63, 0.027), (64, 0.02), (86, 0.025), (99, 0.284)]
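The LDA weights above are per-post topic proportions from a fitted topic model. A minimal sketch, assuming scikit-learn and a toy corpus; the texts and the number of topics are placeholders, not the actual pipeline behind the list below.

# Toy LDA topic model: fit on word counts, then read off per-post topic weights.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["classical cookbook bayesian prior rats significance",
        "rats tumors untreated treated drug protein",
        "priors parameters prior predictive density utility"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)
theta = lda.transform(counts)                   # rows: posts, columns: topic weights

for topic_id, weight in enumerate(theta[0]):    # analogous to the (topicId, topicWeight) rows above
    print(f"({topic_id}, {weight:.3f})")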

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98462725 2099 andrew gelman stats-2013-11-13-“What are some situations in which the classical approach (or a naive implementation of it, based on cookbook recipes) gives worse results than a Bayesian approach, results that actually impeded the science?”


2 0.95850849 1089 andrew gelman stats-2011-12-28-Path sampling for models of varying dimension

Introduction: Somebody asks: I’m reading your paper on path sampling. It essentially solves the problem of computing the ratio \int q0(omega) d omega / \int q1(omega) d omega, i.e., the arguments of q0() and q1() are the same. But this assumption is not always true in Bayesian model selection using Bayes factors. In general (for BF), we have this problem: t1 and t2 may have no relation at all. \int f1(y|t1)p1(t1) d t1 / \int f2(y|t2)p2(t2) d t2 As an example, suppose that we want to compare two sets of normally distributed data with known variance, asking whether they have the same mean (H0) or do not necessarily have the same mean (H1). Then the dummy variable should be mu in H0 (which is the common mean of both sets of samples), and should be (mu1, mu2) in H1 (which are the means for each set of samples). One straightforward method to address my problem is to perform path integration for the numerator and the denominator, as both the numerator and the denominator are integrals. Each integral can be rewrit

3 0.95506036 1240 andrew gelman stats-2012-04-02-Blogads update

Introduction: A few months ago I reported on someone who wanted to insert text links into the blog. I asked her how much they would pay and got no answer. Yesterday, though, I received this reply: Hello Andrew, I am sorry for the delay in getting back to you. I’d like to make a proposal for your site. Please refer below. We would like to place a simple text link ad on page http://andrewgelman.com/2011/07/super_sam_fuld/ to link to *** with the key phrase ***. We will incorporate the key phrase into a sentence so it would read well. Rest assured it won’t sound obnoxious or advertorial. We will then process the final text link code as soon as you agree to our proposal. We can offer you $200 for this with the assumption that you will keep the link “live” on that page for 12 months or longer if you prefer. Please get back to us with a quick reply on your thoughts on this and include your Paypal ID for payment process. Hoping for a positive response from you. I wrote back: Hi,

4 0.95376569 1941 andrew gelman stats-2013-07-16-Priors

Introduction: Nick Firoozye writes: While I am absolutely sympathetic to the Bayesian agenda I am often troubled by the requirement of having priors. We must have priors on the parameters of an infinite number of models we have never seen before and I find this troubling. There is a similarly troubling problem in economics of utility theory. Utility is on consumables. To be complete a consumer must assign utility to all sorts of things they never would have encountered. More recent versions of utility theory instead make consumption goods a portfolio of attributes. Cadillacs are x many units of luxury, y of transport, etc. And we can automatically have personal utilities to all these attributes. I don’t ever see parameters. Some models have few and some have hundreds. Instead, I see data. So I don’t know how to have an opinion on parameters themselves. Rather I think it far more natural to have opinions on the behavior of models. The prior predictive density is a good and sensible notion. Also

5 0.95353377 1367 andrew gelman stats-2012-06-05-Question 26 of my final exam for Design and Analysis of Sample Surveys

Introduction: 26. You have just graded an exam with 28 questions and 15 students. You fit a logistic item-response model estimating ability, difficulty, and discrimination parameters. Which of the following statements are basically true? (Indicate all that apply.) (a) If a question is answered correctly by students with very low and very high ability, but is missed by students in the middle, it will have a high value for its discrimination parameter. (b) It is not possible to fit an item-response model when you have more questions than students. In order to fit the model, you either need to reduce the number of questions (for example, by discarding some questions or by putting together some questions into a combined score) or increase the number of students in the dataset. (c) To keep the model identified, you can set one of the difficulty parameters or one of the ability parameters to zero and set one of the discrimination parameters to 1. (d) If two students answer the same number of q

6 0.95214492 2129 andrew gelman stats-2013-12-10-Cross-validation and Bayesian estimation of tuning parameters

7 0.95207506 2358 andrew gelman stats-2014-06-03-Did you buy laundry detergent on their most recent trip to the store? Also comments on scientific publication and yet another suggestion to do a study that allows within-person comparisons

8 0.94918418 669 andrew gelman stats-2011-04-19-The mysterious Gamma (1.4, 0.4)

9 0.9487921 310 andrew gelman stats-2010-10-02-The winner’s curse

10 0.94877696 197 andrew gelman stats-2010-08-10-The last great essayist?

11 0.94854993 1171 andrew gelman stats-2012-02-16-“False-positive psychology”

12 0.94791591 1087 andrew gelman stats-2011-12-27-“Keeping things unridiculous”: Berger, O’Hagan, and me on weakly informative priors

13 0.94768786 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

14 0.94721031 1644 andrew gelman stats-2012-12-30-Fixed effects, followed by Bayes shrinkage?

15 0.94681168 2247 andrew gelman stats-2014-03-14-The maximal information coefficient

16 0.94668359 612 andrew gelman stats-2011-03-14-Uh-oh

17 0.94662458 494 andrew gelman stats-2010-12-31-Type S error rates for classical and Bayesian single and multiple comparison procedures

18 0.94645542 847 andrew gelman stats-2011-08-10-Using a “pure infographic” to explore differences between information visualization and statistical graphics

19 0.94578862 1838 andrew gelman stats-2013-05-03-Setting aside the politics, the debate over the new health-care study reveals that we’re moving to a new high standard of statistical journalism

20 0.94563413 1062 andrew gelman stats-2011-12-16-Mr. Pearson, meet Mr. Mandelbrot: Detecting Novel Associations in Large Data Sets