
1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor


meta info for this blog post

Source: html

Introduction: In my comments on David MacKay’s 2003 book on Bayesian inference, I wrote that I hate all the Occam-factor stuff that MacKay talks about, and I linked to this quote from Radford Neal: Sometimes a simple model will outperform a more complex model . . . Nevertheless, I believe that deliberately limiting the complexity of the model is not fruitful when the problem is evidently complex. Instead, if a simple model is found that outperforms some particular complex model, the appropriate response is to define a different complex model that captures whatever aspect of the problem led to the simple model performing well. MacKay replied as follows: When you said you disagree with me on Occam factors I think what you meant was that you agree with me on them. I’ve read your post on the topic and completely agreed with you (and Radford) that we should be using models the size of a house, models that we believe in, and that anyone who thinks it is a good idea to bias the model toward


Summary: the most important sentences, generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 In my comments on David MacKay’s 2003 book on Bayesian inference, I wrote that I hate all the Occam-factor stuff that MacKay talks about, and I linked to this quote from Radford Neal: Sometimes a simple model will outperform a more complex model. [sent-1, score-0.526]

2 Nevertheless, I believe that deliberately limiting the complexity of the model is not fruitful when the problem is evidently complex. [sent-4, score-0.23]

3 Instead, if a simple model is found that outperforms some particular complex model, the appropriate response is to define a different complex model that captures whatever aspect of the problem led to the simple model performing well. [sent-5, score-0.872]

4 I’ve read your post on the topic and completely agreed with you (and Radford) that we should be using models the size of a house, models that we believe in, and that anyone who thinks it is a good idea to bias the model towards simpler models for some arbitrary reason is deluded and dangerous. [sent-7, score-0.723]

5 Take a model for a function y(x) for example, which is parameterized with an infinite number of parameters, and whose prior has a hyperparameter alpha, which controls wiggliness, and which we put a hyperprior on, then get data t ~ N(y, sigma**2), and do inference of y and/or alpha. [sent-13, score-0.388]

6 It will be the case that the inference prefers some values of alpha over others – the posterior on alpha usually has a bump. [sent-14, score-0.542]

7 Now I think pedagogically it is interesting to note that smaller values of alpha (associated with wigglier, more “complex” functions) would have “fitted” the data better, but they are not preferred; the value of alpha at the peak of the bump is preferred, more probable, etc. [sent-16, score-0.429]
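MacKay’s point in sentences 5–7 can be made concrete with a toy computation. The sketch below is not from the post: it uses a Gaussian process as one way to realize the infinitely-parameterized y(x), and the kernel, the data, and the alpha convention (larger alpha = wigglier here, the opposite of MacKay’s) are all choices of this sketch. Under a flat hyperprior, the posterior on alpha is proportional to the marginal likelihood, and it shows exactly the bump he describes.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 30
    x = np.linspace(0.0, 1.0, n)
    sigma = 0.1
    t = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(n)  # data t ~ N(y, sigma**2)

    def log_marginal(alpha):
        # RBF-kernel GP prior on y; larger alpha = shorter lengthscale = wigglier
        d = x[:, None] - x[None, :]
        C = np.exp(-0.5 * (alpha * d) ** 2) + sigma**2 * np.eye(n)
        L = np.linalg.cholesky(C)
        v = np.linalg.solve(L, t)
        # log N(t; 0, C): the marginal likelihood with y integrated out
        return -0.5 * v @ v - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

    alphas = np.linspace(0.5, 60.0, 200)
    lm = np.array([log_marginal(a) for a in alphas])
    # Under a flat hyperprior the posterior on alpha is proportional to exp(lm),
    # and it has a bump: very wiggly settings are not preferred, even though
    # they would "fit" the observed points more closely.
    print("preferred alpha:", alphas[lm.argmax()])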

8 “Of course the briefest explanation is the most probable explanation!” [sent-22, score-0.199]

9 You might enjoy reading about the “bits back” coding method of Geoff Hinton, which accurately embodies this data-compression / Bayesian phenomenon for quite complicated models including latent variables. [sent-24, score-0.258]
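For readers who follow up on the bits-back reference, the core identity is worth stating (this is the standard result, reconstructed from general knowledge rather than quoted from the post). A sender who picks a latent variable theta from a coding distribution q, transmits theta under the prior code and the data under the likelihood code, and then recovers the H(q) random bits used to pick theta, pays on average

    E_{q(\theta)}[ -\log p(\theta) - \log p(x \mid \theta) ] - H(q)
        = -\log p(x) + KL( q(\theta) \,\|\, p(\theta \mid x) )

The cost is minimized exactly when q is the Bayesian posterior, leaving -log p(x), the log evidence: the shortest attainable description length and the most probable explanation coincide, which is the data-compression / Bayes correspondence MacKay is pointing at.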

10 I reckon that people who understand these issues may write good Monte Carlo methods for navigating around interesting models. [sent-28, score-0.181]

11 I reckon that people who don’t understand these issues often write atrociously bad Monte Carlo methods that (eg) claim to perform Bayesian model comparison but don’t. [sent-29, score-0.361]
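One concrete instance of the kind of failure MacKay may have in mind (the example is mine, not his): the harmonic-mean estimator of the marginal likelihood, which Radford Neal has sharply criticized. The toy model below is conjugate, so the exact answer is available for comparison; all numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    # Conjugate toy model: theta ~ N(0, 1), x_i ~ N(theta, 1), so the exact
    # marginal likelihood p(x) is available in closed form for comparison.
    x = rng.normal(0.7, 1.0, 20)
    n, sx = len(x), x.sum()

    def loglik(thetas):
        th = np.asarray(thetas)[:, None]
        return -0.5 * ((x - th) ** 2).sum(axis=1) - 0.5 * n * np.log(2 * np.pi)

    # Exact log p(x): x ~ N(0, I + 11'), with log|I + 11'| = log(n + 1)
    exact = (-0.5 * (x @ x - sx**2 / (n + 1))
             - 0.5 * np.log(n + 1) - 0.5 * n * np.log(2 * np.pi))

    S = 200_000
    # Importance sampling from the prior: p(x) = E_prior[ p(x | theta) ]
    ll = loglik(rng.normal(0.0, 1.0, S))
    est_is = np.logaddexp.reduce(ll) - np.log(S)

    # Harmonic mean of posterior likelihoods: "claims" to estimate p(x),
    # but its second moment is infinite here, so it converges hopelessly slowly.
    llp = loglik(rng.normal(sx / (n + 1), np.sqrt(1.0 / (n + 1)), S))
    est_hm = -(np.logaddexp.reduce(-llp) - np.log(S))

    print(f"exact {exact:.2f}   prior-IS {est_is:.2f}   harmonic mean {est_hm:.2f}")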

12 So I think it is great to speak the language of energy and entropy; the complementary language of bits and description lengths; and the language of log-likelihoods added to Occam factors. [sent-30, score-0.411]

13 There I explain this terminology [marginal likelihood = best fit likelihood * Occam factor] with reasonably simple examples. [sent-38, score-0.647]
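The bracketed terminology unpacks as follows (MacKay’s one-parameter Laplace sketch, reconstructed from his book rather than quoted from this post):

    P(D \mid H) = \int P(D \mid w, H) \, P(w \mid H) \, dw
                \approx P(D \mid w_{MP}, H) \times ( \sigma_{w \mid D} / \sigma_w )

The best-fit likelihood is evaluated at the most probable parameter w_MP, and the Occam factor sigma_{w|D}/sigma_w is the ratio of posterior width to prior width. It is always below one, and it is smaller for a model that spreads its prior over a wide parameter range, so complex models pay a penalty automatically, with no explicit simplicity term added.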

14 And here’s my reply to MacKay: I don’t know what it means for a model to be “the size of a house,” but I do know that I almost always wish I was fitting a model bigger than what I’m actually using. [sent-39, score-0.453]

15 Bigger models need bigger prior distributions, and until recently I hadn’t thought very seriously about how to do this. [sent-41, score-0.242]

16 The Occam applications I don’t like are the discrete versions such as advocated by Adrian Raftery and others, in which some version of Bayesian calculation is used to get results saying that the posterior probability is 60%, say, that a certain coefficient in a model is exactly zero. [sent-43, score-0.246]

17 I’d rather keep the term in the model and just shrink it continuously toward zero. [sent-44, score-0.18]
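The contrast between the two approaches in sentences 16–17 fits in a few lines. This is a sketch with invented numbers (b_hat, se, and tau are all illustrative), not a reconstruction of any analysis Raftery or Gelman actually ran.

    import numpy as np

    b_hat, se, tau = 0.8, 0.5, 1.0   # all numbers invented for illustration

    def normal_pdf(z, s):
        return np.exp(-0.5 * (z / s) ** 2) / (s * np.sqrt(2 * np.pi))

    # Discrete Occam (spike and slab): prior mass 1/2 on beta = 0 exactly,
    # 1/2 on beta ~ N(0, tau**2); report the posterior probability of zero.
    m0 = normal_pdf(b_hat, se)                 # marginal of b_hat if beta = 0
    m1 = normal_pdf(b_hat, np.hypot(se, tau))  # marginal under the slab
    print(f"P(beta = 0 | data) = {m0 / (m0 + m1):.2f}")

    # Continuous alternative: keep beta in the model with prior N(0, tau**2)
    # and shrink the estimate toward zero instead of zeroing it out.
    shrink = tau**2 / (tau**2 + se**2)
    print(f"posterior mean of beta = {shrink * b_hat:.2f} (from {b_hat})")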

18 MacKay’s formulation in terms of “briefer explanations” is interesting but it doesn’t really work for me because, from a modeling point of view, once I’ve set up a model I’d like to keep all of it, maybe shrinking some parts toward zero but not getting rid of coefficients entirely. [sent-46, score-0.282]

19 This doesn’t fit into the usual “Bayesian model comparison” or “Bayesian model averaging” that I’ve seen, but I could well believe that the principle holds on some deep level. [sent-51, score-0.41]

20 In practice, though, I worry that “Occam’s razor” is more likely being used as an excuse to set parameters to zero and to favor simpler models rather than to encourage researchers to think more carefully about their complicated models (as in Radford’s quote above). [sent-52, score-0.339]
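The scoring pipeline behind the [sent-k, score-s] annotations above is not documented on this page. A minimal sketch of one way such scores might be produced, assuming scikit-learn’s TfidfVectorizer and summed term weights (both assumptions, not a description of the actual miner):

    from sklearn.feature_extraction.text import TfidfVectorizer

    sentences = [
        "Sometimes a simple model will outperform a more complex model.",
        "Bigger models need bigger prior distributions.",
        "I would rather keep the term in the model and shrink it toward zero.",
    ]

    # Rank sentences by the sum of their tf-idf term weights; whether the
    # actual miner used this exact score is unknown.
    weights = TfidfVectorizer().fit_transform(sentences)
    scores = weights.sum(axis=1).A.ravel()
    for i, (s, sc) in enumerate(zip(sentences, scores), start=1):
        print(f"{i}  {s}  [score={sc:.3f}]")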


similar blogs computed by the tf-idf model

tf-idf for this blog:

wordName wordTfidf (topN-words)

[('occam', 0.58), ('mackay', 0.272), ('alpha', 0.187), ('model', 0.18), ('bayesian', 0.139), ('radford', 0.127), ('briefer', 0.118), ('hinton', 0.118), ('reckon', 0.118), ('explanation', 0.116), ('bits', 0.105), ('inference', 0.102), ('language', 0.102), ('razor', 0.101), ('carlo', 0.1), ('complex', 0.099), ('simpler', 0.097), ('monte', 0.095), ('models', 0.094), ('bigger', 0.093), ('eg', 0.088), ('probable', 0.083), ('modelling', 0.08), ('automatic', 0.073), ('preferred', 0.068), ('simple', 0.067), ('posterior', 0.066), ('honest', 0.066), ('view', 0.065), ('understand', 0.063), ('log', 0.063), ('agreed', 0.063), ('becomes', 0.059), ('method', 0.059), ('smaller', 0.055), ('prior', 0.055), ('coefficients', 0.054), ('house', 0.054), ('complicated', 0.054), ('energies', 0.054), ('brevity', 0.054), ('pedagogy', 0.054), ('geoff', 0.051), ('deluded', 0.051), ('embodies', 0.051), ('approximates', 0.051), ('hyperprior', 0.051), ('thermodynamic', 0.051), ('believe', 0.05), ('shrinking', 0.048)]
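The simValue numbers in the lists that follow are presumably similarities between document vectors like the one above. Assuming cosine similarity (an assumption; the actual measure is not stated), the computation would be:

    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    this_post = np.array([0.58, 0.272, 0.187, 0.18])   # top weights from above
    other_post = np.array([0.30, 0.05, 0.10, 0.22])    # made-up comparison vector

    # A document compared with itself scores 1.0, matching the ~0.9999999
    # simValue of the "same-blog" entry at the head of each list.
    print(cosine(this_post, this_post), cosine(this_post, other_post))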

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor


2 0.39149788 1392 andrew gelman stats-2012-06-26-Occam

Introduction: Cosma Shalizi and Larry Wasserman discuss some papers from a conference on Ockham’s Razor. I don’t have anything new to add on this so let me link to past blog entries on the topic and repost the following from 2004: A lot has been written in statistics about “parsimony”—that is, the desire to explain phenomena using fewer parameters–but I’ve never seen any good general justification for parsimony. (I don’t count “Occam’s Razor,” or “Ockham’s Razor,” or whatever, as a justification. You gotta do better than digging up a 700-year-old quote.) Maybe it’s because I work in social science, but my feeling is: if you can approximate reality with just a few parameters, fine. If you can use more parameters to fold in more information, that’s even better. In practice, I often use simple models—because they are less effort to fit and, especially, to understand. But I don’t kid myself that they’re better than more complicated efforts! My favorite quote on this comes from Rad

3 0.26888272 1586 andrew gelman stats-2012-11-21-Readings for a two-week segment on Bayesian modeling?

Introduction: Michael Landy writes: I’m in Psych and Center for Neural Science and I’m teaching a doctoral course this term in methods in psychophysics (never mind the details) at the tail end of which I’m planning on at least 2 lectures on Bayesian parameter estimation and Bayesian model comparison. So far, all the readings I have are a bit too obscure and either glancing (bits of machine-learning books: Bishop, MacKay) or too low-level. The only useful reference I’ve got is an application of these methods (a methods article of mine in a Neuroscience Methods journal). The idea is to give them a decent idea of both estimation (Jeffreys priors, marginals of the posterior over the parameters) and model comparison (cross-validation, AIC, BIC, full-blown Bayesian model posterior comparisons, Bayes factor, Occam factor, blah blah blah). So: have you any suggestions for articles or chapters that might be suitable (yes, I’m aware you have an entire book that’s obviously relevant)? In the class topic

4 0.26107875 984 andrew gelman stats-2011-11-01-David MacKay sez . . . 12??

Introduction: I’ve recently been reading David MacKay’s 2003 book, Information Theory, Inference, and Learning Algorithms. It’s great background for my Bayesian computation class because he has lots of pictures and detailed discussions of the algorithms. (Regular readers of this blog will not be surprised to hear that I hate all the Occam-factor stuff that MacKay talks about, but overall it’s a great book.) Anyway, I happened to notice the following bit, under the heading, “How many samples are needed?”: In many problems, we really only need about twelve independent samples from P(x). Imagine that x is an unknown vector such as the amount of corrosion present in each of 10 000 underground pipelines around Cambridge, and φ(x) is the total cost of repairing those pipelines. The distribution P(x) describes the probability of a state x given the tests that have been carried out on some pipelines and the assumptions about the physics of corrosion. The quantity Φ is the expected cost of the repa

5 0.22036549 2133 andrew gelman stats-2013-12-13-Flexibility is good

Introduction: If I made a separate post for each interesting blog discussion, we’d get overwhelmed. That’s why I often leave detailed responses in the comments section, even though I’m pretty sure that most readers don’t look in the comments at all. Sometimes, though, I think it’s good to bring such discussions to light. Here’s a recent example. Michael wrote: Poor predictive performance usually indicates that the model isn’t sufficiently flexible to explain the data, and my understanding of the proper Bayesian strategy is to feed that back into your original model and try again until you achieve better performance. Corey replied: It was my impression that — in ML at least — poor predictive performance is more often due to the model being too flexible and fitting noise. And Rahul agreed: Good point. A very flexible model will describe your training data perfectly and then go bonkers when unleashed on wild data. But I wrote: Overfitting comes from a model being flex

6 0.17756625 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

7 0.16339184 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

8 0.16215026 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

9 0.15035279 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

10 0.14508893 811 andrew gelman stats-2011-07-20-Kind of Bayesian

11 0.14477651 1999 andrew gelman stats-2013-08-27-Bayesian model averaging or fitting a larger model

12 0.14470899 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

13 0.13302942 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

14 0.13058554 1554 andrew gelman stats-2012-10-31-It not necessary that Bayesian methods conform to the likelihood principle

15 0.12949061 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion

16 0.12919578 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

17 0.12757322 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

18 0.12586434 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

19 0.12451296 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

20 0.1190789 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox


similar blogs computed by the LSI model

LSI for this blog:

topicId topicWeight

[(0, 0.235), (1, 0.199), (2, -0.028), (3, 0.088), (4, -0.048), (5, 0.016), (6, 0.019), (7, -0.033), (8, 0.085), (9, -0.008), (10, 0.009), (11, 0.016), (12, -0.062), (13, -0.007), (14, -0.001), (15, -0.007), (16, 0.044), (17, 0.002), (18, -0.016), (19, 0.01), (20, 0.007), (21, -0.032), (22, -0.013), (23, -0.042), (24, -0.002), (25, 0.005), (26, 0.0), (27, 0.023), (28, 0.028), (29, -0.013), (30, -0.049), (31, -0.017), (32, 0.022), (33, -0.003), (34, 0.016), (35, 0.015), (36, 0.018), (37, -0.049), (38, 0.026), (39, 0.002), (40, -0.026), (41, -0.01), (42, -0.012), (43, 0.028), (44, 0.03), (45, -0.03), (46, 0.0), (47, 0.008), (48, 0.039), (49, -0.002)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97597176 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor


2 0.90598476 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

Introduction: In response to this article by Cosma Shalizi and myself on the philosophy of Bayesian statistics, David Hogg writes: I [Hogg] agree–even in physics and astronomy–that the models are not “True” in the God-like sense of being absolute reality (that is, I am not a realist); and I have argued (a philosophically very naive paper, but hey, I was new to all this) that for pretty fundamental reasons we could never arrive at the True (with a capital “T”) model of the Universe. The goal of inference is to find the “best” model, where “best” might have something to do with prediction, or explanation, or message length, or (horror!) our utility. Needless to say, most of my physics friends *are* realists, even in the face of “effective theories” as Newtonian mechanics is an effective theory of GR and GR is an effective theory of “quantum gravity” (this plays to your point, because if you think any theory is possibly an effective theory, how could you ever find Truth?). I also liked the i

3 0.8999474 1392 andrew gelman stats-2012-06-26-Occam


4 0.87014788 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

Introduction: David Rohde writes: I have been thinking a lot lately about your Bayesian model checking approach. This is in part because I have been working on exploratory data analysis and, wishing to avoid controversy and mathematical statistics, we omitted model checking from our discussion. This is something that the refereeing process picked us up on and we ultimately added a critical discussion of null-hypothesis testing to our paper. The exploratory technique we discussed was essentially a 2D histogram approach, but we used Polya models as a formal model for the histogram. We are currently working on a new paper, and we are thinking through how or if we should do “confirmatory analysis” or model checking in the paper. What I find most admirable about your statistical work is that you clearly use the Bayesian approach to do useful applied statistical analysis. My own attempts at applied Bayesian analysis make me greatly admire your applied successes. On the other hand it may be t

5 0.86966503 811 andrew gelman stats-2011-07-20-Kind of Bayesian

Introduction: Astrophysicist Andrew Jaffe pointed me to this and discussion of my philosophy of statistics (which is, in turn, my rational reconstruction of the statistical practice of Bayesians such as Rubin and Jaynes). Jaffe’s summary is fair enough and I only disagree in a few points: 1. Jaffe writes: Subjective probability, at least the way it is actually used by practicing scientists, is a sort of “as-if” subjectivity — how would an agent reason if her beliefs were reflected in a certain set of probability distributions? This is why when I discuss probability I try to make the pedantic point that all probabilities are conditional, at least on some background prior information or context. I agree, and my problem with the usual procedures used for Bayesian model comparison and Bayesian model averaging is not that these approaches are subjective but that the particular models being considered don’t make sense. I’m thinking of the sorts of models that say the truth is either A or

6 0.86696196 1406 andrew gelman stats-2012-07-05-Xiao-Li Meng and Xianchao Xie rethink asymptotics

7 0.86371028 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion

8 0.86336887 776 andrew gelman stats-2011-06-22-Deviance, DIC, AIC, cross-validation, etc

9 0.86231166 1374 andrew gelman stats-2012-06-11-Convergence Monitoring for Non-Identifiable and Non-Parametric Models

10 0.86030585 1510 andrew gelman stats-2012-09-25-Incoherence of Bayesian data analysis

11 0.8591072 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

12 0.85045969 1999 andrew gelman stats-2013-08-27-Bayesian model averaging or fitting a larger model

13 0.84920681 1459 andrew gelman stats-2012-08-15-How I think about mixture models

14 0.84682268 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

15 0.8459639 1723 andrew gelman stats-2013-02-15-Wacky priors can work well?

16 0.84406674 1856 andrew gelman stats-2013-05-14-GPstuff: Bayesian Modeling with Gaussian Processes

17 0.84138471 964 andrew gelman stats-2011-10-19-An interweaving-transformation strategy for boosting MCMC efficiency

18 0.83673185 614 andrew gelman stats-2011-03-15-Induction within a model, deductive inference for model evaluation

19 0.83221126 1739 andrew gelman stats-2013-02-26-An AI can build and try out statistical models using an open-ended generative grammar

20 0.83174384 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes


similar blogs computed by the LDA model

LDA for this blog:

topicId topicWeight

[(9, 0.011), (16, 0.055), (21, 0.013), (24, 0.163), (27, 0.013), (29, 0.059), (42, 0.015), (43, 0.016), (48, 0.012), (56, 0.022), (79, 0.105), (82, 0.015), (86, 0.068), (95, 0.025), (99, 0.273)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96107161 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor


2 0.95450032 1786 andrew gelman stats-2013-04-03-Hierarchical array priors for ANOVA decompositions

Introduction: Alexander Volfovsky and Peter Hoff write: ANOVA decompositions are a standard method for describing and estimating heterogeneity among the means of a response variable across levels of multiple categorical factors. In such a decomposition, the complete set of main effects and interaction terms can be viewed as a collection of vectors, matrices and arrays that share various index sets defined by the factor levels. For many types of categorical factors, it is plausible that an ANOVA decomposition exhibits some consistency across orders of effects, in that the levels of a factor that have similar main-effect coefficients may also have similar coefficients in higher-order interaction terms. In such a case, estimation of the higher-order interactions should be improved by borrowing information from the main effects and lower-order interactions. To take advantage of such patterns, this article introduces a class of hierarchical prior distributions for collections of interaction arrays t

3 0.95422858 1172 andrew gelman stats-2012-02-17-Rare name analysis and wealth convergence

Introduction: Steve Hsu summarizes the research of economic historian Greg Clark and Neil Cummins: Using rare surnames we track the socio-economic status of descendants of a sample of English rich and poor in 1800, until 2011. We measure social status through wealth, education, occupation, and age at death. Our method allows unbiased estimates of mobility rates. Paradoxically, we find two things. Mobility rates are lower than conventionally estimated. There is considerable persistence of status, even after 200 years. But there is convergence with each generation. The 1800 underclass has already attained mediocrity. And the 1800 upper class will eventually dissolve into the mass of society, though perhaps not for another 300 years, or longer. Read more at Steven’s blog. The idea of rare names to perform this analysis is interesting – and has been recently applied to the study of nepotism in Italy. I haven’t looked into the details of the methodology, but rare events

4 0.9447639 1515 andrew gelman stats-2012-09-29-Jost Haidt

Introduction: Research psychologist John Jost reviews the recent book, “The Righteous Mind,” by research psychologist Jonathan Haidt. Some of my thoughts on Haidt’s book are here. And here’s some of Jost’s review: Haidt’s book is creative, interesting, and provocative. . . . The book shines a new light on moral psychology and presents a bold, confrontational message. From a scientific perspective, however, I worry that his theory raises more questions than it answers. Why do some individuals feel that it is morally good (or necessary) to obey authority, favor the ingroup, and maintain purity, whereas others are skeptical? (Perhaps parenting style is relevant after all.) Why do some people think that it is morally acceptable to judge or even mistreat others such as gay or lesbian couples or, only a generation ago, interracial couples because they dislike or feel disgusted by them, whereas others do not? Why does the present generation “care about violence toward many more classes of victims

5 0.94413215 1884 andrew gelman stats-2013-06-05-A story of fake-data checking being used to shoot down a flawed analysis at the Farm Credit Agency

Introduction: Austin Kelly writes: While reading your postings [or here] on the subject of testing your model by running fake data, I was reminded of the fact that I got one of these kinds of tests actually published in a GAO report back in the day. Reading your posts on Unz and political vs. economic discourse made me think of that work again. I thought I’d actually drop you a line on the subject. Back in 2003 GAO was asked to look at Farmer Mac, including a look at the Farm Credit Agency’s regulation of Farmer Mac. As the resident mortgage econometrician back then I was asked to look at FCA’s risk based capital stress test for Farmer Mac. The work was pretty easy. I found a lot of oddities, but the biggest one was that they were using a discrete choice set up (loan goes bad or doesn’t) instead of a hazard model (loan goes bad this period or survives to the next). Not necessarily a problem – lots of mortgage models run that way. But you have to be really careful with your independe

6 0.94139892 845 andrew gelman stats-2011-08-08-How adoption speed affects the abandonment of cultural tastes

7 0.94085789 639 andrew gelman stats-2011-03-31-Bayes: radical, liberal, or conservative?

8 0.93595719 1825 andrew gelman stats-2013-04-25-It’s binless! A program for computing normalizing functions

9 0.93408972 1379 andrew gelman stats-2012-06-14-Cool-ass signal processing using Gaussian processes (birthdays again)

10 0.93403375 1944 andrew gelman stats-2013-07-18-You’ll get a high Type S error rate if you use classical statistical methods to analyze data from underpowered studies

11 0.93239343 2145 andrew gelman stats-2013-12-24-Estimating and summarizing inference for hierarchical variance parameters when the number of groups is small

12 0.93218476 2057 andrew gelman stats-2013-10-10-Chris Chabris is irritated by Malcolm Gladwell

13 0.93170124 1538 andrew gelman stats-2012-10-17-Rust

14 0.93146539 1940 andrew gelman stats-2013-07-16-A poll that throws away data???

15 0.93102682 1392 andrew gelman stats-2012-06-26-Occam

16 0.92814434 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?

17 0.92753881 399 andrew gelman stats-2010-11-07-Challenges of experimental design; also another rant on the practice of mentioning the publication of an article but not naming its author

18 0.92613614 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

19 0.92606491 2118 andrew gelman stats-2013-11-30-???

20 0.92596257 1384 andrew gelman stats-2012-06-19-Slick time series decomposition of the birthdays data