andrew_gelman_stats-2010-244 knowledge-graph by maker-knowledge-mining

244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion


meta info for this blog

Source: html

Introduction: I sent a copy of my paper (coauthored with Cosma Shalizi) on Philosophy and the practice of Bayesian statistics in the social sciences to Richard Berk, who wrote: I read your paper this morning. I think we are pretty much on the same page about all models being wrong. I like very much the way you handle this in the paper. Yes, Newton’s work is wrong, but surely useful. I also like your twist on Bayesian methods. Makes good sense to me. Perhaps most important, your paper raises some difficult issues I have been trying to think more carefully about. 1. If the goal of a model is to be useful, surely we need to explore what “useful” means. At the very least, usefulness will depend on use. So a model that is useful for forecasting may or may not be useful for causal inference. 2. Usefulness will be a matter of degree. So that for each use we will need one or more metrics to represent how useful the model is. In what looks at first to be a simple example, if the use is forecasting,
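Berk's first two points, that usefulness depends on use and is a matter of degree, can be made concrete with a toy comparison (a sketch, not code from the post): two forecasters with identical mean squared error can differ sharply under an asymmetric loss, so the "useful" model depends on which metric the use implies.

```python
# Sketch: scoring the same forecasts under a quadratic loss (MSE) versus an
# asymmetric "linlin" loss that penalizes under-prediction twice as heavily
# as over-prediction. The data and models below are made up for illustration.

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def linlin(y_true, y_pred, under_cost=2.0, over_cost=1.0):
    total = 0.0
    for t, p in zip(y_true, y_pred):
        err = t - p
        total += under_cost * err if err > 0 else over_cost * (-err)
    return total / len(y_true)

y_true  = [10.0, 12.0,  9.0, 11.0]
model_a = [ 9.0, 11.0,  8.0, 10.0]  # consistently under-predicts by 1
model_b = [11.0, 13.0, 10.0, 12.0]  # consistently over-predicts by 1

# Identical MSE, different linlin loss: "usefulness will depend on use".
assert mse(y_true, model_a) == mse(y_true, model_b)
assert linlin(y_true, model_a) > linlin(y_true, model_b)
```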


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I sent a copy of my paper (coauthored with Cosma Shalizi) on Philosophy and the practice of Bayesian statistics in the social sciences to Richard Berk , who wrote: I read your paper this morning. [sent-1, score-0.239]

2 If the goal of a model is to be useful, surely we need to explore what “useful” means. [sent-9, score-0.306]

3 At the very least, usefulness will depend on use. [sent-10, score-0.362]

4 So a model that is useful for forecasting may or may not be useful for causal inference. [sent-11, score-1.023]

5 So that for each use we will need one or more metrics to represent how useful the model is. [sent-14, score-0.629]

6 In what looks at first to be a simple example, if the use is forecasting, forecasting accuracy by something like MSE may be a place to start. [sent-15, score-0.398]

7 But that will depend on one’s forecasting loss function, which might not be quadratic or even symmetric. [sent-16, score-0.434]

8 Other kinds of use imply a very different set of metrics — what is a good usefulness metric for causal inference, for instance? [sent-18, score-0.653]

9 It seems to me that your Bayesian approach is one of several good ways (and not mutually exclusive ways) of doing data analysis. [sent-20, score-0.421]

10 There are these days so many interesting ways that statisticians have been thinking about description that I suspect it will be a while (if ever) before we have a compelling and systematic way to think about the process. [sent-23, score-0.232]

11 I guess I am uneasy with your approach when it uses the same data to build and evaluate a model. [sent-26, score-0.416]

12 Along with Andreas Buja and Larry Shepp, we are working on appropriate methods for post-model-selection inference, given that current practice is just plain wrong and often very misleading. [sent-33, score-0.25]

13 Bottom line: what does one make of Bayesian output when the model involved has been tuned to the data? [sent-34, score-0.28]

14 We always talk about a model being “useful” but the concept is hard to quantify. [sent-36, score-0.192]

15 Regarding point #4, the use of the same data to build and evaluate the model is not particularly Bayesian. [sent-39, score-0.605]

16 I see what we do as an extension of non-Bayesian ideas such as chi^2 tests, residual plots, and exploratory data analysis–all of which, in different ways, are methods for assessing model fit using the data that were used to fit the model. [sent-40, score-0.563]

17 In any case, I agree that out-of-sample checks are vital to true statistical understanding. [sent-41, score-0.378]

18 To put it another way: I think you’re imagining that I’m proposing within-sample checks as an alternative to out-of-sample checking. [sent-42, score-0.511]

19 What I’m proposing is to do within-sample checks as an alternative to doing no checking at all , which unfortunately is the standard in much of the Bayesian world (abetted by the subjective-Bayes theory/ideology). [sent-44, score-0.437]

20 When a model passes a within-sample check, it doesn’t mean the model is correct. [sent-45, score-0.466]
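The within-sample checks discussed in sentences 18-20 are, in Gelman's framework, posterior predictive checks: simulate replicated datasets from the fitted model and compare a test statistic on the replications to its observed value. A minimal sketch, assuming a simple normal model and pretend posterior draws (in practice the draws would come from an actual sampler):

```python
# Minimal posterior-predictive-check sketch (assumed setup, not code from
# the post): draw replicated datasets y_rep from posterior draws of
# (mu, sigma) and compare T(y_rep) to T(y_obs).
import random

random.seed(1)
y_obs = [random.gauss(5.0, 2.0) for _ in range(50)]

# Stand-in posterior draws for (mu, sigma); normally produced by a sampler.
posterior_draws = [(random.gauss(5.0, 0.3), abs(random.gauss(2.0, 0.2)))
                   for _ in range(200)]

def T(y):                        # test statistic: the sample maximum
    return max(y)

# Posterior predictive p-value: Pr(T(y_rep) >= T(y_obs))
exceed = 0
for mu, sigma in posterior_draws:
    y_rep = [random.gauss(mu, sigma) for _ in range(len(y_obs))]
    if T(y_rep) >= T(y_obs):
        exceed += 1
p_value = exceed / len(posterior_draws)

# A p-value near 0 or 1 flags misfit; passing the check does not mean the
# model is correct, only that this statistic revealed no problem.
assert 0.0 <= p_value <= 1.0
```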


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('usefulness', 0.247), ('forecasting', 0.227), ('model', 0.192), ('useful', 0.186), ('checks', 0.183), ('bayesian', 0.177), ('larry', 0.165), ('proposing', 0.165), ('metrics', 0.157), ('ways', 0.138), ('bayes', 0.129), ('inference', 0.124), ('depend', 0.115), ('agree', 0.115), ('surely', 0.114), ('build', 0.108), ('data', 0.106), ('evaluate', 0.105), ('berk', 0.097), ('chi', 0.097), ('criminologists', 0.097), ('uneasy', 0.097), ('zhao', 0.097), ('description', 0.094), ('use', 0.094), ('liberty', 0.092), ('andreas', 0.092), ('hierarchy', 0.092), ('mutually', 0.092), ('quadratic', 0.092), ('alternative', 0.089), ('tuned', 0.088), ('working', 0.088), ('exclusive', 0.085), ('linda', 0.085), ('methods', 0.085), ('hierarchical', 0.084), ('passes', 0.082), ('newton', 0.082), ('paper', 0.081), ('vital', 0.08), ('multidimensional', 0.078), ('causal', 0.078), ('may', 0.077), ('practice', 0.077), ('metric', 0.077), ('coauthored', 0.075), ('regarding', 0.075), ('imagining', 0.074), ('residual', 0.074)]
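The word scores above come from a tf-idf weighting: terms frequent in this post but rare across the corpus score highest. A minimal sketch of the standard formulation on toy documents (the site's exact weighting and normalization are not documented here, so this is an assumed reconstruction):

```python
# Sketch of tf-idf scoring over a toy corpus: score(w) = tf(w) * log(N / df(w)).
import math
from collections import Counter

docs = [
    "model checking and model usefulness for forecasting".split(),
    "bayesian inference and prior distributions".split(),
    "forecasting accuracy and loss functions".split(),
]

def tfidf(doc, docs):
    n = len(docs)
    tf = Counter(doc)
    scores = {}
    for word, count in tf.items():
        df = sum(1 for d in docs if word in d)       # document frequency
        scores[word] = (count / len(doc)) * math.log(n / df)
    return scores

scores = tfidf(docs[0], docs)
top = sorted(scores, key=scores.get, reverse=True)

# "model" is repeated and corpus-rare, so it ranks highest; "and" appears
# in every document, so its idf (and score) is zero.
assert top[0] == "model"
assert scores["and"] == 0.0
```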

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999976 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion


2 0.23370667 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

Introduction: In response to my remarks on his online book, Think Bayes, Allen Downey wrote: I [Downey] have a question about one of your comments: My [Gelman's] main criticism with both books is that they talk a lot about inference but not so much about model building or model checking (recall the three steps of Bayesian data analysis). I think it’s ok for an introductory book to focus on inference, which of course is central to the data-analytic process—but I’d like them to at least mention that Bayesian ideas arise in model building and model checking as well. This sounds like something I agree with, and one of the things I tried to do in the book is to put modeling decisions front and center. But the word “modeling” is used in lots of ways, so I want to see if we are talking about the same thing. For example, in many chapters, I start with a simple model of the scenario, do some analysis, then check whether the model is good enough, and iterate. Here’s the discussion of modeling

3 0.23018208 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

Introduction: I’ve been writing a lot about my philosophy of Bayesian statistics and how it fits into Popper’s ideas about falsification and Kuhn’s ideas about scientific revolutions. Here’s my long, somewhat technical paper with Cosma Shalizi. Here’s our shorter overview for the volume on the philosophy of social science. Here’s my latest try (for an online symposium), focusing on the key issues. I’m pretty happy with my approach–the familiar idea that Bayesian data analysis iterates the three steps of model building, inference, and model checking–but it does have some unresolved (maybe unresolvable) problems. Here are a couple mentioned in the third of the above links. Consider a simple model with independent data y_1, y_2, .., y_10 ~ N(θ,σ^2), with a prior distribution θ ~ N(0,10^2) and σ known and taking on some value of approximately 10. Inference about μ is straightforward, as is model checking, whether based on graphs or numerical summaries such as the sample variance and skewn
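The simple model in the excerpt above (y_1, …, y_10 ~ N(θ, σ²) with prior θ ~ N(0, 10²) and σ known) has a closed-form conjugate posterior, which is why its inference is called straightforward. A quick sketch of the precision-weighted combination, with made-up data and σ taken to be exactly 10:

```python
# Conjugate normal-normal posterior (assumed illustrative data):
#   posterior precision = 1/tau^2 + n/sigma^2
#   posterior mean = (prior_mean/tau^2 + n*ybar/sigma^2) / posterior precision
n, sigma, tau = 10, 10.0, 10.0          # data sd, prior sd
prior_mean = 0.0
y = [12.0, -3.0, 8.0, 1.0, 15.0, -6.0, 4.0, 9.0, -1.0, 11.0]
ybar = sum(y) / n                        # = 5.0

post_prec = 1 / tau**2 + n / sigma**2    # 0.01 + 0.1 = 0.11
post_mean = (prior_mean / tau**2 + n * ybar / sigma**2) / post_prec
post_sd = post_prec ** -0.5

# With n/sigma^2 = 0.1 vs prior precision 0.01, the data dominate, so the
# posterior mean sits close to ybar, shrunk slightly toward 0.
assert abs(post_mean - 50 / 11) < 1e-9
```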

4 0.21897331 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

Introduction: In response to this article by Cosma Shalizi and myself on the philosophy of Bayesian statistics, David Hogg writes: I [Hogg] agree–even in physics and astronomy–that the models are not “True” in the God-like sense of being absolute reality (that is, I am not a realist); and I have argued (a philosophically very naive paper, but hey, I was new to all this) that for pretty fundamental reasons we could never arrive at the True (with a capital “T”) model of the Universe. The goal of inference is to find the “best” model, where “best” might have something to do with prediction, or explanation, or message length, or (horror!) our utility. Needless to say, most of my physics friends *are* realists, even in the face of “effective theories” as Newtonian mechanics is an effective theory of GR and GR is an effective theory of “quantum gravity” (this plays to your point, because if you think any theory is possibly an effective theory, how could you ever find Truth?). I also liked the i

5 0.21439707 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

Introduction: Robert Bell pointed me to this post by Brad De Long on Bayesian statistics, and then I also noticed this from Noah Smith, who wrote: My impression is that although the Bayesian/Frequentist debate is interesting and intellectually fun, there’s really not much “there” there… despite being so-hip-right-now, Bayesian is not the Statistical Jesus. I’m happy to see the discussion going in this direction. Twenty-five years ago or so, when I got into this biz, there were some serious anti-Bayesian attitudes floating around in mainstream statistics. Discussions in the journals sometimes devolved into debates of the form, “Bayesians: knaves or fools?”. You’d get all sorts of free-floating skepticism about any prior distribution at all, even while people were accepting without question (and doing theory on) logistic regressions, proportional hazards models, and all sorts of strong strong models. (In the subfield of survey sampling, various prominent researchers would refuse to mode

6 0.2102243 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

7 0.20786113 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

8 0.20502247 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

9 0.20415442 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

10 0.18786527 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)

11 0.18601079 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

12 0.18459964 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

13 0.18235895 1469 andrew gelman stats-2012-08-25-Ways of knowing

14 0.18103066 1395 andrew gelman stats-2012-06-27-Cross-validation (What is it good for?)

15 0.1799165 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

16 0.17140222 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

17 0.16948359 1554 andrew gelman stats-2012-10-31-It not necessary that Bayesian methods conform to the likelihood principle

18 0.16836092 1560 andrew gelman stats-2012-11-03-Statistical methods that work in some settings but not others

19 0.16687486 1948 andrew gelman stats-2013-07-21-Bayes related

20 0.16392802 811 andrew gelman stats-2011-07-20-Kind of Bayesian


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.317), (1, 0.223), (2, -0.093), (3, 0.079), (4, -0.084), (5, 0.005), (6, -0.079), (7, 0.023), (8, 0.103), (9, 0.007), (10, 0.021), (11, 0.006), (12, -0.054), (13, 0.001), (14, -0.006), (15, 0.027), (16, 0.043), (17, 0.013), (18, -0.028), (19, 0.051), (20, -0.022), (21, -0.018), (22, -0.03), (23, 0.006), (24, 0.001), (25, -0.013), (26, -0.009), (27, -0.03), (28, 0.013), (29, 0.035), (30, 0.008), (31, -0.023), (32, 0.01), (33, -0.026), (34, -0.01), (35, 0.007), (36, -0.006), (37, -0.018), (38, 0.002), (39, 0.013), (40, -0.005), (41, 0.023), (42, -0.014), (43, 0.038), (44, -0.012), (45, 0.005), (46, -0.019), (47, -0.028), (48, 0.003), (49, 0.01)]
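The LSI coordinates above place the post in a latent topic space obtained from a low-rank decomposition of the term-document matrix. A pure-Python sketch of the idea on a toy matrix (an assumed reconstruction, not the site's pipeline): power iteration on the document-document matrix AᵀA recovers the leading latent direction.

```python
# LSI sketch: project toy documents onto the dominant latent "topic"
# direction of a term-document matrix via power iteration on A^T A.
A = [[2.0, 0.0, 1.0],   # rows = terms, columns = documents (toy counts)
     [1.0, 1.0, 0.0],
     [0.0, 2.0, 1.0],
     [1.0, 0.0, 2.0]]

At = [list(row) for row in zip(*A)]      # rows of At are document vectors
# B = A^T A: 3x3 document-document similarity matrix
B = [[sum(a * b for a, b in zip(r1, r2)) for r2 in At] for r1 in At]

v = [1.0, 1.0, 1.0]
for _ in range(100):                     # power iteration, leading eigenvector
    w = [sum(b * x for b, x in zip(row, v)) for row in B]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# v now holds each document's weight on the dominant latent topic; since B
# is entrywise positive, the leading eigenvector is positive as well.
assert all(x > 0 for x in v)
```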

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98234683 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion


2 0.92644215 1469 andrew gelman stats-2012-08-25-Ways of knowing

Introduction: In this discussion from last month, computer science student and Judea Pearl collaborator Elias Barenboim expressed an attitude that hierarchical Bayesian methods might be fine in practice but that they lack theory, that Bayesians can’t succeed in toy problems. I posted a P.S. there which might not have been noticed so I will put it here: I now realize that there is some disagreement about what constitutes a “guarantee.” In one of his comments, Barenboim writes, “the assurance we have that the result must hold as long as the assumptions in the model are correct should be regarded as a guarantee.” In that sense, yes, we have guarantees! It is fundamental to Bayesian inference that the result must hold if the assumptions in the model are correct. We have lots of that in Bayesian Data Analysis (particularly in the first four chapters but implicitly elsewhere as well), and this is also covered in the classic books by Lindley, Jaynes, and others. This sort of guarantee is indeed p

3 0.91905355 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

Introduction: David Rohde writes: I have been thinking a lot lately about your Bayesian model checking approach. This is in part because I have been working on exploratory data analysis and wishing to avoid controversy and mathematical statistics we omitted model checking from our discussion. This is something that the refereeing process picked us up on and we ultimately added a critical discussion of null-hypothesis testing to our paper . The exploratory technique we discussed was essentially a 2D histogram approach, but we used Polya models as a formal model for the histogram. We are currently working on a new paper, and we are thinking through how or if we should do “confirmatory analysis” or model checking in the paper. What I find most admirable about your statistical work is that you clearly use the Bayesian approach to do useful applied statistical analysis. My own attempts at applied Bayesian analysis makes me greatly admire your applied successes. On the other hand it may be t

4 0.90959847 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor

Introduction: In my comments on David MacKay’s 2003 book on Bayesian inference, I wrote that I hate all the Occam-factor stuff that MacKay talks about, and I linked to this quote from Radford Neal: Sometimes a simple model will outperform a more complex model . . . Nevertheless, I believe that deliberately limiting the complexity of the model is not fruitful when the problem is evidently complex. Instead, if a simple model is found that outperforms some particular complex model, the appropriate response is to define a different complex model that captures whatever aspect of the problem led to the simple model performing well. MacKay replied as follows: When you said you disagree with me on Occam factors I think what you meant was that you agree with me on them. I’ve read your post on the topic and completely agreed with you (and Radford) that we should be using models the size of a house, models that we believe in, and that anyone who thinks it is a good idea to bias the model toward

5 0.9084602 811 andrew gelman stats-2011-07-20-Kind of Bayesian

Introduction: Astrophysicist Andrew Jaffe pointed me to this and discussion of my philosophy of statistics (which is, in turn, my rational reconstruction of the statistical practice of Bayesians such as Rubin and Jaynes). Jaffe’s summary is fair enough and I only disagree in a few points: 1. Jaffe writes: Subjective probability, at least the way it is actually used by practicing scientists, is a sort of “as-if” subjectivity — how would an agent reason if her beliefs were reflected in a certain set of probability distributions? This is why when I discuss probability I try to make the pedantic point that all probabilities are conditional, at least on some background prior information or context. I agree, and my problem with the usual procedures used for Bayesian model comparison and Bayesian model averaging is not that these approaches are subjective but that the particular models being considered don’t make sense. I’m thinking of the sorts of models that say the truth is either A or

6 0.90445089 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

7 0.90337467 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

8 0.90193391 1529 andrew gelman stats-2012-10-11-Bayesian brains?

9 0.89919192 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

10 0.89860004 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

11 0.88654262 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

12 0.88023514 1406 andrew gelman stats-2012-07-05-Xiao-Li Meng and Xianchao Xie rethink asymptotics

13 0.87925816 1510 andrew gelman stats-2012-09-25-Incoherence of Bayesian data analysis

14 0.87028342 614 andrew gelman stats-2011-03-15-Induction within a model, deductive inference for model evaluation

15 0.86939114 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes

16 0.86570555 1280 andrew gelman stats-2012-04-24-Non-Bayesian analysis of Bayesian agents?

17 0.86470449 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

18 0.86319625 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

19 0.86176705 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

20 0.85833502 1648 andrew gelman stats-2013-01-02-A important new survey of Bayesian predictive methods for model assessment, selection and comparison


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.016), (6, 0.012), (12, 0.095), (13, 0.012), (15, 0.071), (16, 0.096), (23, 0.013), (24, 0.166), (35, 0.012), (43, 0.011), (63, 0.017), (77, 0.013), (84, 0.012), (86, 0.023), (99, 0.329)]
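The LDA line above is a sparse document-topic vector: (topicId, topicWeight) pairs over what appears to be a 100-topic model, with near-zero topics omitted (so the listed weights sum to less than 1). A small helper (assumed format) to densify it and read off the dominant topics:

```python
# Densify a sparse (topic_id, weight) LDA vector and rank its topics.
sparse = [(2, 0.016), (6, 0.012), (12, 0.095), (13, 0.012), (15, 0.071),
          (16, 0.096), (23, 0.013), (24, 0.166), (35, 0.012), (43, 0.011),
          (63, 0.017), (77, 0.013), (84, 0.012), (86, 0.023), (99, 0.329)]

def densify(pairs, n_topics=100):
    dense = [0.0] * n_topics
    for topic_id, weight in pairs:
        dense[topic_id] = weight
    return dense

dense = densify(sparse)
top = sorted(range(len(dense)), key=dense.__getitem__, reverse=True)[:3]

# Topic 99 dominates this post, followed by topics 24 and 16.
assert top == [99, 24, 16]
```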

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97496623 189 andrew gelman stats-2010-08-06-Proposal for a moratorium on the use of the words “fashionable” and “trendy”

Introduction: Tyler Cowen links to an interesting article by Terry Teachout on David Mamet’s political conservatism. I don’t think of playwrights as gurus, but I do find it interesting to consider the political orientations of authors and celebrities . I have only one problem with Teachout’s thought-provoking article. He writes: As early as 2002 . . . Arguing that “the Western press [had] embraced antisemitism as the new black,” Mamet drew a sharp contrast between that trendy distaste for Jews and the harsh realities of daily life in Israel . . . In 2006, Mamet published a collection of essays called The Wicked Son: Anti-Semitism, Jewish Self-Hatred and the Jews that made the point even more bluntly. “The Jewish State,” he wrote, “has offered the Arab world peace since 1948; it has received war, and slaughter, and the rhetoric of annihilation.” He went on to argue that secularized Jews who “reject their birthright of ‘connection to the Divine’” succumb in time to a self-hatred tha

same-blog 2 0.96418977 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion


3 0.96261442 211 andrew gelman stats-2010-08-17-Deducer update

Introduction: A year ago we blogged about Ian Fellows’s R Gui called Deducer (oops, my bad, I meant to link to this ). Fellows sends in this update: Since version 0.1, I [Fellows] have added: 1. A nice plug-in interface, so that people can extend Deducer’s capability without leaving the comfort of R. (see: http://www.deducer.org/pmwiki/pmwiki.php?n=Main.Development ) 2. Several new dialogs. 3. A one-step installer for windows. 4. A plug-in package (DeducerExtras) which extends the scope of analyses covered. 5. A plotting GUI that can create anything from simple histograms to complex custom graphics. Deducer is designed to be a free easy to use alternative to proprietary data analysis software such as SPSS, JMP, and Minitab. It has a menu system to do common data manipulation and analysis tasks, and an excel-like spreadsheet in which to view and edit data frames. The goal of the project is two fold. Provide an intuitive interface so that non-technical users can learn and p

4 0.96161699 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

Introduction: In our recent discussion of modes of publication, Joseph Wilson wrote, “The single best reform science can make right now is to decouple publication from career advancement, thereby reducing the number of publications by an order of magnitude and then move to an entirely disjointed, informal, online free-for-all communication system for research results.” My first thought on this was: Sure, yeah, that makes sense. But then I got to thinking: what would it really mean to decouple publication from career advancement? This is too late for me—I’m middle-aged and have no career advancement in my future—but it got me thinking more carefully about the role of publication in the research process, and this seemed worth a blog (the simplest sort of publication available to me). However, somewhere between writing the above paragraphs and writing the blog entry, I forgot exactly what I was going to say! I guess I should’ve just typed it all in then. In the old days I just wouldn’t run this

5 0.96061605 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

Introduction: I discussed two problems: 1. An artificial scarcity applied to journal publication, a scarcity which I believe is being enforced based on a monetary principle of not wanting to reduce the value of publication. The problem is that journals don’t just spread information and improve communication, they also represent chits for hiring and promotion. I’d prefer to separate these two aspects of publication. To keep these functions tied together seems to me like a terrible mistake. It would be as if, instead of using dollar bills as currency, we were to just use paper , and then if the government kept paper artificially scarce to retain the value of money, so that we were reduced to scratching notes to each other on walls and tables. 2. The discontinuous way in which unpublished papers and submissions to journals are taken as highly suspect and requiring a strong justification of all methods and assumptions, but once a paper becomes published its conclusions are taken as true unless

6 0.95932978 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers

7 0.9589259 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

8 0.9583708 2177 andrew gelman stats-2014-01-19-“The British amateur who debunked the mathematics of happiness”

9 0.95832837 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

10 0.95824426 434 andrew gelman stats-2010-11-28-When Small Numbers Lead to Big Errors

11 0.95739603 2227 andrew gelman stats-2014-02-27-“What Can we Learn from the Many Labs Replication Project?”

12 0.95735073 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

13 0.95724499 2287 andrew gelman stats-2014-04-09-Advice: positive-sum, zero-sum, or negative-sum

14 0.95690519 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

15 0.95675886 481 andrew gelman stats-2010-12-22-The Jumpstart financial literacy survey and the different purposes of tests

16 0.95523262 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

17 0.95499736 1560 andrew gelman stats-2012-11-03-Statistical methods that work in some settings but not others

18 0.95338774 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

19 0.95335114 167 andrew gelman stats-2010-07-27-Why don’t more medical discoveries become cures?

20 0.95334882 2091 andrew gelman stats-2013-11-06-“Marginally significant”