
427 andrew gelman stats-2010-11-23-Bayesian adaptive methods for clinical trials


meta info for this blog

Source: html

Introduction: Scott Berry, Brad Carlin, Jack Lee, and Peter Muller recently came out with a book with the above title. The book packs a lot into its 280 pages and is fun to read as well (even if they do use the word “modalities” in their first paragraph, and later on they use the phrase “DIC criterion,” which upsets my tidy, logical mind). The book starts off fast on page 1 and never lets go. Clinical trials are a big part of statistics and it’s cool to see the topic taken seriously and being treated rigorously. (Here I’m not talking about empty mathematical rigor (or, should I say, “rigor”), so-called optimal designs and all that, but rather the rigor of applied statistics, mapping models to reality.) Also I have a few technical suggestions. 1. The authors fit a lot of models in Bugs, which is fine, but they go overboard on the WinBUGS thing. There’s WinBUGS, OpenBUGS, JAGS: they’re all Bugs. They recommend running Bugs from R using the clunky BRugs interface rather than the smoother bugs() function, which has good defaults and conveniently returns graphical summaries and convergence diagnostics.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Scott Berry, Brad Carlin, Jack Lee, and Peter Muller recently came out with a book with the above title. [sent-1, score-0.119]

2 The book packs a lot into its 280 pages and is fun to read as well (even if they do use the word “modalities” in their first paragraph, and later on they use the phrase “DIC criterion,” which upsets my tidy, logical mind). [sent-2, score-0.41]

3 The book starts off fast on page 1 and never lets go. [sent-3, score-0.232]

4 (Here I’m not talking about empty mathematical rigor (or, should I say, “rigor”), so-called optimal designs and all that, but rather the rigor of applied statistics, mapping models to reality.) [sent-5, score-0.488]

5 The authors fit a lot of models in Bugs, which is fine, but they go overboard on the WinBUGS thing. [sent-8, score-0.247]

6 There’s WinBUGS, OpenBUGS, JAGS: they’re all Bugs. They recommend running Bugs from R using the clunky BRugs interface rather than the smoother bugs() function, which has good defaults and conveniently returns graphical summaries and convergence diagnostics. [sent-9, score-0.28]
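
For readers unfamiliar with the workflow being contrasted here, below is a minimal sketch of calling Bugs through the bugs() function in the R2WinBUGS package. The model is the standard eight-schools example, not anything from the book, and the sketch assumes a local WinBUGS installation; the model file name is made up for illustration.

```r
# Sketch: running Bugs from R via bugs() (R2WinBUGS) rather than BRugs.
# Assumes WinBUGS is installed; "schools.bug" is a hypothetical file name.
library(R2WinBUGS)

# Write a small Bugs model file (standard hierarchical 8-schools model)
writeLines("
model {
  for (j in 1:J) {
    y[j] ~ dnorm(theta[j], tau.y[j])
    theta[j] ~ dnorm(mu, tau)
    tau.y[j] <- pow(sigma.y[j], -2)
  }
  mu ~ dnorm(0, 1.0E-4)
  tau <- pow(sigma, -2)
  sigma ~ dunif(0, 100)
}", "schools.bug")

schools <- list(J = 8,
                y = c(28, 8, -3, 7, -1, 1, 18, 12),
                sigma.y = c(15, 10, 16, 11, 9, 11, 10, 18))
inits <- function() list(theta = rnorm(8, 0, 10), mu = rnorm(1, 0, 10),
                         sigma = runif(1, 0, 10))

fit <- bugs(schools, inits,
            parameters.to.save = c("theta", "mu", "sigma"),
            model.file = "schools.bug", n.chains = 3, n.iter = 2000)

print(fit)  # posterior summaries plus R-hat convergence diagnostics
plot(fit)   # the default graphical summary mentioned above
```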

7 On page 61 they demonstrate an excellent graphical summary that reveals that, in a particular example, their posterior distribution is improper–or, strictly speaking, that the posterior depends strongly on the choice of an arbitrary truncation point in the prior distribution. [sent-12, score-0.258]
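
The truncation issue is easy to reproduce. The toy model below is my own construction, not the book’s page-61 example: the likelihood constrains only the sum theta1 + theta2, so the marginal posterior for theta1 is improper, and its apparent spread is set entirely by the truncation point U of the uniform prior.

```r
# Toy illustration of a posterior that depends on an arbitrary prior
# truncation point U (my construction, not the book's example).
truncation_check <- function(U, n_draws = 1e5) {
  theta1 <- runif(n_draws, -U, U)                # draws from truncated prior
  theta2 <- runif(n_draws, -U, U)
  w <- dnorm(0, mean = theta1 + theta2, sd = 1)  # likelihood of a single y = 0
  m <- sum(w * theta1) / sum(w)                  # importance-weighted mean
  sqrt(sum(w * (theta1 - m)^2) / sum(w))         # importance-weighted sd
}
set.seed(1)
sapply(c(10, 100, 1000), truncation_check)
# The posterior sd of theta1 grows roughly in proportion to U: the inference
# depends entirely on the truncation point, which is exactly the kind of
# problem a graphical summary like the one on page 61 reveals.
```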

8 They cover all of Bayesian inference in a couple chapters, which is fine–interested readers can learn the whole thing from the Carlin and Louis book–but in their haste they sometimes slip up. [sent-17, score-0.169]

9 There are differences, however, in the Bayesian and frequentist views of randomization. [sent-19, score-0.128]

10 In the latter, randomization serves as the basis for inference, whereas the basis for inference in the Bayesian approach is subjective probability, which does not require randomization. [sent-20, score-0.766]

11 First, randomization is a basis for frequentist inference, but it’s not fair to call it the basis. [sent-22, score-0.447]

12 There’s lots of frequentist inference for nonrandomized studies. [sent-23, score-0.369]

13 Second, I agree that the basis for Bayesian inference is probability but I don’t buy the “subjective” part (except to the extent that all science is subjective). [sent-24, score-0.323]

14 Greenland uses Bayesian methods and has thought a lot about bias and causal inference in practical medical settings. [sent-29, score-0.304]

15 Finally, the above paragraph is a bit odd in that “test-based estimation” and “semiparametric Cox partial likelihood” are nowhere defined in the book (or, at least, I couldn’t find them in the index). [sent-33, score-0.363]
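
For reference, the semiparametric Cox partial likelihood mentioned here does have a standard textbook definition (this is the usual formula, not a quote from the book): with event times t_i, event indicators delta_i, covariates x_i, and risk sets R(t_i) = {j : t_j >= t_i},

```latex
L(\beta) \;=\; \prod_{i:\,\delta_i = 1}
  \frac{\exp(x_i^{\top}\beta)}{\sum_{j \in R(t_i)} \exp(x_j^{\top}\beta)}
```

The baseline hazard drops out of the ratio, which is what makes the method semiparametric.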

16 The very last section covers subgroup analysis and then mentions multilevel models (the natural Bayesian approach to the problem) but then doesn’t really follow through. [sent-36, score-0.396]

17 That’s fine, but I’d like to see a worked example of a multilevel model for subgroup analysis, instead of just the reference to Hodges et al. [sent-38, score-0.282]
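
A worked example along those lines might look like the following sketch (toy simulated data of my own, not from Hodges et al.; a fully Bayesian version would be fit in Bugs with priors on the variance components, but lme4 shows the partial-pooling structure compactly).

```r
# Sketch: multilevel model for subgroup analysis. Subgroup-specific
# treatment effects are partially pooled via a random slope, instead of
# being estimated separately in each subgroup. Toy data, not from any book.
library(lme4)

set.seed(1)
J <- 8; n <- 50                                  # 8 subgroups, 100 patients each
subgroup <- factor(rep(1:J, each = 2 * n))
treat <- rep(rep(0:1, each = n), J)              # within-subgroup randomization
effect <- rnorm(J, mean = 1, sd = 0.5)           # true subgroup-level effects
y <- 2 + effect[as.integer(subgroup)] * treat + rnorm(2 * n * J)

# Random intercept and treatment slope by subgroup: subgroup effects are
# shrunk toward the overall treatment effect.
fit <- lmer(y ~ treat + (1 + treat | subgroup))
fixef(fit)["treat"]        # overall (pooled) treatment effect
coef(fit)$subgroup$treat   # partially pooled subgroup-specific effects
```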

18 I hope that everyone working on clinical trials reads it and that it has a large influence. [sent-41, score-0.175]

19 In particular, my own books don’t have anything to say on multiple-bias models, test-based estimation, semiparametric Cox partial likelihood, multilevel models for subgroup analysis, or various other topics I’m asking for elaboration on. [sent-43, score-0.755]

20 As it stands, Berry, Carlin, Lee, and Muller have packed a lot into 280 pages. [sent-44, score-0.143]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('semiparametric', 0.228), ('greenland', 0.227), ('carlin', 0.22), ('subgroup', 0.187), ('rigor', 0.187), ('bugs', 0.182), ('inference', 0.169), ('cox', 0.167), ('randomization', 0.165), ('basis', 0.154), ('muller', 0.152), ('bayesian', 0.138), ('partial', 0.131), ('frequentist', 0.128), ('winbugs', 0.128), ('subjective', 0.124), ('berry', 0.12), ('book', 0.119), ('models', 0.114), ('paragraph', 0.113), ('page', 0.113), ('estimation', 0.107), ('multilevel', 0.095), ('index', 0.09), ('lee', 0.09), ('trials', 0.089), ('clinical', 0.086), ('graphical', 0.08), ('fine', 0.078), ('pages', 0.076), ('openbugs', 0.076), ('brugs', 0.076), ('upsets', 0.076), ('packed', 0.076), ('lanes', 0.072), ('nonrandomized', 0.072), ('packs', 0.072), ('hodges', 0.072), ('digression', 0.068), ('smoother', 0.068), ('clunky', 0.068), ('sander', 0.068), ('bias', 0.068), ('lot', 0.067), ('likelihood', 0.067), ('drift', 0.066), ('advocate', 0.066), ('overboard', 0.066), ('summary', 0.065), ('conveniently', 0.064)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999964 427 andrew gelman stats-2010-11-23-Bayesian adaptive methods for clinical trials


2 0.20395204 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

Introduction: I sent Deborah Mayo a link to my paper with Cosma Shalizi on the philosophy of statistics, and she sent me the link to this conference which unfortunately already occurred. (It’s too bad, because I’d have liked to have been there.) I summarized my philosophy as follows: I am highly sympathetic to the approach of Lakatos (or of Popper, if you consider Lakatos’s “Popper_2” to be a reasonable simulation of the true Popperism), in that (a) I view statistical models as being built within theoretical structures, and (b) I see the checking and refutation of models to be a key part of scientific progress. A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and ex

3 0.18435887 2273 andrew gelman stats-2014-03-29-References (with code) for Bayesian hierarchical (multilevel) modeling and structural equation modeling

Introduction: A student writes: I am new to Bayesian methods. While I am reading your book, I have some questions for you. I am interested in doing Bayesian hierarchical (multi-level) linear regression (e.g., random-intercept model) and Bayesian structural equation modeling (SEM)—for causality. Do you happen to know if I could find some articles, where authors could provide data w/ R and/or BUGS codes that I could replicate them? My reply: For Bayesian hierarchical (multi-level) linear regression and causal inference, see my book with Jennifer Hill. For Bayesian structural equation modeling, try google and you’ll find some good stuff. Also, I recommend Stan (http://mc-stan.org/) rather than Bugs.

4 0.17886806 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

Introduction: From my new article in the journal Epidemiology: Sander Greenland and Charles Poole accept that P values are here to stay but recognize that some of their most common interpretations have problems. The casual view of the P value as posterior probability of the truth of the null hypothesis is false and not even close to valid under any reasonable model, yet this misunderstanding persists even in high-stakes settings (as discussed, for example, by Greenland in 2011). The formal view of the P value as a probability conditional on the null is mathematically correct but typically irrelevant to research goals (hence, the popularity of alternative—if wrong—interpretations). A Bayesian interpretation based on a spike-and-slab model makes little sense in applied contexts in epidemiology, political science, and other fields in which true effects are typically nonzero and bounded (thus violating both the “spike” and the “slab” parts of the model). I find Greenland and Poole’s perspective t

5 0.16585357 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)

Introduction: Leading theoretical statistician Larry Wasserman in 2008: Some of the greatest contributions of statistics to science involve adding additional randomness and leveraging that randomness. Examples are randomized experiments, permutation tests, cross-validation and data-splitting. These are unabashedly frequentist ideas and, while one can strain to fit them into a Bayesian framework, they don’t really have a place in Bayesian inference. The fact that Bayesian methods do not naturally accommodate such a powerful set of statistical ideas seems like a serious deficiency. To which I responded on the second-to-last paragraph of page 8 here. Larry Wasserman in 2013: Some people say that there is no role for randomization in Bayesian inference. In other words, the randomization mechanism plays no role in Bayes’ theorem. But this is not really true. Without randomization, we can indeed derive a posterior for theta but it is highly sensitive to the prior. This is just a restat

6 0.16037254 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

7 0.15627819 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

8 0.15533519 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

9 0.15351337 1151 andrew gelman stats-2012-02-03-Philosophy of Bayesian statistics: my reactions to Senn

10 0.15120605 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

11 0.1479632 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

12 0.14593926 1554 andrew gelman stats-2012-10-31-It not necessary that Bayesian methods conform to the likelihood principle

13 0.14272504 662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism

14 0.13943852 1469 andrew gelman stats-2012-08-25-Ways of knowing

15 0.13732418 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

16 0.13702489 1948 andrew gelman stats-2013-07-21-Bayes related

17 0.13527733 1188 andrew gelman stats-2012-02-28-Reference on longitudinal models?

18 0.13407129 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

19 0.12969729 317 andrew gelman stats-2010-10-04-Rob Kass on statistical pragmatism, and my reactions

20 0.12948962 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.242), (1, 0.145), (2, -0.086), (3, 0.076), (4, -0.06), (5, 0.016), (6, -0.01), (7, 0.015), (8, 0.077), (9, -0.03), (10, -0.009), (11, -0.057), (12, 0.008), (13, 0.004), (14, 0.117), (15, 0.02), (16, -0.005), (17, 0.039), (18, 0.018), (19, -0.01), (20, -0.015), (21, 0.028), (22, 0.051), (23, 0.017), (24, 0.028), (25, -0.004), (26, 0.01), (27, -0.004), (28, 0.003), (29, 0.06), (30, -0.068), (31, -0.003), (32, 0.027), (33, -0.015), (34, -0.039), (35, -0.009), (36, -0.008), (37, -0.06), (38, 0.019), (39, 0.042), (40, -0.01), (41, -0.04), (42, -0.013), (43, -0.001), (44, 0.005), (45, -0.033), (46, 0.0), (47, 0.051), (48, -0.031), (49, -0.018)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97997326 427 andrew gelman stats-2010-11-23-Bayesian adaptive methods for clinical trials


2 0.79822725 1469 andrew gelman stats-2012-08-25-Ways of knowing

Introduction: In this discussion from last month, computer science student and Judea Pearl collaborator Elias Barenboim expressed an attitude that hierarchical Bayesian methods might be fine in practice but that they lack theory, that Bayesians can’t succeed in toy problems. I posted a P.S. there which might not have been noticed so I will put it here: I now realize that there is some disagreement about what constitutes a “guarantee.” In one of his comments, Barenboim writes, “the assurance we have that the result must hold as long as the assumptions in the model are correct should be regarded as a guarantee.” In that sense, yes, we have guarantees! It is fundamental to Bayesian inference that the result must hold if the assumptions in the model are correct. We have lots of that in Bayesian Data Analysis (particularly in the first four chapters but implicitly elsewhere as well), and this is also covered in the classic books by Lindley, Jaynes, and others. This sort of guarantee is indeed p

3 0.79557425 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)

Introduction: Leading theoretical statistician Larry Wasserman in 2008: Some of the greatest contributions of statistics to science involve adding additional randomness and leveraging that randomness. Examples are randomized experiments, permutation tests, cross-validation and data-splitting. These are unabashedly frequentist ideas and, while one can strain to fit them into a Bayesian framework, they don’t really have a place in Bayesian inference. The fact that Bayesian methods do not naturally accommodate such a powerful set of statistical ideas seems like a serious deficiency. To which I responded on the second-to-last paragraph of page 8 here. Larry Wasserman in 2013: Some people say that there is no role for randomization in Bayesian inference. In other words, the randomization mechanism plays no role in Bayes’ theorem. But this is not really true. Without randomization, we can indeed derive a posterior for theta but it is highly sensitive to the prior. This is just a restat

4 0.79467934 1948 andrew gelman stats-2013-07-21-Bayes related

Introduction: Dave Decker writes: I’ve seen some Bayes related things recently that might make for interesting fodder on your blog. There are two books, teaching Bayesian analysis from a programming perspective. And also a “web application for data analysis using powerful Bayesian statistical methods.” I took a look. The first book is Think Bayes: Bayesian Statistics Made Simple, by Allen B. Downey. It’s super readable and, amazingly, has approximately zero overlap with Bayesian Data Analysis. Downey discusses lots of little problems in a conversational way. In some ways it’s like an old-style math stat textbook (although with a programming rather than mathematical flavor) in that the examples are designed for simplicity rather than realism. I like it! Our book already exists; it’s good to have something else for people to read, coming from an entirely different perspective. The second book is Probabilistic Programming and Bayesian Methods for Hackers, by Cameron Davidson-P

5 0.78925633 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

Introduction: From 2006: Eric Archer forwarded this document by Nick Freemantle, “The Reverend Bayes—was he really a prophet?”, in the Journal of the Royal Society of Medicine: Does [Bayes's] contribution merit the enthusiasms of his followers? Or is his legacy overhyped? . . . First, Bayesians appear to have an absolute right to disapprove of any conventional approach in statistics without offering a workable alternative—for example, a colleague recently stated at a meeting that ‘. . . it is OK to have multiple comparisons because Bayesians don’t believe in alpha spending’. . . . Second, Bayesians appear to build an army of straw men—everything it seems is different and better from a Bayesian perspective, although many of the concepts seem remarkably familiar. For example, a very well known Bayesian statistician recently surprised the audience with his discovery of the P value as a useful Bayesian statistic at a meeting in Birmingham. Third, Bayesians possess enormous enthusiasm fo

6 0.78042001 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

7 0.77932054 183 andrew gelman stats-2010-08-04-Bayesian models for simultaneous equation systems?

8 0.77601516 2273 andrew gelman stats-2014-03-29-References (with code) for Bayesian hierarchical (multilevel) modeling and structural equation modeling

9 0.77375418 117 andrew gelman stats-2010-06-29-Ya don’t know Bayes, Jack

10 0.773332 1571 andrew gelman stats-2012-11-09-The anti-Bayesian moment and its passing

11 0.76926625 1586 andrew gelman stats-2012-11-21-Readings for a two-week segment on Bayesian modeling?

12 0.76899332 1781 andrew gelman stats-2013-03-29-Another Feller theory

13 0.76874924 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

14 0.76756698 1438 andrew gelman stats-2012-07-31-What is a Bayesian?

15 0.75303996 1188 andrew gelman stats-2012-02-28-Reference on longitudinal models?

16 0.74861914 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

17 0.74826407 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

18 0.74787784 2182 andrew gelman stats-2014-01-22-Spell-checking example demonstrates key aspects of Bayesian data analysis

19 0.74539733 1157 andrew gelman stats-2012-02-07-Philosophy of Bayesian statistics: my reactions to Hendry

20 0.74401748 1912 andrew gelman stats-2013-06-24-Bayesian quality control?


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(8, 0.01), (15, 0.017), (16, 0.091), (24, 0.199), (27, 0.028), (30, 0.027), (36, 0.019), (53, 0.02), (58, 0.015), (65, 0.017), (72, 0.071), (82, 0.025), (86, 0.012), (89, 0.012), (99, 0.283)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97367883 427 andrew gelman stats-2010-11-23-Bayesian adaptive methods for clinical trials


2 0.97212875 2208 andrew gelman stats-2014-02-12-How to think about “identifiability” in Bayesian inference?

Introduction: We had some questions on the Stan list regarding identification. The topic arose because people were fitting models with improper posterior distributions, the kind of model where there’s a ridge in the likelihood and the parameters are not otherwise constrained. I tried to help by writing something on Bayesian identifiability for the Stan list. Then Ben Goodrich came along and cleaned up what I wrote. I think this might be of interest to many of you so I’ll repeat the discussion here. Here’s what I wrote: Identification is actually a tricky concept and is not so clearly defined. In the broadest sense, a Bayesian model is identified if the posterior distribution is proper. Then one can do Bayesian inference and that’s that. No need to require a finite variance or even a finite mean, all that’s needed is a finite integral of the probability distribution. That said, there are some reasons why a stronger definition can be useful: 1. Weak identification. Suppose that, wit

3 0.96960211 727 andrew gelman stats-2011-05-23-My new writing strategy

Introduction: In high school and college I would write long assignments using a series of outlines. I’d start with a single sheet where I’d write down the key phrases, connect them with lines, and then write more and more phrases until the page was filled up. Then I’d write a series of outlines, culminating in a sentence-level outline that was roughly one line per sentence of the paper. Then I’d write. It worked pretty well. Or horribly, depending on how you look at it. I was able to produce 10-page papers etc. on time. But I think it crippled my writing style for years. It’s taken me a long time to learn how to write directly–to explain clearly what I’ve done and why. And I’m still working on the “why” part. There’s a thin line between verbosity and terseness. I went to MIT and my roommate was a computer science major. He wrote me a word processor on his Atari 800, which did the job pretty well. For my senior thesis I broke down and used the computers in campus. I formatted it in tro

4 0.96066821 807 andrew gelman stats-2011-07-17-Macro causality

Introduction: David Backus writes: This is from my area of work, macroeconomics. The suggestion here is that the economy is growing slowly because consumers aren’t spending money. But how do we know it’s not the reverse: that consumers are spending less because the economy isn’t doing well. As a teacher, I can tell you that it’s almost impossible to get students to understand that the first statement isn’t obviously true. What I’d call the demand-side story (more spending leads to more output) is everywhere, including this piece, from the usually reliable David Leonhardt. This whole situation reminds me of the story of the village whose inhabitants support themselves by taking in each others’ laundry. I guess we’re rich enough in the U.S. that we can stay afloat for a few decades just buying things from each other? Regarding the causal question, I’d like to move away from the idea of “Does A causes B or does B cause A” and toward a more intervention-based framework (Rubin’s model for

5 0.9601844 1155 andrew gelman stats-2012-02-05-What is a prior distribution?

Introduction: Some recent blog discussion revealed some confusion that I’ll try to resolve here. I wrote that I’m not a big fan of subjective priors. Various commenters had difficulty with this point, and I think the issue was most clearly stated by Bill Jefferys, who wrote: It seems to me that your prior has to reflect your subjective information before you look at the data. How can it not? But this does not mean that the (subjective) prior that you choose is irrefutable; Surely a prior that reflects prior information just does not have to be inconsistent with that information. But that still leaves a range of priors that are consistent with it, the sort of priors that one would use in a sensitivity analysis, for example. I think I see what Bill is getting at. A prior represents your subjective belief, or some approximation to your subjective belief, even if it’s not perfect. That sounds reasonable but I don’t think it works. Or, at least, it often doesn’t work. Let’s start

6 0.95979297 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

7 0.95930904 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

8 0.95925832 2089 andrew gelman stats-2013-11-04-Shlemiel the Software Developer and Unknown Unknowns

9 0.95922315 1644 andrew gelman stats-2012-12-30-Fixed effects, followed by Bayes shrinkage?

10 0.95912492 1240 andrew gelman stats-2012-04-02-Blogads update

11 0.95892215 799 andrew gelman stats-2011-07-13-Hypothesis testing with multiple imputations

12 0.95873278 1167 andrew gelman stats-2012-02-14-Extra babies on Valentine’s Day, fewer on Halloween?

13 0.95860982 247 andrew gelman stats-2010-09-01-How does Bayes do it?

14 0.95784235 898 andrew gelman stats-2011-09-10-Fourteen magic words: an update

15 0.95754981 2358 andrew gelman stats-2014-06-03-Did you buy laundry detergent on their most recent trip to the store? Also comments on scientific publication and yet another suggestion to do a study that allows within-person comparisons

16 0.95752311 2201 andrew gelman stats-2014-02-06-Bootstrap averaging: Examples where it works and where it doesn’t work

17 0.95744127 1881 andrew gelman stats-2013-06-03-Boot

18 0.95723403 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

19 0.9566859 2086 andrew gelman stats-2013-11-03-How best to compare effects measured in two different time periods?

20 0.95655441 1966 andrew gelman stats-2013-08-03-Uncertainty in parameter estimates using multilevel models