andrew_gelman_stats andrew_gelman_stats-2011 andrew_gelman_stats-2011-1087 knowledge-graph by maker-knowledge-mining

1087 andrew gelman stats-2011-12-27-“Keeping things unridiculous”: Berger, O’Hagan, and me on weakly informative priors


meta info for this blog

Source: html

Introduction: Deborah Mayo sent me this quote from Jim Berger: Too often I see people pretending to be subjectivists, and then using “weakly informative” priors that the objective Bayesian community knows are terrible and will give ridiculous answers; subjectivism is then being used as a shield to hide ignorance. . . . In my own more provocative moments, I claim that the only true subjectivists are the objective Bayesians, because they refuse to use subjectivism as a shield against criticism of sloppy pseudo-Bayesian practice. This caught my attention because I’ve become more and more convinced that weakly informative priors are the right way to go in many different situations. I don’t think Berger was talking about me, though, as the above quote came from a publication in 2006, at which time I’d only started writing about weakly informative priors. Going back to Berger’s article, I see that his “weakly informative priors” remark was aimed at this article by Anthony O’Hagan, who w


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 In my own more provocative moments, I claim that the only true subjectivists are the objective Bayesians, because they refuse to use subjectivism as a shield against criticism of sloppy pseudo-Bayesian practice. [sent-5, score-0.582]

2 This caught my attention because I’ve become more and more convinced that weakly informative priors are the right way to go in many different situations. [sent-6, score-1.121]

3 I don’t think Berger was talking about me, though, as the above quote came from a publication in 2006, at which time I’d only started writing about weakly informative priors. [sent-7, score-0.892]

4 Going back to Berger’s article, I see that his “weakly informative priors” remark was aimed at this article by Anthony O’Hagan, who wrote: When prior information is weak, and the evidence from the data is relatively much stronger, then the data will dominate and . [sent-8, score-0.592]

5 a weakly informative prior can be expected to give essentially the same posterior distribution as a more carefully considered prior distribution. [sent-11, score-1.219]

6 The role of weakly informative priors is thus to provide approximations to a more meticulous Bayesian analysis. [sent-12, score-1.249]
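O’Hagan’s approximation argument is easy to check numerically in a conjugate setting. The sketch below is my own illustration with made-up numbers, not from the post: with a normal likelihood and known data standard deviation, a weakly informative prior and a tighter, more carefully considered prior give essentially the same posterior once the data are relatively strong.

```python
import math

def normal_posterior(prior_mean, prior_sd, data_mean, data_sd, n):
    """Conjugate normal model with known data sd: return (posterior mean, posterior sd)."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = n / data_sd**2
    post_prec = prior_prec + data_prec
    post_mean = (prior_mean * prior_prec + data_mean * data_prec) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# n = 100 observations with sample mean 0.5 and known sd 1 (hypothetical data).
weak = normal_posterior(0.0, 10.0, 0.5, 1.0, 100)     # weakly informative prior
careful = normal_posterior(1.0, 2.0, 0.5, 1.0, 100)   # "carefully considered" prior

print(weak)     # posterior mean ~0.500, posterior sd ~0.100
print(careful)  # posterior mean ~0.501, posterior sd ~0.100
```

With these numbers the two posterior means differ by about 0.001, two orders of magnitude smaller than the posterior standard deviation: the data dominate, exactly as O’Hagan says.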

7 The second reason why this is important is that the situation of weak prior information is one where it is particularly difficult to formulate a genuine prior distribution carefully. [sent-17, score-0.516]

8 For this reason, I [O'Hagan] use weakly informative priors liberally in my own Bayesian analyses. [sent-21, score-1.126]

9 Everything we do in practice is an approximation in exactly this sense: there is nothing special about using weakly informative priors in this way. [sent-30, score-1.141]

10 I pretty much agree with O’Hagan here except that I’d go even further and say that in many cases it’s not clear what the correct fully informative model would be. [sent-31, score-0.502]

11 Given the information available in any given problem, I think I would in many cases prefer a weakly informative prior to a full subjective prior even if I were able to construct such a thing. [sent-32, score-1.44]

12 Keeping things unridiculous is what regularization’s all about, and one challenge of regularization (as compared to pure subjective priors) is that the answer to the question, What is a good regularizing prior? [sent-34, score-0.332]

13 There’s a lot of interesting theory and practice relating to weakly informative priors for regularization, a lot out there that goes beyond the idea of noninformativity. [sent-36, score-1.071]

14 But, more and more, I’m coming across applied problems where I wouldn’t want to be noninformative even if I could, problems where some weak prior information regularizes my inferences and keeps them sane and under control. [sent-38, score-0.453]
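A standard example of inferences being kept "sane and under control": under complete separation in logistic regression the maximum likelihood estimate of the coefficient is infinite, while even a weak normal prior pulls the estimate back to something finite and reasonable. The sketch below is my own toy illustration (made-up data; the MAP estimate is located by crude grid search rather than a real optimizer), using a normal prior with scale 2.5 in the spirit of the weakly informative defaults discussed on this blog.

```python
import math

# Completely separated toy data: every positive x has y = 1, so the
# unpenalized MLE for the coefficient beta is at +infinity.
x = [-2.0, -1.0, 1.0, 2.0]
y = [0, 0, 1, 1]

def log_lik(beta):
    return sum(yi * beta * xi - math.log1p(math.exp(beta * xi))
               for xi, yi in zip(x, y))

def log_post(beta, prior_sd=2.5):
    # log-likelihood plus a weakly informative normal(0, 2.5) log-prior
    return log_lik(beta) - beta**2 / (2 * prior_sd**2)

grid = [i / 100.0 for i in range(0, 2001)]   # beta in [0, 20]
mle_like = max(grid, key=log_lik)   # likelihood keeps rising: piles up at the grid edge
map_est = max(grid, key=log_post)   # posterior mode stays finite and moderate

print(mle_like)  # 20.0 (the edge of the grid; the true MLE is infinite)
print(map_est)   # roughly 2
```

The unpenalized log-likelihood is monotone increasing in beta here, so any optimizer chases it off to infinity; the weak prior turns that into a well-behaved estimate of about 2.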

15 Finally, I think subjectivity and objectivity both are necessary parts of research. [sent-39, score-0.334]

16 Science is objective in that it aims for reproducible findings that exist independent of the observer, and it’s subjective in that the process of science involves many individual choices. [sent-40, score-0.356]

17 And I think the statistics I do (mostly, but not always, using Bayesian methods) is both objective and subjective in that way. [sent-41, score-0.359]

18 That said, I think I see where Berger is coming from: objectivity is a goal we are aiming for, whereas subjectivity is an unavoidable weakness that we try to minimize. [sent-42, score-0.453]

19 I think weakly informative priors are, or can be, as objective as many other statistical choices, such as assumptions of additivity, linearity, and symmetry, choices of functional forms such as in logistic regression, and so forth. [sent-43, score-1.465]

20 I agree with Berger that objectivity is a desirable goal, and I think we can get closer to that goal by stating our assumptions clearly enough that they can be defended or contradicted by scientific theory and data—a position to which I expect Deborah Mayo would agree as well. [sent-46, score-0.506]


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('weakly', 0.441), ('berger', 0.367), ('informative', 0.35), ('priors', 0.28), ('objectivity', 0.197), ('objective', 0.187), ('prior', 0.185), ('mayo', 0.144), ('shield', 0.12), ('subjectivists', 0.12), ('subjective', 0.119), ('regularization', 0.109), ('subjectivism', 0.109), ('hagan', 0.103), ('weak', 0.089), ('subjectivity', 0.084), ('approximations', 0.077), ('noninformative', 0.074), ('deborah', 0.072), ('bayesian', 0.071), ('approximation', 0.07), ('constraints', 0.065), ('goal', 0.064), ('give', 0.058), ('answers', 0.058), ('information', 0.057), ('choices', 0.056), ('unavoidable', 0.055), ('liberally', 0.055), ('te', 0.055), ('unridiculous', 0.055), ('fully', 0.054), ('think', 0.053), ('meticulous', 0.051), ('space', 0.051), ('role', 0.05), ('many', 0.05), ('pretending', 0.049), ('regularizing', 0.049), ('quote', 0.048), ('agree', 0.048), ('assumptions', 0.048), ('linearity', 0.048), ('sane', 0.048), ('contradicted', 0.048), ('analyses', 0.047), ('observer', 0.046), ('provocative', 0.046), ('demanding', 0.046), ('purity', 0.046)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999964 1087 andrew gelman stats-2011-12-27-“Keeping things unridiculous”: Berger, O’Hagan, and me on weakly informative priors


2 0.68321818 1092 andrew gelman stats-2011-12-29-More by Berger and me on weakly informative priors

Introduction: A couple days ago we discussed some remarks by Tony O’Hagan and Jim Berger on weakly informative priors. Jim followed up on Deborah Mayo’s blog with this: Objective Bayesian priors are often improper (i.e., have infinite total mass), but this is not a problem when they are developed correctly. But not every improper prior is satisfactory. For instance, the constant prior is known to be unsatisfactory in many situations. The ‘solution’ pseudo-Bayesians often use is to choose a constant prior over a large but bounded set (a ‘weakly informative’ prior), saying it is now proper and so all is well. This is not true; if the constant prior on the whole parameter space is bad, so will be the constant prior over the bounded set. The problem is, in part, that some people confuse proper priors with subjective priors and, having learned that true subjective priors are fine, incorrectly presume that weakly informative proper priors are fine. I have a few reactions to this: 1. I agree
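Berger’s mechanism can be seen numerically: truncating a flat prior to a large bounded set leaves the posterior essentially unchanged, so making the prior proper this way cannot fix whatever was wrong with the improper version. A small sketch of this (my own example, assuming a normal likelihood with a single observation and crude midpoint integration):

```python
import math

def posterior_mean_flat(y, B, steps=20000):
    """Posterior mean of theta under a flat prior on [-B, B],
    given a single observation y ~ normal(theta, 1)."""
    h = 2 * B / steps
    num = den = 0.0
    for i in range(steps):
        theta = -B + (i + 0.5) * h          # midpoint rule
        w = math.exp(-0.5 * (y - theta) ** 2)
        num += theta * w
        den += w
    return num / den

# Under the improper flat prior on the whole line, the posterior mean is exactly y.
m_10 = posterior_mean_flat(1.0, 10.0)
m_1000 = posterior_mean_flat(1.0, 1000.0)
print(m_10, m_1000)  # both ~1.0: truncation changes essentially nothing
```

Both bounded versions reproduce the improper-prior answer to numerical precision, which is Berger’s point: if the constant prior on the whole space gives a bad answer, the constant prior on a large bounded set gives the same bad answer.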

3 0.37379408 846 andrew gelman stats-2011-08-09-Default priors update?

Introduction: Ryan King writes: I was wondering if you have a brief comment on the state of the art for objective priors for hierarchical generalized linear models (generalized linear mixed models). I have been working off the papers in Bayesian Analysis (2006) 1, Number 3 (Browne and Draper, Kass and Natarajan, Gelman). There seems to have been continuous work for matching priors in linear mixed models, but GLMMs less so because of the lack of an analytic marginal likelihood for the variance components. There are a number of additional suggestions in the literature since 2006, but little robust practical guidance. I’m interested in both mean parameters and the variance components. I’m almost always concerned with logistic random effect models. I’m fascinated by the matching-priors idea of higher-order asymptotic improvements to maximum likelihood, and need to make some kind of defensible default recommendation. Given the massive scale of the datasets (genetics …), extensive sensitivity a

4 0.30387747 1155 andrew gelman stats-2012-02-05-What is a prior distribution?

Introduction: Some recent blog discussion revealed some confusion that I’ll try to resolve here. I wrote that I’m not a big fan of subjective priors. Various commenters had difficulty with this point, and I think the issue was most clearly stated by Bill Jefferys, who wrote: It seems to me that your prior has to reflect your subjective information before you look at the data. How can it not? But this does not mean that the (subjective) prior that you choose is irrefutable; surely a prior that reflects prior information just does not have to be inconsistent with that information. But that still leaves a range of priors that are consistent with it, the sort of priors that one would use in a sensitivity analysis, for example. I think I see what Bill is getting at. A prior represents your subjective belief, or some approximation to your subjective belief, even if it’s not perfect. That sounds reasonable but I don’t think it works. Or, at least, it often doesn’t work. Let’s start

5 0.29182723 1454 andrew gelman stats-2012-08-11-Weakly informative priors for Bayesian nonparametric models?

Introduction: Nathaniel Egwu writes: I am a PhD student working on machine learning using artificial neural networks . . . Do you have some recent publications related to how one can construct priors depending on the type of input data available for training? I intend to construct a prior distribution for a given trade-off parameter of my non model obtained through training a neural network. At this stage, my argument is due to the fact that Bayesian nonparameteric estimation offers some insight on how to proceed on this problem. As I’ve been writing here for awhile, I’ve been interested in weakly informative priors. But I have little experience with nonparametric models. Perhaps Aki Vehtari or David Dunson or some other expert on these models can discuss how to set them up with weakly informative priors? This sounds like it could be important to me.

6 0.25493556 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves

7 0.24700414 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

8 0.24689433 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

9 0.23704493 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

10 0.22112642 779 andrew gelman stats-2011-06-25-Avoiding boundary estimates using a prior distribution as regularization

11 0.22049421 1486 andrew gelman stats-2012-09-07-Prior distributions for regression coefficients

12 0.22007202 1941 andrew gelman stats-2013-07-16-Priors

13 0.21546216 1858 andrew gelman stats-2013-05-15-Reputations changeable, situations tolerable

14 0.21087085 1209 andrew gelman stats-2012-03-12-As a Bayesian I want scientists to report their data non-Bayesianly

15 0.19871613 801 andrew gelman stats-2011-07-13-On the half-Cauchy prior for a global scale parameter

16 0.18321989 2017 andrew gelman stats-2013-09-11-“Informative g-Priors for Logistic Regression”

17 0.18254226 1046 andrew gelman stats-2011-12-07-Neutral noninformative and informative conjugate beta and gamma prior distributions

18 0.17949212 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

19 0.17705907 1432 andrew gelman stats-2012-07-27-“Get off my lawn”-blogging

20 0.17609432 468 andrew gelman stats-2010-12-15-Weakly informative priors and imprecise probabilities


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.202), (1, 0.212), (2, -0.039), (3, 0.099), (4, -0.118), (5, -0.11), (6, 0.164), (7, 0.063), (8, -0.239), (9, 0.103), (10, 0.036), (11, 0.004), (12, 0.09), (13, 0.078), (14, 0.001), (15, -0.005), (16, 0.008), (17, 0.012), (18, 0.017), (19, 0.048), (20, -0.086), (21, -0.059), (22, -0.056), (23, 0.042), (24, -0.038), (25, 0.006), (26, 0.075), (27, -0.036), (28, -0.069), (29, 0.054), (30, 0.018), (31, -0.054), (32, 0.04), (33, -0.005), (34, -0.007), (35, 0.04), (36, 0.005), (37, 0.02), (38, 0.058), (39, 0.007), (40, -0.001), (41, -0.049), (42, 0.11), (43, 0.02), (44, 0.033), (45, 0.028), (46, -0.091), (47, -0.037), (48, -0.023), (49, 0.036)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97035867 1087 andrew gelman stats-2011-12-27-“Keeping things unridiculous”: Berger, O’Hagan, and me on weakly informative priors


2 0.96111774 1092 andrew gelman stats-2011-12-29-More by Berger and me on weakly informative priors


3 0.90407056 468 andrew gelman stats-2010-12-15-Weakly informative priors and imprecise probabilities

Introduction: Giorgio Corani writes: Your work on weakly informative priors is close to some research I [Corani] did (together with Prof. Zaffalon) in the last years using the so-called imprecise probabilities. The idea is to work with a set of priors (containing even very different priors); to update them via Bayes’ rule and then compute a set of posteriors. The set of priors is convex and the priors are Dirichlet (thus, conjugate to the likelihood); this allows to compute the set of posteriors exactly and efficiently. I [Corani] have used this approach for classification, extending naive Bayes and TAN to imprecise probabilities. Classifiers based on imprecise probabilities return more classes when they find that the most probable class is prior-dependent, i.e., if picking different priors in the convex set leads to identify different classes as the most probable one. Instead of returning a single (unreliable) prior-dependent class, credal classifiers in this case preserve reliability by
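The approach Corani describes can be sketched in its simplest special case, the imprecise beta model for a binomial proportion: update every prior Beta(s·t, s·(1−t)) with t ranging over [0, 1] and fixed prior strength s, and report the interval of posterior means rather than a single point estimate. (My own toy numbers; the full Dirichlet-set and credal-classifier machinery is more elaborate.)

```python
def posterior_mean_interval(successes, n, s=2.0):
    """Imprecise beta model: posterior mean of a binomial proportion under
    every Beta(s*t, s*(1-t)) prior, t in [0, 1], with prior strength s.
    The posterior mean (successes + s*t) / (n + s) is monotone in t,
    so the extremes t -> 0 and t -> 1 give the whole interval."""
    lo = successes / (n + s)        # most pessimistic prior (t = 0)
    hi = (successes + s) / (n + s)  # most optimistic prior (t = 1)
    return lo, hi

# 7 successes in 10 trials (hypothetical data), prior strength s = 2:
lo, hi = posterior_mean_interval(7, 10)
print(lo, hi)  # interval (7/12, 9/12), i.e. about (0.583, 0.750)
```

When this interval is wide relative to the decision at hand (i.e., the answer is prior-dependent), a credal classifier of the kind Corani describes returns a set of classes instead of committing to one.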

4 0.88951945 1858 andrew gelman stats-2013-05-15-Reputations changeable, situations tolerable

Introduction: David Kessler, Peter Hoff, and David Dunson write: Marginally specified priors for nonparametric Bayesian estimation Prior specification for nonparametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. Realistically, a statistician is unlikely to have informed opinions about all aspects of such a parameter, but may have real information about functionals of the parameter, such as the population mean or variance. This article proposes a new framework for nonparametric Bayes inference in which the prior distribution for a possibly infinite-dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a nonparametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard nonparametric prior distributions in common use, and inherit the large support of the standard priors upon which they are based. Ad

5 0.8888101 1155 andrew gelman stats-2012-02-05-What is a prior distribution?


6 0.85258371 2138 andrew gelman stats-2013-12-18-In Memoriam Dennis Lindley

7 0.84513015 801 andrew gelman stats-2011-07-13-On the half-Cauchy prior for a global scale parameter

8 0.84140021 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves

9 0.83586437 1046 andrew gelman stats-2011-12-07-Neutral noninformative and informative conjugate beta and gamma prior distributions

10 0.83093494 1454 andrew gelman stats-2012-08-11-Weakly informative priors for Bayesian nonparametric models?

11 0.82338238 846 andrew gelman stats-2011-08-09-Default priors update?

12 0.80555308 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

13 0.80338317 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

14 0.79775542 2017 andrew gelman stats-2013-09-11-“Informative g-Priors for Logistic Regression”

15 0.7672804 1486 andrew gelman stats-2012-09-07-Prior distributions for regression coefficients

16 0.72129577 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

17 0.71838737 1465 andrew gelman stats-2012-08-21-D. Buggin

18 0.71700865 1941 andrew gelman stats-2013-07-16-Priors

19 0.71649611 669 andrew gelman stats-2011-04-19-The mysterious Gamma (1.4, 0.4)

20 0.71393794 639 andrew gelman stats-2011-03-31-Bayes: radical, liberal, or conservative?


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(15, 0.035), (16, 0.046), (24, 0.338), (47, 0.018), (81, 0.01), (84, 0.043), (86, 0.032), (88, 0.083), (99, 0.234)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97053921 482 andrew gelman stats-2010-12-23-Capitalism as a form of voluntarism

Introduction: Interesting discussion by Alex Tabarrok (following up on an article by Rebecca Solnit) on the continuum between voluntarism (or, more generally, non-cash transactions) and markets with monetary exchange. I just have a few comments of my own: 1. Solnit writes of “the iceberg economy,” which she characterizes as “based on gift economies, barter, mutual aid, and giving without hope of return . . . the relations between friends, between family members, the activities of volunteers or those who have chosen their vocation on principle rather than for profit.” I just wonder whether “barter” completely fits in here. Maybe it depends on context. Sometimes barter is an informal way of keeping track (you help me and I help you), but in settings of low liquidity I could imagine barter being simply an inefficient way of performing an economic transaction. 2. I am no expert on capitalism but my impression is that it’s not just about “competition and selfishness” but also is related to the

2 0.9690907 1706 andrew gelman stats-2013-02-04-Too many MC’s not enough MIC’s, or What principles should govern attempts to summarize bivariate associations in large multivariate datasets?

Introduction: Justin Kinney writes: Since your blog has discussed the “maximal information coefficient” (MIC) of Reshef et al., I figured you might want to see the critique that Gurinder Atwal and I have posted. In short, Reshef et al.’s central claim that MIC is “equitable” is incorrect. We [Kinney and Atwal] offer mathematical proof that the definition of “equitability” Reshef et al. propose is unsatisfiable—no nontrivial dependence measure, including MIC, has this property. Replicating the simulations in their paper with modestly larger data sets validates this finding. The heuristic notion of equitability, however, can be formalized instead as a self-consistency condition closely related to the Data Processing Inequality. Mutual information satisfies this new definition of equitability but MIC does not. We therefore propose that simply estimating mutual information will, in many cases, provide the sort of dependence measure Reshef et al. seek. For background, here are my two p

3 0.96702874 743 andrew gelman stats-2011-06-03-An argument that can’t possibly make sense

Introduction: Tyler Cowen writes : Texas has begun to enforce [a law regarding parallel parking] only recently . . . Up until now, of course, there has been strong net mobility into the state of Texas, so was the previous lack of enforcement so bad? I care not at all about the direction in which people park their cars and I have no opinion on this law, but I have to raise an alarm at Cowen’s argument here. Let me strip it down to its basic form: 1. Until recently, state X had policy A. 2. Up until now, there has been strong net mobility into state X 3. Therefore, the presumption is that policy A is ok. In this particular case, I think we can safely assume that parallel parking regulations have had close to zero impact on the population flows into and out of Texas. More generally, I think logicians could poke some holes into the argument that 1 and 2 above imply 3. For one thing, you could apply this argument to any policy in any state that’s had positive net migration. Hai

4 0.96680224 1978 andrew gelman stats-2013-08-12-Fixing the race, ethnicity, and national origin questions on the U.S. Census

Introduction: In his new book, “What is Your Race? The Census and Our Flawed Efforts to Classify Americans,” former Census Bureau director Ken Prewitt recommends taking the race question off the decennial census: He recommends gradual changes, integrating the race and national origin questions while improving both. In particular, he would replace the main “race” question by a “race or origin” question, with the instruction to “Mark one or more” of the following boxes: “White,” “Black, African Am., or Negro,” “Hispanic, Latino, or Spanish origin,” “American Indian or Alaska Native,” “Asian”, “Native Hawaiian or Other Pacific Islander,” and “Some other race or origin.” Then the next question is to write in “specific race, origin, or enrolled or principal tribe.” Prewitt writes: His suggestion is to go with these questions in 2020 and 2030, then in 2040 “drop the race question and use only the national origin question.” He’s also relying on the American Community Survey to gather a lo

5 0.96624058 1092 andrew gelman stats-2011-12-29-More by Berger and me on weakly informative priors


6 0.9657957 938 andrew gelman stats-2011-10-03-Comparing prediction errors

7 0.96451342 241 andrew gelman stats-2010-08-29-Ethics and statistics in development research

8 0.96437109 1376 andrew gelman stats-2012-06-12-Simple graph WIN: the example of birthday frequencies

9 0.96397066 278 andrew gelman stats-2010-09-15-Advice that might make sense for individuals but is negative-sum overall

same-blog 10 0.96358681 1087 andrew gelman stats-2011-12-27-“Keeping things unridiculous”: Berger, O’Hagan, and me on weakly informative priors

11 0.96297169 1479 andrew gelman stats-2012-09-01-Mothers and Moms

12 0.96229053 2143 andrew gelman stats-2013-12-22-The kluges of today are the textbook solutions of tomorrow.

13 0.9622395 1455 andrew gelman stats-2012-08-12-Probabilistic screening to get an approximate self-weighted sample

14 0.96194148 1891 andrew gelman stats-2013-06-09-“Heterogeneity of variance in experimental studies: A challenge to conventional interpretations”

15 0.95964772 1999 andrew gelman stats-2013-08-27-Bayesian model averaging or fitting a larger model

16 0.95928609 38 andrew gelman stats-2010-05-18-Breastfeeding, infant hyperbilirubinemia, statistical graphics, and modern medicine

17 0.95902789 2229 andrew gelman stats-2014-02-28-God-leaf-tree

18 0.95872808 1787 andrew gelman stats-2013-04-04-Wanna be the next Tyler Cowen? It’s not as easy as you might think!

19 0.95824909 197 andrew gelman stats-2010-08-10-The last great essayist?

20 0.95664185 2017 andrew gelman stats-2013-09-11-“Informative g-Priors for Logistic Regression”