andrew_gelman_stats andrew_gelman_stats-2013 andrew_gelman_stats-2013-2138 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: So. Farewell then Dennis Lindley. You held the Hard line on Bayesianism When others Had doubts. And you share The name of a famous Paradox. What is your subjective Prior now? We can only Infer. R. A. Thribb (17 1/2) P.S.
sentIndex sentText sentNum sentScore
1 You held the Hard line on Bayesianism When others Had doubts. [sent-3, score-0.741]
wordName wordTfidf (topN-words)
[('bayesianism', 0.447), ('dennis', 0.441), ('held', 0.343), ('subjective', 0.336), ('famous', 0.282), ('share', 0.279), ('name', 0.248), ('line', 0.223), ('prior', 0.209), ('hard', 0.177), ('others', 0.175)]
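The wordTfidf list above ranks the post's distinctive words. The miner's actual pipeline is not shown in this dump; as an illustration only (the function name and toy corpus below are hypothetical), tf-idf top-word scores can be computed like this:

```python
import math
from collections import Counter

def top_tfidf(docs, top_n=5):
    """Score each word of the first document by tf-idf against the
    corpus and return the top_n (word, score) pairs, rounded."""
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)
    # Document frequency: number of documents containing each word.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    tokens = tokenized[0]
    tf = Counter(tokens)
    scores = {
        w: (tf[w] / len(tokens)) * math.log(n_docs / df[w])
        for w in tf
    }
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]
    return [(w, round(s, 3)) for w, s in ranked]

corpus = [
    "you held the hard line on bayesianism when others had doubts",
    "what is a prior distribution and how subjective is it",
    "weakly informative priors are the right way to go",
]
print(top_tfidf(corpus, top_n=3))
```

Words unique to the first document get the full idf weight log(3), while words shared across documents (like "the") are discounted, which is why topical terms such as "bayesianism" float to the top of lists like the one above.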
simIndex simValue blogId blogTitle
same-blog 1 1.0000001 2138 andrew gelman stats-2013-12-18-In Memoriam Dennis Lindley
2 0.2613638 1155 andrew gelman stats-2012-02-05-What is a prior distribution?
Introduction: Some recent blog discussion revealed some confusion that I’ll try to resolve here. I wrote that I’m not a big fan of subjective priors. Various commenters had difficulty with this point, and I think the issue was most clearly stated by Bill Jefferys, who wrote: It seems to me that your prior has to reflect your subjective information before you look at the data. How can it not? But this does not mean that the (subjective) prior that you choose is irrefutable. Surely a prior that reflects prior information just does not have to be inconsistent with that information. But that still leaves a range of priors that are consistent with it, the sort of priors that one would use in a sensitivity analysis, for example. I think I see what Bill is getting at. A prior represents your subjective belief, or some approximation to your subjective belief, even if it’s not perfect. That sounds reasonable but I don’t think it works. Or, at least, it often doesn’t work. Let’s start
3 0.16822989 476 andrew gelman stats-2010-12-19-Google’s word count statistics viewer
Introduction: Word count stats from the Google books database prove that Bayesianism is expanding faster than the universe. An n-gram is a tuple of n words.
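The n-gram definition in that excerpt (a contiguous tuple of n words) is simple enough to sketch directly; this is a generic illustration, not Google's actual extraction code:

```python
def ngrams(words, n):
    """Return all contiguous n-word tuples from a token list."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

tokens = "bayesianism is expanding faster than the universe".split()
print(ngrams(tokens, 2))  # first bigram: ('bayesianism', 'is')
```

A sequence of k tokens yields k - n + 1 n-grams, and counting these tuples across a dated corpus is what produces time series like the Google books viewer's.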
4 0.15491542 1092 andrew gelman stats-2011-12-29-More by Berger and me on weakly informative priors
Introduction: A couple days ago we discussed some remarks by Tony O’Hagan and Jim Berger on weakly informative priors. Jim followed up on Deborah Mayo’s blog with this: Objective Bayesian priors are often improper (i.e., have infinite total mass), but this is not a problem when they are developed correctly. But not every improper prior is satisfactory. For instance, the constant prior is known to be unsatisfactory in many situations. The ‘solution’ pseudo-Bayesians often use is to choose a constant prior over a large but bounded set (a ‘weakly informative’ prior), saying it is now proper and so all is well. This is not true; if the constant prior on the whole parameter space is bad, so will be the constant prior over the bounded set. The problem is, in part, that some people confuse proper priors with subjective priors and, having learned that true subjective priors are fine, incorrectly presume that weakly informative proper priors are fine. I have a few reactions to this: 1. I agree
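Berger's claim in that excerpt, that a constant prior over a large bounded set inherits the behavior of the improper constant prior, can be checked numerically. A minimal sketch, assuming a y ~ N(theta, sigma^2) likelihood and crude grid integration (all names hypothetical, not from the post):

```python
import math

def posterior_mean_flat_prior(y, sigma, bound, grid_size=20001):
    """Posterior mean of theta given y ~ N(theta, sigma^2) under a
    constant prior on [-bound, bound], via grid integration."""
    step = 2 * bound / (grid_size - 1)
    thetas = [-bound + i * step for i in range(grid_size)]
    lik = [math.exp(-0.5 * ((y - t) / sigma) ** 2) for t in thetas]
    z = sum(lik)
    return sum(t * l for t, l in zip(thetas, lik)) / z

# Enlarging the bound barely moves the answer: the bounded flat prior
# behaves like the improper flat prior (posterior mean ~= y).
for bound in (10, 100, 1000):
    print(bound, round(posterior_mean_flat_prior(2.0, 1.0, bound), 4))
```

That stability is exactly Berger's point: truncating a bad constant prior to a large bounded set makes it proper but does not change the inferences it produces.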
5 0.13607021 1151 andrew gelman stats-2012-02-03-Philosophy of Bayesian statistics: my reactions to Senn
Introduction: Continuing with my discussion of the articles in the special issue of the journal Rationality, Markets and Morals on the philosophy of Bayesian statistics: Stephen Senn, “You May Believe You Are a Bayesian But You Are Probably Wrong”: I agree with Senn’s comments on the impossibility of the de Finetti subjective Bayesian approach. As I wrote in 2008, if you could really construct a subjective prior you believe in, why not just look at the data and write down your subjective posterior. The immense practical difficulties with any serious system of inference render it absurd to think that it would be possible to just write down a probability distribution to represent uncertainty. I wish, however, that Senn would recognize my Bayesian approach (which is also that of John Carlin, Hal Stern, Don Rubin, and, I believe, others). De Finetti is no longer around, but we are! I have to admit that my own Bayesian views and practices have changed. In particular, I resonate wit
6 0.1224987 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability
7 0.11979546 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”
9 0.11548439 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes
10 0.10526865 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis
11 0.10331918 920 andrew gelman stats-2011-09-22-Top 10 blog obsessions
12 0.099492468 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes
13 0.098412097 1941 andrew gelman stats-2013-07-16-Priors
14 0.097282596 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo
15 0.096042201 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves
16 0.093303442 1334 andrew gelman stats-2012-05-21-Question 11 of my final exam for Design and Analysis of Sample Surveys
17 0.09068542 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence
18 0.090642184 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox
19 0.086316824 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors
20 0.08434128 1087 andrew gelman stats-2011-12-27-“Keeping things unridiculous”: Berger, O’Hagan, and me on weakly informative priors
topicId topicWeight
[(0, 0.061), (1, 0.059), (2, -0.012), (3, 0.056), (4, -0.057), (5, -0.068), (6, 0.077), (7, 0.03), (8, -0.102), (9, 0.027), (10, 0.001), (11, -0.018), (12, 0.033), (13, 0.034), (14, 0.0), (15, 0.007), (16, 0.011), (17, -0.015), (18, 0.025), (19, 0.014), (20, -0.019), (21, -0.01), (22, -0.022), (23, -0.007), (24, -0.005), (25, -0.023), (26, 0.054), (27, -0.055), (28, -0.037), (29, 0.007), (30, 0.033), (31, -0.007), (32, -0.025), (33, 0.004), (34, -0.028), (35, 0.031), (36, 0.009), (37, 0.028), (38, -0.039), (39, 0.013), (40, -0.022), (41, -0.051), (42, 0.038), (43, 0.01), (44, 0.017), (45, 0.025), (46, -0.03), (47, -0.015), (48, 0.019), (49, -0.014)]
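The simValue scores in these listings are consistent with a cosine-style similarity between feature vectors like the topic-weight vector above (a post compared with itself scores ~1.0). The dump does not show the miner's actual metric, so this is a sketch under that assumption:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

# A vector compared with itself scores ~1.0, matching the
# near-1 simValue on the same-blog rows.
doc = [0.061, 0.059, -0.012, 0.056, -0.057]
print(round(cosine_similarity(doc, doc), 4))
```

Small deviations from exactly 1.0 on self-comparisons (like the 0.9919042 below) would then reflect floating-point accumulation or a slightly different normalization in the tool.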
simIndex simValue blogId blogTitle
same-blog 1 0.9919042 2138 andrew gelman stats-2013-12-18-In Memoriam Dennis Lindley
2 0.82185751 1092 andrew gelman stats-2011-12-29-More by Berger and me on weakly informative priors
3 0.81573248 1155 andrew gelman stats-2012-02-05-What is a prior distribution?
4 1087 andrew gelman stats-2011-12-27-“Keeping things unridiculous”: Berger, O’Hagan, and me on weakly informative priors
Introduction: Deborah Mayo sent me this quote from Jim Berger: Too often I see people pretending to be subjectivists, and then using “weakly informative” priors that the objective Bayesian community knows are terrible and will give ridiculous answers; subjectivism is then being used as a shield to hide ignorance. . . . In my own more provocative moments, I claim that the only true subjectivists are the objective Bayesians, because they refuse to use subjectivism as a shield against criticism of sloppy pseudo-Bayesian practice. This caught my attention because I’ve become more and more convinced that weakly informative priors are the right way to go in many different situations. I don’t think Berger was talking about me , though, as the above quote came from a publication in 2006, at which time I’d only started writing about weakly informative priors. Going back to Berger’s article , I see that his “weakly informative priors” remark was aimed at this article by Anthony O’Hagan, who w
5 0.73527622 1858 andrew gelman stats-2013-05-15-Reputations changeable, situations tolerable
Introduction: David Kessler, Peter Hoff, and David Dunson write : Marginally specified priors for nonparametric Bayesian estimation Prior specification for nonparametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. Realistically, a statistician is unlikely to have informed opinions about all aspects of such a parameter, but may have real information about functionals of the parameter, such as the population mean or variance. This article proposes a new framework for nonparametric Bayes inference in which the prior distribution for a possibly infinite-dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a nonparametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard nonparametric prior distributions in common use, and inherit the large support of the standard priors upon which they are based. Ad
6 0.70600355 2017 andrew gelman stats-2013-09-11-“Informative g-Priors for Logistic Regression”
7 0.69634706 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves
9 0.69448167 468 andrew gelman stats-2010-12-15-Weakly informative priors and imprecise probabilities
10 0.69299173 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors
11 0.67834538 801 andrew gelman stats-2011-07-13-On the half-Cauchy prior for a global scale parameter
12 0.67227519 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence
13 0.66028452 1454 andrew gelman stats-2012-08-11-Weakly informative priors for Bayesian nonparametric models?
14 0.65681189 1130 andrew gelman stats-2012-01-20-Prior beliefs about locations of decision boundaries
15 0.65607446 639 andrew gelman stats-2011-03-31-Bayes: radical, liberal, or conservative?
16 0.63634992 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability
17 0.63176125 1941 andrew gelman stats-2013-07-16-Priors
18 0.62664133 1486 andrew gelman stats-2012-09-07-Prior distributions for regression coefficients
19 0.62068224 669 andrew gelman stats-2011-04-19-The mysterious Gamma (1.4, 0.4)
20 0.60861641 846 andrew gelman stats-2011-08-09-Default priors update?
topicId topicWeight
[(24, 0.331), (65, 0.153), (91, 0.283)]
simIndex simValue blogId blogTitle
same-blog 1 0.9724344 2138 andrew gelman stats-2013-12-18-In Memoriam Dennis Lindley
2 0.74609637 476 andrew gelman stats-2010-12-19-Google’s word count statistics viewer
3 0.71754599 613 andrew gelman stats-2011-03-15-Gay-married state senator shot down gay marriage
Introduction: This is pretty amazing.
4 0.71754599 712 andrew gelman stats-2011-05-14-The joys of working in the public domain
Introduction: Stan will make a total lifetime profit of $0, so we can’t be sued !
5 0.71754599 723 andrew gelman stats-2011-05-21-Literary blurb translation guide
Introduction: “Just like literature, only smaller.”
6 0.71754599 1242 andrew gelman stats-2012-04-03-Best lottery story ever
7 0.71754599 1252 andrew gelman stats-2012-04-08-Jagdish Bhagwati’s definition of feminist sincerity
8 0.71592122 59 andrew gelman stats-2010-05-30-Extended Binary Format Support for Mac OS X
9 0.69872802 471 andrew gelman stats-2010-12-17-Attractive models (and data) wanted for statistical art show.
10 0.69428593 1437 andrew gelman stats-2012-07-31-Paying survey respondents
11 0.68848938 1046 andrew gelman stats-2011-12-07-Neutral noninformative and informative conjugate beta and gamma prior distributions
12 0.68414307 2024 andrew gelman stats-2013-09-15-Swiss Jonah Lehrer update
13 0.6703521 240 andrew gelman stats-2010-08-29-ARM solutions
14 0.66199917 545 andrew gelman stats-2011-01-30-New innovations in spam
15 0.64930701 643 andrew gelman stats-2011-04-02-So-called Bayesian hypothesis testing is just as bad as regular hypothesis testing
16 0.64825946 373 andrew gelman stats-2010-10-27-It’s better than being forwarded the latest works of you-know-who
17 0.6411534 1063 andrew gelman stats-2011-12-16-Suspicious histogram bars
18 0.6354993 19 andrew gelman stats-2010-05-06-OK, so this is how I ended up working with three different guys named Matt
19 0.62662965 38 andrew gelman stats-2010-05-18-Breastfeeding, infant hyperbilirubinemia, statistical graphics, and modern medicine
20 0.61397827 2229 andrew gelman stats-2014-02-28-God-leaf-tree