andrew_gelman_stats andrew_gelman_stats-2011 andrew_gelman_stats-2011-662 knowledge-graph by maker-knowledge-mining

662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism


meta info for this blog

Source: html

Introduction: Rob Kass’s article on statistical pragmatism is scheduled to appear in Statistical Science along with some discussions. Here are my comments. I agree with Rob Kass’s point that we can and should make use of statistical methods developed under different philosophies, and I am happy to take the opportunity to elaborate on some of his arguments. I’ll discuss the following: - Foundations of probability - Confidence intervals and hypothesis tests - Sampling - Subjectivity and belief - Different schools of statistics Foundations of probability. Kass describes probability theory as anchored upon physical randomization (coin flips, die rolls and the like) but being useful more generally as a mathematical model. I completely agree but would also add another anchoring point: calibration. Calibration of probability assessments is an objective, not subjective process, although some subjectivity (or scientific judgment) is necessarily involved in the choice of events used in the calibration.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I agree with Rob Kass’s point that we can and should make use of statistical methods developed under different philosophies, and I am happy to take the opportunity to elaborate on some of his arguments. [sent-3, score-0.324]

2 I’ll discuss the following: - Foundations of probability - Confidence intervals and hypothesis tests - Sampling - Subjectivity and belief - Different schools of statistics Foundations of probability. [sent-4, score-0.543]

3 Kass describes probability theory as anchored upon physical randomization (coin flips, die rolls and the like) but being useful more generally as a mathematical model. [sent-5, score-0.447]

4 Calibration of probability assessments is an objective, not subjective process, although some subjectivity (or scientific judgment) is necessarily involved in the choice of events used in the calibration. [sent-7, score-0.495]

5 In that way, Bayesian probability calibration is closely connected to frequentist probability statements, in that both are conditional on “reference sets” of comparable events. [sent-8, score-0.458]
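The calibration Gelman has in mind can be checked empirically: bin the probability forecasts, then compare each bin's average forecast with the observed frequency of the event in that bin (the "reference set" here is just the forecasts falling in the same bin). A minimal sketch, assuming a hypothetical helper `calibration_table` and simulated forecasts; none of this is from Kass's article or Gelman's comment:

```python
import numpy as np

def calibration_table(p_pred, outcomes, n_bins=10):
    """Bin probability forecasts and compare each bin's mean forecast
    with the empirical frequency of the event in that bin."""
    p_pred = np.asarray(p_pred, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        mask = (p_pred >= lo) & (p_pred < hi)
        if i == n_bins - 1:          # include the right endpoint in the last bin
            mask |= (p_pred == hi)
        if mask.any():
            # (mean forecast, observed frequency, count) for this bin
            rows.append((p_pred[mask].mean(), outcomes[mask].mean(), int(mask.sum())))
    return rows

# Perfectly calibrated forecasts: events occur with exactly the stated probability.
rng = np.random.default_rng(0)
p = rng.uniform(size=100_000)
y = (rng.uniform(size=100_000) < p).astype(float)
for f, obs, n in calibration_table(p, y, n_bins=5):
    print(f"forecast {f:.2f}  observed {obs:.2f}  n={n}")
```

For well-calibrated forecasts the two columns agree up to sampling noise; systematic gaps between them are exactly the kind of objective discrepancy the calibration argument appeals to.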

6 I agree with Kass that confidence and statistical significance are “valuable inferential tools.” [sent-11, score-0.411]

7 In the Neyman-Pearson theory of inference, confidence and statistical significance are two sides of the same coin, with a confidence interval being the set of parameter values not rejected by a significance test. [sent-13, score-0.665]
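This duality can be sketched numerically for the simplest case, a normal mean with known standard deviation: scan candidate null values theta0, keep the ones a two-sided z-test fails to reject, and the result matches the textbook interval. The function name and grid-scan approach below are illustrative assumptions, not anything from the article:

```python
import numpy as np
from statistics import NormalDist

def ci_by_test_inversion(y, sigma, alpha=0.05):
    """CI for a normal mean with known sigma, formed as the set of
    null values theta0 NOT rejected by a two-sided z-test at level alpha."""
    n = len(y)
    ybar = float(np.mean(y))
    se = sigma / np.sqrt(n)
    z_crit = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    # Scan a fine grid of candidate theta0 values; keep the accepted ones.
    grid = np.linspace(ybar - 5 * se, ybar + 5 * se, 10_001)
    accepted = grid[np.abs(ybar - grid) / se <= z_crit]
    return float(accepted.min()), float(accepted.max())

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.0, size=100)
lo, hi = ci_by_test_inversion(y, sigma=1.0)
# Agrees, up to grid resolution, with ybar +/- 1.96 * sigma / sqrt(n).
print(round(lo, 3), round(hi, 3))
```

The grid scan is deliberately naive; its point is only to make the "set of values not rejected" definition concrete.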

8 In a modern Bayesian approach, confidence intervals and hypothesis testing are both important but are not isomorphic; they represent two different steps of inference. [sent-15, score-0.53]

9 Kass discusses the role of sampling as a model for understanding statistical inference. [sent-21, score-0.305]

10 Ultimately, sample is just another word for subset, and in both Bayesian and classical inference, appropriate generalization from sample to population depends on a model for the sampling or selection process. [sent-27, score-0.287]

11 I have no problem with Kass’s use of sampling as a framework for inference, and I think this will work even better if he emphasizes the generalization from real samples to real populations–not just mathematical constructs–that are central to so much of our applied inferences. [sent-28, score-0.465]

12 The only two statements in Kass’s article that I clearly disagree with are the following two claims: “the only solid foundation for Bayesianism is subjective,” and “the most fundamental belief of any scientist is that the theoretical and real worlds are aligned.” [sent-30, score-0.343]

13 Claims of the subjectivity of Bayesian inference have been much debated, and I am under no illusion that I can resolve them here. [sent-32, score-0.367]

14 To put it another way, I will accept the idea of subjective Bayesianism when this same subjectivity is acknowledged for other methods of inference. [sent-36, score-0.349]

15 I agree with Kass that scientists and statisticians can and should feel free to make assumptions without falling into a “solipsistic quagmire.” [sent-38, score-0.336]

16 Finally, I am surprised to see Kass write that scientists believe that the theoretical and real worlds are aligned. [sent-39, score-0.222]

17 It is from acknowledging the discrepancies between these worlds that we can (a) feel free to make assumptions without being paralyzed by fear of making mistakes, and (b) feel free to check the fit of our models (those hypothesis tests again!) [sent-40, score-0.682]

18 I assume that Kass is using the word “aligned” in a loose sense, to imply that scientists believe that their models are appropriate to reality even if not fully correct. [sent-43, score-0.24]

19 Often in my own applied work I have used models that have clear flaws, models that are at best “phenomenological” in the sense of fitting the data rather than corresponding to underlying processes of interest–and often such models don’t fit the data so well either. [sent-45, score-0.503]

20 Ideas of sampling, inference, and model checking are important in many different statistical traditions and we are lucky to have so many different ideas on which to draw for inspiration in our applied and methodological research. [sent-50, score-0.361]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('kass', 0.554), ('subjectivity', 0.237), ('confidence', 0.171), ('sampling', 0.155), ('rob', 0.15), ('probability', 0.146), ('inference', 0.13), ('models', 0.128), ('statements', 0.125), ('hypothesis', 0.124), ('assumptions', 0.112), ('subjective', 0.112), ('intervals', 0.109), ('calibration', 0.108), ('bayesian', 0.106), ('worlds', 0.106), ('anchored', 0.096), ('significance', 0.091), ('statistical', 0.089), ('coin', 0.081), ('physical', 0.079), ('randomization', 0.074), ('bayesianism', 0.074), ('different', 0.071), ('generalization', 0.071), ('applied', 0.069), ('foundations', 0.066), ('environmental', 0.065), ('repeat', 0.061), ('model', 0.061), ('agree', 0.06), ('discuss', 0.058), ('real', 0.058), ('scientists', 0.058), ('frequentist', 0.058), ('free', 0.056), ('steps', 0.055), ('even', 0.054), ('developed', 0.054), ('belief', 0.054), ('theory', 0.052), ('merely', 0.052), ('schools', 0.052), ('reference', 0.052), ('flips', 0.051), ('intolerant', 0.051), ('isomorphic', 0.051), ('make', 0.05), ('aspects', 0.05), ('fit', 0.05)]
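Similarity scores like the ones below typically come from cosine similarity between tf-idf weight vectors such as the word list above. The page does not specify its exact tokenization or weighting scheme, so the following is only a hedged, self-contained sketch of the general technique:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Turn tokenized documents into sparse tf-idf vectors (dicts)."""
    n = len(docs)
    df = Counter(w for doc in docs for w in set(doc))  # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)  # term frequency within this document
        vecs.append({w: c * math.log(n / df[w]) for w, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (word -> weight dicts)."""
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus: documents sharing a distinctive term score higher than disjoint ones.
docs = [
    ["kass", "calibration", "probability"],
    ["kass", "sampling", "inference"],
    ["lda", "topic", "model"],
]
vecs = tfidf_vectors(docs)
print(round(cosine(vecs[0], vecs[1]), 3), cosine(vecs[0], vecs[2]))
```

A real pipeline (e.g. scikit-learn's `TfidfVectorizer`) adds smoothing and normalization details, but the ranking idea is the same: posts sharing high-idf words such as “kass” and “calibration” score as most similar.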

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism


2 0.79483277 317 andrew gelman stats-2010-10-04-Rob Kass on statistical pragmatism, and my reactions

Introduction: Rob Kass writes : Statistics has moved beyond the frequentist-Bayesian controversies of the past. Where does this leave our ability to interpret results? I [Kass] suggest that a philosophy compatible with statistical practice, labeled here statistical pragmatism, serves as a foundation for inference. Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with observed data. I argue that introductory courses often mis-characterize the process of statistical inference and I propose an alternative “big picture” depiction. In my comments , I pretty much agree with everything Rob says, with a few points of elaboration: Kass describes probability theory as anchored upon physical randomization (coin flips, die rolls and the like) but being useful more generally as a mathematical model. I completely agree but would also add another anchoring point: calibration. Calibration of probability assessments is an objective, not subjective process.

3 0.20460956 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

Introduction: Robert Bell pointed me to this post by Brad De Long on Bayesian statistics, and then I also noticed this from Noah Smith, who wrote: My impression is that although the Bayesian/Frequentist debate is interesting and intellectually fun, there’s really not much “there” there… despite being so-hip-right-now, Bayesian is not the Statistical Jesus. I’m happy to see the discussion going in this direction. Twenty-five years ago or so, when I got into this biz, there were some serious anti-Bayesian attitudes floating around in mainstream statistics. Discussions in the journals sometimes devolved into debates of the form, “Bayesians: knaves or fools?”. You’d get all sorts of free-floating skepticism about any prior distribution at all, even while people were accepting without question (and doing theory on) logistic regressions, proportional hazards models, and all sorts of strong strong models. (In the subfield of survey sampling, various prominent researchers would refuse to mode

4 0.20342052 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

Introduction: I sent Deborah Mayo a link to my paper with Cosma Shalizi on the philosophy of statistics, and she sent me the link to this conference which unfortunately already occurred. (It’s too bad, because I’d have liked to have been there.) I summarized my philosophy as follows: I am highly sympathetic to the approach of Lakatos (or of Popper, if you consider Lakatos’s “Popper_2″ to be a reasonable simulation of the true Popperism), in that (a) I view statistical models as being built within theoretical structures, and (b) I see the checking and refutation of models to be a key part of scientific progress. A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and ex

5 0.19331534 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

Introduction: Prasanta Bandyopadhyay and Gordon Brittan write : We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are ‘dogmatic’. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical, the second fails to provide a non-question-begging account of ‘same background information’. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs. I have not read their paper in detai

6 0.18984312 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

7 0.18892057 774 andrew gelman stats-2011-06-20-The pervasive twoishness of statistics; in particular, the “sampling distribution” and the “likelihood” are two different models, and that’s a good thing

8 0.18876445 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

9 0.1860294 870 andrew gelman stats-2011-08-25-Why it doesn’t make sense in general to form confidence intervals by inverting hypothesis tests

10 0.18451029 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

11 0.18363446 1913 andrew gelman stats-2013-06-24-Why it doesn’t make sense in general to form confidence intervals by inverting hypothesis tests

12 0.18188307 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

13 0.17928679 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

14 0.1787506 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

15 0.17189571 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

16 0.17131762 1469 andrew gelman stats-2012-08-25-Ways of knowing

17 0.1667586 811 andrew gelman stats-2011-07-20-Kind of Bayesian

18 0.16154593 480 andrew gelman stats-2010-12-21-Instead of “confidence interval,” let’s say “uncertainty interval”

19 0.1608772 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

20 0.15980619 1095 andrew gelman stats-2012-01-01-Martin and Liu: Probabilistic inference based on consistency of model with data


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.283), (1, 0.209), (2, -0.072), (3, -0.026), (4, -0.136), (5, 0.001), (6, -0.113), (7, 0.096), (8, 0.07), (9, -0.12), (10, -0.083), (11, -0.015), (12, -0.036), (13, -0.03), (14, -0.046), (15, -0.019), (16, -0.025), (17, -0.036), (18, 0.002), (19, -0.056), (20, 0.045), (21, -0.046), (22, 0.003), (23, 0.03), (24, 0.034), (25, -0.028), (26, 0.01), (27, 0.01), (28, -0.042), (29, 0.056), (30, -0.028), (31, -0.043), (32, -0.018), (33, 0.005), (34, -0.062), (35, 0.025), (36, 0.014), (37, 0.039), (38, -0.007), (39, 0.055), (40, 0.026), (41, -0.036), (42, 0.1), (43, -0.026), (44, 0.009), (45, -0.029), (46, -0.07), (47, 0.024), (48, -0.058), (49, -0.055)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96897227 662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism


2 0.95626658 317 andrew gelman stats-2010-10-04-Rob Kass on statistical pragmatism, and my reactions


3 0.84470946 1095 andrew gelman stats-2012-01-01-Martin and Liu: Probabilistic inference based on consistency of model with data

Introduction: What better way to start then new year than with some hard-core statistical theory? Ryan Martin and Chuanhai Liu send along a new paper on inferential models: Probability is a useful tool for describing uncertainty, so it is natural to strive for a system of statistical inference based on probabilities for or against various hypotheses. But existing probabilistic inference methods struggle to provide a meaningful interpretation of the probabilities across experiments in sufficient generality. In this paper we further develop a promising new approach based on what are called inferential models (IMs). The fundamental idea behind IMs is that there is an unobservable auxiliary variable that itself describes the inherent uncertainty about the parameter of interest, and that posterior probabilistic inference can be accomplished by predicting this unobserved quantity. We describe a simple and intuitive three-step construction of a random set of candidate parameter values, each being co

4 0.78518057 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo


5 0.75417316 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)

Introduction: Leading theoretical statistician Larry Wassserman in 2008 : Some of the greatest contributions of statistics to science involve adding additional randomness and leveraging that randomness. Examples are randomized experiments, permutation tests, cross-validation and data-splitting. These are unabashedly frequentist ideas and, while one can strain to fit them into a Bayesian framework, they don’t really have a place in Bayesian inference. The fact that Bayesian methods do not naturally accommodate such a powerful set of statistical ideas seems like a serious deficiency. To which I responded on the second-to-last paragraph of page 8 here . Larry Wasserman in 2013 : Some people say that there is no role for randomization in Bayesian inference. In other words, the randomization mechanism plays no role in Bayes’ theorem. But this is not really true. Without randomization, we can indeed derive a posterior for theta but it is highly sensitive to the prior. This is just a restat

6 0.74746293 1165 andrew gelman stats-2012-02-13-Philosophy of Bayesian statistics: my reactions to Wasserman

7 0.73397982 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

8 0.72599685 1572 andrew gelman stats-2012-11-10-I don’t like this cartoon

9 0.71484065 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

10 0.71351409 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

11 0.71333838 1469 andrew gelman stats-2012-08-25-Ways of knowing

12 0.70402288 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

13 0.70367497 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

14 0.69394779 811 andrew gelman stats-2011-07-20-Kind of Bayesian

15 0.6914109 2263 andrew gelman stats-2014-03-24-Empirical implications of Empirical Implications of Theoretical Models

16 0.68883246 2027 andrew gelman stats-2013-09-17-Christian Robert on the Jeffreys-Lindley paradox; more generally, it’s good news when philosophical arguments can be transformed into technical modeling issues

17 0.68735468 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes

18 0.68374449 1355 andrew gelman stats-2012-05-31-Lindley’s paradox

19 0.68283594 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

20 0.68122464 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(8, 0.083), (9, 0.024), (15, 0.026), (16, 0.076), (17, 0.018), (18, 0.013), (20, 0.025), (21, 0.017), (24, 0.176), (35, 0.034), (84, 0.027), (86, 0.054), (99, 0.287)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98355305 317 andrew gelman stats-2010-10-04-Rob Kass on statistical pragmatism, and my reactions


same-blog 2 0.97191089 662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism


3 0.96900487 916 andrew gelman stats-2011-09-18-Multimodality in hierarchical models

Introduction: Jim Hodges posted a note to the Bugs mailing list that I thought could be of more general interest: Is multi-modality a common experience? I [Hodges] think the answer is “nobody knows in any generality”. Here are some examples of bimodality that certainly do *not* involve the kind of labeling problems that arise in mixture models. The only systematic study of multimodality I know of is Liu J, Hodges JS (2003). Posterior bimodality in the balanced one-way random effects model. J.~Royal Stat.~Soc., Ser.~B, 65:247-255. The surprise of this paper is that in the simplest possible hierarchical model (analyzed using the standard inverse-gamma priors for the two variances), bimodality occurs quite readily, although it is much less common to have two modes that are big enough so that you’d actually get a noticeable fraction of MCMC draws from both of them. Because the restricted likelihood (= the marginal posterior for the two variances, if you’ve put flat priors on them) is

4 0.96434259 1133 andrew gelman stats-2012-01-21-Judea Pearl on why he is “only a half-Bayesian”

Introduction: In an article published in 2001, Pearl wrote: I [Pearl] turned Bayesian in 1971, as soon as I began reading Savage’s monograph The Foundations of Statistical Inference [Savage, 1962]. The arguments were unassailable: (i) It is plain silly to ignore what we know, (ii) It is natural and useful to cast what we know in the language of probabilities, and (iii) If our subjective probabilities are erroneous, their impact will get washed out in due time, as the number of observations increases. Thirty years later, I [Pearl] am still a devout Bayesian in the sense of (i), but I now doubt the wisdom of (ii) and I know that, in general, (iii) is false. He elaborates: The bulk of human knowledge is organized around causal, not probabilistic relationships, and the grammar of probability calculus is insufficient for capturing those relationships. Specifically, the building blocks of our scientific and everyday knowledge are elementary facts such as “mud does not cause rain” and “symptom

5 0.96283424 1128 andrew gelman stats-2012-01-19-Sharon Begley: Worse than Stephen Jay Gould?

Introduction: Commenter Tggp links to a criticism of science journalist Sharon Begley by science journalist Matthew Hutson. I learned of this dispute after reporting that Begley had received the American Statistical Association’s Excellence in Statistical Reporting Award, a completely undeserved honor, if Hutson is to believed. The two journalists have somewhat similar profiles: Begley was science editor at Newsweek (she’s now at Reuters) and author of “Train Your Mind, Change Your Brain: How a New Science Reveals Our Extraordinary Potential to Transform Ourselves,” and Hutson was news editor at Psychology Today and wrote the similarly self-helpy-titled, “The 7 Laws of Magical Thinking: How Irrational Beliefs Keep Us Happy, Healthy, and Sane.” Hutson writes : Psychological Science recently published a fascinating new study on jealousy. I was interested to read Newsweek’s 1300-word article covering the research by their science editor, Sharon Begley. But part-way through the article, I

6 0.96263218 994 andrew gelman stats-2011-11-06-Josh Tenenbaum presents . . . a model of folk physics!

7 0.96224797 1355 andrew gelman stats-2012-05-31-Lindley’s paradox

8 0.95966756 2201 andrew gelman stats-2014-02-06-Bootstrap averaging: Examples where it works and where it doesn’t work

9 0.95664871 2299 andrew gelman stats-2014-04-21-Stan Model of the Week: Hierarchical Modeling of Supernovas

10 0.95588905 899 andrew gelman stats-2011-09-10-The statistical significance filter

11 0.95556253 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

12 0.954831 1883 andrew gelman stats-2013-06-04-Interrogating p-values

13 0.95466191 575 andrew gelman stats-2011-02-15-What are the trickiest models to fit?

14 0.95363116 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

15 0.95259273 2183 andrew gelman stats-2014-01-23-Discussion on preregistration of research studies

16 0.95224524 2208 andrew gelman stats-2014-02-12-How to think about “identifiability” in Bayesian inference?

17 0.95214438 2140 andrew gelman stats-2013-12-19-Revised evidence for statistical standards

18 0.95198524 1056 andrew gelman stats-2011-12-13-Drawing to Learn in Science

19 0.95148867 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

20 0.95071965 1206 andrew gelman stats-2012-03-10-95% intervals that I don’t believe, because they’re from a flat prior I don’t believe