andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1205 knowledge-graph by maker-knowledge-mining

1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics


meta info for this blog

Source: html

Introduction: Deborah Mayo collected some reactions to my recent article, Induction and Deduction in Bayesian Data Analysis. I’m pleased that everybody (philosopher Mayo, applied statistician Stephen Senn, and theoretical statistician Larry Wasserman) is so positive about my article and that nobody’s defending the sort of hard-core inductivism that’s featured on the Bayesian inference Wikipedia page. Here’s the Wikipedia definition, which I disagree with: Bayesian inference uses aspects of the scientific method, which involves collecting evidence that is meant to be consistent or inconsistent with a given hypothesis. As evidence accumulates, the degree of belief in a hypothesis ought to change. With enough evidence, it should become very high or very low. . . . Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before evidence has been observed and calculates a numerical estimate of the degree of belief in the hypothesis after evidence has been observed.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Deborah Mayo collected some reactions to my recent article , Induction and Deduction in Bayesian Data Analysis. [sent-1, score-0.254]

2 I’m pleased that everybody (philosopher Mayo, applied statistician Stephen Senn, and theoretical statistician Larry Wasserman) is so positive about my article and that nobody’s defending the sort of hard-core inductivism that’s featured on the Bayesian inference Wikipedia page. [sent-2, score-0.805]

3 Here’s the Wikipedia definition, which I disagree with: Bayesian inference uses aspects of the scientific method, which involves collecting evidence that is meant to be consistent or inconsistent with a given hypothesis. [sent-3, score-0.4]

4 As evidence accumulates, the degree of belief in a hypothesis ought to change. [sent-4, score-0.548]

5 Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before evidence has been observed and calculates a numerical estimate of the degree of belief in the hypothesis after evidence has been observed. [sent-9, score-1.595]

6 Bayesian inference usually relies on degrees of belief, or subjective probabilities, in the induction process and does not necessarily claim to provide an objective method of induction. [sent-13, score-0.349]

7 As I write in my article, the above does not describe what I do in my applied work. [sent-15, score-0.113]

8 I do go through models, sometimes starting with something simple and building up from there, other times starting with my first guess at a full model and then trimming it down until I can understand it in the context of data. [sent-16, score-0.386]

9 And in any reasonably large problem I will at some point discard a model and replace it with something new. [sent-17, score-0.149]

10 I’m unhappy when people identify “Bayesian statistics” with a set of procedures that I don’t actually do. [sent-18, score-0.092]

11 Now here are my reactions to the reactions to my article: Mayo, the philosopher, asks about the connections between Bayesian model checking and the severe testing described by her neo-Popperian philosophy. [sent-19, score-0.713]

12 I don’t know the answer to this one, but on some conceptual level I think her approach and mine are similar to each other—and very different from what is described in that Wikipedia excerpt. [sent-21, score-0.078]

13 This echoes a line from page 77 of my article: “My point here is not to say that my preferred methods are better than others but rather to couple my admission of philosophical incoherence with a reminder that there is no available coherent alternative.” [sent-25, score-0.146]

14 Wasserman, the theoretical statistician, writes that “a pragmatic Bayesian will temporarily embrace the Bayesian viewpoint as a way to frame their analysis.” [sent-26, score-0.427]

15 I agree, with the slight modification that a goodness-of-fit test can itself be Bayesian (as I discuss briefly in sections 4 and 5 of the article under discussion, and in more detail in chapter 6 of Bayesian Data Analysis). [sent-28, score-0.114]

16 Bayes means different things to different people, but to me it’s all about conditional probabilities of unknowns given model and data, and goodness-of-fit tests definitely fit into this framework. [sent-29, score-0.312]

17 I’m sure he’s right about that; see footnotes 1 and 4 of my article. [sent-31, score-0.084]

18 As I wrote in my article, any given Bayesian method can be interpreted as a classical estimator or testing procedure and its frequency properties evaluated; conversely, non-Bayesian procedures can typically be reformulated as approximate Bayesian inferences under suitable choices of model. [sent-33, score-0.666]

19 These processes of translation are valuable for their own sake and not just for communication purposes. [sent-34, score-0.194]

20 These acts of translation represent just one of the ways in which statistical theory can be relevant for applied statistics. [sent-36, score-0.305]
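The sentScore column above comes from tf-idf weighting of the words in each sentence. As a rough, hypothetical sketch of how such per-sentence scores can be computed (the tokenization, normalization, and scoring formula here are assumptions, not the pipeline actually used to build this page):

```python
from collections import Counter
import math

def tfidf_sentence_scores(sentences):
    """Score sentences by the summed tf-idf weight of their words.

    Treats each sentence as a 'document': tf is within-sentence term
    frequency, idf is log(#sentences / #sentences containing the word).
    """
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))  # document frequency
    scores = []
    for d in docs:
        tf = Counter(d)
        scores.append(sum(tf[w] / len(d) * math.log(n / df[w]) for w in tf))
    return scores

sents = [
    "bayesian inference uses degrees of belief",
    "bayesian model checking is part of applied statistics",
    "induction and deduction in bayesian data analysis",
]
scores = tfidf_sentence_scores(sents)
```

Words shared by every sentence (here "bayesian") get idf = log(1) = 0 and contribute nothing, so a sentence's score reflects its more distinctive vocabulary.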


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('bayesian', 0.371), ('belief', 0.177), ('mayo', 0.166), ('model', 0.149), ('reactions', 0.14), ('senn', 0.138), ('wikipedia', 0.136), ('degree', 0.135), ('evidence', 0.134), ('statistician', 0.132), ('philosopher', 0.13), ('translation', 0.125), ('method', 0.122), ('wasserman', 0.12), ('induction', 0.12), ('numerical', 0.114), ('article', 0.114), ('applied', 0.113), ('inference', 0.107), ('properties', 0.106), ('frequency', 0.105), ('hypothesis', 0.102), ('larry', 0.095), ('procedures', 0.092), ('understanding', 0.091), ('procedure', 0.087), ('probabilities', 0.084), ('footnotes', 0.084), ('accumulates', 0.084), ('calculates', 0.084), ('viewpoint', 0.084), ('approaches', 0.083), ('uses', 0.08), ('disagree', 0.079), ('trimming', 0.079), ('unknowns', 0.079), ('starting', 0.079), ('classical', 0.078), ('described', 0.078), ('testing', 0.076), ('goodness', 0.076), ('echoes', 0.073), ('methods', 0.073), ('theoretical', 0.071), ('temporarily', 0.071), ('deductive', 0.071), ('approaching', 0.071), ('sake', 0.069), ('embrace', 0.069), ('acts', 0.067)]
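The (wordName, wordTfidf) pairs above are a sparse tf-idf vector for this post, and the simValue numbers in the lists that follow behave like cosine similarities between such vectors (a post compared with itself scores ≈ 1, as in the same-blog rows). A minimal sketch, with made-up weights for the second post:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse {word: weight} vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

this_post = {"bayesian": 0.371, "belief": 0.177, "mayo": 0.166}
other_post = {"bayesian": 0.30, "likelihood": 0.25, "model": 0.20}

self_sim = cosine(this_post, this_post)    # ~1.0, like the same-blog row
cross_sim = cosine(this_post, other_post)  # in (0, 1): shares only 'bayesian'
```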

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9999997 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics


2 0.26579565 1554 andrew gelman stats-2012-10-31-It not necessary that Bayesian methods conform to the likelihood principle

Introduction: Bayesian inference, conditional on the model and data, conforms to the likelihood principle. But there is more to Bayesian methods than Bayesian inference. See chapters 6 and 7 of Bayesian Data Analysis for much discussion of this point. It saddens me to see that people are still confused on this issue.

3 0.26548913 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

Introduction: Prasanta Bandyopadhyay and Gordon Brittan write : We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are ‘dogmatic’. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical, the second fails to provide a non-question-begging account of ‘same background information’. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs. I have not read their paper in detail

4 0.26511806 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

Introduction: I sent Deborah Mayo a link to my paper with Cosma Shalizi on the philosophy of statistics, and she sent me the link to this conference which unfortunately already occurred. (It’s too bad, because I’d have liked to have been there.) I summarized my philosophy as follows: I am highly sympathetic to the approach of Lakatos (or of Popper, if you consider Lakatos’s “Popper_2” to be a reasonable simulation of the true Popperism), in that (a) I view statistical models as being built within theoretical structures, and (b) I see the checking and refutation of models to be a key part of scientific progress. A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and ex

5 0.26371762 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

Introduction: I’ve been writing a lot about my philosophy of Bayesian statistics and how it fits into Popper’s ideas about falsification and Kuhn’s ideas about scientific revolutions. Here’s my long, somewhat technical paper with Cosma Shalizi. Here’s our shorter overview for the volume on the philosophy of social science. Here’s my latest try (for an online symposium), focusing on the key issues. I’m pretty happy with my approach–the familiar idea that Bayesian data analysis iterates the three steps of model building, inference, and model checking–but it does have some unresolved (maybe unresolvable) problems. Here are a couple mentioned in the third of the above links. Consider a simple model with independent data y_1, y_2, . . ., y_10 ~ N(θ,σ^2), with a prior distribution θ ~ N(0,10^2) and σ known and taking on some value of approximately 10. Inference about θ is straightforward, as is model checking, whether based on graphs or numerical summaries such as the sample variance and skewness

6 0.2525081 1151 andrew gelman stats-2012-02-03-Philosophy of Bayesian statistics: my reactions to Senn

7 0.24668628 932 andrew gelman stats-2011-09-30-Articles on the philosophy of Bayesian statistics by Cox, Mayo, Senn, and others!

8 0.24381267 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

9 0.24008477 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

10 0.23783608 1469 andrew gelman stats-2012-08-25-Ways of knowing

11 0.23678666 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

12 0.23209819 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

13 0.21388747 890 andrew gelman stats-2011-09-05-Error statistics

14 0.20923683 811 andrew gelman stats-2011-07-20-Kind of Bayesian

15 0.20830277 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

16 0.20821029 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

17 0.20638116 1438 andrew gelman stats-2012-07-31-What is a Bayesian?

18 0.20502247 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion

19 0.20006672 2368 andrew gelman stats-2014-06-11-Bayes in the research conversation

20 0.19378236 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.289), (1, 0.27), (2, -0.138), (3, 0.066), (4, -0.214), (5, 0.014), (6, -0.129), (7, 0.118), (8, 0.119), (9, -0.12), (10, -0.018), (11, -0.065), (12, -0.04), (13, 0.048), (14, 0.045), (15, 0.056), (16, 0.069), (17, 0.01), (18, -0.049), (19, 0.05), (20, -0.006), (21, 0.06), (22, -0.012), (23, -0.015), (24, 0.014), (25, -0.061), (26, 0.003), (27, -0.003), (28, -0.008), (29, 0.018), (30, 0.046), (31, 0.018), (32, 0.053), (33, 0.014), (34, 0.019), (35, 0.006), (36, -0.01), (37, -0.012), (38, 0.041), (39, -0.008), (40, 0.026), (41, 0.005), (42, 0.0), (43, 0.054), (44, 0.005), (45, 0.0), (46, -0.002), (47, -0.016), (48, -0.034), (49, 0.02)]
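The (topicId, topicWeight) pairs above place the post in a low-dimensional latent space. LSI builds that space by a truncated SVD of the tf-idf document-term matrix; the sketch below illustrates the projection on random toy data (the matrix shape, the value of k, and the use of NumPy are all assumptions):

```python
import numpy as np

def lsi_embed(X, k):
    """Project rows of a document-term matrix onto the top-k right
    singular vectors, giving each document k latent topic weights."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.random((6, 12))      # toy: 6 documents x 12 terms
Z = lsi_embed(X, k=3)        # 6 documents x 3 latent topic weights

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

self_sim = cos(Z[0], Z[0])   # ~1.0, as in the same-blog simValue
```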

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98820567 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics


2 0.94059139 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo


3 0.92409563 117 andrew gelman stats-2010-06-29-Ya don’t know Bayes, Jack

Introduction: I came across this article on the philosophy of statistics by University of Michigan economist John DiNardo. I don’t have much to say about the substance of the article because most of it is an argument against something called “Bayesian methods” that doesn’t have much in common with the Bayesian data analysis that I do. If a quantitative, empirically-minded economist at a top university doesn’t know about modern Bayesian methods, then it’s a pretty good guess that confusion holds in many other quarters as well, so I thought I’d try to clear a couple of things up. (See also here .) In the short term, I know I have some readers at the University of Michigan, so maybe a couple of you could go over to Prof. DiNardo’s office and discuss this with him? For the rest of you, please spread the word. My point here is not to claim that DiNardo should be using Bayesian methods or to claim that he’s doing anything wrong in his applied work. It’s just that he’s fighting against a bu

4 0.92408204 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

Introduction: I’ll answer the above question after first sharing some background and history on the philosophy of Bayesian statistics, which appeared at the end of our rejoinder to the discussion to which I linked the other day: When we were beginning our statistical educations, the word ‘Bayesian’ conveyed membership in an obscure cult. Statisticians who were outside the charmed circle could ignore the Bayesian subfield, while Bayesians themselves tended to be either apologetic or brazenly defiant. These two extremes manifested themselves in ever more elaborate proposals for non-informative priors, on the one hand, and declarations of the purity of subjective probability, on the other. Much has changed in the past 30 years. ‘Bayesian’ is now often used in casual scientific parlance as a synonym for ‘rational’, the anti-Bayesians have mostly disappeared, and non-Bayesian statisticians feel the need to keep up with developments in Bayesian modelling and computation. Bayesians themselves

5 0.92032117 1438 andrew gelman stats-2012-07-31-What is a Bayesian?

Introduction: Deborah Mayo recommended that I consider coming up with a new name for the statistical methods that I used, given that the term “Bayesian” has all sorts of associations that I dislike (as discussed, for example, in section 1 of this article ). I replied that I agree on Bayesian, I never liked the term and always wanted something better, but I couldn’t think of any convenient alternative. Also, I was finding that Bayesians (even the Bayesians I disagreed with) were reading my research articles, while non-Bayesians were simply ignoring them. So I thought it was best to identify with, and communicate with, those people who were willing to engage with me. More formally, I’m happy defining “Bayesian” as “using inference from the posterior distribution, p(theta|y)”. This says nothing about where the probability distributions come from (thus, no requirement to be “subjective” or “objective”) and it says nothing about the models (thus, no requirement to use the discrete models that have

6 0.91957611 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

7 0.8992995 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

8 0.896285 1469 andrew gelman stats-2012-08-25-Ways of knowing

9 0.88455617 1554 andrew gelman stats-2012-10-31-It not necessary that Bayesian methods conform to the likelihood principle

10 0.88283992 1571 andrew gelman stats-2012-11-09-The anti-Bayesian moment and its passing

11 0.8768363 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

12 0.86362922 114 andrew gelman stats-2010-06-28-More on Bayesian deduction-induction

13 0.86123759 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

14 0.86083704 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

15 0.85333186 110 andrew gelman stats-2010-06-26-Philosophy and the practice of Bayesian statistics

16 0.85059649 932 andrew gelman stats-2011-09-30-Articles on the philosophy of Bayesian statistics by Cox, Mayo, Senn, and others!

17 0.84781921 1781 andrew gelman stats-2013-03-29-Another Feller theory

18 0.84750402 1262 andrew gelman stats-2012-04-12-“Not only defended but also applied”: The perceived absurdity of Bayesian inference

19 0.84541416 811 andrew gelman stats-2011-07-20-Kind of Bayesian

20 0.84423476 746 andrew gelman stats-2011-06-05-An unexpected benefit of Arrow’s other theorem


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(5, 0.012), (15, 0.061), (16, 0.093), (21, 0.023), (24, 0.143), (25, 0.011), (28, 0.037), (42, 0.013), (45, 0.015), (53, 0.01), (61, 0.01), (84, 0.073), (86, 0.064), (95, 0.013), (99, 0.292)]
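LDA describes each post as a mixture over topics, which is what the sparse (topicId, topicWeight) list above records. To compare two posts, the sparse lists can be expanded into dense distributions and compared with a similarity for probability vectors; the sketch below uses a Hellinger-based similarity (the topic count, the second post's weights, and the choice of Hellinger are assumptions):

```python
import numpy as np

def topics_to_vec(pairs, n_topics=100):
    """Expand a sparse [(topicId, weight), ...] list into a dense
    distribution over topics; weights are renormalized to sum to 1."""
    v = np.zeros(n_topics)
    for tid, w in pairs:
        v[tid] = w
    return v / v.sum()

def hellinger_sim(p, q):
    """1 minus the Hellinger distance: 1 for identical mixtures, 0 for
    mixtures with disjoint support."""
    return 1.0 - np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

this_post = topics_to_vec([(15, 0.061), (16, 0.093), (24, 0.143), (99, 0.292)])
other_post = topics_to_vec([(16, 0.05), (24, 0.10), (99, 0.30)])

sim = hellinger_sim(this_post, other_post)  # in [0, 1]
```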

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98516285 2004 andrew gelman stats-2013-09-01-Post-publication peer review: How it (sometimes) really works

Introduction: In an ideal world, research articles would be open to criticism and discussion in the same place where they are published, in a sort of non-corrupt version of Yelp. What is happening now is that the occasional paper or research area gets lots of press coverage, and this inspires reactions on science-focused blogs. The trouble here is that it’s easier to give off-the-cuff comments than detailed criticisms. Here’s an example. It starts a couple years ago with this article by Ryota Kanai, Tom Feilden, Colin Firth, and Geraint Rees, on brain size and political orientation: In a large sample of young adults, we related self-reported political attitudes to gray matter volume using structural MRI. We found that greater liberalism was associated with increased gray matter volume in the anterior cingulate cortex, whereas greater conservatism was associated with increased volume of the right amygdala. These results were replicated in an independent sample of additional participants. Ou

same-blog 2 0.97281235 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics


3 0.968238 1877 andrew gelman stats-2013-05-30-Infill asymptotics and sprawl asymptotics

Introduction: Anirban Bhattacharya, Debdeep Pati, Natesh Pillai, and David Dunson write : Penalized regression methods, such as L1 regularization, are routinely used in high-dimensional applications, and there is a rich literature on optimality properties under sparsity assumptions. In the Bayesian paradigm, sparsity is routinely induced through two-component mixture priors having a probability mass at zero, but such priors encounter daunting computational problems in high dimensions. This has motivated an amazing variety of continuous shrinkage priors, which can be expressed as global-local scale mixtures of Gaussians, facilitating computation. In sharp contrast to the corresponding frequentist literature, very little is known about the properties of such priors. Focusing on a broad class of shrinkage priors, we provide precise results on prior and posterior concentration. Interestingly, we demonstrate that most commonly used shrinkage priors, including the Bayesian Lasso, are suboptimal in high dimensions.

4 0.96792507 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

Introduction: Dan Kahan writes : The basic idea . . . is to promote identification of study designs that scholars who disagree about a proposition would agree would generate evidence relevant to their competing conjectures—regardless of what studies based on such designs actually find. Articles proposing designs of this sort would be selected for publication and only then be carried out, by the proposing researchers with funding from the journal, which would publish the results too. Now I [Kahan] am aware of a set of real journals that have a similar motivation. One is the Journal of Articles in Support of the Null Hypothesis, which as its title implies publishes papers reporting studies that fail to “reject” the null. Like JASNH, LR ≠1J would try to offset the “file drawer” bias and like bad consequences associated with the convention of publishing only findings that are “significant at p < 0.05." But it would try to do more. By publishing studies that are deemed to have valid designs an

5 0.96615595 2299 andrew gelman stats-2014-04-21-Stan Model of the Week: Hierarchical Modeling of Supernovas

Introduction: The Stan Model of the Week showcases research using Stan to push the limits of applied statistics.  If you have a model that you would like to submit for a future post then send us an email . Our inaugural post comes from Nathan Sanders, a graduate student finishing up his thesis on astrophysics at Harvard. Nathan writes, “Core-collapse supernovae, the luminous explosions of massive stars, exhibit an expansive and meaningful diversity of behavior in their brightness evolution over time (their “light curves”). Our group discovers and monitors these events using the Pan-STARRS1 telescope in Hawaii, and we’ve collected a dataset of about 20,000 individual photometric observations of about 80 Type IIP supernovae, the class my work has focused on. While this dataset provides one of the best available tools to infer the explosion properties of these supernovae, due to the nature of extragalactic astronomy (observing from distances 1 billion light years), these light curves typicall

6 0.96604276 1883 andrew gelman stats-2013-06-04-Interrogating p-values

7 0.96201611 2201 andrew gelman stats-2014-02-06-Bootstrap averaging: Examples where it works and where it doesn’t work

8 0.96075511 187 andrew gelman stats-2010-08-05-Update on state size and governors’ popularity

9 0.95976609 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

10 0.95909864 186 andrew gelman stats-2010-08-04-“To find out what happens when you change something, it is necessary to change it.”

11 0.95877546 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

12 0.95786995 42 andrew gelman stats-2010-05-19-Updated solutions to Bayesian Data Analysis homeworks

13 0.95786667 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

14 0.95782769 360 andrew gelman stats-2010-10-21-Forensic bioinformatics, or, Don’t believe everything you read in the (scientific) papers

15 0.95700133 1165 andrew gelman stats-2012-02-13-Philosophy of Bayesian statistics: my reactions to Wasserman

16 0.95664108 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

17 0.95638514 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers

18 0.956258 2140 andrew gelman stats-2013-12-19-Revised evidence for statistical standards

19 0.95617902 1266 andrew gelman stats-2012-04-16-Another day, another plagiarist

20 0.95584118 1117 andrew gelman stats-2012-01-13-What are the important issues in ethics and statistics? I’m looking for your input!