andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1247 knowledge-graph by maker-knowledge-mining

1247 andrew gelman stats-2012-04-05-More philosophy of Bayes


meta info for this blog

Source: html

Introduction: Konrad Scheffler writes: I was interested by your paper “Induction and deduction in Bayesian data analysis” and was wondering if you would entertain a few questions: – Under the banner of objective Bayesianism, I would posit something like this as a description of Bayesian inference: “Objective Bayesian probability is not a degree of belief (which would necessarily be subjective) but a measure of the plausibility of a hypothesis, conditional on a formally specified information state. One way of specifying a formal information state is to specify a model, which involves specifying both a prior distribution (typically for a set of unobserved variables) and a likelihood function (typically for a set of observed variables, conditioned on the values of the unobserved variables). Bayesian inference involves calculating the objective degree of plausibility of a hypothesis (typically the truth value of the hypothesis is a function of the variables mentioned above) given such a


Summary: the most important sentences, generated by the tfidf model

sentIndex sentText sentNum sentScore

1 One way of specifying a formal information state is to specify a model, which involves specifying both a prior distribution (typically for a set of unobserved variables) and a likelihood function (typically for a set of observed variables, conditioned on the values of the unobserved variables). [sent-2, score-1.231]

2 Bayesian inference involves calculating the objective degree of plausibility of a hypothesis (typically the truth value of the hypothesis is a function of the variables mentioned above) given such an information state. [sent-3, score-0.96]
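
Scheffler's description can be made concrete with a small grid computation. The sketch below (hypothetical coin-flip data, stdlib Python only; not from the paper) specifies an information state as a uniform prior plus a binomial likelihood, then computes the plausibility of a hypothesis that is a function of the unobserved variable:

```python
# Information state: uniform prior on theta (a coin's bias) plus a
# binomial likelihood for hypothetical observed flips.
heads, flips = 7, 10
grid = [(i + 0.5) / 1000 for i in range(1000)]          # grid over theta in (0, 1)
prior = [1.0 for _ in grid]                              # uniform prior density
lik = [t**heads * (1 - t)**(flips - heads) for t in grid]
unnorm = [p * l for p, l in zip(prior, lik)]
z = sum(unnorm)
post = [u / z for u in unnorm]                           # posterior over the grid

# Plausibility of the hypothesis "theta > 0.5", conditional on this
# information state:
p_h = sum(w for t, w in zip(grid, post) if t > 0.5)
print(f"P(theta > 0.5 | y) = {p_h:.3f}")
```

With 7 heads in 10 flips and a uniform prior, the posterior is Beta(8, 4), so the grid answer is close to 0.887.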

3 We are free to calculate probabilities conditioned on different information states and use these to argue that one information state corresponds more closely than another to a given real-world (i. [sent-4, score-0.811]

4 Alternatively we may calculate p-values conditional on an information state (via posterior predictive checking) and use them to draw conclusions about the degree to which the information state is informative about the real world. [sent-8, score-0.709]
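
A minimal sketch of that procedure, for a conjugate normal model with known variance (the data, the test statistic, and all numbers here are illustrative, not taken from the paper):

```python
import random
import statistics

random.seed(1)
y = [random.gauss(0, 1) for _ in range(20)]   # hypothetical observed data
# Information state M: y_i ~ N(theta, 1), theta ~ N(0, 10^2).
n = len(y)
ybar = statistics.fmean(y)
post_var = 1.0 / (n + 1.0 / 100)              # conjugate posterior for theta
post_mean = post_var * n * ybar

T_obs = max(abs(v) for v in y)                # test statistic: largest |y_i|
exceed, draws = 0, 2000
for _ in range(draws):
    theta = random.gauss(post_mean, post_var ** 0.5)
    y_rep = [random.gauss(theta, 1) for _ in range(n)]   # replicated data
    if max(abs(v) for v in y_rep) >= T_obs:
        exceed += 1
p_value = exceed / draws
print(f"posterior predictive p-value: {p_value:.3f}")
```

A p-value near 0 or 1 would indicate that the information state fails to capture this aspect of the real-world data.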

5 ” I would not have thought this type of description should be particularly controversial, but as you point out the popular view seems to focus exclusively on subjective Bayesianism and I’m not sure where I would even find a similar description of the objective Bayesian viewpoint. [sent-12, score-0.637]

6 - Regarding your question on the continuous/discrete distinction, doesn’t it make more sense to instead distinguish between numerical and categorical variables (where the former are defined on an ordered set and the latter on an unordered set)? [sent-18, score-0.564]

7 But when a numerical variable can be replaced with a categorical variable without changing the model (i. [sent-20, score-0.367]

8 - A technical point: you claim that no prior distribution can completely reflect prior knowledge. [sent-24, score-0.474]

9 The marginal probability of data given model, p(y|M), typically depends strongly on aspects of the prior distribution that have essentially no impact on posterior inferences given the model. [sent-32, score-0.992]
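
This point can be checked in closed form for a conjugate normal model. The sketch below (hypothetical data; assumes y_i ~ N(theta, sigma^2) with theta ~ N(0, tau^2)) uses the exact identity log p(y|M) = log p(y|theta) + log p(theta) - log p(theta|y), valid at any theta: widening the prior scale tau from 10 to 1000 leaves the posterior essentially unchanged but shifts log p(y|M) by roughly log(100) ≈ 4.6 nats.

```python
import math

def log_norm_pdf(x, mu, sd):
    return -0.5 * math.log(2 * math.pi) - math.log(sd) - 0.5 * ((x - mu) / sd) ** 2

def log_marginal(y, sigma, tau):
    """log p(y|M) for y_i ~ N(theta, sigma^2), theta ~ N(0, tau^2),
    via log p(y) = log p(y|t) + log p(t) - log p(t|y) at any point t."""
    n = len(y)
    ybar = sum(y) / n
    post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)
    post_mean = post_var * (n * ybar / sigma**2)
    t = post_mean                       # any evaluation point works
    log_lik = sum(log_norm_pdf(yi, t, sigma) for yi in y)
    log_prior = log_norm_pdf(t, 0.0, tau)
    log_post = log_norm_pdf(t, post_mean, math.sqrt(post_var))
    return log_lik + log_prior - log_post, post_mean, math.sqrt(post_var)

y = [4.2, 5.1, 3.8, 4.9, 4.4]           # hypothetical data
results = {}
for tau in (10.0, 1000.0):
    lm, m, s = log_marginal(y, sigma=1.0, tau=tau)
    results[tau] = (lm, m, s)
    print(f"tau={tau:6.0f}  posterior mean={m:.3f}  sd={s:.3f}  log p(y|M)={lm:.2f}")
```

The posterior mean moves by less than 0.01 while log p(y|M) drops by about 4.5 nats, so any model comparison based on p(y|M) is driven by a prior choice that the posterior inferences are indifferent to.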

10 Here’s an example of incoherence: we model some data with a normal distribution but if the rate of outliers exceeds some threshold, we switch to a t distribution. [sent-38, score-0.431]

11 The coherent thing would’ve been to start with the t distribution (if necessary, with some prior distribution that favored a large number of degrees of freedom). [sent-40, score-0.554]
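
A quick illustration of why the t accommodates outliers (hypothetical data; a sketch of the likelihood comparison, not of the model-switching procedure itself): evaluate the same data, one gross outlier included, under a normal density and under t densities with small and large degrees of freedom.

```python
import math

def log_norm(x, mu, sd):
    return -0.5 * math.log(2 * math.pi) - math.log(sd) - 0.5 * ((x - mu) / sd) ** 2

def log_t(x, mu, sd, nu):
    # Student-t log density with location mu, scale sd, nu degrees of freedom
    z = (x - mu) / sd
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi) - math.log(sd)
            - (nu + 1) / 2 * math.log1p(z * z / nu))

y = [0.1, -0.4, 0.3, -0.2, 0.5, 8.0]    # hypothetical data with one outlier
ll_norm = sum(log_norm(v, 0.0, 1.0) for v in y)
ll_t3   = sum(log_t(v, 0.0, 1.0, nu=3) for v in y)
ll_t30  = sum(log_t(v, 0.0, 1.0, nu=30) for v in y)
print(f"normal: {ll_norm:7.2f}   t(3): {ll_t3:7.2f}   t(30): {ll_t30:7.2f}")
```

The heavy-tailed t(3) pays a small price on the well-behaved points and a huge dividend on the outlier; as the degrees of freedom grow, the t approaches the normal, which is why a prior favoring large degrees of freedom recovers normal-like behavior when no outliers appear.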

12 Ultimately it comes down to setting up a reasonable joint prior distribution on the parameters at different levels of the model. [sent-46, score-0.406]

13 Scheffler responds: On item 1 above, I’m not quite sure what your point is here – the prior (which I think is best considered to be part of the model) may or may not have a strong effect on a given posterior inference. [sent-51, score-0.611]

14 This is why I think it’s important to emphasize that posterior probabilities are conditioned on the model (this seems not to be emphasized in subjective Bayesianism). [sent-52, score-0.649]

15 Regarding item 2, I agree that analysing a single data set with different distributions applied to different points is incoherent, but I haven’t seen anyone do this. [sent-55, score-0.64]

16 I don’t think it’s incoherent to use different distributions for different data sets, unless you are assuming that they are sampled from the same underlying distribution (in which case you could equally well consider them to be part of the same data set). [sent-56, score-0.788]

17 I also don’t think it’s incoherent to switch to a better model after discovering that it is better, provided you analyse the full data set with that model. [sent-57, score-0.528]

, that come from most statistical analyses because they depend on aspects of the model that have essentially no impact on posterior inferences given the model. [sent-60, score-0.576]

19 Regarding item 2, my example did not involve analyzing a single data set with different distributions applied to different points. [sent-63, score-0.64]

20 I was talking about the very common procedure of using a single model for all the data points, but choosing or rejecting the model based on how it fits the data. [sent-64, score-0.572]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('plausibility', 0.289), ('jaynes', 0.239), ('model', 0.211), ('objective', 0.183), ('robot', 0.176), ('prior', 0.163), ('distribution', 0.148), ('information', 0.14), ('conditioned', 0.139), ('posterior', 0.132), ('practice', 0.126), ('incoherent', 0.124), ('set', 0.121), ('coherence', 0.12), ('scheffler', 0.117), ('variables', 0.117), ('bayesianism', 0.116), ('bayesian', 0.111), ('continuous', 0.108), ('description', 0.103), ('averaging', 0.1), ('unordered', 0.096), ('coherent', 0.095), ('different', 0.095), ('distributions', 0.095), ('exclusively', 0.093), ('discrete', 0.089), ('subjective', 0.087), ('part', 0.087), ('degree', 0.086), ('probability', 0.085), ('item', 0.084), ('unobserved', 0.084), ('categorical', 0.083), ('typically', 0.082), ('probabilities', 0.08), ('specifying', 0.08), ('single', 0.078), ('aspects', 0.078), ('inferences', 0.078), ('given', 0.077), ('ordered', 0.074), ('numerical', 0.073), ('data', 0.072), ('state', 0.071), ('formally', 0.071), ('calculate', 0.069), ('specified', 0.069), ('point', 0.068), ('inference', 0.068)]
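
The weights above come from a tf-idf model. A toy version of the computation (three made-up sentences; real implementations differ in normalization and smoothing conventions) looks like:

```python
import math
from collections import Counter

docs = [
    "the objective bayesian probability is a measure of plausibility",
    "the prior distribution is part of the model",
    "posterior predictive checking uses the model and the data",
]
tokenized = [d.split() for d in docs]
N = len(docs)
df = Counter(w for toks in tokenized for w in set(toks))  # document frequency

def tfidf(toks):
    # term frequency times inverse document frequency (no smoothing)
    tf = Counter(toks)
    return {w: (c / len(toks)) * math.log(N / df[w]) for w, c in tf.items()}

for toks in tokenized:
    top = sorted(tfidf(toks).items(), key=lambda kv: -kv[1])[:3]
    print([(w, round(s, 3)) for w, s in top])
```

Words appearing in every document (here "the") get an idf of log(N/N) = 0 and drop out, which is why distinctive terms like "plausibility" dominate the list above.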

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999976 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes


2 0.31681457 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

Introduction: I’ve been writing a lot about my philosophy of Bayesian statistics and how it fits into Popper’s ideas about falsification and Kuhn’s ideas about scientific revolutions. Here’s my long, somewhat technical paper with Cosma Shalizi. Here’s our shorter overview for the volume on the philosophy of social science. Here’s my latest try (for an online symposium), focusing on the key issues. I’m pretty happy with my approach–the familiar idea that Bayesian data analysis iterates the three steps of model building, inference, and model checking–but it does have some unresolved (maybe unresolvable) problems. Here are a couple mentioned in the third of the above links. Consider a simple model with independent data y_1, y_2, .., y_10 ~ N(θ,σ^2), with a prior distribution θ ~ N(0,10^2) and σ known and taking on some value of approximately 10. Inference about μ is straightforward, as is model checking, whether based on graphs or numerical summaries such as the sample variance and skewn

3 0.29118949 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

Introduction: Prasanta Bandyopadhyay and Gordon Brittan write: We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are ‘dogmatic’. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical, the second fails to provide a non-question-begging account of ‘same background information’. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs. I have not read their paper in detai

4 0.26926115 1155 andrew gelman stats-2012-02-05-What is a prior distribution?

Introduction: Some recent blog discussion revealed some confusion that I’ll try to resolve here. I wrote that I’m not a big fan of subjective priors. Various commenters had difficulty with this point, and I think the issue was most clearly stated by Bill Jefferys, who wrote: It seems to me that your prior has to reflect your subjective information before you look at the data. How can it not? But this does not mean that the (subjective) prior that you choose is irrefutable; Surely a prior that reflects prior information just does not have to be inconsistent with that information. But that still leaves a range of priors that are consistent with it, the sort of priors that one would use in a sensitivity analysis, for example. I think I see what Bill is getting at. A prior represents your subjective belief, or some approximation to your subjective belief, even if it’s not perfect. That sounds reasonable but I don’t think it works. Or, at least, it often doesn’t work. Let’s start

5 0.26620191 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

Introduction: In response to this article by Cosma Shalizi and myself on the philosophy of Bayesian statistics, David Hogg writes: I [Hogg] agree–even in physics and astronomy–that the models are not “True” in the God-like sense of being absolute reality (that is, I am not a realist); and I have argued (a philosophically very naive paper, but hey, I was new to all this) that for pretty fundamental reasons we could never arrive at the True (with a capital “T”) model of the Universe. The goal of inference is to find the “best” model, where “best” might have something to do with prediction, or explanation, or message length, or (horror!) our utility. Needless to say, most of my physics friends *are* realists, even in the face of “effective theories” as Newtonian mechanics is an effective theory of GR and GR is an effective theory of “quantum gravity” (this plays to your point, because if you think any theory is possibly an effective theory, how could you ever find Truth?). I also liked the i

6 0.2534053 811 andrew gelman stats-2011-07-20-Kind of Bayesian

7 0.24381267 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

8 0.23675339 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

9 0.23327431 1510 andrew gelman stats-2012-09-25-Incoherence of Bayesian data analysis

10 0.22823048 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

11 0.22620063 1941 andrew gelman stats-2013-07-16-Priors

12 0.22388805 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

13 0.21660931 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

14 0.21645908 2007 andrew gelman stats-2013-09-03-Popper and Jaynes

15 0.21547055 1092 andrew gelman stats-2011-12-29-More by Berger and me on weakly informative priors

16 0.21382521 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability

17 0.21260214 1431 andrew gelman stats-2012-07-27-Overfitting

18 0.20833953 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

19 0.20608823 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

20 0.20584552 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.319), (1, 0.319), (2, 0.001), (3, 0.095), (4, -0.112), (5, -0.067), (6, 0.042), (7, 0.068), (8, -0.017), (9, 0.029), (10, -0.024), (11, 0.027), (12, -0.031), (13, -0.008), (14, -0.049), (15, 0.022), (16, 0.062), (17, -0.032), (18, 0.018), (19, 0.01), (20, 0.021), (21, -0.026), (22, -0.021), (23, -0.047), (24, -0.037), (25, 0.046), (26, 0.06), (27, -0.005), (28, 0.019), (29, 0.011), (30, 0.014), (31, -0.019), (32, 0.014), (33, 0.082), (34, -0.027), (35, 0.055), (36, 0.036), (37, -0.006), (38, -0.015), (39, -0.003), (40, 0.001), (41, -0.1), (42, 0.041), (43, -0.008), (44, 0.026), (45, -0.009), (46, 0.01), (47, -0.026), (48, -0.017), (49, 0.031)]
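
The simValue numbers in these lists are typically cosine similarities between topic-weight vectors like the one above. A minimal sketch (the first five LSI weights from above, plus a made-up comparison vector):

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length weight vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

doc = [0.319, 0.319, 0.001, 0.095, -0.112]     # first five LSI weights above
other = [0.300, -0.200, 0.050, 0.100, 0.000]   # hypothetical second document
print(round(cosine(doc, doc), 4))              # identical vectors give 1.0
print(round(cosine(doc, other), 4))
```

This is consistent with the same-blog entries scoring near 1.0 in every similarity list: a document compared against its own weight vector has cosine similarity exactly 1.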

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98154855 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes


2 0.88458312 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes

Introduction: Deborah Mayo pointed me to this discussion by Christian Hennig of my recent article on Induction and Deduction in Bayesian Data Analysis. A couple days ago I responded to comments by Mayo, Stephen Senn, and Larry Wasserman. I will respond to Hennig by pulling out paragraphs from his discussion and then replying. Hennig: for me the terms “frequentist” and “subjective Bayes” point to interpretations of probability, and not to specific methods of inference. The frequentist one refers to the idea that there is an underlying data generating process that repeatedly throws out data and would approximate the assumed distribution if one could only repeat it infinitely often. Hennig makes the good point that, if this is the way you would define “frequentist” (it’s not how I’d define the term myself, but I’ll use Hennig’s definition here), then it makes sense to be a frequentist in some settings but not others. Dice really can be rolled over and over again; a sample survey of 15

3 0.87366581 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis


4 0.86952752 2027 andrew gelman stats-2013-09-17-Christian Robert on the Jeffreys-Lindley paradox; more generally, it’s good news when philosophical arguments can be transformed into technical modeling issues

Introduction: X writes: This paper discusses the dual interpretation of the Jeffreys–Lindley paradox associated with Bayesian posterior probabilities and Bayes factors, both as a differentiation between frequentist and Bayesian statistics and as a pointer to the difficulty of using improper priors while testing. We stress the considerable impact of this paradox on the foundations of both classical and Bayesian statistics. I like this paper in that he is transforming what is often seen as a philosophical argument into a technical issue, in this case a question of priors. Certain conventional priors (the so-called spike and slab) have poor statistical properties in settings such as model comparison (in addition to not making sense as prior distributions of any realistic state of knowledge). This reminds me of the way that we nowadays think about hierarchical models. In the old days there was much thoughtful debate about exchangeability and the so-called Stein paradox that partial pooling

5 0.85879868 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

Introduction: Ryan Ickert writes: I was wondering if you’d seen this post, by a particle physicist with some degree of influence. Dr. Dorigo works at CERN and Fermilab. The penultimate paragraph is: From the above expression, the Frequentist researcher concludes that the tracker is indeed biased, and rejects the null hypothesis H0, since there is a less-than-2% probability (P’<α) that a result as the one observed could arise by chance! A Frequentist thus draws, strongly, the opposite conclusion than a Bayesian from the same set of data. How to solve the riddle? He goes on to not solve the riddle. Perhaps you can? Surely with the large sample size they have (n=10^6), the precision on the frequentist p-value is pretty good, is it not? My reply: The first comment on the site (by Anonymous [who, just to be clear, is not me; I have no idea who wrote that comment], 22 Feb 2012, 21:27pm) pretty much nails it: In setting up the Bayesian model, Dorigo assumed a silly distribution on th

6 0.85782307 1510 andrew gelman stats-2012-09-25-Incoherence of Bayesian data analysis

7 0.85191488 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

8 0.84881067 811 andrew gelman stats-2011-07-20-Kind of Bayesian

9 0.84300745 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

10 0.83943266 2208 andrew gelman stats-2014-02-12-How to think about “identifiability” in Bayesian inference?

11 0.83780378 1723 andrew gelman stats-2013-02-15-Wacky priors can work well?

12 0.82321227 1999 andrew gelman stats-2013-08-27-Bayesian model averaging or fitting a larger model

13 0.81524736 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor

14 0.81410503 1332 andrew gelman stats-2012-05-20-Problemen met het boek

15 0.80558872 2029 andrew gelman stats-2013-09-18-Understanding posterior p-values

16 0.80506778 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

17 0.79984635 2182 andrew gelman stats-2014-01-22-Spell-checking example demonstrates key aspects of Bayesian data analysis

18 0.79361624 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves

19 0.79230404 1518 andrew gelman stats-2012-10-02-Fighting a losing battle

20 0.79215544 2342 andrew gelman stats-2014-05-21-Models with constraints


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(9, 0.025), (15, 0.042), (16, 0.082), (21, 0.033), (24, 0.191), (45, 0.012), (59, 0.011), (63, 0.011), (65, 0.01), (74, 0.013), (77, 0.091), (84, 0.027), (86, 0.059), (99, 0.291)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97929478 1604 andrew gelman stats-2012-12-04-An epithet I can live with

Introduction: Here. Indeed, I’d much rather be a legend than a myth. I just want to clarify one thing. Walter Hickey writes: [Antony Unwin and Andrew Gelman] collaborated on this presentation where they take a hard look at what’s wrong with the recent trends of data visualization and infographics. The takeaway is that while there have been great leaps in visualization technology, some of the visualizations that have garnered the highest praises have actually been lacking in a number of key areas. Specifically, the pair does a takedown of the top visualizations of 2008 as decided by the popular statistics blog Flowing Data. This is a fair summary, but I want to emphasize that, although our dislike of some award-winning visualizations is central to our argument, it is only the first part of our story. As Antony and I worked more on our paper, and especially after seeing the discussions by Robert Kosara, Stephen Few, Hadley Wickham, and Paul Murrell (all to appear in Journal of Computati

2 0.97614741 1438 andrew gelman stats-2012-07-31-What is a Bayesian?

Introduction: Deborah Mayo recommended that I consider coming up with a new name for the statistical methods that I used, given that the term “Bayesian” has all sorts of associations that I dislike (as discussed, for example, in section 1 of this article). I replied that I agree on Bayesian, I never liked the term and always wanted something better, but I couldn’t think of any convenient alternative. Also, I was finding that Bayesians (even the Bayesians I disagreed with) were reading my research articles, while non-Bayesians were simply ignoring them. So I thought it was best to identify with, and communicate with, those people who were willing to engage with me. More formally, I’m happy defining “Bayesian” as “using inference from the posterior distribution, p(theta|y)”. This says nothing about where the probability distributions come from (thus, no requirement to be “subjective” or “objective”) and it says nothing about the models (thus, no requirement to use the discrete models that hav

same-blog 3 0.97587848 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes


4 0.9730196 562 andrew gelman stats-2011-02-06-Statistician cracks Toronto lottery

Introduction: Christian points me to this amusing story by Jonah Lehrer about Mohan Srivastava (perhaps the same person as R. Mohan Srivastava, coauthor of a book called Applied Geostatistics) who discovered a flaw in a scratch-off game in which he could figure out which tickets were likely to win based on partial information visible on the ticket. It appears that scratch-off lotteries elsewhere have similar flaws in their design. The obvious question is, why doesn’t the lottery create the patterns on the tickets (including which “teaser” numbers to reveal) completely at random? It shouldn’t be hard to design this so that zero information is supplied from the outside, in which case Srivastava’s trick would be impossible. So why not put down the numbers randomly? Lehrer quotes Srivastava as saying: The tickets are clearly mass-produced, which means there must be some computer program that lays down the numbers. Of course, it would be really nice if the computer could just spit out random

5 0.97065473 401 andrew gelman stats-2010-11-08-Silly old chi-square!

Introduction: Brian Mulford writes: I [Mulford] ran across this blog post and found myself questioning the relevance of the test used. I’d think Chi-Square would be inappropriate for trying to measure significance of choice in the manner presented here; irrespective of the cute hamster. Since this is a common test for marketers and website developers – I’d be interested in which techniques you might suggest? For tests of this nature, I typically measure a variety of variables (image placement, size, type, page speed, “page feel” as expressed in a factor, etc) and use LOGIT, Cluster and possibly a simple Bayesian model to determine which variables were most significant (chosen). Pearson Chi-squared may be used to express relationships between variables and outcome but I’ve typically not used it to simply judge a 0/1 choice as statistically significant or not. My reply: I like the decision-theoretic way that the blogger (Jason Cohen, according to the webpage) starts: If you wait too

6 0.96863008 2299 andrew gelman stats-2014-04-21-Stan Model of the Week: Hierarchical Modeling of Supernovas

7 0.96739358 1980 andrew gelman stats-2013-08-13-Test scores and grades predict job performance (but maybe not at Google)

8 0.96453303 1976 andrew gelman stats-2013-08-10-The birthday problem

9 0.96247411 207 andrew gelman stats-2010-08-14-Pourquoi Google search est devenu plus raisonnable?

10 0.96088731 1684 andrew gelman stats-2013-01-20-Ugly ugly ugly

11 0.95926356 1784 andrew gelman stats-2013-04-01-Wolfram on Mandelbrot

12 0.95904732 1296 andrew gelman stats-2012-05-03-Google Translate for code, and an R help-list bot

13 0.95775014 878 andrew gelman stats-2011-08-29-Infovis, infographics, and data visualization: Where I’m coming from, and where I’d like to go

14 0.95675921 2201 andrew gelman stats-2014-02-06-Bootstrap averaging: Examples where it works and where it doesn’t work

15 0.95558578 1883 andrew gelman stats-2013-06-04-Interrogating p-values

16 0.95484215 2297 andrew gelman stats-2014-04-20-Fooled by randomness

17 0.9544003 788 andrew gelman stats-2011-07-06-Early stopping and penalized likelihood

18 0.95417213 1788 andrew gelman stats-2013-04-04-When is there “hidden structure in data” to be discovered?

19 0.95408344 1792 andrew gelman stats-2013-04-07-X on JLP

20 0.9530009 1713 andrew gelman stats-2013-02-08-P-values and statistical practice