andrew_gelman_stats andrew_gelman_stats-2013 andrew_gelman_stats-2013-1955 knowledge-graph by maker-knowledge-mining

1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things


meta info for this blog

Source: html

Introduction: Dan Lakeland writes: I have some questions about some basic statistical ideas and would like your opinion on them: 1) Parameters that manifestly DON’T exist: It makes good sense to me to think about Bayesian statistics as narrowing in on the value of parameters based on a model and some data. But there are cases where “the parameter” simply doesn’t make sense as an actual thing. Yet it’s not really a complete fiction, like unicorns, either; it’s some kind of “effective” thing, maybe. Here’s an example of what I mean. I did a simple toy experiment where we dropped crumpled-up balls of paper and timed their fall times (see here: http://models.street-artists.org/?s=falling+ball ). It was pretty instructive actually, and I did it to figure out how to use an ODE, in a practical way, to get a likelihood in MCMC procedures. One of the parameters in the model is the radius of the spherical ball of paper. But the ball of paper isn’t a sphere, not even approximately. There’s no single value…
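The setup Lakeland describes (an ODE whose solution feeds a likelihood for MCMC) is easy to sketch. What follows is a minimal illustration, not his actual code: the quadratic-drag law, the physical constants, the ball's mass, and the Gaussian noise level are all invented for the example, with the "effective radius" r as the single free parameter.

```python
import numpy as np
from scipy.integrate import solve_ivp

RHO_AIR = 1.2    # kg/m^3, air density (assumed)
C_D = 0.47       # drag coefficient of a smooth sphere (assumed)
G = 9.81         # m/s^2
MASS = 0.005     # kg, mass of the paper ball (assumed)

def fall_time(r, height):
    """Fall time from rest through `height` meters, quadratic drag, radius r."""
    k = 0.5 * RHO_AIR * C_D * np.pi * r**2 / MASS   # drag acceleration per v^2
    def rhs(t, y):                                  # y = [distance fallen, speed]
        return [y[1], G - k * y[1] ** 2]
    hit_ground = lambda t, y: y[0] - height         # event: distance == height
    hit_ground.terminal = True
    sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], events=hit_ground, rtol=1e-8)
    return sol.t_events[0][0]

def log_likelihood(r, times, height, sigma=0.05):
    """Gaussian measurement error (sd `sigma` seconds) around the ODE prediction."""
    t_pred = fall_time(r, height)
    return -0.5 * np.sum(((np.asarray(times) - t_pred) / sigma) ** 2)
```

Plugging this log-likelihood, plus a log-prior on r, into any off-the-shelf Metropolis sampler gives the kind of MCMC procedure the post describes.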


Summary: the most important sentences, as generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I did a simple toy experiment where we dropped crumpled up balls of paper and timed their fall times. [sent-5, score-0.697]

2 One of the parameters in the model is the radius of the spherical ball of paper. [sent-10, score-0.465]

3 In what sense does it make sense to talk about the posterior distribution of this parameter r? [sent-15, score-0.492]

4 I’m sure in social sciences it’s frequently the case that your model is obviously wrong from the start, so zeroing in on the “true” value of the parameter is in some sense meaningless, and yet it can’t be meaningless in every sense, or we wouldn’t do mathematical modeling at all! [sent-17, score-0.448]

5 2) Bayesian “Design of Experiments”: In what sense do classical design-of-experiments concepts apply to an experiment that will be analyzed in a Bayesian way, especially with multilevel models? [sent-18, score-0.522]

6 For example suppose you’re dropping a ball without air resistance, Newton’s laws say the fall time for height h will be t = sqrt(2*h/g). [sent-20, score-0.52]

7 There is no way to determine whether your clock consistently reads, say, 10% too long, or whether 1/sqrt(g) is 10% bigger than the nominal value someone told you; both will generate data where t is on average 10% bigger than the prediction. [sent-22, score-0.444]

8 On the other hand, when we put an informative prior on g, say normally distributed with about 1% error around the nominal value you were told, suddenly you can calibrate your watch much better! [sent-23, score-0.468] (A numerical sketch of this identifiability point follows the summary list.)

9 But design of experiments is all about deciding to collect certain kinds of data, and so about deciding what values certain portions of the data will take in your analysis. [sent-27, score-0.59]

10 I doubt very much that this procedure would look like classical design of experiments. [sent-30, score-0.35]

11 In your example, radius is not defined, but one can use the diameter of the smallest sphere that circumscribes the object. [sent-33, score-0.48]

12 But that doesn’t yet solve the problem, in that if you model the crumpled ball as a solid sphere of that size, you’ll get the wrong answer. [sent-34, score-0.611]

13 You could, however, include the physical diameter as data and then estimate a calibration function (actually a joint distribution) relating the physical diameter to the “diameter” parameter estimated from your air-resistance model. [sent-35, score-0.496]

14 I actually took 3 measurements for each ball, and could use those to estimate a joint distribution of measurements and “actual” r in a more rigorous way, but there *is no* actual r. [sent-51, score-0.354]

15 If it really were a perfect sphere, I would expect that after a large number of drops the posterior probability would be peaked around r*, which would be very close to the value I would get from a micrometer measurement. [sent-54, score-0.837]

16 But if I drop my single crumpled ball of paper, say, 10^6 times, I doubt very much the posterior would be more peaked than if I dropped it, say, 10^4 times. [sent-55, score-0.825]

17 The asymptotic standard deviation of r would be some value reflecting the fact that, in each drop, the r that most accurately captured the aerodynamics would be a true random variable, changing a little from experiment to experiment. [sent-56, score-0.555] (A hierarchical sketch of this point also follows the summary list.)

18 Also, drops from higher heights would have similar r values, as more of the time would be spent falling at the “terminal velocity”. [sent-57, score-0.4]

19 (In fact, the specifics of this ball-dropping situation are not very interesting to me, since it’s just a toy problem, but toy problems are good when they get at some fundamental issues in a simple-to-understand way). [sent-61, score-0.433]

20 Also, in such an analysis we MUST use informative priors (just as classical statisticians must guess at the variability that will be observed). [sent-68, score-0.378]
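Sentences 6-8 above describe a classic identifiability problem: if the observed time is t = c * sqrt(2h/g) for a clock that runs a factor c too long, the data constrain only the ratio c/sqrt(g), and an informative prior on g is what lets you calibrate the clock. Here is a minimal numerical sketch of that point using a grid posterior; the true values, heights, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
heights = np.array([1.0, 2.0, 3.0, 5.0])         # meters (invented)
c_true, g_true, sigma = 1.10, 9.81, 0.01         # clock factor, gravity, noise (s)
t_obs = c_true * np.sqrt(2 * heights / g_true) + rng.normal(0, sigma, heights.size)

# grid posterior over the clock factor c and gravity g
c_grid = np.linspace(0.9, 1.3, 400)
g_grid = np.linspace(8.0, 12.0, 400)
C, G = np.meshgrid(c_grid, g_grid, indexing="ij")
pred = C[..., None] * np.sqrt(2 * heights / G[..., None])   # shape (c, g, height)
loglik = -0.5 * np.sum(((t_obs - pred) / sigma) ** 2, axis=-1)

def posterior_sd_of_c(logprior_g):
    """Marginal posterior sd of the clock factor c under a given prior on g."""
    logpost = loglik + logprior_g
    post = np.exp(logpost - logpost.max())
    post /= post.sum()
    pc = post.sum(axis=1)                 # marginalize over g
    mean = np.sum(c_grid * pc)
    return np.sqrt(np.sum((c_grid - mean) ** 2 * pc))

flat = np.zeros_like(G)                                  # flat prior on g
informative = -0.5 * ((G - 9.81) / (0.01 * 9.81)) ** 2   # ~1% prior around nominal g

print(posterior_sd_of_c(flat))          # wide: c and g are confounded
print(posterior_sd_of_c(informative))   # much narrower: the watch gets calibrated
```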
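Sentences 14-18 make a related point: each drop has its own best-fitting effective radius, so the posterior for a single “r” stops narrowing once drop-to-drop variation dominates. A hierarchical (measurement-error) model captures this, and the same machinery extends to the calibration function of sentence 13. The sketch below uses invented numbers and a conjugate normal model with known variances to keep it short: the posterior for the mean effective radius mu narrows like 1/sqrt(n), but the predictive spread for the next drop's r never falls below tau, the real between-drop variation.

```python
import numpy as np

mu_true, tau, s = 0.035, 0.002, 0.001   # meters; all invented values
for n in (10_000, 1_000_000):
    rng = np.random.default_rng(1)
    r_i = rng.normal(mu_true, tau, n)        # each drop's own effective radius
    y = r_i + rng.normal(0.0, s, n)          # noisy per-drop estimate of r_i
    # with tau and s known and a flat prior on mu, the posterior is conjugate:
    post_mean = y.mean()
    post_sd_mu = np.sqrt((tau**2 + s**2) / n)       # narrows like 1/sqrt(n)
    pred_sd_r = np.sqrt(tau**2 + post_sd_mu**2)     # next drop's r: floor at tau
    print(f"n={n}: mu ~ {post_mean:.5f} +/- {post_sd_mu:.2e}, next-drop r sd = {pred_sd_r:.2e}")
```

Going from 10^4 to 10^6 drops shrinks the uncertainty in mu a hundredfold, but the predictive sd for the next drop's effective r barely moves, which is exactly the behavior described in sentence 16.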


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('ball', 0.286), ('diameter', 0.207), ('value', 0.207), ('sphere', 0.17), ('crumpled', 0.155), ('design', 0.153), ('balls', 0.152), ('collect', 0.123), ('bayesian', 0.122), ('posterior', 0.116), ('distribution', 0.114), ('classical', 0.109), ('informative', 0.108), ('toy', 0.107), ('narwhals', 0.103), ('radius', 0.103), ('dropped', 0.1), ('fall', 0.097), ('variability', 0.095), ('unicorns', 0.094), ('sense', 0.09), ('designs', 0.089), ('lakeland', 0.089), ('would', 0.088), ('cases', 0.086), ('experiment', 0.086), ('experiments', 0.084), ('drops', 0.082), ('bigger', 0.082), ('measurements', 0.082), ('parameter', 0.082), ('studies', 0.08), ('prior', 0.08), ('peaked', 0.08), ('values', 0.078), ('certain', 0.076), ('parameters', 0.076), ('actual', 0.076), ('achieving', 0.074), ('confounding', 0.074), ('heights', 0.074), ('museum', 0.074), ('nominal', 0.073), ('suppose', 0.07), ('awareness', 0.069), ('meaningless', 0.069), ('falling', 0.068), ('dropping', 0.067), ('vs', 0.067), ('priors', 0.066)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000007 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things


2 0.21174924 1941 andrew gelman stats-2013-07-16-Priors

Introduction: Nick Firoozye writes: While I am absolutely sympathetic to the Bayesian agenda I am often troubled by the requirement of having priors. We must have priors on the parameter of an infinite number of model we have never seen before and I find this troubling. There is a similarly troubling problem in economics of utility theory. Utility is on consumables. To be complete a consumer must assign utility to all sorts of things they never would have encountered. More recent versions of utility theory instead make consumption goods a portfolio of attributes. Cadillacs are x many units of luxury y of transport etc etc. And we can automatically have personal utilities to all these attributes. I don’t ever see parameters. Some model have few and some have hundreds. Instead, I see data. So I don’t know how to have an opinion on parameters themselves. Rather I think it far more natural to have opinions on the behavior of models. The prior predictive density is a good and sensible notion. Also

3 0.20276274 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

Introduction: Robert Bell pointed me to this post by Brad De Long on Bayesian statistics, and then I also noticed this from Noah Smith, who wrote: My impression is that although the Bayesian/Frequentist debate is interesting and intellectually fun, there’s really not much “there” there… despite being so-hip-right-now, Bayesian is not the Statistical Jesus. I’m happy to see the discussion going in this direction. Twenty-five years ago or so, when I got into this biz, there were some serious anti-Bayesian attitudes floating around in mainstream statistics. Discussions in the journals sometimes devolved into debates of the form, “Bayesians: knaves or fools?”. You’d get all sorts of free-floating skepticism about any prior distribution at all, even while people were accepting without question (and doing theory on) logistic regressions, proportional hazards models, and all sorts of strong strong models. (In the subfield of survey sampling, various prominent researchers would refuse to mode

4 0.18629555 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

Introduction: Following up on Christian’s post [link fixed] on the topic, I’d like to offer a few thoughts of my own. In BDA, we express the idea that a noninformative prior is a placeholder: you can use the noninformative prior to get the analysis started, then if your posterior distribution is less informative than you would like, or if it does not make sense, you can go back and add prior information. Same thing for the data model (the “likelihood”), for that matter: it often makes sense to start with something simple and conventional and then go from there. So, in that sense, noninformative priors are no big deal, they’re just a way to get started. Just don’t take them too seriously. Traditionally in statistics we’ve worked with the paradigm of a single highly informative dataset with only weak external information. But if the data are sparse and prior information is strong, we have to think differently. And, when you increase the dimensionality of a problem, both these things hap

5 0.17285185 1469 andrew gelman stats-2012-08-25-Ways of knowing

Introduction: In this discussion from last month, computer science student and Judea Pearl collaborator Elias Barenboim expressed an attitude that hierarchical Bayesian methods might be fine in practice but that they lack theory, that Bayesians can’t succeed in toy problems. I posted a P.S. there which might not have been noticed so I will put it here: I now realize that there is some disagreement about what constitutes a “guarantee.” In one of his comments, Barenboim writes, “the assurance we have that the result must hold as long as the assumptions in the model are correct should be regarded as a guarantee.” In that sense, yes, we have guarantees! It is fundamental to Bayesian inference that the result must hold if the assumptions in the model are correct. We have lots of that in Bayesian Data Analysis (particularly in the first four chapters but implicitly elsewhere as well), and this is also covered in the classic books by Lindley, Jaynes, and others. This sort of guarantee is indeed p

6 0.17264608 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

7 0.16834345 1092 andrew gelman stats-2011-12-29-More by Berger and me on weakly informative priors

8 0.16754998 1209 andrew gelman stats-2012-03-12-As a Bayesian I want scientists to report their data non-Bayesianly

9 0.1667936 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

10 0.16545637 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

11 0.16406512 494 andrew gelman stats-2010-12-31-Type S error rates for classical and Bayesian single and multiple comparison procedures

12 0.15666717 779 andrew gelman stats-2011-06-25-Avoiding boundary estimates using a prior distribution as regularization

13 0.15567356 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

14 0.15471178 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

15 0.15256645 1465 andrew gelman stats-2012-08-21-D. Buggin

16 0.15222536 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

17 0.15168205 2208 andrew gelman stats-2014-02-12-How to think about “identifiability” in Bayesian inference?

18 0.15163958 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability

19 0.15141788 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

20 0.14889868 1155 andrew gelman stats-2012-02-05-What is a prior distribution?


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.346), (1, 0.178), (2, 0.032), (3, -0.024), (4, -0.018), (5, -0.047), (6, 0.093), (7, 0.044), (8, -0.049), (9, -0.012), (10, -0.033), (11, -0.017), (12, 0.032), (13, -0.02), (14, -0.001), (15, -0.009), (16, 0.025), (17, -0.007), (18, 0.032), (19, 0.016), (20, -0.012), (21, 0.007), (22, -0.018), (23, 0.032), (24, -0.036), (25, 0.01), (26, -0.021), (27, 0.02), (28, 0.016), (29, 0.008), (30, -0.019), (31, -0.023), (32, -0.018), (33, 0.004), (34, 0.001), (35, 0.006), (36, -0.035), (37, -0.024), (38, 0.008), (39, 0.004), (40, 0.022), (41, -0.014), (42, -0.046), (43, 0.034), (44, -0.026), (45, -0.041), (46, 0.076), (47, 0.063), (48, 0.018), (49, 0.033)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97274166 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things


2 0.84535468 1941 andrew gelman stats-2013-07-16-Priors


3 0.83638763 804 andrew gelman stats-2011-07-15-Static sensitivity analysis

Introduction: This is one of my favorite ideas. I used it in an application but have never formally studied it or written it up as a general method. Sensitivity analysis is when you check how inferences change when you vary fit several different models or when you vary inputs within a model. Sensitivity analysis is often recommended but is typically difficult to do, what with the hassle of carrying around all these different estimates. In Bayesian inference, sensitivity analysis is associated with varying the prior distribution, which irritates me: why not consider sensitivity to the likelihood, as that’s typically just as arbitrary as the prior while having a much larger effect on the inferences. So we came up with static sensitivity analysis , which is a way to assess sensitivity to assumptions while fitting only one model. The idea is that Bayesian posterior simulation gives you a range of parameter values, and from these you can learn about sensitivity directly. The published exampl

4 0.82147413 2208 andrew gelman stats-2014-02-12-How to think about “identifiability” in Bayesian inference?

Introduction: We had some questions on the Stan list regarding identification. The topic arose because people were fitting models with improper posterior distributions, the kind of model where there’s a ridge in the likelihood and the parameters are not otherwise constrained. I tried to help by writing something on Bayesian identifiability for the Stan list. Then Ben Goodrich came along and cleaned up what I wrote. I think this might be of interest to many of you so I’ll repeat the discussion here. Here’s what I wrote: Identification is actually a tricky concept and is not so clearly defined. In the broadest sense, a Bayesian model is identified if the posterior distribution is proper. Then one can do Bayesian inference and that’s that. No need to require a finite variance or even a finite mean, all that’s needed is a finite integral of the probability distribution. That said, there are some reasons why a stronger definition can be useful: 1. Weak identification. Suppose that, wit

5 0.81973529 2129 andrew gelman stats-2013-12-10-Cross-validation and Bayesian estimation of tuning parameters

Introduction: Ilya Lipkovich writes: I read with great interest your 2008 paper [with Aleks Jakulin, Grazia Pittau, and Yu-Sung Su] on weakly informative priors for logistic regression and also followed an interesting discussion on your blog. This discussion was within Bayesian community in relation to the validity of priors. However i would like to approach it rather from a more broad perspective on predictive modeling bringing in the ideas from machine/statistical learning approach”. Actually you were the first to bring it up by mentioning in your paper “borrowing ideas from computer science” on cross-validation when comparing predictive ability of your proposed priors with other choices. However, using cross-validation for comparing method performance is not the only or primary use of CV in machine-learning. Most of machine learning methods have some “meta” or complexity parameters and use cross-validation to tune them up. For example, one of your comparison methods is BBR which actually

6 0.81702155 2180 andrew gelman stats-2014-01-21-Everything I need to know about Bayesian statistics, I learned in eight schools.

7 0.81573713 2099 andrew gelman stats-2013-11-13-“What are some situations in which the classical approach (or a naive implementation of it, based on cookbook recipes) gives worse results than a Bayesian approach, results that actually impeded the science?”

8 0.81301874 810 andrew gelman stats-2011-07-20-Adding more information can make the variance go up (depending on your model)

9 0.80175596 960 andrew gelman stats-2011-10-15-The bias-variance tradeoff

10 0.79731888 650 andrew gelman stats-2011-04-05-Monitor the efficiency of your Markov chain sampler using expected squared jumped distance!

11 0.79721189 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves

12 0.79649073 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

13 0.79475307 669 andrew gelman stats-2011-04-19-The mysterious Gamma (1.4, 0.4)

14 0.79143494 246 andrew gelman stats-2010-08-31-Somewhat Bayesian multilevel modeling

15 0.79131716 446 andrew gelman stats-2010-12-03-Is 0.05 too strict as a p-value threshold?

16 0.79050088 2342 andrew gelman stats-2014-05-21-Models with constraints

17 0.78666198 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

18 0.78640682 788 andrew gelman stats-2011-07-06-Early stopping and penalized likelihood

19 0.78485721 368 andrew gelman stats-2010-10-25-Is instrumental variables analysis particularly susceptible to Type M errors?

20 0.78427762 2140 andrew gelman stats-2013-12-19-Revised evidence for statistical standards


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(15, 0.016), (16, 0.038), (21, 0.026), (24, 0.203), (41, 0.099), (56, 0.013), (63, 0.017), (76, 0.014), (84, 0.016), (86, 0.026), (89, 0.016), (99, 0.349)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.99263227 1214 andrew gelman stats-2012-03-15-Of forecasts and graph theory and characterizing a statistical method by the information it uses

Introduction: Wayne Folta points me to “EigenBracket 2012: Using Graph Theory to Predict NCAA March Madness Basketball” and writes, “I [Folta] have got to believe that he’s simply re-invented a statistical method in a graph-ish context, but don’t know enough to judge.” I have not looked in detail at the method being presented here—I’m not much of college basketball fan—but I’d like to use this as an excuse to make one of my favorite general point, which is that a good way to characterize any statistical method is by what information it uses. The basketball ranking method here uses score differentials between teams in the past season. On the plus side, that is better than simply using one-loss records (which (a) discards score differentials and (b) discards information on who played whom). On the minus side, the method appears to be discretizing the scores (thus throwing away information on the exact score differential) and doesn’t use any external information such as external ratings. A

2 0.98019481 2311 andrew gelman stats-2014-04-29-Bayesian Uncertainty Quantification for Differential Equations!

Introduction: Mark Girolami points us to this paper and software (with Oksana Chkrebtii, David Campbell, and Ben Calderhead). They write: We develop a general methodology for the probabilistic integration of differential equations via model based updating of a joint prior measure on the space of functions and their temporal and spatial derivatives. This results in a posterior measure over functions reflecting how well they satisfy the system of differential equations and corresponding initial and boundary values. We show how this posterior measure can be naturally incorporated within the Kennedy and O’Hagan framework for uncertainty quantification and provides a fully Bayesian approach to model calibration. . . . A broad variety of examples are provided to illustrate the potential of this framework for characterising discretization uncertainty, including initial value, delay, and boundary value differential equations, as well as partial differential equations. We also demonstrate our methodolo

same-blog 3 0.97821373 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things


4 0.97513998 447 andrew gelman stats-2010-12-03-Reinventing the wheel, only more so.

Introduction: Posted by Phil Price: A blogger (can’t find his name anywhere on his blog) points to an article in the medical literature in 1994 that is…well, it’s shocking, is what it is. This is from the abstract: In Tai’s Model, the total area under a curve is computed by dividing the area under the curve between two designated values on the X-axis (abscissas) into small segments (rectangles and triangles) whose areas can be accurately calculated from their respective geometrical formulas. The total sum of these individual areas thus represents the total area under the curve. Validity of the model is established by comparing total areas obtained from this model to these same areas obtained from graphic method (less than +/- 0.4%). Other formulas widely applied by researchers under- or overestimated total area under a metabolic curve by a great margin Yes, that’s right, this guy has rediscovered the trapezoidal rule. You know, that thing most readers of this blog were taught back in 1

5 0.97473156 516 andrew gelman stats-2011-01-14-A new idea for a science core course based entirely on computer simulation

Introduction: Columbia College has for many years had a Core Curriculum, in which students read classics such as Plato (in translation) etc. A few years ago they created a Science core course. There was always some confusion about this idea: On one hand, how much would college freshmen really learn about science by reading the classic writings of Galileo, Laplace, Darwin, Einstein, etc.? And they certainly wouldn’t get much out by puzzling over the latest issues of Nature, Cell, and Physical Review Letters. On the other hand, what’s the point of having them read Dawkins, Gould, or even Brian Greene? These sorts of popularizations give you a sense of modern science (even to the extent of conveying some of the debates in these fields), but reading them might not give the same intellectual engagement that you’d get from wrestling with the Bible or Shakespeare. I have a different idea. What about structuring the entire course around computer programming and simulation? Start with a few weeks t

6 0.97367817 2224 andrew gelman stats-2014-02-25-Basketball Stats: Don’t model the probability of win, model the expected score differential.

7 0.96699578 778 andrew gelman stats-2011-06-24-New ideas on DIC from Martyn Plummer and Sumio Watanabe

8 0.96572554 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

9 0.9655956 2090 andrew gelman stats-2013-11-05-How much do we trust a new claim that early childhood stimulation raised earnings by 42%?

10 0.9655937 970 andrew gelman stats-2011-10-24-Bell Labs

11 0.96556318 1150 andrew gelman stats-2012-02-02-The inevitable problems with statistical significance and 95% intervals

12 0.96548331 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability

13 0.96525723 454 andrew gelman stats-2010-12-07-Diabetes stops at the state line?

14 0.96520317 1605 andrew gelman stats-2012-12-04-Write This Book

15 0.9649781 1941 andrew gelman stats-2013-07-16-Priors

16 0.96472102 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

17 0.96456766 1626 andrew gelman stats-2012-12-16-The lamest, grudgingest, non-retraction retraction ever

18 0.96456373 1465 andrew gelman stats-2012-08-21-D. Buggin

19 0.964531 303 andrew gelman stats-2010-09-28-“Genomics” vs. genetics

20 0.96444052 1357 andrew gelman stats-2012-06-01-Halloween-Valentine’s update