
566 andrew gelman stats-2011-02-09-The boxer, the wrestler, and the coin flip, again


meta info for this blog

Source: html

Introduction: Mike Grosskopf writes: I came across your blog the other day and noticed your paper about “The Boxer, the Wrestler, and the Coin Flip” . . . I do not understand the objection to the robust Bayesian inference for conditioning on X=Y in the problem as you describe in the paper. The paper talks about how using Robust Bayes when conditioning on X=Y “degrades our inference about the coin flip” and “has led us to the claim that we can say nothing at all about the coin flip”. Does that have to be the case, however? While conditioning on X=Y does mean that Pr(X=1|{X=Y},I) = Pr(Y=1|{X=Y},I), I don’t see why it has to mean that both have the same π-distribution, where Pr(Y=1) = π. Which type of inference is being done about Y in the problem? If you are trying to make an inference on the result of the fight between the boxer and the wrestler that has already happened, in which your friend tells you that either the boxer won and he flipped heads with a coin or the boxer lost a . . .


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Mike Grosskopf writes: I came across your blog the other day and noticed your paper about “The Boxer, the Wrestler, and the Coin Flip” . [sent-1, score-0.037]

2 I do not understand the objection to the robust Bayesian inference for conditioning on X=Y in the problem as you describe in the paper. [sent-4, score-0.608]

3 The paper talks about how using Robust Bayes when conditioning on X=Y “degrades our inference about the coin flip” and “has led us to the claim that we can say nothing at all about the coin flip”. [sent-5, score-1.105]
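
(A sketch of where that claim comes from, under the setup as I understand it: X is a fair coin flip, Pr(Y=1) = π, and X and Y are independent given π. Then Pr(X=1|{X=Y},π,I) = (π/2)/(1/2) = π, so as the robust-Bayes prior for π ranges over all distributions on [0,1], the implied Pr(X=1|{X=Y},I) = E(π|I) sweeps out the whole interval [0,1]; hence “we can say nothing at all about the coin flip.”)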

4 Does that have to be the case, however? While conditioning on X=Y does mean that Pr(X=1|{X=Y},I) = Pr(Y=1|{X=Y},I), I don’t see why it has to mean that both have the same π-distribution, where Pr(Y=1) = π. [sent-6, score-0.322]

5 Which type of inference is being done about Y in the problem? [sent-7, score-0.131]

6 The distribution of π’, defined as Pr(Y=1|{X=Y},I) = Pr(X=1|{X=Y},I), is basically a delta function at 0.5. [sent-9, score-0.252]

7 However, π’ ≠ p(π|{X=Y},I), because p(π|{X=Y},I) = p(Pr(Y=1|I)|{X=Y},I), which is basically asking how conditioning on X=Y affects the distribution of possible probabilities for the prior of Y. [sent-12, score-0.519]
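
To make the two objects concrete (a sketch, assuming a fair coin and a uniform prior on π): Pr(X=Y|π,I) = π·(1/2) + (1−π)·(1/2) = 1/2 for every π, so p(π|{X=Y},I) ∝ Pr(X=Y|π,I)·p(π|I) ∝ p(π|I), and the distribution of π is untouched by the conditioning. Meanwhile π’ = Pr(Y=1|{X=Y},I) = ∫π·p(π|{X=Y},I)dπ = E(π|I) = 1/2, a single number; that is the delta function in sentence 6.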

8 If you are trying to better understand what chance the boxer had going into the fight after conditioning on the information that X=Y, then p(π|{X=Y},I) is the relevant inference instead of π’. [sent-13, score-1.473]

9 This is where you would expect no change in uncertainty in π when conditioning on the coin flip. [sent-14, score-0.707]

10 As I laid out in the first email (though in a particularly messy and illegible format), this inference is not changed by conditioning on the coin flip in Bayesian analysis. [sent-15, score-1.069]

11 We are still just as uncertain about what the boxer’s chances were (π). [sent-16, score-0.12]
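
A minimal simulation sketch of both points (mine, not from the email; it assumes a fair coin, a uniform prior on π, and Y drawn as Bernoulli(π)):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
pi = rng.uniform(0, 1, n)      # prior draws of the boxer's chance pi
Y = rng.random(n) < pi         # fight outcome: True if the boxer wins
X = rng.random(n) < 0.5        # fair coin flip: True if heads
keep = (X == Y)                # condition on the event X = Y

print(Y[keep].mean())                   # ~0.5: pi' = Pr(Y=1|{X=Y},I)
print(pi[keep].mean(), pi[keep].std())  # ~0.5, ~0.289: matches the Uniform(0,1) prior

Conditioning throws away about half the draws but leaves the distribution of π flat, which is exactly the “no change in uncertainty” point.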

12 I would think that Bayesian analysis gives the correct answer in both cases. [sent-17, score-0.059]

13 A better way to clarify what I was thinking is to consider what happens when, instead of conditioning on the result of the coin flip (X=Y), you condition on something essentially certain, like the sun rising tomorrow (Y=A). [sent-18, score-1.218]

14 If someone presented you with the information that “the boxer won just as sure as the sun will rise tomorrow,” you would give both the same inferential status as certain (probability 1 for each), and the distribution of π″ = Pr(Y=1|{Y=A},I) would be basically a delta function at 1. [sent-19, score-1.51]

15 However, if you were doing inference on what the boxer’s chances were going into the fight, p(π|{Y=A},I), you would not be certain whether the boxer was unlikely to win and pulled off an upset or was a heavy favorite and easily followed through. [sent-20, score-2.01]

16 Your distribution for π would be updated to p(π|{Y=A},I) = 2π (from the first email). [sent-21, score-0.18]
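
For the record, that update is just Bayes’ rule with a uniform prior (my reconstruction; the “first email” itself is not shown here): conditioning on {Y=A} amounts to learning Y=1, so p(π|{Y=A},I) ∝ Pr(Y=1|π,I)·p(π|I) = π·1, which normalizes over [0,1] to 2π, with posterior mean ∫π·2π dπ = 2/3. A quick check in the same simulation style:

import numpy as np

rng = np.random.default_rng(1)
pi = rng.uniform(0, 1, 1_000_000)
won = rng.random(pi.size) < pi   # Y = 1: the boxer won
print(pi[won].mean())            # ~2/3, the mean of the 2π density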

17 All you would really be certain of is that the boxer had some chance, and you would feel that it was now more likely that he had a good chance to win instead of being the underdog. [sent-22, score-1.11]

18 This again seems to work out fine using Bayesian analysis with an uninformative prior. [sent-23, score-0.049]

19 My reply: I’m too tired to think about this, but I’ll post it and then maybe others will have some thoughts. [sent-24, score-0.042]

20 The one thing I can tell you is that it’s an old example; I came up with it in discussions with Augustine Kong back around 1988. [sent-25, score-0.037]


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('boxer', 0.713), ('coin', 0.326), ('conditioning', 0.322), ('pr', 0.155), ('flip', 0.155), ('inference', 0.131), ('wrestler', 0.118), ('delta', 0.118), ('fight', 0.115), ('certain', 0.096), ('flipped', 0.093), ('sun', 0.082), ('distribution', 0.082), ('tomorrow', 0.081), ('chances', 0.079), ('inferential', 0.077), ('rise', 0.074), ('robust', 0.067), ('chance', 0.066), ('bayesian', 0.061), ('win', 0.06), ('however', 0.06), ('would', 0.059), ('instead', 0.057), ('ip', 0.056), ('kong', 0.056), ('basically', 0.055), ('objection', 0.053), ('function', 0.052), ('heads', 0.051), ('uninformative', 0.049), ('email', 0.049), ('tails', 0.046), ('messy', 0.044), ('pulled', 0.044), ('won', 0.043), ('laid', 0.042), ('tired', 0.042), ('result', 0.042), ('uncertain', 0.041), ('upset', 0.041), ('format', 0.039), ('updated', 0.039), ('heavy', 0.038), ('condition', 0.037), ('came', 0.037), ('unlikely', 0.036), ('understand', 0.035), ('mike', 0.035), ('trying', 0.034)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 566 andrew gelman stats-2011-02-09-The boxer, the wrestler, and the coin flip, again


2 0.16305691 1692 andrew gelman stats-2013-01-25-Freakonomics Experiments

Introduction: Stephen Dubner writes : Freakonomics Experiments is a set of simple experiments about complex issues—whether to break up with your significant other, quit your job, or start a diet, just to name a few. . . . a collaboration between researchers at the University of Chicago, Freakonomics, and—we hope!—you. Steve Levitt and John List, of the University of Chicago, run the experimental and statistical side of things. Stephen Dubner, Steve Levitt, and the Freakonomics staff have given these experiments the Freakonomics twist you’re used to. Once you flip the coin, you become a member of the most important part of the collaboration, the Freakonomics Experiments team. Without your participation, we couldn’t complete any of this research. . . . You’ll choose a question that you are facing today, such as whether to quit your job or buy a house. Then you’ll provide us some background information about yourself. After that, you’ll flip the coin to find out what you should do in your situati

3 0.095133446 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

Introduction: Benedict Carey writes a follow-up article on ESP studies and Bayesian statistics. ( See here for my previous thoughts on the topic.) Everything Carey writes is fine, and he even uses an example I recommended: The statistical approach that has dominated the social sciences for almost a century is called significance testing. The idea is straightforward. A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent. This arbitrary cutoff makes sense when the effect being studied is a large one — for example, when measuring the so-called Stroop effect. This effect predicts that naming the color of a word is faster and more accurate when the word and color match (“red” in red letters) than when they do not (“red” in blue letters), and is very strong in almost everyone. “But if the true effect of what you are measuring is small,” sai

4 0.088228084 341 andrew gelman stats-2010-10-14-Confusion about continuous probability densities

Introduction: I had the following email exchange with a reader of Bayesian Data Analysis. My correspondent wrote: Exercise 1(b) involves evaluating the normal pdf at a single point. But p(Y=y|mu,sigma) = 0 (and is not simply N(y|mu,sigma)), since the normal distribution is continuous. So it seems that part (b) of the exercise is inappropriate. The solution does actually evaluate the probability as the value of the pdf at the single point, which is wrong. The probabilities should all be 0, so the answer to (b) is undefined. I replied: The pdf is the probability density function, which for a continuous distribution is defined as the derivative of the cumulative density function. The notation in BDA is rigorous but we do not spell out all the details, so I can see how confusion is possible. My correspondent: I agree that the pdf is the derivative of the cdf. But to compute P(a < Y < b) for a continuous distribution (with support in the real line) requires integrating over t

5 0.086175174 343 andrew gelman stats-2010-10-15-?

Introduction: How am I supposed to handle this sort of thing? (See below.) I just stuck it one of my email folders without responding, but then I wondered . . . what’s it all about? Is there some sort of Glengarry Glen Ross-like parallel world where down-on-their-luck Jack Lemmons of public relations world send out electronic cold calls? More than anything else, this sort of thing makes me glad I have a steady job. Here’s the (unsolicited) email, which came with the subject line “Please help a reporter do his job”: Dear Andrew, As an Editor for the Bulldog Reporter (www.bulldogreporter.com/dailydog), a media relations trade publication, my job is to help ensure that my readers have accurate info about you and send you the best quality pitches. By taking five minutes or less to answer my questions (pasted below), you’ll receive targeted PR pitches from our client base that will match your beat and interests. Any help or direction is appreciated. Here are my questions. We have you listed

6 0.074047402 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

7 0.068712234 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

8 0.067936331 2286 andrew gelman stats-2014-04-08-Understanding Simpson’s paradox using a graph

9 0.067428008 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

10 0.066894189 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

11 0.066104524 1941 andrew gelman stats-2013-07-16-Priors

12 0.063905001 2224 andrew gelman stats-2014-02-25-Basketball Stats: Don’t model the probability of win, model the expected score differential.

13 0.063430138 171 andrew gelman stats-2010-07-30-Silly baseball example illustrates a couple of key ideas they don’t usually teach you in statistics class

14 0.063283592 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves

15 0.063028172 1095 andrew gelman stats-2012-01-01-Martin and Liu: Probabilistic inference based on consistency of model with data

16 0.062454257 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

17 0.061935186 662 andrew gelman stats-2011-04-15-Bayesian statistical pragmatism

18 0.061662268 1469 andrew gelman stats-2012-08-25-Ways of knowing

19 0.060524378 559 andrew gelman stats-2011-02-06-Bidding for the kickoff

20 0.060106877 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.109), (1, 0.047), (2, -0.014), (3, 0.02), (4, -0.032), (5, -0.017), (6, 0.018), (7, 0.006), (8, 0.013), (9, -0.049), (10, -0.011), (11, 0.017), (12, 0.03), (13, -0.001), (14, 0.002), (15, 0.019), (16, 0.024), (17, 0.008), (18, 0.007), (19, 0.018), (20, -0.012), (21, 0.025), (22, 0.039), (23, -0.004), (24, 0.034), (25, 0.019), (26, 0.024), (27, 0.007), (28, 0.005), (29, 0.009), (30, 0.038), (31, -0.019), (32, -0.028), (33, 0.004), (34, 0.004), (35, -0.019), (36, 0.004), (37, -0.005), (38, -0.025), (39, 0.028), (40, 0.018), (41, -0.003), (42, -0.012), (43, -0.029), (44, 0.007), (45, -0.007), (46, 0.018), (47, 0.045), (48, 0.005), (49, -0.011)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9527393 566 andrew gelman stats-2011-02-09-The boxer, the wrestler, and the coin flip, again


2 0.73725086 1332 andrew gelman stats-2012-05-20-Problemen met het boek

Introduction: Regarding the so-called Dutch Book argument for Bayesian inference (the idea that, if your inferences do not correspond to a Bayesian posterior distribution, you can be forced to make incoherent bets and ultimately become a money pump), I wrote: I have never found this argument appealing, because a bet is a game not a decision. A bet requires 2 players, and one player has to offer the bets. I do agree that in some bounded settings (for example, betting on win place show in a horse race), I’d want my bets to be coherent; if they are incoherent (e.g., if my bets correspond to P(A|B)*P(B) not being equal to P(A,B)), then I should be able to do better by examining the incoherence. But in an “open system” (to borrow some physics jargon), I don’t think coherence is possible. There is always new information coming in, and there is always additional prior information in reserve that hasn’t entered the model.

3 0.7097739 1560 andrew gelman stats-2012-11-03-Statistical methods that work in some settings but not others

Introduction: David Hogg pointed me to this post by Larry Wasserman: 1. The Horwitz-Thompson estimator    satisfies the following condition: for every   , where   — the parameter space — is the set of all functions  . (There are practical improvements to the Horwitz-Thompson estimator that we discussed in our earlier posts but we won’t revisit those here.) 2. A Bayes estimator requires a prior   for  . In general, if   is not a function of   then (1) will not hold. . . . 3. If you let   be a function of  , (1) still, in general, does not hold. 4. If you make   a function of   in just the right way, then (1) will hold. . . . There is nothing wrong with doing this, but in our opinion this is not in the spirit of Bayesian inference. . . . 7. This example is only meant to show that Bayesian estimators do not necessarily have good frequentist properties. This should not be surprising. There is no reason why we should in general expect a Bayesian method to have a frequentist property

4 0.69179094 1133 andrew gelman stats-2012-01-21-Judea Pearl on why he is “only a half-Bayesian”

Introduction: In an article published in 2001, Pearl wrote: I [Pearl] turned Bayesian in 1971, as soon as I began reading Savage’s monograph The Foundations of Statistical Inference [Savage, 1962]. The arguments were unassailable: (i) It is plain silly to ignore what we know, (ii) It is natural and useful to cast what we know in the language of probabilities, and (iii) If our subjective probabilities are erroneous, their impact will get washed out in due time, as the number of observations increases. Thirty years later, I [Pearl] am still a devout Bayesian in the sense of (i), but I now doubt the wisdom of (ii) and I know that, in general, (iii) is false. He elaborates: The bulk of human knowledge is organized around causal, not probabilistic relationships, and the grammar of probability calculus is insufficient for capturing those relationships. Specifically, the building blocks of our scientific and everyday knowledge are elementary facts such as “mud does not cause rain” and “symptom

5 0.69057405 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

Introduction: Ryan Ickert writes: I was wondering if you’d seen this post , by a particle physicist with some degree of influence. Dr. Dorigo works at CERN and Fermilab. The penultimate paragraph is: From the above expression, the Frequentist researcher concludes that the tracker is indeed biased, and rejects the null hypothesis H0, since there is a less-than-2% probability (P’<α) that a result as the one observed could arise by chance! A Frequentist thus draws, strongly, the opposite conclusion than a Bayesian from the same set of data. How to solve the riddle? He goes on to not solve the riddle. Perhaps you can? Surely with the large sample size they have (n=10^6), the precision on the frequentist p-value is pretty good, is it not? My reply: The first comment on the site (by Anonymous [who, just to be clear, is not me; I have no idea who wrote that comment], 22 Feb 2012, 21:27pm) pretty much nails it: In setting up the Bayesian model, Dorigo assumed a silly distribution on th

6 0.68631929 248 andrew gelman stats-2010-09-01-Ratios where the numerator and denominator both change signs

7 0.67721456 1610 andrew gelman stats-2012-12-06-Yes, checking calibration of probability forecasts is part of Bayesian statistics

8 0.66957247 2293 andrew gelman stats-2014-04-16-Looking for Bayesian expertise in India, for the purpose of analysis of sarcoma trials

9 0.6665644 1438 andrew gelman stats-2012-07-31-What is a Bayesian?

10 0.66580701 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things

11 0.66556001 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

12 0.6650148 804 andrew gelman stats-2011-07-15-Static sensitivity analysis

13 0.66476846 1098 andrew gelman stats-2012-01-04-Bayesian Page Rank?

14 0.66386408 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

15 0.66332638 341 andrew gelman stats-2010-10-14-Confusion about continuous probability densities

16 0.65712523 2368 andrew gelman stats-2014-06-11-Bayes in the research conversation

17 0.65575522 1091 andrew gelman stats-2011-12-29-Bayes in astronomy

18 0.65200067 1336 andrew gelman stats-2012-05-22-Battle of the Repo Man quotes: Reid Hastie’s turn

19 0.65101916 2078 andrew gelman stats-2013-10-26-“The Bayesian approach to forensic evidence”

20 0.64676088 792 andrew gelman stats-2011-07-08-The virtues of incoherence?


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(16, 0.037), (21, 0.029), (24, 0.11), (35, 0.052), (53, 0.026), (57, 0.011), (72, 0.012), (89, 0.262), (95, 0.011), (99, 0.276)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97538495 1215 andrew gelman stats-2012-03-16-The “hot hand” and problems with hypothesis testing

Introduction: Gur Yaari writes : Anyone who has ever watched a sports competition is familiar with expressions like “on fire”, “in the zone”, “on a roll”, “momentum” and so on. But what do these expressions really mean? In 1985 when Thomas Gilovich, Robert Vallone and Amos Tversky studied this phenomenon for the first time, they defined it as: “. . . these phrases express a belief that the performance of a player during a particular period is significantly better than expected on the basis of the player’s overall record”. Their conclusion was that what people tend to perceive as a “hot hand” is essentially a cognitive illusion caused by a misperception of random sequences. Until recently there was little, if any, evidence to rule out their conclusion. Increased computing power and new data availability from various sports now provide surprising evidence of this phenomenon, thus reigniting the debate. Yaari goes on to some studies that have found time dependence in basketball, baseball, voll

2 0.9698981 1756 andrew gelman stats-2013-03-10-He said he was sorry

Introduction: Yes, it can be done : Hereby I contact you to clarify the situation that occurred with the publication of the article entitled *** which was published in Volume 11, Issue 3 of *** and I made the mistake of declaring as an author. This chapter is a plagiarism of . . . I wish to express and acknowledge that I am solely responsible for this . . . I recognize the gravity of the offense committed, since there is no justification for so doing. Therefore, and as a sign of shame and regret I feel in this situation, I will publish this letter, in order to set an example for other researchers do not engage in a similar error. No more, and to please accept my apologies, Sincerely, *** P.S. Since we’re on Retraction Watch already, I’ll point you to this unrelated story featuring a hilarious photo of a fraudster, who in this case was a grad student in psychology who faked his data and “has agreed to submit to a three-year supervisory period for any work involving funding from the

3 0.96731317 2243 andrew gelman stats-2014-03-11-The myth of the myth of the myth of the hot hand

Introduction: Phil pointed me to this paper so I thought I probably better repeat what I wrote a couple years ago: 1. The effects are certainly not zero. We are not machines, and anything that can affect our expectations (for example, our success in previous tries) should affect our performance. 2. The effects I’ve seen are small, on the order of 2 percentage points (for example, the probability of a success in some sports task might be 45% if you’re “hot” and 43% otherwise). 3. There’s a huge amount of variation, not just between but also among players. Sometimes if you succeed you will stay relaxed and focused, other times you can succeed and get overconfidence. 4. Whatever the latest results on particular sports, I can’t see anyone overturning the basic finding of Gilovich, Vallone, and Tversky that players and spectators alike will perceive the hot hand even when it does not exist and dramatically overestimate the magnitude and consistency of any hot-hand phenomenon that does exist.

4 0.96219701 459 andrew gelman stats-2010-12-09-Solve mazes by starting at the exit

Introduction: It worked on this one . Good maze designers know this trick and are careful to design multiple branches in each direction. Back when I was in junior high, I used to make huge mazes, and the basic idea was to anticipate what the solver might try to do and to make the maze difficult by postponing the point at which he would realize a path was going nowhere. For example, you might have 6 branches: one dead end, two pairs that form loops going back to the start, and one that is the correct solution. You do this from both directions and add some twists and turns, and there you are. But the maze designer aiming for the naive solver–the sap who starts from the entrance and goes toward the exit–can simplify matters by just having 6 branches: five dead ends and one winner. This sort of thing is easy to solve in the reverse direction. I’m surprised the Times didn’t do better for their special puzzle issue.

5 0.96161377 1685 andrew gelman stats-2013-01-21-Class on computational social science this semester, Fridays, 1:00-3:40pm

Introduction: Sharad Goel, Jake Hofman, and Sergei Vassilvitskii are teaching this awesome class on computational social science this semester in the applied math department at Columbia. Here’s the course info . You should take this course. These guys are amazing.

6 0.94000626 1160 andrew gelman stats-2012-02-09-Familial Linkage between Neuropsychiatric Disorders and Intellectual Interests

7 0.93942899 833 andrew gelman stats-2011-07-31-Untunable Metropolis

8 0.92984879 1708 andrew gelman stats-2013-02-05-Wouldn’t it be cool if Glenn Hubbard were consulting for Herbalife and I were on the other side?

9 0.92477578 1855 andrew gelman stats-2013-05-13-Stan!

10 0.92165315 1477 andrew gelman stats-2012-08-30-Visualizing Distributions of Covariance Matrices

11 0.90457177 407 andrew gelman stats-2010-11-11-Data Visualization vs. Statistical Graphics

same-blog 12 0.90099567 566 andrew gelman stats-2011-02-09-The boxer, the wrestler, and the coin flip, again

13 0.8832804 623 andrew gelman stats-2011-03-21-Baseball’s greatest fielders

14 0.87956864 1320 andrew gelman stats-2012-05-14-Question 4 of my final exam for Design and Analysis of Sample Surveys

15 0.87747365 1783 andrew gelman stats-2013-03-31-He’s getting ready to write a book

16 0.87507051 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

17 0.86993575 1628 andrew gelman stats-2012-12-17-Statistics in a world where nothing is random

18 0.86776245 850 andrew gelman stats-2011-08-11-Understanding how estimates change when you move to a multilevel model

19 0.86755586 231 andrew gelman stats-2010-08-24-Yet another Bayesian job opportunity

20 0.85517019 1953 andrew gelman stats-2013-07-24-Recently in the sister blog