andrew_gelman_stats andrew_gelman_stats-2011 andrew_gelman_stats-2011-979 knowledge-graph by maker-knowledge-mining

979 andrew gelman stats-2011-10-29-Bayesian inference for the parameter of a uniform distribution


meta info for this blog

Source: html

Introduction: Subhash Lele writes: I was wondering if you might know some good references to Bayesian treatment of parameter estimation for U(0,b) type distributions. I am looking for cases where the parameter is on the boundary. I would appreciate any help and advice you could provide. I am, in particular, looking for an MCMC (preferably in WinBUGS) based approach. I figured out the WinBUGS part but I am still curious about the theoretical papers, asymptotics etc. I actually can’t think of any examples! But maybe you, the readers, can. We also should think of the best way to implement this model in Stan. We like to transform to avoid hard boundary constraints, but it seems a bit tacky to do a data-based transformation (which itself would not work if there are latent variables). P.S. I actually saw Lele speak at a statistics conference around 20 years ago. There was a lively exchange between Lele and an older guy who was working on similar problems using a different method. The oth


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Subhash Lele writes: I was wondering if you might know some good references to Bayesian treatment of parameter estimation for U(0,b) type distributions. [sent-1, score-0.382]

2 I am looking for cases where the parameter is on the boundary. [sent-2, score-0.18]

3 I would appreciate any help and advice you could provide. [sent-3, score-0.108]

4 I am, in particular, looking for an MCMC (preferably in WinBUGS) based approach. [sent-4, score-0.077]

5 I figured out the WinBUGS part but I am still curious about the theoretical papers, asymptotics etc. [sent-5, score-0.292]

6 We also should think of the best way to implement this model in Stan. [sent-8, score-0.074]

7 We like to transform to avoid hard boundary constraints, but it seems a bit tacky to do a data-based transformation (which itself would not work if there are latent variables). [sent-9, score-0.485]

8 I actually saw Lele speak at a statistics conference around 20 years ago. [sent-12, score-0.372]

9 There was a lively exchange between Lele and an older guy who was working on similar problems using a different method. [sent-13, score-0.472]

10 The other guy couldn’t stand what Lele was doing and was upset at the conference organizers for not disavowing Lele’s talk. [sent-14, score-0.482]

11 In retrospect it all seems pretty silly but I imagine it was upsetting to Lele at the time. [sent-16, score-0.228]

12 I speak by analogy to my own very disturbing experience having my research loudly denounced by people who worked on similar problems and seemed to think I was a complete idiot. [sent-17, score-0.476]

13 (I’m not speaking, by the way, of the people who didn’t want to tenure me at Berkeley. [sent-18, score-0.081]

14 Oddly enough, one of them told me they all agreed I was “brilliant.” [sent-19, score-0.07]
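The sentence scores above come from a tf-idf model; a minimal sketch of one common scheme for extractive summaries (scoring each sentence by the mean tf-idf weight of its words — an assumption, since the dump does not state its exact formula):

```python
import math
import re
from collections import Counter

def tfidf_sentence_scores(sentences):
    """Score each sentence by the mean tf-idf weight of its words,
    a common way to rank sentences for extractive summaries."""
    docs = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))       # document frequency
    idf = {w: math.log(n / df[w]) for w in df}          # inverse document frequency
    scores = []
    for d in docs:
        tf = Counter(d)
        s = sum(tf[w] / len(d) * idf[w] for w in tf) if d else 0.0
        scores.append(s)
    return scores

sents = ["the parameter is on the boundary",
         "the parameter is near zero",
         "WinBUGS fits the model"]
print(tfidf_sentence_scores(sents))
```

Words that appear in every sentence (like "the") get idf 0 and contribute nothing, which is why the scores favor distinctive words such as "lele" and "winbugs" in the list below.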


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('lele', 0.785), ('winbugs', 0.201), ('conference', 0.143), ('speak', 0.123), ('asymptotics', 0.104), ('parameter', 0.103), ('upsetting', 0.101), ('organizers', 0.098), ('preferably', 0.098), ('guy', 0.096), ('transformation', 0.094), ('tacky', 0.094), ('lively', 0.089), ('disturbing', 0.088), ('transform', 0.085), ('boundary', 0.085), ('upset', 0.083), ('tenure', 0.081), ('oddly', 0.08), ('similar', 0.08), ('figured', 0.077), ('looking', 0.077), ('implement', 0.074), ('retrospect', 0.074), ('older', 0.072), ('latent', 0.072), ('constraints', 0.071), ('mcmc', 0.071), ('odd', 0.07), ('agreed', 0.07), ('exchange', 0.07), ('problems', 0.065), ('references', 0.065), ('analogy', 0.062), ('stand', 0.062), ('curious', 0.06), ('speaking', 0.059), ('complete', 0.058), ('appreciate', 0.057), ('wondering', 0.056), ('estimation', 0.056), ('avoid', 0.055), ('couldn', 0.054), ('saw', 0.054), ('silly', 0.053), ('actually', 0.052), ('advice', 0.051), ('type', 0.051), ('treatment', 0.051), ('theoretical', 0.051)]
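The (word, weight) pairs above form a sparse tf-idf vector for this post; simValue numbers like those in the list that follows are typically cosine similarities between such vectors (note that the same-blog similarity is ≈ 1). A sketch with made-up vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse {word: tfidf} vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# illustrative vectors; only the first mirrors weights from the list above
a = {'lele': 0.785, 'winbugs': 0.201, 'parameter': 0.103}
b = {'parameter': 0.30, 'prior': 0.25, 'bayesian': 0.22}
print(cosine(a, a), cosine(a, b))
```

Because the vectors share only one word ("parameter"), the cross similarity is small, much like the ~0.09 values between distinct posts below.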

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 979 andrew gelman stats-2011-10-29-Bayesian inference for the parameter of a uniform distribution


2 0.091116048 1757 andrew gelman stats-2013-03-11-My problem with the Lindley paradox

Introduction: From a couple years ago but still relevant, I think: To me, the Lindley paradox falls apart because of its noninformative prior distribution on the parameter of interest. If you really think there’s a high probability the parameter is nearly exactly zero, I don’t see the point of the model saying that you have no prior information at all on the parameter. In short: my criticism of so-called Bayesian hypothesis testing is that it’s insufficiently Bayesian. P.S. To clarify (in response to Bill’s comment below): I’m speaking of all the examples I’ve ever worked on in social and environmental science, where in some settings I can imagine a parameter being very close to zero and in other settings I can imagine a parameter taking on just about any value in a wide range, but where I’ve never seen an example where a parameter could be either right at zero or taking on any possible value. But such examples might occur in areas of application that I haven’t worked on.

3 0.089680426 1792 andrew gelman stats-2013-04-07-X on JLP

Introduction: Christian Robert writes on the Jeffreys-Lindley paradox. I have nothing to add to this beyond my recent comments : To me, the Lindley paradox falls apart because of its noninformative prior distribution on the parameter of interest. If you really think there’s a high probability the parameter is nearly exactly zero, I don’t see the point of the model saying that you have no prior information at all on the parameter. In short: my criticism of so-called Bayesian hypothesis testing is that it’s insufficiently Bayesian. To clarify, I’m speaking of all the examples I’ve ever worked on in social and environmental science, where in some settings I can imagine a parameter being very close to zero and in other settings I can imagine a parameter taking on just about any value in a wide range, but where I’ve never seen an example where a parameter could be either right at zero or taking on any possible value. But such examples might occur in areas of application that I haven’t worked on.

4 0.071217865 72 andrew gelman stats-2010-06-07-Valencia: Summer of 1991

Introduction: With the completion of the last edition of Jose Bernardo’s Valencia (Spain) conference on Bayesian statistics–I didn’t attend, but many of my friends were there–I thought I’d share my strongest memory of the Valencia conference that I attended in 1991. I contributed a poster and a discussion, both on the topic of inference from iterative simulation, but what I remember most vividly, and what bothered me the most, was how little interest there was in checking model fit. Not only had people mostly not checked the fit of their models to data, and not only did they seem uninterested in such checks, even worse was that many of these Bayesians felt that it was basically illegal to check model fit. I don’t want to get too down on Bayesians for this. Lots of non-Bayesian statisticians go around not checking their models too. With Bayes, though, model checking seems particularly important because Bayesians rely on their models so strongly, not just as a way of getting point estimates bu

5 0.068616547 788 andrew gelman stats-2011-07-06-Early stopping and penalized likelihood

Introduction: Maximum likelihood gives the best fit to the training data but in general overfits, yielding overly-noisy parameter estimates that don’t perform so well when predicting new data. A popular solution to this overfitting problem takes advantage of the iterative nature of most maximum likelihood algorithms by stopping early. In general, an iterative optimization algorithm goes from a starting point to the maximum of some objective function. If the starting point has some good properties, then early stopping can work well, keeping some of the virtues of the starting point while respecting the data. This trick can be performed the other way, too, starting with the data and then processing it to move it toward a model. That’s how the iterative proportional fitting algorithm of Deming and Stephan (1940) works to fit multivariate categorical data to known margins. In any case, the trick is to stop at the right point–not so soon that you’re ignoring the data but not so late that you en

6 0.06622161 970 andrew gelman stats-2011-10-24-Bell Labs

7 0.062986463 2070 andrew gelman stats-2013-10-20-The institution of tenure

8 0.062151626 604 andrew gelman stats-2011-03-08-More on the missing conservative psychology researchers

9 0.061316699 494 andrew gelman stats-2010-12-31-Type S error rates for classical and Bayesian single and multiple comparison procedures

10 0.060967155 427 andrew gelman stats-2010-11-23-Bayesian adaptive methods for clinical trials

11 0.059825554 254 andrew gelman stats-2010-09-04-Bayesian inference viewed as a computational approximation to classical calculations

12 0.05804633 2245 andrew gelman stats-2014-03-12-More on publishing in journals

13 0.056959763 1911 andrew gelman stats-2013-06-23-AI Stats conference on Stan etc.

14 0.05663937 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers

15 0.056161325 1877 andrew gelman stats-2013-05-30-Infill asymptotics and sprawl asymptotics

16 0.055244397 1451 andrew gelman stats-2012-08-08-Robert Kosara reviews Ed Tufte’s short course

17 0.054136828 1389 andrew gelman stats-2012-06-23-Larry Wasserman’s statistics blog

18 0.0538279 453 andrew gelman stats-2010-12-07-Biostatistics via Pragmatic and Perceptive Bayes.

19 0.053671323 241 andrew gelman stats-2010-08-29-Ethics and statistics in development research

20 0.053580105 2143 andrew gelman stats-2013-12-22-The kluges of today are the textbook solutions of tomorrow.
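Entry 5 in the list above mentions the iterative proportional fitting algorithm of Deming and Stephan (1940), which alternately rescales a table's rows and columns until both sets of margins match their targets. A minimal sketch, with an illustrative table and margins:

```python
def ipf(table, row_margins, col_margins, iters=100):
    """Iterative proportional fitting (Deming & Stephan, 1940):
    alternately rescale rows, then columns, to match target margins."""
    t = [row[:] for row in table]
    for _ in range(iters):
        for i, rm in enumerate(row_margins):        # match row totals
            s = sum(t[i])
            t[i] = [x * rm / s for x in t[i]]
        for j, cm in enumerate(col_margins):        # match column totals
            s = sum(t[i][j] for i in range(len(t)))
            for i in range(len(t)):
                t[i][j] *= cm / s
    return t

fitted = ipf([[1.0, 1.0], [1.0, 1.0]], row_margins=[30, 70], col_margins=[40, 60])
print(fitted)
```

Starting from a uniform table, the fit converges to the product of the margins (divided by the total), i.e. the independence table consistent with both constraints.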


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.114), (1, 0.013), (2, -0.031), (3, 0.02), (4, 0.001), (5, 0.006), (6, 0.029), (7, -0.009), (8, 0.009), (9, 0.016), (10, -0.01), (11, -0.014), (12, 0.015), (13, -0.016), (14, -0.003), (15, -0.007), (16, 0.004), (17, -0.02), (18, -0.004), (19, 0.01), (20, -0.012), (21, -0.005), (22, 0.01), (23, -0.003), (24, -0.011), (25, -0.008), (26, -0.044), (27, -0.017), (28, 0.001), (29, 0.004), (30, 0.026), (31, 0.025), (32, -0.003), (33, 0.004), (34, -0.019), (35, -0.053), (36, -0.002), (37, 0.03), (38, -0.019), (39, 0.027), (40, 0.003), (41, 0.002), (42, -0.005), (43, 0.048), (44, -0.037), (45, -0.02), (46, 0.03), (47, 0.031), (48, -0.017), (49, 0.027)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.94368255 979 andrew gelman stats-2011-10-29-Bayesian inference for the parameter of a uniform distribution


2 0.76532102 1639 andrew gelman stats-2012-12-26-Impersonators

Introduction: This story of a Cindy Sherman impersonator reminded me of some graffiti I saw in a bathroom of the Whitney Museum many years ago. My friend Kenny and I had gone there for the Biennial which had an exhibit featuring Keith Haring and others of the neo-taggers (or whatever they were called). The bathroom walls were all painted over by Kenny Scharf [no relation to my friend] in his characteristically irritating doodle style. On top of the ugly stylized graffiti was a Sharpie’d scrawl: “Kenny Scharf is a pretentious asshole.” I suspected this last bit was added by someone else, but maybe it was Scharf himself? Ira Glass is a bigshot and can get Cindy Sherman on the phone, but I was just some guy, all I could do was write Scharf a letter, c/o the Whitney Museum. I described the situation and asked if he was the one who had written, “Kenny Scharf is a pretentious asshole.” He did not reply.

3 0.76403272 970 andrew gelman stats-2011-10-24-Bell Labs

Introduction: Sining Chen told me they’re hiring in the statistics group at Bell Labs . I’ll do my bit for economic stimulus by announcing this job (see below). I love Bell Labs. I worked there for three summers, in a physics lab in 1985-86 under the supervision of Loren Pfeiffer, and by myself in the statistics group in 1990. I learned a lot working for Loren. He was a really smart and driven guy. His lab was a small set of rooms—in Bell Labs, everything’s in a small room, as they value the positive externality of close physical proximity of different labs, which you get by making each lab compact—and it was Loren, his assistant (a guy named Ken West who kept everything running in the lab), and three summer students: me, Gowton Achaibar, and a girl whose name I’ve forgotten. Gowton and I had a lot of fun chatting in the lab. One day I made a silly comment about Gowton’s accent—he was from Guyana and pronounced “three” as “tree”—and then I apologized and said: Hey, here I am making fun o

4 0.73779362 1597 andrew gelman stats-2012-11-29-What is expected of a consultant

Introduction: Robin Hanson writes on paid expert consulting (of the sort that I do sometime, and is common among economists and statisticians). Hanson agrees with Keith Yost, who says: Fellow consultants and associates . . . [said] fifty percent of the job is nodding your head at whatever’s being said, thirty percent of it is just sort of looking good, and the other twenty percent is raising an objection but then if you meet resistance, then dropping it. On the other side is Steven Levitt, who Hanson quotes as saying: My own experience has been that even though I know nothing about an industry, if you give me a week, and you get a bunch of really smart people to explain the industry to me, and to tell me what they do, a lot of times what I’ve learned in economics, what I’ve learned in other places can actually be really helpful in changing the way that they see the world. Perhaps unsurprisingly given my Bayesian attitudes and my preference for continuity , I’m inclined to split the d

5 0.73576331 2045 andrew gelman stats-2013-09-30-Using the aggregate of the outcome variable as a group-level predictor in a hierarchical model

Introduction: When I was a kid I took a writing class, and one of the assignments was to write a 1-to-2 page story. I can’t remember what I wrote, but I do remember the following story from one of the other kids. In its entirety: I snuck into this pay toilet and I can’t get out! In the discussion period, the kid explained that his original idea was a story explaining the character’s situation, how he got into this predicament and how he got stuck. But then he (the author) realized that the one sentence captured the whole story, there was really no need to elaborate. (To understand the above story, you have to know the following historical fact: Pay toilets in the U.S., decades ago, were not the high-security objects shown (for example) in the picture above. Rather, they were implemented via coin-operated locks on individual toilet stalls. So it really would be possible to sneak into certain pay toilets, if you were willing to crawl under the door or climb over it.) Anyway, this is

6 0.72294843 995 andrew gelman stats-2011-11-06-Statistical models and actual models

7 0.7201333 507 andrew gelman stats-2011-01-07-Small world: MIT, asymptotic behavior of differential-difference equations, Susan Assmann, subgroup analysis, multilevel modeling

8 0.71848989 1039 andrew gelman stats-2011-12-02-I just flew in from the econ seminar, and boy are my arms tired

9 0.70860726 763 andrew gelman stats-2011-06-13-Inventor of Connect Four dies at 91

10 0.70555681 594 andrew gelman stats-2011-02-28-Behavioral economics doesn’t seem to have much to say about marriage

11 0.70297366 2347 andrew gelman stats-2014-05-25-Why I decided not to be a physicist

12 0.69935459 833 andrew gelman stats-2011-07-31-Untunable Metropolis

13 0.69642925 430 andrew gelman stats-2010-11-25-The von Neumann paradox

14 0.69291461 1707 andrew gelman stats-2013-02-05-Glenn Hubbard and I were on opposite sides of a court case and I didn’t even know it!

15 0.69205159 626 andrew gelman stats-2011-03-23-Physics is hard

16 0.68909049 1882 andrew gelman stats-2013-06-03-The statistical properties of smart chains (and referral chains more generally)

17 0.6878624 835 andrew gelman stats-2011-08-02-“The sky is the limit” isn’t such a good thing

18 0.6861459 1769 andrew gelman stats-2013-03-18-Tibshirani announces new research result: A significance test for the lasso

19 0.68329954 1831 andrew gelman stats-2013-04-29-The Great Race

20 0.68165499 895 andrew gelman stats-2011-09-08-How to solve the Post Office’s problems?


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.014), (9, 0.055), (16, 0.066), (24, 0.168), (34, 0.021), (35, 0.023), (42, 0.031), (45, 0.013), (55, 0.01), (58, 0.147), (61, 0.013), (77, 0.023), (83, 0.023), (99, 0.259)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.93553793 119 andrew gelman stats-2010-06-30-Why is George Apley overrated?

Introduction: A comment by Mark Palko reminded me that, while I’m a huge Marquand fan, I think The Late George Apley is way overrated. My theory is that Marquand’s best books don’t fit into the modernist way of looking about literature, and that the gatekeepers of the 1930s and 1940s, when judging Marquand by these standards, conveniently labeled Apley as his best book because it had a form–Edith-Wharton-style satire–that they could handle. In contrast, Point of No Return and all the other classics are a mixture of seriousness and satire that left critics uneasy. Perhaps there’s a way to study this sort of thing more systematically?

same-blog 2 0.92313349 979 andrew gelman stats-2011-10-29-Bayesian inference for the parameter of a uniform distribution


3 0.91389287 1886 andrew gelman stats-2013-06-07-Robust logistic regression

Introduction: Corey Yanofsky writes: In your work, you’ve robustificated logistic regression by having the logit function saturate at, e.g., 0.01 and 0.99, instead of 0 and 1. Do you have any thoughts on a sensible setting for the saturation values? My intuition suggests that it has something to do with proportion of outliers expected in the data (assuming a reasonable model fit). It would be desirable to have them fit in the model, but my intuition is that integrability of the posterior distribution might become an issue. My reply: it should be no problem to put these saturation values in the model, I bet it would work fine in Stan if you give them uniform (0,.1) priors or something like that. Or you could just fit the robit model. And this reminds me . . . I’ve been told that when Stan’s on its optimization setting, it fits generalized linear models just about as fast as regular glm or bayesglm in R. This suggests to me that we should have some precompiled regression models in Stan,

4 0.91170782 815 andrew gelman stats-2011-07-22-Statistical inference based on the minimum description length principle

Introduction: Tom Ball writes: Here’s another query to add to the stats backlog…Minimum Description Length (MDL). I’m attaching a 2002 Psych Rev paper on same. Basically, it’s an approach to model selection that replaces goodness of fit with generalizability or complexity. Would be great to get your response to this approach. My reply: I’ve heard about the minimum description length principle for a long time but have never really understood it. So I have nothing to say! Anyone who has anything useful to say on the topic, feel free to add in the comments. The rest of you might wonder why I posted this. I just thought it would be good for you to have some sense of the boundaries of my knowledge.

5 0.91005492 574 andrew gelman stats-2011-02-14-“The best data visualizations should stand on their own”? I don’t think so.

Introduction: Jimmy pointed me to this blog by Drew Conway on word clouds. I don’t have much to say about Conway’s specifics–word clouds aren’t really my thing, but I’m glad that people are thinking about how to do them better–but I did notice one phrase of his that I’ll dispute. Conway writes The best data visualizations should stand on their own . . . I disagree. I prefer the saying, “A picture plus 1000 words is better than two pictures or 2000 words.” That is, I see a positive interaction between words and pictures or, to put it another way, diminishing returns for words or pictures on their own. I don’t have any big theory for this, but I think, when expressed as a joint value function, my idea makes sense. Also, I live by this suggestion in my own work. I typically accompany my graphs with long captions and I try to accompany my words with pictures (although I’m not doing it here, because with the software I use, it’s much easier to type more words than to find, scale, and insert i

6 0.90821725 1966 andrew gelman stats-2013-08-03-Uncertainty in parameter estimates using multilevel models

7 0.89359939 1167 andrew gelman stats-2012-02-14-Extra babies on Valentine’s Day, fewer on Halloween?

8 0.88794011 1428 andrew gelman stats-2012-07-25-The problem with realistic advice?

9 0.88499606 970 andrew gelman stats-2011-10-24-Bell Labs

10 0.88298559 103 andrew gelman stats-2010-06-22-Beach reads, Proust, and income tax

11 0.88254255 1371 andrew gelman stats-2012-06-07-Question 28 of my final exam for Design and Analysis of Sample Surveys

12 0.88208008 2305 andrew gelman stats-2014-04-25-Revised statistical standards for evidence (comments to Val Johnson’s comments on our comments on Val’s comments on p-values)

13 0.88142502 249 andrew gelman stats-2010-09-01-References on predicting elections

14 0.88128579 1176 andrew gelman stats-2012-02-19-Standardized writing styles and standardized graphing styles

15 0.88013804 252 andrew gelman stats-2010-09-02-R needs a good function to make line plots

16 0.88011748 187 andrew gelman stats-2010-08-05-Update on state size and governors’ popularity

17 0.87927759 1644 andrew gelman stats-2012-12-30-Fixed effects, followed by Bayes shrinkage?

18 0.87925673 807 andrew gelman stats-2011-07-17-Macro causality

19 0.8791582 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

20 0.87903422 560 andrew gelman stats-2011-02-06-Education and Poverty
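Entry 3 in the list above describes logistic regression robustified by letting the inverse logit saturate at 0.01 and 0.99 rather than 0 and 1; as a formula, p = ε + (1 − 2ε)·logit⁻¹(x). A sketch of that inverse link (not the post's actual code; ε = 0.01 matches the values quoted there):

```python
import math

def inv_logit(x):
    """Standard inverse logit."""
    return 1.0 / (1.0 + math.exp(-x))

def robust_inv_logit(x, eps=0.01):
    """Inverse link that saturates at eps and 1 - eps instead of 0 and 1,
    so a few gross outliers cannot drive the log-likelihood to -infinity."""
    return eps + (1.0 - 2.0 * eps) * inv_logit(x)

# even at extreme linear predictors the probability stays bounded away from 0 and 1
print(robust_inv_logit(-20.0), robust_inv_logit(20.0))
```

Because every observation has likelihood at least ε under either outcome, a single mislabeled point changes the fit only modestly, which is the robustness the excerpt is after.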