andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1586 knowledge-graph by maker-knowledge-mining

1586 andrew gelman stats-2012-11-21-Readings for a two-week segment on Bayesian modeling?


meta info for this blog

Source: html

Introduction: Michael Landy writes: I’m in Psych and Center for Neural Science and I’m teaching a doctoral course this term in methods in psychophysics (never mind the details) at the tail end of which I’m planning on at least 2 lectures on Bayesian parameter estimation and Bayesian model comparison. So far, all the readings I have are a bit too obscure and either glancing (bits of machine-learning books: Bishop, MacKay) or too low-level. The only useful reference I’ve got is an application of these methods (a methods article of mine in a Neuroscience Methods journal). The idea is to give them a decent idea of both estimation (Jeffreys priors, marginals of the posterior over the parameters) and model comparison (cross-validation, AIC, BIC, full-blown Bayesian model posterior comparisons, Bayes factor, Occam factor, blah blah blah). So: have you any suggestions for articles or chapters that might be suitable (yes, I’m aware you have an entire book that’s obviously relevant)? In the class topic (psychophysics), the data being modeled are typically choice data (binomial data), but my methods paper happens to be on data from measuring movement errors (continuous data), not that any of that matters.
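
For reference while reading the list of criteria Landy mentions, the standard textbook definitions (supplied here; they are not part of the original exchange):

    \mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}
    \mathrm{BF}_{12} = \frac{p(y \mid M_1)}{p(y \mid M_2)}, \qquad
    p(y \mid M) = \int p(y \mid \theta, M)\, p(\theta \mid M)\, d\theta

where k is the number of free parameters, n the number of observations, and \hat{L} the maximized likelihood; the Occam factor is the posterior-to-prior volume ratio through which the marginal likelihood p(y | M) automatically penalizes over-flexible models.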


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Michael Landy writes: I’m in Psych and Center for Neural Science and I’m teaching a doctoral course this term in methods in psychophysics (never mind the details) at the tail end of which I’m planning on at least 2 lectures on Bayesian parameter estimation and Bayesian model comparison. [sent-1, score-1.104]

2 So far, all the readings I have are a bit too obscure and either glancing (bits of machine-learning books: Bishop, MacKay) or too low-level. [sent-2, score-0.313]

3 The only useful reference I’ve got is an application of these methods (a methods article of mine in a Neuroscience Methods journal). [sent-3, score-0.447]

4 The idea is to give them a decent idea of both estimation (Jeffreys priors, marginals of the posterior over the parameters) and model comparison (cross-validation, AIC, BIC, full-blown Bayesian model posterior comparisons, Bayes factor, Occam factor, blah blah blah). [sent-4, score-1.379]

5 So: have you any suggestions for articles or chapters that might be suitable (yes, I’m aware you have an entire book that’s obviously relevant)? [sent-5, score-0.165]

6 In the class topic (psychophysics), the data being modeled are typically choice data (binomial data), but my methods paper happens to be on data from measuring movement errors (continuous data), not that any of that matters. [sent-6, score-1.083]

7 I recommend you (and your students) take a look at my 1995 article with Rubin in Sociological Methodology (if you go to my home page, go to published papers, and search, you’ll find that paper) for a thorough discussion of what we hate about this. [sent-8, score-0.293]

8 I also think Jeffreys priors are a waste of time. [sent-9, score-0.202]

9 I wouldn’t spend one moment on that in your course if I were you. [sent-10, score-0.073]

10 Regarding choice models, you could take a look at the section on choice models in chapter 6 of my book with Jennifer Hill. [sent-11, score-0.642]

11 I also have a paper in Technometrics, Multilevel modeling: What it can and cannot do. [sent-12, score-0.081]

12 Regarding the topic of model checking, you could take a look at chapter 6 of Bayesian Data Analysis. [sent-13, score-0.496]
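
The scores above come from a tf-idf summarizer: each sentence is scored by the total tf-idf weight of its words. A minimal sketch of that kind of scoring in Python with scikit-learn (the pipeline actually used for this page is not published, so the vectorizer settings and the sum-of-weights rule are assumptions):

    # Hypothetical reconstruction of tf-idf sentence scoring.
    # `sentences`: the blog post split into sentences (assumed input).
    from sklearn.feature_extraction.text import TfidfVectorizer

    def score_sentences(sentences):
        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(sentences)   # one row of tf-idf weights per sentence
        scores = X.sum(axis=1).A1          # summed weight as sentence importance
        # Return (sentIndex, sentScore) pairs, highest score first.
        return sorted(enumerate(scores), key=lambda p: p[1], reverse=True)

Sentences dense in rare, post-specific terms ("psychophysics", "occam", "bic") accumulate high scores, which is why sentences 1 and 4 top this list.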


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('blah', 0.306), ('psychophysics', 0.259), ('factor', 0.244), ('occam', 0.234), ('bic', 0.225), ('methods', 0.188), ('choice', 0.167), ('bayesian', 0.135), ('technometrics', 0.129), ('glancing', 0.129), ('marginals', 0.129), ('estimation', 0.122), ('priors', 0.121), ('jeffreys', 0.117), ('bayes', 0.114), ('chapter', 0.109), ('mackay', 0.109), ('bishop', 0.107), ('doctoral', 0.107), ('posterior', 0.106), ('data', 0.105), ('model', 0.102), ('readings', 0.102), ('neural', 0.102), ('psych', 0.102), ('aic', 0.102), ('look', 0.1), ('decent', 0.1), ('regarding', 0.1), ('take', 0.099), ('neuroscience', 0.097), ('tail', 0.094), ('thorough', 0.094), ('binomial', 0.091), ('suitable', 0.09), ('lectures', 0.086), ('topic', 0.086), ('bits', 0.085), ('movement', 0.085), ('sociological', 0.085), ('modeled', 0.083), ('obscure', 0.082), ('paper', 0.081), ('waste', 0.081), ('measuring', 0.078), ('chapters', 0.075), ('methodology', 0.074), ('course', 0.073), ('planning', 0.073), ('mine', 0.071)]
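
A sketch of how a top-words list like this can be produced for one post in a corpus (scikit-learn again; `corpus` and `doc_index`, the full collection of post texts and this post's position in it, are assumed rather than shown):

    from sklearn.feature_extraction.text import TfidfVectorizer

    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(corpus)              # rows = posts, columns = terms
    weights = X[doc_index].toarray().ravel()   # tf-idf vector for this post
    terms = vec.get_feature_names_out()
    # Top 50 (wordName, wordTfidf) pairs, as in the list above.
    top50 = sorted(zip(terms, weights), key=lambda t: t[1], reverse=True)[:50]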

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 1586 andrew gelman stats-2012-11-21-Readings for a two-week segment on Bayesian modeling?


2 0.26888272 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor

Introduction: In my comments on David MacKay’s 2003 book on Bayesian inference, I wrote that I hate all the Occam-factor stuff that MacKay talks about, and I linked to this quote from Radford Neal: Sometimes a simple model will outperform a more complex model . . . Nevertheless, I believe that deliberately limiting the complexity of the model is not fruitful when the problem is evidently complex. Instead, if a simple model is found that outperforms some particular complex model, the appropriate response is to define a different complex model that captures whatever aspect of the problem led to the simple model performing well. MacKay replied as follows: When you said you disagree with me on Occam factors I think what you meant was that you agree with me on them. I’ve read your post on the topic and completely agreed with you (and Radford) that we should be using models the size of a house, models that we believe in, and that anyone who thinks it is a good idea to bias the model toward

3 0.19010767 1392 andrew gelman stats-2012-06-26-Occam

Introduction: Cosma Shalizi and Larry Wasserman discuss some papers from a conference on Ockham’s Razor. I don’t have anything new to add on this so let me link to past blog entries on the topic and repost the following from 2004 : A lot has been written in statistics about “parsimony”—that is, the desire to explain phenomena using fewer parameters–but I’ve never seen any good general justification for parsimony. (I don’t count “Occam’s Razor,” or “Ockham’s Razor,” or whatever, as a justification. You gotta do better than digging up a 700-year-old quote.) Maybe it’s because I work in social science, but my feeling is: if you can approximate reality with just a few parameters, fine. If you can use more parameters to fold in more information, that’s even better. In practice, I often use simple models—because they are less effort to fit and, especially, to understand. But I don’t kid myself that they’re better than more complicated efforts! My favorite quote on this comes from Rad

4 0.17372176 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

Introduction: In response to my remarks on his online book, Think Bayes, Allen Downey wrote: I [Downey] have a question about one of your comments: My [Gelman's] main criticism with both books is that they talk a lot about inference but not so much about model building or model checking (recall the three steps of Bayesian data analysis). I think it’s ok for an introductory book to focus on inference, which of course is central to the data-analytic process—but I’d like them to at least mention that Bayesian ideas arise in model building and model checking as well. This sounds like something I agree with, and one of the things I tried to do in the book is to put modeling decisions front and center. But the word “modeling” is used in lots of ways, so I want to see if we are talking about the same thing. For example, in many chapters, I start with a simple model of the scenario, do some analysis, then check whether the model is good enough, and iterate. Here’s the discussion of modeling

5 0.15350284 1554 andrew gelman stats-2012-10-31-It not necessary that Bayesian methods conform to the likelihood principle

Introduction: Bayesian inference, conditional on the model and data, conforms to the likelihood principle. But there is more to Bayesian methods than Bayesian inference. See chapters 6 and 7 of Bayesian Data Analysis for much discussion of this point. It saddens me to see that people are still confused on this issue.

6 0.1441763 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

7 0.13796134 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

8 0.13601734 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

9 0.13243806 1611 andrew gelman stats-2012-12-07-Feedback on my Bayesian Data Analysis class at Columbia

10 0.1312246 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion

11 0.13048878 1719 andrew gelman stats-2013-02-11-Why waste time philosophizing?

12 0.12697956 2368 andrew gelman stats-2014-06-11-Bayes in the research conversation

13 0.12452625 2140 andrew gelman stats-2013-12-19-Revised evidence for statistical standards

14 0.12385375 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

15 0.12341411 1469 andrew gelman stats-2012-08-25-Ways of knowing

16 0.12329298 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

17 0.12302168 1726 andrew gelman stats-2013-02-18-What to read to catch up on multivariate statistics?

18 0.12091924 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

19 0.11690683 1948 andrew gelman stats-2013-07-21-Bayes related

20 0.11490571 1582 andrew gelman stats-2012-11-18-How to teach methods we don’t like?
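
The simValue column is consistent with cosine similarity between tf-idf vectors, with the post matched against itself ranked first (the same-blog row with simValue near 1.0). A sketch under that assumption:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # `corpus` / `doc_index` as in the sketch above (assumed).
    X = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    sims = cosine_similarity(X[doc_index], X).ravel()   # simValue for every post
    ranking = sims.argsort()[::-1]                      # ranking[0] is the post itself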


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.221), (1, 0.166), (2, -0.097), (3, 0.039), (4, -0.048), (5, 0.049), (6, -0.004), (7, 0.008), (8, 0.029), (9, 0.013), (10, 0.092), (11, 0.024), (12, -0.018), (13, 0.01), (14, 0.102), (15, -0.021), (16, -0.011), (17, 0.022), (18, -0.024), (19, -0.011), (20, 0.009), (21, 0.025), (22, 0.034), (23, -0.01), (24, -0.039), (25, -0.02), (26, -0.01), (27, 0.004), (28, 0.064), (29, -0.015), (30, -0.069), (31, -0.009), (32, 0.026), (33, 0.002), (34, 0.006), (35, 0.033), (36, -0.008), (37, -0.047), (38, 0.015), (39, 0.034), (40, -0.045), (41, 0.021), (42, -0.024), (43, 0.022), (44, 0.011), (45, -0.026), (46, 0.024), (47, 0.02), (48, -0.025), (49, -0.002)]
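
These 50 numbers are the post's coordinates in a latent semantic indexing (LSI) space; the component count is inferred from the vector's length, and the rest of this sketch (truncated SVD on tf-idf, the standard LSI construction) is an assumption:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    X = TfidfVectorizer(stop_words="english").fit_transform(corpus)  # corpus assumed
    lsi = TruncatedSVD(n_components=50, random_state=0)
    Z = lsi.fit_transform(X)        # each row: 50 topicWeight values for one post
    # Similarity in topic space, yielding the simValue ranking below.
    sims = cosine_similarity(Z[doc_index:doc_index + 1], Z).ravel()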

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97140402 1586 andrew gelman stats-2012-11-21-Readings for a two-week segment on Bayesian modeling?


2 0.82708764 427 andrew gelman stats-2010-11-23-Bayesian adaptive methods for clinical trials

Introduction: Scott Berry, Brad Carlin, Jack Lee, and Peter Muller recently came out with a book with the above title. The book packs a lot into its 280 pages and is fun to read as well (even if they do use the word “modalities” in their first paragraph, and later on they use the phrase “DIC criterion,” which upsets my tidy, logical mind). The book starts off fast on page 1 and never lets go. Clinical trials are a big part of statistics and it’s cool to see the topic taken seriously and being treated rigorously. (Here I’m not talking about empty mathematical rigor (or, should I say, “rigor”), so-called optimal designs and all that, but rather the rigor of applied statistics, mapping models to reality.) Also I have a few technical suggestions. 1. The authors fit a lot of models in Bugs, which is fine, but they go overboard on the WinBUGS thing. There’s WinBUGS, OpenBUGS, JAGS: they’re all Bugs, and they recommend running Bugs from R using the clunky BRugs interface rather than the smoother bugs(

3 0.78491551 1948 andrew gelman stats-2013-07-21-Bayes related

Introduction: Dave Decker writes: I’ve seen some Bayes related things recently that might make for interesting fodder on your blog. There are two books, teaching Bayesian analysis from a programming perspective. And also a “web application for data analysis using powerful Bayesian statistical methods.” I took a look. The first book is Think Bayes: Bayesian Statistics Made Simple, by Allen B. Downey . It’s super readable and, amazingly, has approximately zero overlap with Bayesian Data Analysis. Downey discusses lots of little problems in a conversational way. In some ways it’s like an old-style math stat textbook (although with a programming rather than mathematical flavor) in that the examples are designed for simplicity rather than realism. I like it! Our book already exists; it’s good to have something else for people to read, coming from an entirely different perspective. The second book is Probabilistic Programming and Bayesian Methods for Hackers , by Cameron Davidson-P

4 0.77276748 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor

Introduction: In my comments on David MacKay’s 2003 book on Bayesian inference, I wrote that I hate all the Occam-factor stuff that MacKay talks about, and I linked to this quote from Radford Neal: Sometimes a simple model will outperform a more complex model . . . Nevertheless, I believe that deliberately limiting the complexity of the model is not fruitful when the problem is evidently complex. Instead, if a simple model is found that outperforms some particular complex model, the appropriate response is to define a different complex model that captures whatever aspect of the problem led to the simple model performing well. MacKay replied as follows: When you said you disagree with me on Occam factors I think what you meant was that you agree with me on them. I’ve read your post on the topic and completely agreed with you (and Radford) that we should be using models the size of a house, models that we believe in, and that anyone who thinks it is a good idea to bias the model toward

5 0.7657662 2182 andrew gelman stats-2014-01-22-Spell-checking example demonstrates key aspects of Bayesian data analysis

Introduction: One of the new examples for the third edition of Bayesian Data Analysis is a spell-checking story. Here it is (just start at 2/3 down on the first page, with “Spelling correction”). I like this example—it demonstrates the Bayesian algebra, also gives a sense of the way that probability models (both “likelihood” and “prior”) are constructed from existing assumptions and data. The models aren’t just specified as a mathematical exercise, they represent some statement about reality. And the problem is close enough to our experience that we can consider ways in which the model can be criticized and improved, all in a simple example that has only three possibilities.

6 0.76271832 1571 andrew gelman stats-2012-11-09-The anti-Bayesian moment and its passing

7 0.76141298 2033 andrew gelman stats-2013-09-23-More on Bayesian methods and multilevel modeling

8 0.75566232 1469 andrew gelman stats-2012-08-25-Ways of knowing

9 0.75420552 244 andrew gelman stats-2010-08-30-Useful models, model checking, and external validation: a mini-discussion

10 0.74822986 1877 andrew gelman stats-2013-05-30-Infill asymptotics and sprawl asymptotics

11 0.73930711 964 andrew gelman stats-2011-10-19-An interweaving-transformation strategy for boosting MCMC efficiency

12 0.73487562 2273 andrew gelman stats-2014-03-29-References (with code) for Bayesian hierarchical (multilevel) modeling and structural equation modeling

13 0.73377693 690 andrew gelman stats-2011-05-01-Peter Huber’s reflections on data analysis

14 0.7320416 884 andrew gelman stats-2011-09-01-My course this fall on Bayesian Computation

15 0.72992265 2254 andrew gelman stats-2014-03-18-Those wacky anti-Bayesians used to be intimidating, but now they’re just pathetic

16 0.72943747 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

17 0.72801894 776 andrew gelman stats-2011-06-22-Deviance, DIC, AIC, cross-validation, etc

18 0.72697902 183 andrew gelman stats-2010-08-04-Bayesian models for simultaneous equation systems?

19 0.72277641 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)

20 0.71964711 1443 andrew gelman stats-2012-08-04-Bayesian Learning via Stochastic Gradient Langevin Dynamics


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(16, 0.087), (21, 0.019), (24, 0.07), (41, 0.011), (42, 0.027), (43, 0.011), (56, 0.011), (63, 0.014), (65, 0.012), (79, 0.023), (81, 0.016), (86, 0.275), (99, 0.305)]
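
The (topicId, topicWeight) pairs are a document-topic distribution from LDA; topic ids running up to 99 suggest a 100-topic model. A sketch with scikit-learn, noting that LDA is conventionally fit on raw term counts rather than tf-idf (the component count and the reporting threshold are guesses):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    counts = CountVectorizer(stop_words="english").fit_transform(corpus)  # corpus assumed
    lda = LatentDirichletAllocation(n_components=100, random_state=0)
    theta = lda.fit_transform(counts)   # each row sums to 1: topic mixture per post
    # Keep only non-negligible topics, matching the sparse list above.
    top_topics = [(t, round(w, 3)) for t, w in enumerate(theta[doc_index]) if w > 0.01]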

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98869121 873 andrew gelman stats-2011-08-26-Luck or knowledge?

Introduction: Joan Ginther has won the Texas lottery four times. First, she won $5.4 million, then a decade later, she won $2 million, then two years later $3 million, and in the summer of 2010, she hit a $10 million jackpot. The odds of this have been calculated at one in eighteen septillion, and luck like this could only come once every quadrillion years. According to Forbes, the residents of Bishop, Texas, seem to believe God was behind it all. The Texas Lottery Commission told Mr Rich that Ms Ginther must have been ‘born under a lucky star’, and that they don’t suspect foul play. Harper’s reporter Nathanial Rich recently wrote an article about Ms Ginther, which calls the validity of her ‘luck’ into question. First, he points out, Ms Ginther is a former math professor with a PhD from Stanford University specialising in statistics. More at Daily Mail. [Edited Saturday] In comments, C Ryan King points to the original article at Harper’s and Bill Jefferys to Wired.

2 0.9788276 1530 andrew gelman stats-2012-10-11-Migrating your blog from Movable Type to WordPress

Introduction: Cord Blomquist, who did a great job moving us from horrible Movable Type to nice WordPress, writes: I [Cord] wanted to share a little news with you related to the original work we did for you last year. When ReadyMadeWeb converted your Movable Type blog to WordPress, we got a lot of other requests for the same service, so we started thinking about a bigger market for such a product. After a bit of research, we started work on automating the data conversion, writing rules, and exceptions to the rules, on how Movable Type and TypePad data could be translated to WordPress. After many months of work, we’re getting ready to announce TP2WP.com, a service that converts Movable Type and TypePad export files to WordPress import files, so anyone who wants to migrate to WordPress can do so easily and without losing permalinks, comments, images, or other files. By automating our service, we’ve been able to drop the price to just $99. I recommend it (and, no, Cord is not paying m

3 0.97530591 1427 andrew gelman stats-2012-07-24-More from the sister blog

Introduction: Anthropologist Bruce Mannheim reports that a recent well-publicized study on the genetics of native Americans, which used genetic analysis to find “at least three streams of Asian gene flow,” is in fact a confirmation of a long-known fact. Mannheim writes: This three-way distinction was known linguistically since the 1920s (for example, Sapir 1921). Basically, it’s a division among the Eskimo-Aleut languages, which straddle the Bering Straits even today, the Athabaskan languages (which were discovered to be related to a small Siberian language family only within the last few years, not by Greenberg as Wade suggested), and everything else. This is not to say that the results from genetics are unimportant, but it’s good to see how it fits with other aspects of our understanding.

4 0.97451115 904 andrew gelman stats-2011-09-13-My wikipedia edit

Introduction: The other day someone mentioned my complaint about the Wikipedia article on “Bayesian inference” (see footnote 1 of this article ) and he said I should fix the Wikipedia entry myself. And so I did . I didn’t have the energy to rewrite the whole article–in particular, all of its examples involve discrete parameters, whereas the Bayesian problems I work on generally have continuous parameters, and its “mathematical foundations” section focuses on “independent identically distributed observations x” rather than data y which can have different distributions. It’s just a wacky, unbalanced article. But I altered the first few paragraphs to get rid of the stuff about the posterior probability that a model is true. I much prefer the Scholarpedia article on Bayesian statistics by David Spiegelhalter and Kenneth Rice, but I couldn’t bring myself to simply delete the Wikipedia article and replace it with the Scholarpedia content. Just to be clear: I’m not at all trying to disparage

5 0.9720493 76 andrew gelman stats-2010-06-09-Both R and Stata

Introduction: A student I’m working with writes: I was planning on getting a applied stat text as a desk reference, and for that I’m assuming you’d recommend your own book. Also, being an economics student, I was initially planning on doing my analysis in STATA, but I noticed on your blog that you use R, and apparently so does the rest of the statistics profession. Would you rather I do my programming in R this summer, or does it not matter? It doesn’t look too hard to learn, so just let me know what’s most convenient for you. My reply: Yes, I recommend my book with Jennifer Hill. Also the book by John Fox, An R and S-plus Companion to Applied Regression, is a good way to get into R. I recommend you use both Stata and R. If you’re already familiar with Stata, then stick with it–it’s a great system for working with big datasets. You can grab your data in Stata, do some basic manipulations, then save a smaller dataset to read into R (using R’s read.dta() function). Once you want to make fu

6 0.97007281 1718 andrew gelman stats-2013-02-11-Toward a framework for automatic model building

7 0.96904671 253 andrew gelman stats-2010-09-03-Gladwell vs Pinker

8 0.95849025 558 andrew gelman stats-2011-02-05-Fattening of the world and good use of the alpha channel

9 0.94717312 1547 andrew gelman stats-2012-10-25-College football, voting, and the law of large numbers

10 0.92612231 276 andrew gelman stats-2010-09-14-Don’t look at just one poll number–unless you really know what you’re doing!

11 0.92592192 1552 andrew gelman stats-2012-10-29-“Communication is a central task of statistics, and ideally a state-of-the-art data analysis can have state-of-the-art displays to match”

same-blog 12 0.92415833 1586 andrew gelman stats-2012-11-21-Readings for a two-week segment on Bayesian modeling?

13 0.92304361 1327 andrew gelman stats-2012-05-18-Comments on “A Bayesian approach to complex clinical diagnoses: a case-study in child abuse”

14 0.9212001 759 andrew gelman stats-2011-06-11-“2 level logit with 2 REs & large sample. computational nightmare – please help”

15 0.91500151 2082 andrew gelman stats-2013-10-30-Berri Gladwell Loken football update

16 0.91359818 515 andrew gelman stats-2011-01-13-The Road to a B

17 0.90787148 769 andrew gelman stats-2011-06-15-Mr. P by another name . . . is still great!

18 0.90446806 305 andrew gelman stats-2010-09-29-Decision science vs. social psychology

19 0.90155786 1278 andrew gelman stats-2012-04-23-“Any old map will do” meets “God is in every leaf of every tree”

20 0.8968811 2219 andrew gelman stats-2014-02-21-The world’s most popular languages that the Mac documentation hasn’t been translated into