andrew_gelman_stats andrew_gelman_stats-2011 andrew_gelman_stats-2011-776 knowledge-graph by maker-knowledge-mining

776 andrew gelman stats-2011-06-22-Deviance, DIC, AIC, cross-validation, etc


meta info for this blog

Source: html

Introduction: The deviance information criterion (or DIC) is an idea of Brad Carlin and others for comparing the fits of models estimated using Bayesian simulation (for more information, see this article by Angelika van der Linde). I don’t really ever know what to make of DIC. On one hand, it seems sensible, it handles uncertainty in inferences within each model, and it does not depend on aspects of the models that don’t affect inferences within each model (unlike Bayes factors; see discussion here ). On the other hand, I don’t really have any idea what I would do with DIC in any real example. In our book we included an example of DIC–people use it and we don’t have any great alternatives–but I had to be pretty careful that the example made sense. Unlike the usual setting where we use a method and that gives us insight into a problem, here we used our insight into the problem to make sure that in this particular case the method gave a reasonable answer. One of my practical problems with D
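
For reference, DIC is typically computed from simulation output as the deviance at the posterior mean plus twice the effective number of parameters p_D. Here is a minimal sketch in Python, assuming posterior draws are already available; the names are illustrative, not from the post or any particular package:

```python
import numpy as np

def dic(log_lik_draws, log_lik_at_post_mean):
    """DIC from MCMC output (illustrative sketch).

    log_lik_draws: log p(y | theta_s) summed over observations, one
        value per posterior draw s.
    log_lik_at_post_mean: log p(y | theta_bar), evaluated at the
        posterior mean of the parameters.
    """
    d_bar = -2.0 * np.mean(log_lik_draws)   # posterior mean deviance
    d_hat = -2.0 * log_lik_at_post_mean     # deviance at posterior mean
    p_d = d_bar - d_hat                     # effective number of parameters
    return d_hat + 2.0 * p_d                # equivalently: d_bar + p_d
```

Both d_bar and the posterior mean are themselves Monte Carlo estimates, so DIC inherits their simulation noise; that is one way to see why, as noted in the summary below, DIC can remain unstable long after the chains have mixed well enough for the parameters of interest.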


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The deviance information criterion (or DIC) is an idea of Brad Carlin and others for comparing the fits of models estimated using Bayesian simulation (for more information, see this article by Angelika van der Linde). [sent-1, score-0.464]

2 On one hand, it seems sensible, it handles uncertainty in inferences within each model, and it does not depend on aspects of the models that don’t affect inferences within each model (unlike Bayes factors; see discussion here ). [sent-3, score-0.391]

3 Unlike the usual setting where we use a method and that gives us insight into a problem, here we used our insight into the problem to make sure that in this particular case the method gave a reasonable answer. [sent-6, score-0.36]

4 Long after we’ve achieved good mixing of the chains and good inference for parameters of interest and we’re ready to go on, it turns out that DIC is still unstable. [sent-8, score-0.192]

5 In the example in our book we ran for a zillion iterations to make sure the DIC was ok. [sent-9, score-0.097]

6 But I’ve always been stuck on the details, maybe because I’ve never really used either measure in any applied problem. [sent-12, score-0.15]

7 While writing this blog I came across an article by Martyn Plummer that gives a sense of the current thinking on DIC and its strengths and limitations. [sent-13, score-0.157]

8 Plummer’s paper begins: The deviance information criterion (DIC) is widely used for Bayesian model comparison, despite the lack of a clear theoretical foundation. [sent-14, score-0.407]

9 DIC is shown to be an approximation to a penalized loss function based on the deviance, with a penalty derived from a cross-validation argument. [sent-15, score-0.401]

10 This approximation is valid only when the effective number of parameters in the model is much smaller than the number of independent observations. [sent-16, score-0.193]

11 In disease mapping, a typical application of DIC, this assumption does not hold and DIC under-penalizes more complex models. [sent-17, score-0.098]

12 Another deviance-based loss function, derived from the same decision-theoretic framework, is applied to mixture models, which have previously been considered an unsuitable application for DIC. [sent-18, score-0.37]

13 Again, I’m not trying to knock DIC; I’m just trying to express my current understanding of it. [sent-19, score-0.104]
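
Sentence 10's validity condition (effective number of parameters much smaller than the number of independent observations) can be checked directly on simulation output. A toy sketch, using conjugate posterior draws in place of MCMC; all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i ~ Normal(theta, 1) with a flat prior on theta.
# The posterior is Normal(ybar, 1/n), so we can draw from it directly.
n = 100
y = rng.normal(0.5, 1.0, size=n)
draws = rng.normal(y.mean(), 1.0 / np.sqrt(n), size=4000)

def total_log_lik(thetas):
    # log p(y | theta) summed over the n observations, for each draw
    thetas = np.atleast_1d(thetas)
    sq = (y[:, None] - thetas[None, :]) ** 2
    return -0.5 * sq.sum(axis=0) - 0.5 * n * np.log(2.0 * np.pi)

d_bar = -2.0 * total_log_lik(draws).mean()
d_hat = -2.0 * total_log_lik(draws.mean())[0]
p_d = d_bar - d_hat
print(p_d, p_d / n)   # p_d is close to 1, so p_d/n = 0.01: the penalty is trustworthy here
```

In a disease-mapping model with one random effect per area, p_d grows in proportion to n and the same check fails, which is Plummer's point in sentence 11.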


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('dic', 0.833), ('deviance', 0.159), ('plummer', 0.127), ('application', 0.098), ('criterion', 0.093), ('convergence', 0.09), ('approximation', 0.086), ('derived', 0.085), ('insight', 0.08), ('loss', 0.075), ('unlike', 0.075), ('unsuitable', 0.067), ('angelika', 0.067), ('inferences', 0.066), ('linde', 0.064), ('martyn', 0.064), ('der', 0.061), ('handles', 0.061), ('function', 0.059), ('ve', 0.057), ('measure', 0.056), ('knock', 0.054), ('strengths', 0.054), ('parameters', 0.054), ('information', 0.053), ('model', 0.053), ('aic', 0.053), ('gives', 0.053), ('models', 0.051), ('penalized', 0.05), ('current', 0.05), ('method', 0.049), ('quantify', 0.049), ('iterations', 0.049), ('carlin', 0.049), ('used', 0.049), ('zillion', 0.048), ('hand', 0.048), ('brad', 0.048), ('achieved', 0.048), ('within', 0.047), ('van', 0.047), ('folk', 0.047), ('penalty', 0.046), ('mixing', 0.046), ('applied', 0.045), ('finished', 0.045), ('mapping', 0.045), ('chains', 0.044), ('anywhere', 0.044)]
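
The (word, weight) pairs above are ordinary tf-idf scores, and the simValue column in the list that follows is presumably a similarity computed on such vectors. A hypothetical reconstruction with a toy three-post corpus (the actual maker-knowledge-mining pipeline is not shown in this dump):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for the full blog archive; post 0 plays the
# role of the DIC post above.
docs = [
    "dic deviance information criterion bayesian model comparison dic",
    "dic plummer watanabe waic cross validation predictive error",
    "tenure students cheating ethics professor",
]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)                    # rows: posts, cols: vocabulary
terms = vec.get_feature_names_out()

weights = X[0].toarray().ravel()               # tf-idf weights for post 0
top = sorted(zip(terms, weights), key=lambda t: -t[1])
print([(w, round(s, 3)) for w, s in top[:5]])  # analogue of the (word, weight) list

print(cosine_similarity(X[0], X).ravel())      # one plausible simValue: cosine similarity
```

Cosine similarity of a post with itself is 1, consistent with the same-blog simValue of 0.99999988 below.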

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 776 andrew gelman stats-2011-06-22-Deviance, DIC, AIC, cross-validation, etc

2 0.5190841 778 andrew gelman stats-2011-06-24-New ideas on DIC from Martyn Plummer and Sumio Watanabe

Introduction: Martyn Plummer replied to my recent blog on DIC with information that was important enough that I thought it deserved its own blog entry. Martyn wrote: DIC has been around for 10 years now and despite being immensely popular with applied statisticians it has generated very little theoretical interest. In fact, the silence has been deafening. I [Martyn] hope my paper added some clarity. As you say, DIC is (an approximation to) a theoretical out-of-sample predictive error. When I finished the paper I was a little embarrassed to see that I had almost perfectly reconstructed the justification of AIC as approximate cross-validation measure by Stone (1977), with a Bayesian spin of course. But even this insight leaves a lot of choices open. You need to choose the right loss function and also which level of the model you want to replicate from. David Spiegelhalter and colleagues called this the “focus”. In practice the focus is limited to the lowest level of the model. You generall
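
Stone's (1977) result that Plummer mentions, that AIC approximates leave-one-out cross-validation for regular models, is easy to illustrate numerically. A rough sketch with ordinary least squares and a Gaussian likelihood (all names illustrative; this is Stone's classical setting, not Plummer's DIC derivation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a small regression: n observations, k coefficients.
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(size=n)

def fit(Xa, ya):
    # Maximum-likelihood fit: OLS coefficients and the MLE of sigma^2
    b = np.linalg.lstsq(Xa, ya, rcond=None)[0]
    return b, np.mean((ya - Xa @ b) ** 2)

def log_lik(Xa, ya, b, s2):
    r = ya - Xa @ b
    return -0.5 * np.sum(r ** 2) / s2 - 0.5 * len(ya) * np.log(2.0 * np.pi * s2)

b, s2 = fit(X, y)
aic_over_minus_2 = log_lik(X, y, b, s2) - (k + 1)  # penalty: k coefficients + sigma^2

loo = 0.0
for i in range(n):                                 # refit without observation i
    keep = np.arange(n) != i
    bi, s2i = fit(X[keep], y[keep])
    loo += log_lik(X[i:i + 1], y[i:i + 1], bi, s2i)

print(aic_over_minus_2, loo)                       # nearly equal when n >> k
```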

3 0.28915292 1975 andrew gelman stats-2013-08-09-Understanding predictive information criteria for Bayesian models

Introduction: Jessy, Aki, and I write : We review the Akaike, deviance, and Watanabe-Akaike information criteria from a Bayesian perspective, where the goal is to estimate expected out-of-sample-prediction error using a bias-corrected adjustment of within-sample error. We focus on the choices involved in setting up these measures, and we compare them in three simple examples, one theoretical and two applied. The contribution of this review is to put all these information criteria into a Bayesian predictive context and to better understand, through small examples, how these methods can apply in practice. I like this paper. It came about as a result of preparing Chapter 7 for the new BDA . I had difficulty understanding AIC, DIC, WAIC, etc., but I recognized that these methods served a need. My first plan was to just apply DIC and WAIC on a couple of simple examples (a linear regression and the 8 schools) and leave it at that. But when I did the calculations, I couldn’t understand the resu

4 0.23865055 2349 andrew gelman stats-2014-05-26-WAIC and cross-validation in Stan!

Introduction: Aki and I write : The Watanabe-Akaike information criterion (WAIC) and cross-validation are methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model. WAIC is based on the series expansion of leave-one-out cross-validation (LOO), and asymptotically they are equal. With finite data, WAIC and cross-validation address different predictive questions and thus it is useful to be able to compute both. WAIC and an importance-sampling approximated LOO can be estimated directly using the log-likelihood evaluated at the posterior simulations of the parameter values. We show how to compute WAIC, IS-LOO, K-fold cross-validation, and related diagnostic quantities in the Bayesian inference package Stan as called from R. This is important, I think. One reason the deviance information criterion (DIC) has been so popular is its implementation in Bugs. We think WAIC and cross-validation make more sense than DIC, especially from a Bayesian perspective in whic
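
Both quantities described there reduce to simple summaries of the S x N matrix of pointwise log-likelihoods evaluated at the posterior draws (in Stan, typically produced in a generated quantities block). A bare-bones sketch, without the smoothing that production implementations use to stabilize the importance weights:

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    # log_lik: (S draws) x (N observations) pointwise log-likelihood matrix
    S = log_lik.shape[0]
    lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))  # log pointwise predictive density
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))       # effective number of parameters
    return -2.0 * (lppd - p_waic)

def is_loo(log_lik):
    # Plain importance-sampling LOO: log p(y_i | y_-i) estimated by a
    # harmonic mean over draws; noisy without Pareto smoothing.
    S = log_lik.shape[0]
    elpd_i = np.log(S) - logsumexp(-log_lik, axis=0)
    return -2.0 * np.sum(elpd_i)
```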

5 0.16323702 1983 andrew gelman stats-2013-08-15-More on AIC, WAIC, etc

Introduction: Following up on our discussion from the other day, Angelika van der Linde sends along this paper from 2012 (link to journal here ). And Aki pulls out this great quote from Geisser and Eddy (1979): This discussion makes clear that in the nested case this method, as Akaike’s, is not consistent; i.e., even if $M_k$ is true, it will be rejected with probability $\alpha$ as $N\to\infty$. This point is also made by Schwarz (1978). However, from the point of view of prediction, this is of no great consequence. For large numbers of observations, a prediction based on the falsely assumed $M_k$, will not differ appreciably from one based on the true $M_k$. For example, if we assert that two normal populations have different means when in fact they have the same mean, then the use of the group mean as opposed to the grand mean for predicting a future observation results in predictors which are asymptotically equivalent and whose predictive variances are $\sigma^2[1 + (1/2n)]$ and $\si

6 0.14710924 1648 andrew gelman stats-2013-01-02-A important new survey of Bayesian predictive methods for model assessment, selection and comparison

7 0.11978365 729 andrew gelman stats-2011-05-24-Deviance as a difference

8 0.11503121 427 andrew gelman stats-2010-11-23-Bayesian adaptive methods for clinical trials

9 0.11220983 1221 andrew gelman stats-2012-03-19-Whassup with deviance having a high posterior correlation with a parameter in the model?

10 0.083776616 1841 andrew gelman stats-2013-05-04-The Folk Theorem of Statistical Computing

11 0.077775709 1469 andrew gelman stats-2012-08-25-Ways of knowing

12 0.076306365 1459 andrew gelman stats-2012-08-15-How I think about mixture models

13 0.0754527 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

14 0.073184818 1516 andrew gelman stats-2012-09-30-Computational problems with glm etc.

15 0.072659098 1392 andrew gelman stats-2012-06-26-Occam

16 0.072429344 674 andrew gelman stats-2011-04-21-Handbook of Markov Chain Monte Carlo

17 0.071932562 1527 andrew gelman stats-2012-10-10-Another reason why you can get good inferences from a bad model

18 0.07065089 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers

19 0.070197143 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

20 0.067555301 696 andrew gelman stats-2011-05-04-Whassup with glm()?


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.138), (1, 0.08), (2, -0.018), (3, 0.04), (4, 0.004), (5, 0.004), (6, 0.009), (7, -0.009), (8, 0.025), (9, -0.004), (10, 0.011), (11, -0.005), (12, -0.042), (13, -0.008), (14, -0.023), (15, 0.007), (16, 0.015), (17, -0.002), (18, -0.006), (19, 0.002), (20, 0.027), (21, -0.003), (22, 0.048), (23, -0.012), (24, 0.048), (25, 0.017), (26, -0.019), (27, 0.026), (28, 0.032), (29, 0.023), (30, -0.034), (31, 0.034), (32, 0.073), (33, -0.023), (34, 0.007), (35, -0.035), (36, 0.005), (37, -0.039), (38, 0.025), (39, -0.006), (40, -0.044), (41, -0.003), (42, -0.035), (43, 0.011), (44, 0.03), (45, -0.029), (46, 0.025), (47, -0.018), (48, -0.022), (49, 0.021)]
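
These 50 (topicId, topicWeight) pairs are the post's coordinates after latent semantic indexing, i.e., a truncated SVD of the tf-idf matrix, and the simValue entries below are presumably similarities taken in that reduced space. A hypothetical sketch on a toy corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "dic deviance information criterion bayesian model comparison",
    "dic plummer watanabe waic cross validation predictive error",
    "tenure students cheating ethics professor",
]
X = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0)  # the dump above uses 50 components
Z = lsi.fit_transform(X)                    # each row: a post's topic-weight vector
print(Z[0])                                 # analogue of the (topicId, topicWeight) list
print(cosine_similarity(Z[:1], Z).ravel())  # one plausible simValue computation
```

Note that the same-blog simValue in this section is 0.91 rather than 1, so the dump's similarity is evidently not a plain cosine on these coordinates; the sketch shows the general shape of such a pipeline, not its exact scoring.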

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9145087 776 andrew gelman stats-2011-06-22-Deviance, DIC, AIC, cross-validation, etc

2 0.87571979 778 andrew gelman stats-2011-06-24-New ideas on DIC from Martyn Plummer and Sumio Watanabe

3 0.79791552 1975 andrew gelman stats-2013-08-09-Understanding predictive information criteria for Bayesian models

4 0.77446944 2311 andrew gelman stats-2014-04-29-Bayesian Uncertainty Quantification for Differential Equations!

Introduction: Mark Girolami points us to this paper and software (with Oksana Chkrebtii, David Campbell, and Ben Calderhead). They write: We develop a general methodology for the probabilistic integration of differential equations via model based updating of a joint prior measure on the space of functions and their temporal and spatial derivatives. This results in a posterior measure over functions reflecting how well they satisfy the system of differential equations and corresponding initial and boundary values. We show how this posterior measure can be naturally incorporated within the Kennedy and O’Hagan framework for uncertainty quantification and provides a fully Bayesian approach to model calibration. . . . A broad variety of examples are provided to illustrate the potential of this framework for characterising discretization uncertainty, including initial value, delay, and boundary value differential equations, as well as partial differential equations. We also demonstrate our methodolo

5 0.77298498 1648 andrew gelman stats-2013-01-02-A important new survey of Bayesian predictive methods for model assessment, selection and comparison

Introduction: Aki Vehtari and Janne Ojanen just published a long paper that begins: To date, several methods exist in the statistical literature for model assessment, which purport themselves specifically as Bayesian predictive methods. The decision theoretic assumptions on which these methods are based are not always clearly stated in the original articles, however. The aim of this survey is to provide a unified review of Bayesian predictive model assessment and selection methods, and of methods closely related to them. We review the various assumptions that are made in this context and discuss the connections between different approaches, with an emphasis on how each method approximates the expected utility of using a Bayesian model for the purpose of predicting future data. AIC (which Akaike called “An Information Criterion”) is the starting point for all these methods. More recently, Watanabe came up with WAIC (which he called the “Widely Available Information Criterion”). In between t

6 0.74740338 1374 andrew gelman stats-2012-06-11-Convergence Monitoring for Non-Identifiable and Non-Parametric Models

7 0.7201969 674 andrew gelman stats-2011-04-21-Handbook of Markov Chain Monte Carlo

8 0.71615976 1983 andrew gelman stats-2013-08-15-More on AIC, WAIC, etc

9 0.69413275 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor

10 0.67937011 964 andrew gelman stats-2011-10-19-An interweaving-transformation strategy for boosting MCMC efficiency

11 0.67850786 1739 andrew gelman stats-2013-02-26-An AI can build and try out statistical models using an open-ended generative grammar

12 0.67149019 2349 andrew gelman stats-2014-05-26-WAIC and cross-validation in Stan!

13 0.66206557 1962 andrew gelman stats-2013-07-30-The Roy causal model?

14 0.65764058 1422 andrew gelman stats-2012-07-20-Likelihood thresholds and decisions

15 0.65753996 1221 andrew gelman stats-2012-03-19-Whassup with deviance having a high posterior correlation with a parameter in the model?

16 0.65733594 1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!

17 0.64135051 427 andrew gelman stats-2010-11-23-Bayesian adaptive methods for clinical trials

18 0.6407789 810 andrew gelman stats-2011-07-20-Adding more information can make the variance go up (depending on your model)

19 0.6397146 1406 andrew gelman stats-2012-07-05-Xiao-Li Meng and Xianchao Xie rethink asymptotics

20 0.63853824 984 andrew gelman stats-2011-11-01-David MacKay sez . . . 12??


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(5, 0.032), (9, 0.011), (15, 0.039), (16, 0.041), (21, 0.045), (24, 0.114), (42, 0.013), (56, 0.025), (61, 0.131), (84, 0.023), (86, 0.078), (96, 0.023), (99, 0.261)]
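
The lda section is analogous, except the per-post vector is now a topic distribution from latent Dirichlet allocation fit to raw term counts; the 13 weights shown sum to about 0.84, so apparently only topics above some cutoff make it into the printed list. A hypothetical sketch:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "dic deviance information criterion bayesian model comparison",
    "dic plummer watanabe waic cross validation predictive error",
    "tenure students cheating ethics professor",
]
X = CountVectorizer().fit_transform(docs)   # LDA is fit to raw counts, not tf-idf
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)                # rows: per-post topic proportions (sum to 1)
print(theta[0])                             # dense analogue of the sparse list above
```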

similar blogs list:

simIndex simValue blogId blogTitle

1 0.95892459 1028 andrew gelman stats-2011-11-26-Tenure lets you handle students who cheat

Introduction: The other day, a friend of mine who is an untenured professor (not in statistics or political science) was telling me about a class where many of the students seemed to be resubmitting papers that they had already written for previous classes. (The supposition was based on internal evidence of the topics of the submitted papers.) It would be possible to check this and then kick the cheating students out of the program—but why do it? It would be a lot of work, also some of the students who are caught might complain, then word would get around that my friend is a troublemaker. And nobody likes a troublemaker. Once my friend has tenure it would be possible to do the right thing. But . . . here’s the hitch: most college instructors do not have tenure, and one result, I suspect, is a decline in ethical standards. This is something I hadn’t thought of in our earlier discussion of job security for teachers: tenure gives you the freedom to kick out cheating students.

same-blog 2 0.95272535 776 andrew gelman stats-2011-06-22-Deviance, DIC, AIC, cross-validation, etc

3 0.95236242 827 andrew gelman stats-2011-07-28-Amusing case of self-defeating science writing

Introduction: We’re all familiar with the gee-whiz style of science and technology writing in which hardly a day dawns without a cure for cancer, or a new pollution-free energy source, or some other amazing breakthrough. We don’t always get the privilege of seeing such reporting shot down the moment it hits the presses. Here’s journalist Matthew Philips: What does it take for an idea to spread from one to many? For a minority opinion to become the majority belief? According to a new study by scientists at the Rensselaer Polytechnic Institute, the answer is 10%. Once 10% of a population is committed to an idea, it’s inevitable that it will eventually become the prevailing opinion of the entire group. The key is to remain committed. . . . The research actually validates the entrenched strategy of the handful of House Republicans threatening to sink John Boehner‘s budget proposal. Turns out if you’re in the minority, you have less of an incentive to compromise than the majority does. Because if

4 0.94846398 1370 andrew gelman stats-2012-06-07-Duncan Watts and the Titanic

Introduction: Daniel Mendelsohn recently asked , “Why do we love the Titanic?”, seeking to understand how it has happened that: It may not be true that ‘the three most written-about subjects of all time are Jesus, the Civil War, and the Titanic,’ as one historian has put it, but it’s not much of an exaggeration. . . . The inexhaustible interest suggests that the Titanic’s story taps a vein much deeper than the morbid fascination that has attached to other disasters. The explosion of the Hindenburg, for instance, and even the torpedoing, just three years after the Titanic sank, of the Lusitania, another great liner whose passenger list boasted the rich and the famous, were calamities that shocked the world but have failed to generate an obsessive preoccupation. . . . If the Titanic has gripped our imagination so forcefully for the past century, it must be because of something bigger than any fact of social or political or cultural history. To get to the bottom of why we can’t forget it, yo

5 0.94591928 1975 andrew gelman stats-2013-08-09-Understanding predictive information criteria for Bayesian models

6 0.94324017 2349 andrew gelman stats-2014-05-26-WAIC and cross-validation in Stan!

7 0.94302362 16 andrew gelman stats-2010-05-04-Burgess on Kipling

8 0.93962198 714 andrew gelman stats-2011-05-16-NYT Labs releases Openpaths, a utility for saving your iphone data

9 0.93521583 1662 andrew gelman stats-2013-01-09-The difference between “significant” and “non-significant” is not itself statistically significant

10 0.93421721 1558 andrew gelman stats-2012-11-02-Not so fast on levees and seawalls for NY harbor?

11 0.93009478 21 andrew gelman stats-2010-05-07-Environmentally induced cancer “grossly underestimated”? Doubtful.

12 0.92872953 1739 andrew gelman stats-2013-02-26-An AI can build and try out statistical models using an open-ended generative grammar

13 0.92310667 1714 andrew gelman stats-2013-02-09-Partial least squares path analysis

14 0.92055261 2156 andrew gelman stats-2014-01-01-“Though They May Be Unaware, Newlyweds Implicitly Know Whether Their Marriage Will Be Satisfying”

15 0.92025626 9 andrew gelman stats-2010-04-28-But it all goes to pay for gas, car insurance, and tolls on the turnpike

16 0.91811013 561 andrew gelman stats-2011-02-06-Poverty, educational performance – and can be done about it

17 0.9129765 778 andrew gelman stats-2011-06-24-New ideas on DIC from Martyn Plummer and Sumio Watanabe

18 0.9080714 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

19 0.90467829 2033 andrew gelman stats-2013-09-23-More on Bayesian methods and multilevel modeling

20 0.90277624 1983 andrew gelman stats-2013-08-15-More on AIC, WAIC, etc