andrew_gelman_stats andrew_gelman_stats-2014 andrew_gelman_stats-2014-2314 knowledge-graph by maker-knowledge-mining

2314 andrew gelman stats-2014-05-01-Heller, Heller, and Gorfine on univariate and multivariate information measures


meta info for this blog

Source: html

Introduction: Malka Gorfine writes: We noticed that the important topic of association measures and tests came up again in your blog, and we have a few comments in this regard. It is useful to distinguish between the univariate and multivariate methods. A consistent multivariate method can recognise dependence between two vectors of random variables, while a univariate method can only loop over pairs of components and check for dependency between them. There are very few consistent multivariate methods. To the best of our knowledge there are three practical methods: 1) HSIC by Gretton et al. (http://www.gatsby.ucl.ac.uk/~gretton/papers/GreBouSmoSch05.pdf) 2) dcov by Szekely et al. (http://projecteuclid.org/euclid.aoas/1267453933) 3) A method we introduced in Heller et al. (Biometrika, 2013, 503-510, http://biomet.oxfordjournals.org/content/early/2012/12/04/biomet.ass070.full.pdf+html, and an R package, HHG, is available as well http://cran.r-project.org/web/packages/HHG/index.html). A
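
As a concrete illustration of the dcov method listed above, here is a minimal NumPy sketch of the sample (squared) distance covariance, the V-statistic of Szekely et al. built from double-centered pairwise distance matrices. The helper names (`dcov_sq`, `_double_centered_dists`) are made up for this sketch; this is not the authors' packaged implementation. Because it operates on whole vectors, it can register joint dependence that a loop over component pairs would miss.

```python
import numpy as np

def _double_centered_dists(z):
    # Pairwise Euclidean distance matrix, double-centered:
    # subtract row means and column means, add back the grand mean.
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def dcov_sq(x, y):
    # Sample squared distance covariance: the mean elementwise product
    # of the two double-centered distance matrices. In the population it
    # is zero exactly when the vectors are independent, which is what
    # makes the associated test consistent.
    a = _double_centered_dists(np.atleast_2d(x.T).T)  # accepts (n,) or (n, p)
    b = _double_centered_dists(np.atleast_2d(y.T).T)
    return float((a * b).mean())
```

Note the n-by-n distance matrices make this O(n^2) in memory and time, which is fine for a sketch but not for large samples.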


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Malka Gorfine writes: We noticed that the important topic of association measures and tests came up again in your blog, and we have a few comments in this regard. [sent-1, score-0.051]

2 It is useful to distinguish between the univariate and multivariate methods. [sent-2, score-0.42]

3 A consistent multivariate method can recognise dependence between two vectors of random variables, while a univariate method can only loop over pairs of components and check for dependency between them. [sent-3, score-1.217]

4 There are very few consistent multivariate methods. [sent-4, score-0.358]

5 To the best of our knowledge there are three practical methods: 1) HSIC by Gretton et al. [sent-5, score-0.195]

6 aoas/1267453933) 3) A method we introduced in Heller et al (Biometrika, 2013, 503—510, http://biomet. [sent-14, score-0.414]

7 pdf+html, and an R package, HHG, is available as well http://cran. [sent-19, score-0.062]

8 As to univariate methods, there are many consistent methods, and some of them are: 1) Hoeffding (http://www. [sent-27, score-0.414]

9 2) Various methods based on mutual information estimation. [sent-30, score-0.3]

10 3) Any of the multivariate methods mentioned above. [sent-31, score-0.361]

11 4) A new class of methods we recently developed and currently available at http://arxiv. [sent-32, score-0.241]

12 1559 Regarding MIC, we fully agree with the criticism of Professor Kinney that “there is no good reason to use MIC”. [sent-34, score-0.051]

13 We would also like to add that since MIC requires exponential time to calculate, what actually is used is an approximation. [sent-35, score-0.12]

14 However, this approximation might not be consistent even in the limited cases for which MIC was proven to be consistent. [sent-36, score-0.229]

15 Therefore, MIC is not on the list above of consistent univariate methods. [sent-37, score-0.414]

16 Furthermore, in multiple independent power analyses MIC has been found to have lower power than other methods (Simon and Tibshirani, http://arxiv. [sent-38, score-0.307]

17 Regarding equitability, we again concur with Kinney and Atwal that contrary to its claim, MIC is not equitable and mutual information is an equitable measure (in the sense defined by Kinney and Atwal). [sent-50, score-0.686]

18 However, we agree with Professor Gelman (if we understood him correctly) that being equitable is not necessarily a good thing and therefore this does not mean that MI should be the only method used to test dependence (especially as it is hard to estimate). [sent-51, score-0.566]

19 In fact, perhaps bias towards “simpler” relationships is a good thing. [sent-52, score-0.101]

20 On behalf of Ruth Heller, Yair Heller and Malka Gorfine I have nothing to add here. [sent-54, score-0.118]
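
On the mutual-information point in sentences 9 and 18 above: the simplest univariate route is the plug-in estimate from a 2-d histogram. The sketch below (the name `mi_hist` and the bin count are choices made here, not anything from the post) shows why MI is "hard to estimate": the bin count is a tuning parameter, and the plug-in estimator is biased upward.

```python
import numpy as np

def mi_hist(x, y, bins=10):
    # Plug-in mutual information (in nats) from a 2-d histogram.
    # The bin count is a tuning choice, and the plug-in estimate is
    # biased upward for finite samples; this is only a sketch.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                          # empirical joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # skip empty cells (0 log 0 = 0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Being a KL divergence of the empirical joint from the product of its marginals, the estimate is always nonnegative, so even independent data yields a small positive value; that upward bias is one reason a biased-but-stable measure can be preferable in practice.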


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('mic', 0.455), ('http', 0.347), ('univariate', 0.238), ('equitable', 0.218), ('kinney', 0.208), ('gorfine', 0.201), ('heller', 0.201), ('et', 0.195), ('multivariate', 0.182), ('methods', 0.179), ('consistent', 0.176), ('malka', 0.154), ('atwal', 0.139), ('mutual', 0.121), ('method', 0.112), ('al', 0.107), ('dependence', 0.107), ('simpler', 0.093), ('therefore', 0.078), ('concur', 0.077), ('hhg', 0.077), ('biometrika', 0.073), ('ruth', 0.069), ('equitability', 0.069), ('professor', 0.066), ('dependency', 0.065), ('power', 0.064), ('html', 0.063), ('vectors', 0.063), ('available', 0.062), ('mi', 0.062), ('loop', 0.061), ('exponential', 0.061), ('behalf', 0.059), ('annals', 0.059), ('add', 0.059), ('regarding', 0.059), ('tibshirani', 0.058), ('simon', 0.056), ('furthermore', 0.054), ('proven', 0.053), ('contrary', 0.052), ('however', 0.052), ('yair', 0.051), ('topic', 0.051), ('pairs', 0.051), ('good', 0.051), ('towards', 0.05), ('components', 0.05), ('calculate', 0.05)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 2314 andrew gelman stats-2014-05-01-Heller, Heller, and Gorfine on univariate and multivariate information measures

2 0.50384021 1230 andrew gelman stats-2012-03-26-Further thoughts on nonparametric correlation measures

Introduction: Malka Gorfine, Ruth Heller, and Yair Heller write a comment on the paper of Reshef et al. that we discussed a few months ago. Just to remind you what’s going on here, here’s my quick summary from December: Reshef et al. propose a new nonlinear R-squared-like measure. Unlike R-squared, this new method depends on a tuning parameter that controls the level of discretization, in a “How long is the coast of Britain” sort of way. The dependence on scale is inevitable for such a general method. Just consider: if you sample 1000 points from the unit bivariate normal distribution, (x,y) ~ N(0,I), you’ll be able to fit them perfectly by a 999-degree polynomial fit to the data. So the scale of the fit matters. The clever idea of the paper is that, instead of going for an absolute measure (which, as we’ve seen, will be scale-dependent), they focus on the problem of summarizing the grid of pairwise dependences in a large set of variables. As they put it: “Imagine a data set with hundreds
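
The interpolation claim in that summary is easy to check numerically. This sketch uses 10 points rather than the post's 1000, since a degree-999 fit in a monomial basis is the same algebra but numerically hopeless:

```python
import numpy as np

# n points with distinct x's are interpolated exactly by a polynomial
# of degree n - 1, so the "fit" is perfect regardless of the data.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 10)
y = rng.normal(size=10)                    # arbitrary noise to "explain"

coef = np.polyfit(x, y, deg=len(x) - 1)    # square Vandermonde solve
max_resid = np.max(np.abs(y - np.polyval(coef, x)))
assert max_resid < 1e-8                    # residuals vanish: a "perfect" fit
```

Zero residual on pure noise is exactly the point of the quoted passage: without a scale or complexity constraint, a flexible enough fit says nothing about dependence.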

3 0.49725053 2324 andrew gelman stats-2014-05-07-Once more on nonparametric measures of mutual information

Introduction: Ben Murell writes: Our reply to Kinney and Atwal has come out (http://www.pnas.org/content/early/2014/04/29/1403623111.full.pdf) along with their response (http://www.pnas.org/content/early/2014/04/29/1404661111.full.pdf). I feel like they somewhat missed the point. If you’re still interested in this line of discussion, feel free to post, and maybe the Murrells and Kinney can bash it out in your comments! Background: Too many MC’s not enough MIC’s, or What principles should govern attempts to summarize bivariate associations in large multivariate datasets? Heller, Heller, and Gorfine on univariate and multivariate information measures Kinney and Atwal on the maximal information coefficient Mr. Pearson, meet Mr. Mandelbrot: Detecting Novel Associations in Large Data Sets Gorfine, Heller, Heller, Simon, and Tibshirani don’t like MIC The fun thing is that all these people are sending me their papers, and I’m enough of an outsider in this field that each of the

4 0.36298341 1706 andrew gelman stats-2013-02-04-Too many MC’s not enough MIC’s, or What principles should govern attempts to summarize bivariate associations in large multivariate datasets?

Introduction: Justin Kinney writes: Since your blog has discussed the “maximal information coefficient” (MIC) of Reshef et al., I figured you might want to see the critique that Gurinder Atwal and I have posted. In short, Reshef et al.’s central claim that MIC is “equitable” is incorrect. We [Kinney and Atwal] offer mathematical proof that the definition of “equitability” Reshef et al. propose is unsatisfiable—no nontrivial dependence measure, including MIC, has this property. Replicating the simulations in their paper with modestly larger data sets validates this finding. The heuristic notion of equitability, however, can be formalized instead as a self-consistency condition closely related to the Data Processing Inequality. Mutual information satisfies this new definition of equitability but MIC does not. We therefore propose that simply estimating mutual information will, in many cases, provide the sort of dependence measure Reshef et al. seek. For background, here are my two p

5 0.35081333 2247 andrew gelman stats-2014-03-14-The maximal information coefficient

Introduction: Justin Kinney writes: I wanted to let you know that the critique Mickey Atwal and I wrote regarding equitability and the maximal information coefficient has just been published. We discussed this paper last year, under the heading, Too many MC’s not enough MIC’s, or What principles should govern attempts to summarize bivariate associations in large multivariate datasets? Kinney and Atwal’s paper is interesting, with my only criticism being that in some places they seem to aim for what might not be possible. For example, they write that “mutual information is already widely believed to quantify dependencies without bias for relationships of one type or another,” which seems a bit vague to me. And later they write, “How to compute such an estimate that does not bias the resulting mutual information value remains an open problem,” which seems to me to miss the point in that unbiased statistical estimates are not generally possible and indeed are often not desirable. Their

6 0.17436838 2310 andrew gelman stats-2014-04-28-On deck this week

7 0.1636501 2315 andrew gelman stats-2014-05-02-Discovering general multidimensional associations

8 0.16297191 1380 andrew gelman stats-2012-06-15-Coaching, teaching, and writing

9 0.13682768 1726 andrew gelman stats-2013-02-18-What to read to catch up on multivariate statistics?

10 0.10770394 1825 andrew gelman stats-2013-04-25-It’s binless! A program for computing normalizing functions

11 0.10624889 2264 andrew gelman stats-2014-03-24-On deck this month

12 0.10166585 1904 andrew gelman stats-2013-06-18-Job opening! Come work with us!

13 0.1012072 1630 andrew gelman stats-2012-12-18-Postdoc positions at Microsoft Research – NYC

14 0.090225264 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

15 0.088048823 1972 andrew gelman stats-2013-08-07-When you’re planning on fitting a model, build up to it by fitting simpler models first. Then, once you have a model you like, check the hell out of it

16 0.086959578 1062 andrew gelman stats-2011-12-16-Mr. Pearson, meet Mr. Mandelbrot: Detecting Novel Associations in Large Data Sets

17 0.085022159 2069 andrew gelman stats-2013-10-19-R package for effect size calculations for psychology researchers

18 0.084358387 1175 andrew gelman stats-2012-02-19-Factual – a new place to find data

19 0.08297237 2260 andrew gelman stats-2014-03-22-Postdoc at Rennes on multilevel missing data imputation

20 0.082803912 2083 andrew gelman stats-2013-10-31-Value-added modeling in education: Gaming the system by sending kids on a field trip at test time


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.126), (1, 0.04), (2, -0.019), (3, -0.052), (4, 0.02), (5, 0.023), (6, -0.043), (7, -0.036), (8, -0.032), (9, 0.013), (10, 0.002), (11, -0.007), (12, 0.008), (13, -0.02), (14, 0.017), (15, 0.047), (16, 0.023), (17, 0.022), (18, -0.031), (19, -0.047), (20, 0.071), (21, 0.012), (22, 0.083), (23, 0.015), (24, 0.112), (25, 0.074), (26, 0.095), (27, 0.135), (28, 0.216), (29, 0.022), (30, 0.098), (31, 0.157), (32, 0.149), (33, 0.011), (34, 0.078), (35, -0.019), (36, 0.023), (37, 0.017), (38, -0.023), (39, -0.027), (40, 0.062), (41, 0.035), (42, 0.1), (43, 0.046), (44, -0.063), (45, -0.038), (46, 0.128), (47, -0.12), (48, -0.152), (49, 0.026)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97509724 2314 andrew gelman stats-2014-05-01-Heller, Heller, and Gorfine on univariate and multivariate information measures

2 0.92131698 2324 andrew gelman stats-2014-05-07-Once more on nonparametric measures of mutual information

3 0.87103266 1230 andrew gelman stats-2012-03-26-Further thoughts on nonparametric correlation measures

4 0.86379665 2247 andrew gelman stats-2014-03-14-The maximal information coefficient

5 0.83124214 1706 andrew gelman stats-2013-02-04-Too many MC’s not enough MIC’s, or What principles should govern attempts to summarize bivariate associations in large multivariate datasets?

6 0.74688596 1062 andrew gelman stats-2011-12-16-Mr. Pearson, meet Mr. Mandelbrot: Detecting Novel Associations in Large Data Sets

7 0.68813336 1825 andrew gelman stats-2013-04-25-It’s binless! A program for computing normalizing functions

8 0.65111846 2315 andrew gelman stats-2014-05-02-Discovering general multidimensional associations

9 0.57237023 1828 andrew gelman stats-2013-04-27-Time-Sharing Experiments for the Social Sciences

10 0.5241828 1214 andrew gelman stats-2012-03-15-Of forecasts and graph theory and characterizing a statistical method by the information it uses

11 0.48353612 2250 andrew gelman stats-2014-03-16-“I have no idea who Catalina Garcia is, but she makes a decent ruler”

12 0.47721216 1380 andrew gelman stats-2012-06-15-Coaching, teaching, and writing

13 0.4612684 301 andrew gelman stats-2010-09-28-Correlation, prediction, variation, etc.

14 0.4597913 778 andrew gelman stats-2011-06-24-New ideas on DIC from Martyn Plummer and Sumio Watanabe

15 0.45952162 2260 andrew gelman stats-2014-03-22-Postdoc at Rennes on multilevel missing data imputation

16 0.45614702 1975 andrew gelman stats-2013-08-09-Understanding predictive information criteria for Bayesian models

17 0.45552909 2277 andrew gelman stats-2014-03-31-The most-cited statistics papers ever

18 0.42504513 1648 andrew gelman stats-2013-01-02-A important new survey of Bayesian predictive methods for model assessment, selection and comparison

19 0.41264516 519 andrew gelman stats-2011-01-16-Update on the generalized method of moments

20 0.41152003 1107 andrew gelman stats-2012-01-08-More on essentialism


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(15, 0.017), (16, 0.022), (17, 0.248), (22, 0.03), (24, 0.177), (65, 0.018), (79, 0.024), (86, 0.042), (99, 0.295)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.94158947 2314 andrew gelman stats-2014-05-01-Heller, Heller, and Gorfine on univariate and multivariate information measures

2 0.92768049 1230 andrew gelman stats-2012-03-26-Further thoughts on nonparametric correlation measures

3 0.92247349 2324 andrew gelman stats-2014-05-07-Once more on nonparametric measures of mutual information

4 0.91622484 705 andrew gelman stats-2011-05-10-Some interesting unpublished ideas on survey weighting

Introduction: A couple years ago we had an amazing all-star session at the Joint Statistical Meetings. The topic was new approaches to survey weighting (which is a mess, as I’m sure you’ve heard). Xiao-Li Meng recommended shrinking weights by taking them to a fractional power (such as square root) instead of trimming the extremes. Rod Little combined design-based and model-based survey inference. Michael Elliott used mixture models for complex survey design. And here’s my introduction to the session.

5 0.90953982 309 andrew gelman stats-2010-10-01-Why Development Economics Needs Theory?

Introduction: Robert Neumann writes: in the JEP 24(3), page18, Daron Acemoglu states: Why Development Economics Needs Theory There is no general agreement on how much we should rely on economic theory in motivating empirical work and whether we should try to formulate and estimate “structural parameters.” I (Acemoglu) argue that the answer is largely “yes” because otherwise econometric estimates would lack external validity, in which case they can neither inform us about whether a particular model or theory is a useful approximation to reality, nor would they be useful in providing us guidance on what the effects of similar shocks and policies would be in different circumstances or if implemented in different scales. I therefore define “structural parameters” as those that provide external validity and would thus be useful in testing theories or in policy analysis beyond the specific environment and sample from which they are derived. External validity becomes a particularly challenging t

6 0.88383722 1136 andrew gelman stats-2012-01-23-Fight! (also a bit of reminiscence at the end)

7 0.87805903 1557 andrew gelman stats-2012-11-01-‘Researcher Degrees of Freedom’

8 0.87630606 1467 andrew gelman stats-2012-08-23-The pinch-hitter syndrome again

9 0.87179339 1362 andrew gelman stats-2012-06-03-Question 24 of my final exam for Design and Analysis of Sample Surveys

10 0.86991715 1616 andrew gelman stats-2012-12-10-John McAfee is a Heinlein hero

11 0.86257786 2315 andrew gelman stats-2014-05-02-Discovering general multidimensional associations

12 0.85821354 397 andrew gelman stats-2010-11-06-Multilevel quantile regression

13 0.85743785 1076 andrew gelman stats-2011-12-21-Derman, Rodrik and the nature of statistical models

14 0.84829879 2359 andrew gelman stats-2014-06-04-All the Assumptions That Are My Life

15 0.84132123 1422 andrew gelman stats-2012-07-20-Likelihood thresholds and decisions

16 0.83379418 259 andrew gelman stats-2010-09-06-Inbox zero. Really.

17 0.82961959 2136 andrew gelman stats-2013-12-16-Whither the “bet on sparsity principle” in a nonsparse world?

18 0.82876074 1383 andrew gelman stats-2012-06-18-Hierarchical modeling as a framework for extrapolation

19 0.82566953 1228 andrew gelman stats-2012-03-25-Continuous variables in Bayesian networks

20 0.82401985 1361 andrew gelman stats-2012-06-02-Question 23 of my final exam for Design and Analysis of Sample Surveys