andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1477 knowledge-graph by maker-knowledge-mining

1477 andrew gelman stats-2012-08-30-Visualizing Distributions of Covariance Matrices


meta info for this blog

Source: html

Introduction: Since we’ve been discussing prior distributions on covariance matrices, I will recommend this recent article (coauthored with Tomoki Tokuda, Ben Goodrich, Iven Van Mechelen, and Francis Tuerlinckx) on their visualization: We present some methods for graphing distributions of covariance matrices and demonstrate them on several models, including the Wishart, inverse-Wishart, and scaled inverse-Wishart families in different dimensions. Our visualizations follow the principle of decomposing a covariance matrix into scale parameters and correlations, pulling out marginal summaries where possible and using two and three-dimensional plots to reveal multivariate structure. Visualizing a distribution of covariance matrices is a step beyond visualizing a single covariance matrix or a single multivariate dataset. Our visualization methods are available through the R package VisCov.
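The decomposition idea can be sketched in a few lines of base R (this is my own illustration of the principle described in the abstract, not the actual VisCov interface): draw covariance matrices from an inverse-Wishart prior, pull out the standard deviations and a correlation from each draw, and look at their marginal distributions.

```r
# Sketch of the decomposition principle: sample covariance matrices from an
# inverse-Wishart, pull out scale parameters (standard deviations) and
# correlations, and plot their marginal distributions.
# Base R only; an inverse-Wishart draw is the inverse of a Wishart draw.
set.seed(1477)
d      <- 3        # dimension
nu     <- d + 2    # degrees of freedom
nsim   <- 2000
wdraws <- rWishart(nsim, df = nu, Sigma = diag(d))   # array d x d x nsim

sds  <- matrix(NA, nsim, d)   # scale parameters of each draw
cors <- numeric(nsim)         # one off-diagonal correlation per draw
for (i in 1:nsim) {
  Sigma    <- solve(wdraws[, , i])      # inverse-Wishart draw
  sds[i, ] <- sqrt(diag(Sigma))         # standard deviations
  cors[i]  <- cov2cor(Sigma)[1, 2]      # correlation between components 1 and 2
}

par(mfrow = c(1, 2))
hist(log(sds[, 1]), main = "log sd of component 1", xlab = "")
hist(cors,          main = "correlation(1,2)",      xlab = "")
```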


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Our visualizations follow the principle of decomposing a covariance matrix into scale parameters and correlations, pulling out marginal summaries where possible and using two and three-dimensional plots to reveal multivariate structure. [sent-2, score-1.948]

2 Visualizing a distribution of covariance matrices is a step beyond visualizing a single covariance matrix or a single multivariate dataset. [sent-3, score-2.396]

3 Our visualization methods are available through the R package VisCov. [sent-4, score-0.404]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('covariance', 0.524), ('matrices', 0.328), ('visualizing', 0.237), ('multivariate', 0.187), ('matrix', 0.187), ('visualization', 0.165), ('tokuda', 0.158), ('tomoki', 0.158), ('viscov', 0.158), ('decomposing', 0.149), ('goodrich', 0.143), ('distributions', 0.141), ('iven', 0.134), ('mechelen', 0.134), ('tuerlinckx', 0.134), ('wishart', 0.13), ('scaled', 0.125), ('coauthored', 0.122), ('francis', 0.118), ('single', 0.116), ('graphing', 0.112), ('van', 0.11), ('pulling', 0.11), ('summaries', 0.104), ('visualizations', 0.103), ('families', 0.102), ('ben', 0.099), ('reveal', 0.092), ('methods', 0.092), ('plots', 0.088), ('marginal', 0.087), ('correlations', 0.085), ('package', 0.083), ('discussing', 0.08), ('demonstrate', 0.078), ('principle', 0.075), ('recommend', 0.069), ('scale', 0.068), ('present', 0.065), ('step', 0.064), ('available', 0.064), ('parameters', 0.064), ('follow', 0.062), ('beyond', 0.058), ('distribution', 0.055), ('prior', 0.054), ('including', 0.05), ('several', 0.05), ('possible', 0.048), ('since', 0.045)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 1477 andrew gelman stats-2012-08-30-Visualizing Distributions of Covariance Matrices


2 0.76115799 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

Introduction: In response to our recent posting of Amazon’s offer of Bayesian Data Analysis 3rd edition at 40% off, some people asked what was in this new edition, with more information beyond the beautiful cover image and the brief paragraph I’d posted earlier. Here’s the table of contents. The following sections have all-new material:

1.4 New introduction of BDA principles using a simple spell checking example
2.9 Weakly informative prior distributions
5.7 Weakly informative priors for hierarchical variance parameters
7.1-7.4 Predictive accuracy for model evaluation and comparison
10.6 Computing environments
11.4 Split R-hat
11.5 New measure of effective number of simulation draws
13.7 Variational inference
13.8 Expectation propagation
13.9 Other approximations
14.6 Regularization for regression models
C.1 Getting started with R and Stan
C.2 Fitting a hierarchical model in Stan
C.4 Programming Hamiltonian Monte Carlo in R

And the new chapters: 20 Basis function models 2
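As a hedged illustration of one of the new items above, here is a minimal sketch of the split-R-hat idea from Section 11.4 (split each chain in half, then compute the usual between/within variance ratio on the split chains); it is not the exact BDA3 or Stan implementation.

```r
# Minimal sketch of split-R-hat: split each chain in half, then compute the
# standard potential-scale-reduction statistic on the split chains.
split_rhat <- function(sims) {        # sims: iterations x chains matrix
  n    <- floor(nrow(sims) / 2)
  half <- cbind(sims[1:n, , drop = FALSE],
                sims[(nrow(sims) - n + 1):nrow(sims), , drop = FALSE])
  chain_means <- colMeans(half)
  B <- n * var(chain_means)           # between-chain variance
  W <- mean(apply(half, 2, var))      # within-chain variance
  var_plus <- (n - 1) / n * W + B / n
  sqrt(var_plus / W)
}

# Example: two chains stuck in different places give a large split-R-hat
sims <- cbind(rnorm(1000, 0), rnorm(1000, 3))
split_rhat(sims)
```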

3 0.22366619 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

Introduction: I’ve had a couple of email conversations in the past couple of days on dependence in multivariate prior distributions.

Modeling the degrees of freedom and scale parameters in the t distribution

First, in our Stan group we’ve been discussing the choice of priors for the degrees-of-freedom parameter in the t distribution. I wrote that there’s also the question of parameterization. It does not necessarily make sense to have independent priors on the df and scale parameters. In some sense, the meaning of the scale parameter changes with the df.

Prior dependence between correlation and scale parameters in the scaled inverse-Wishart model

The second case of parameterization in prior distributions arose from an email I received from Chris Chatham pointing me to this exploration by Matt Simpson of the scaled inverse-Wishart prior distribution for hierarchical covariance matrices. Simpson writes: A popular prior for Σ is the inverse-Wishart distribution [not the same as the
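A quick simulation shows the df/scale interaction: with a fixed scale, a Student-t draw has standard deviation scale × sqrt(df/(df−2)), so the same scale parameter implies different spread at different df. This is my own check in base R, not code from the post.

```r
# The same scale parameter implies different spread at different df:
# sd of (scale * t_nu) is scale * sqrt(nu / (nu - 2)) for nu > 2.
scale <- 1
for (nu in c(5, 10, 30)) {
  draws <- scale * rt(1e5, df = nu)
  cat(sprintf("df = %g: empirical sd = %.2f, theoretical = %.2f\n",
              nu, sd(draws), scale * sqrt(nu / (nu - 2))))
}
```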

4 0.22073686 1466 andrew gelman stats-2012-08-22-The scaled inverse Wishart prior distribution for a covariance matrix in a hierarchical model

Introduction: Since we’re talking about the scaled inverse Wishart . . . here’s a recent message from Chris Chatham: I have been reading your book on Bayesian Hierarchical/Multilevel Modeling but have been struggling a bit with deciding whether to model my multivariate normal distribution using the scaled inverse Wishart approach you advocate, given the arguments at this blog post [entitled "Why an inverse-Wishart prior may not be such a good idea"]. My reply: We discuss this in our book. We know the inverse-Wishart has problems; that’s why we recommend the scaled inverse-Wishart, which is a more general class of models. Here’s an old blog post on the topic. And also of course there’s the description in our book. Chris pointed me to the following comment by Simon Barthelmé: Using the scaled inverse Wishart doesn’t change anything: the standard deviations of the individual coefficients and their covariance are still dependent. My answer would be to use a prior that models the stan
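Barthelmé’s point can be checked by simulation: build scaled inverse-Wishart draws as Σ = diag(ξ) Q diag(ξ) with Q inverse-Wishart and ξ lognormal, then see whether the implied standard deviations and correlation are independent under the prior. A hedged sketch of the parameterization in base R (my own illustration, not code from the post or the book):

```r
# Scaled inverse-Wishart sketch: Sigma = diag(xi) %*% Q %*% diag(xi),
# Q ~ inverse-Wishart(nu, I), xi ~ lognormal, then look at the joint prior
# on the implied standard deviation and the correlation.
set.seed(1)
d <- 2; nu <- d + 1; nsim <- 5000
sds  <- numeric(nsim)   # standard deviation of component 1
rhos <- numeric(nsim)   # correlation between components 1 and 2
for (i in 1:nsim) {
  Q     <- solve(rWishart(1, df = nu, Sigma = diag(d))[, , 1])  # inv-Wishart draw
  xi    <- exp(rnorm(d, 0, 1))                                  # independent scales
  Sigma <- diag(xi) %*% Q %*% diag(xi)
  sds[i]  <- sqrt(Sigma[1, 1])
  rhos[i] <- cov2cor(Sigma)[1, 2]
}
# If the sd and the correlation were independent a priori, this would be near zero:
cor(log(sds), abs(rhos))
```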

5 0.18553028 555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients

Introduction: This post is an (unpaid) advertisement for the following extremely useful resource: Petersen, K. B. and M. S. Pedersen. 2008. The Matrix Cookbook. Technical Report, Technical University of Denmark. It contains 70+ pages of useful relations and derivations involving matrices. What grabbed my eye was the computation of gradients for matrix operations ranging from eigenvalues and determinants to multivariate normal density functions. I had no idea the multivariate normal had such a clean gradient (see section 8). We’ve been playing around with Hamiltonian (aka Hybrid) Monte Carlo for sampling from the posterior of hierarchical generalized linear models with lots of interactions. HMC speeds up Metropolis sampling by using the gradient of the log probability to drive samples in the direction of higher probability density, which is particularly useful for correlated parameters that mix slowly with standard Gibbs sampling. Matt “III” Hoffman’s already got it workin
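The clean gradient in question is the derivative of the multivariate normal log density with respect to the mean, Σ⁻¹(x − μ); here is a small base-R check of that identity against finite differences (my own illustration of the Cookbook formula, using no extra packages):

```r
# Gradient of the multivariate normal log density with respect to the mean:
# d/dmu log N(x | mu, Sigma) = Sigma^{-1} (x - mu).  Check by finite differences.
log_mvn <- function(x, mu, Sigma) {
  k <- length(x)
  -0.5 * (k * log(2 * pi) + log(det(Sigma)) +
          t(x - mu) %*% solve(Sigma) %*% (x - mu))
}
Sigma <- matrix(c(2, 0.5, 0.5, 1), 2, 2)
x  <- c(1, -1)
mu <- c(0, 0)

analytic <- solve(Sigma) %*% (x - mu)     # the "clean" analytic gradient
eps <- 1e-6
numeric_grad <- sapply(1:2, function(j) {
  e <- rep(0, 2); e[j] <- eps
  (log_mvn(x, mu + e, Sigma) - log_mvn(x, mu - e, Sigma)) / (2 * eps)
})
cbind(analytic, numeric_grad)             # the two columns should agree
```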

6 0.15566263 929 andrew gelman stats-2011-09-27-Visual diagnostics for discrete-data regressions

7 0.15431406 1806 andrew gelman stats-2013-04-16-My talk in Chicago this Thurs 6:30pm

8 0.13050668 215 andrew gelman stats-2010-08-18-DataMarket

9 0.12522167 2258 andrew gelman stats-2014-03-21-Random matrices in the news

10 0.11240946 1710 andrew gelman stats-2013-02-06-The new Stan 1.1.1, featuring Gaussian processes!

11 0.11037416 1726 andrew gelman stats-2013-02-18-What to read to catch up on multivariate statistics?

12 0.10528505 859 andrew gelman stats-2011-08-18-Misunderstanding analysis of covariance

13 0.10515637 1188 andrew gelman stats-2012-02-28-Reference on longitudinal models?

14 0.10322821 2296 andrew gelman stats-2014-04-19-Index or indicator variables

15 0.10196264 779 andrew gelman stats-2011-06-25-Avoiding boundary estimates using a prior distribution as regularization

16 0.10043037 1196 andrew gelman stats-2012-03-04-Piss-poor monocausal social science

17 0.099896237 2277 andrew gelman stats-2014-03-31-The most-cited statistics papers ever

18 0.094966784 1753 andrew gelman stats-2013-03-06-Stan 1.2.0 and RStan 1.2.0

19 0.093740627 1286 andrew gelman stats-2012-04-28-Agreement Groups in US Senate and Dynamic Clustering

20 0.090811446 2161 andrew gelman stats-2014-01-07-My recent debugging experience


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.08), (1, 0.095), (2, -0.015), (3, 0.063), (4, 0.07), (5, -0.039), (6, 0.003), (7, -0.05), (8, -0.108), (9, 0.036), (10, -0.006), (11, 0.006), (12, -0.017), (13, 0.015), (14, 0.029), (15, -0.027), (16, -0.014), (17, 0.01), (18, 0.009), (19, -0.001), (20, 0.034), (21, -0.03), (22, 0.066), (23, 0.055), (24, 0.046), (25, 0.037), (26, -0.011), (27, 0.093), (28, 0.1), (29, -0.009), (30, -0.066), (31, 0.049), (32, 0.067), (33, 0.018), (34, 0.069), (35, -0.042), (36, 0.009), (37, 0.004), (38, 0.013), (39, 0.035), (40, -0.074), (41, 0.008), (42, 0.075), (43, 0.018), (44, 0.056), (45, 0.004), (46, -0.086), (47, -0.035), (48, 0.083), (49, -0.158)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98701829 1477 andrew gelman stats-2012-08-30-Visualizing Distributions of Covariance Matrices


2 0.8356455 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)


3 0.73188901 555 andrew gelman stats-2011-02-04-Handy Matrix Cheat Sheet, with Gradients


4 0.6236552 931 andrew gelman stats-2011-09-29-Hamiltonian Monte Carlo stories

Introduction: Tomas Iesmantas had asked me for advice on a regression problem with 50 parameters, and I’d recommended Hamiltonian Monte Carlo. A few weeks later he reported back: After trying several modifications (HMC for all parameters at once, HMC just for first-level parameters, and Riemann manifold Hamiltonian Monte Carlo), I finally got it running with HMC just for the first-level parameters and direct sampling for the others, since the conditional distributions turned out to have closed form. However, even in this case it is quite tricky, since I had to employ a mass matrix, and not just a diagonal one; at the beginning of the algorithm I generated it randomly (ensuring it is positive definite). Such random generation of the mass matrix is quite a blind step, but it proved to be quite helpful. Riemann manifold HMC is quite vagarious, or to be more specific, the metric of the manifold is very sensitive. In my model log-likelihood I had exponents, and the values of the metric matrix elements were very large and wh
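One standard way to generate a random but guaranteed positive-definite matrix (a sketch of the general trick, not Iesmantas’s actual code) is to take the crossproduct of a random matrix and add a small diagonal jitter:

```r
# One way to generate a random positive-definite matrix for use as an HMC
# mass matrix: crossprod of a random matrix plus a small diagonal jitter.
random_spd <- function(d, jitter = 1e-6) {
  A <- matrix(rnorm(d * d), d, d)
  crossprod(A) + jitter * diag(d)   # t(A) %*% A is positive semi-definite
}
M <- random_spd(50)                 # e.g., for the 50-parameter regression
all(eigen(M, symmetric = TRUE, only.values = TRUE)$values > 0)  # TRUE
```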

5 0.61979389 1339 andrew gelman stats-2012-05-23-Learning Differential Geometry for Hamiltonian Monte Carlo

Introduction: You can get a taste of Hamiltonian Monte Carlo (HMC) by reading the very gentle introduction in David MacKay’s general text on information theory: MacKay, D. 2003. Information Theory, Inference, and Learning Algorithms. Cambridge University Press. [see Chapter 31, which is relatively standalone and can be downloaded separately.] Follow this up with Radford Neal’s much more thorough introduction to HMC: Neal, R. 2011. MCMC Using Hamiltonian Dynamics. In Brooks, Gelman, Jones and Meng, eds., Handbook of Markov Chain Monte Carlo. Chapman and Hall/CRC Press. To understand why HMC works and set yourself on the path to understanding generalizations like Riemann manifold HMC, you’ll need to know a bit about differential geometry. I really liked the combination of these two books: Magnus, J. R. and H. Neudecker. 2007. Matrix Differential Calculus with Application in Statistics and Econometrics. 3rd Edition. Wiley, and Leimkuhler, B. and S.

6 0.59735441 535 andrew gelman stats-2011-01-24-Bleg: Automatic Differentiation for Log Prob Gradients?

7 0.55995798 1682 andrew gelman stats-2013-01-19-R package for Bayes factors

8 0.5415405 2277 andrew gelman stats-2014-03-31-The most-cited statistics papers ever

9 0.53493333 674 andrew gelman stats-2011-04-21-Handbook of Markov Chain Monte Carlo

10 0.52061528 501 andrew gelman stats-2011-01-04-A new R package for fititng multilevel models

11 0.50599986 2117 andrew gelman stats-2013-11-29-The gradual transition to replicable science

12 0.50277376 1466 andrew gelman stats-2012-08-22-The scaled inverse Wishart prior distribution for a covariance matrix in a hierarchical model

13 0.49915656 1674 andrew gelman stats-2013-01-15-Prior Selection for Vector Autoregressions

14 0.4989742 846 andrew gelman stats-2011-08-09-Default priors update?

15 0.49246496 442 andrew gelman stats-2010-12-01-bayesglm in Stata?

16 0.49135509 779 andrew gelman stats-2011-06-25-Avoiding boundary estimates using a prior distribution as regularization

17 0.48285779 1309 andrew gelman stats-2012-05-09-The first version of my “inference from iterative simulation using parallel sequences” paper!

18 0.47144806 778 andrew gelman stats-2011-06-24-New ideas on DIC from Martyn Plummer and Sumio Watanabe

19 0.47121817 2231 andrew gelman stats-2014-03-03-Running into a Stan Reference by Accident

20 0.46804249 1710 andrew gelman stats-2013-02-06-The new Stan 1.1.1, featuring Gaussian processes!


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(5, 0.04), (11, 0.037), (24, 0.15), (30, 0.013), (34, 0.09), (44, 0.019), (74, 0.014), (77, 0.016), (84, 0.016), (89, 0.334), (98, 0.035), (99, 0.104)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93751252 1477 andrew gelman stats-2012-08-30-Visualizing Distributions of Covariance Matrices


2 0.87289006 1685 andrew gelman stats-2013-01-21-Class on computational social science this semester, Fridays, 1:00-3:40pm

Introduction: Sharad Goel, Jake Hofman, and Sergei Vassilvitskii are teaching this awesome class on computational social science this semester in the applied math department at Columbia. Here’s the course info. You should take this course. These guys are amazing.

3 0.87197047 2243 andrew gelman stats-2014-03-11-The myth of the myth of the myth of the hot hand

Introduction: Phil pointed me to this paper so I thought I probably better repeat what I wrote a couple years ago:

1. The effects are certainly not zero. We are not machines, and anything that can affect our expectations (for example, our success in previous tries) should affect our performance.

2. The effects I’ve seen are small, on the order of 2 percentage points (for example, the probability of a success in some sports task might be 45% if you’re “hot” and 43% otherwise).

3. There’s a huge amount of variation, not just between but also among players. Sometimes if you succeed you will stay relaxed and focused; other times you can succeed and become overconfident.

4. Whatever the latest results on particular sports, I can’t see anyone overturning the basic finding of Gilovich, Vallone, and Tversky that players and spectators alike will perceive the hot hand even when it does not exist and dramatically overestimate the magnitude and consistency of any hot-hand phenomenon that does exist.

4 0.85540378 1215 andrew gelman stats-2012-03-16-The “hot hand” and problems with hypothesis testing

Introduction: Gur Yaari writes: Anyone who has ever watched a sports competition is familiar with expressions like “on fire”, “in the zone”, “on a roll”, “momentum” and so on. But what do these expressions really mean? In 1985, when Thomas Gilovich, Robert Vallone and Amos Tversky studied this phenomenon for the first time, they defined it as: “. . . these phrases express a belief that the performance of a player during a particular period is significantly better than expected on the basis of the player’s overall record”. Their conclusion was that what people tend to perceive as a “hot hand” is essentially a cognitive illusion caused by a misperception of random sequences. Until recently there was little, if any, evidence to rule out their conclusion. Increased computing power and new data availability from various sports now provide surprising evidence of this phenomenon, thus reigniting the debate. Yaari goes on to discuss some studies that have found time dependence in basketball, baseball, voll

5 0.81476957 1953 andrew gelman stats-2013-07-24-Recently in the sister blog

Introduction: Would You Accept DNA From A Murderer?

6 0.76676035 1160 andrew gelman stats-2012-02-09-Familial Linkage between Neuropsychiatric Disorders and Intellectual Interests

7 0.75875199 1756 andrew gelman stats-2013-03-10-He said he was sorry

8 0.75688493 1708 andrew gelman stats-2013-02-05-Wouldn’t it be cool if Glenn Hubbard were consulting for Herbalife and I were on the other side?

9 0.72664428 459 andrew gelman stats-2010-12-09-Solve mazes by starting at the exit

10 0.71914417 833 andrew gelman stats-2011-07-31-Untunable Metropolis

11 0.70353365 1032 andrew gelman stats-2011-11-28-Does Avastin work on breast cancer? Should Medicare be paying for it?

12 0.67000079 407 andrew gelman stats-2010-11-11-Data Visualization vs. Statistical Graphics

13 0.64827079 1991 andrew gelman stats-2013-08-21-BDA3 table of contents (also a new paper on visualization)

14 0.62679827 1855 andrew gelman stats-2013-05-13-Stan!

15 0.59643209 566 andrew gelman stats-2011-02-09-The boxer, the wrestler, and the coin flip, again

16 0.59054863 1572 andrew gelman stats-2012-11-10-I don’t like this cartoon

17 0.588907 1320 andrew gelman stats-2012-05-14-Question 4 of my final exam for Design and Analysis of Sample Surveys

18 0.57045537 231 andrew gelman stats-2010-08-24-Yet another Bayesian job opportunity

19 0.5655334 1628 andrew gelman stats-2012-12-17-Statistics in a world where nothing is random

20 0.56467509 1903 andrew gelman stats-2013-06-17-Weak identification provides partial information