knowledge-graph by maker-knowledge-mining

2272 andrew gelman stats-2014-03-29-I agree with this comment


meta info for this blog post

Source: html

Introduction: The anonymous commenter puts it well: The problem is simple, the researchers are disproving always false null hypotheses and taking this disproof as near proof that their theory is correct.


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 The anonymous commenter puts it well: The problem is simple, the researchers are disproving always false null hypotheses and taking this disproof as near proof that their theory is correct. [sent-1, score-3.591]
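The per-sentence scores above can be reproduced in spirit by summing the tf-idf weights of each sentence's terms: sentences packed with rare, document-specific words score high. A minimal sketch in Python, assuming scikit-learn is available; the corpus and sentence list are illustrative placeholders, not the actual pipeline behind this page.

# Score each sentence by the total tf-idf weight of its terms,
# then keep the top-scoring sentences as the summary.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["full text of each blog post ..."]                # placeholder corpus
sentences = ["The anonymous commenter puts it well ..."]    # placeholder sentences

vec = TfidfVectorizer(stop_words="english")
vec.fit(corpus)                       # learn idf weights from the whole corpus
X = vec.transform(sentences)          # one tf-idf row per sentence
scores = X.sum(axis=1).A1             # summed weight = sentence score
summary = sorted(zip(scores, sentences), reverse=True)[:1]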


similar blog posts computed by the tf-idf model

tf-idf terms for this post:

wordName wordTfidf (topN-words)

[('disproving', 0.422), ('disproof', 0.404), ('proof', 0.291), ('anonymous', 0.286), ('hypotheses', 0.257), ('null', 0.252), ('puts', 0.242), ('commenter', 0.242), ('near', 0.233), ('false', 0.206), ('correct', 0.177), ('taking', 0.161), ('theory', 0.152), ('simple', 0.139), ('researchers', 0.136), ('always', 0.116), ('problem', 0.098), ('well', 0.093)]
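The (wordName, wordTfidf) pairs above are the largest entries of the post's tf-idf vector, and the simValue column in the list that follows is consistent with cosine similarity between such vectors (note the same-blog entry scores exactly 1.0). A sketch under those assumptions, again using scikit-learn; posts is a placeholder list of raw texts.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = ["text of post 2272 ...", "text of post 2281 ..."]  # placeholders

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(posts)                  # one tf-idf row per post

# top-weighted (word, tfidf) pairs for the first post
row = X[0].toarray().ravel()
terms = vec.get_feature_names_out()
top_words = sorted(zip(terms, row), key=lambda t: -t[1])[:18]

# cosine similarity of post 0 against all posts; self-similarity is 1.0
sims = cosine_similarity(X[0], X).ravel()
most_similar = sims.argsort()[::-1]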

similar blog posts:

simIndex simValue blogId blogTitle

same-blog 1 1.0 2272 andrew gelman stats-2014-03-29-I agree with this comment

Introduction: The anonymous commenter puts it well: The problem is simple, the researchers are disproving always false null hypotheses and taking this disproof as near proof that their theory is correct.

2 0.39817345 2281 andrew gelman stats-2014-04-04-The Notorious N.H.S.T. presents: Mo P-values Mo Problems

Introduction: A recent discussion between commenters Question and Fernando captured one of the recurrent themes here from the past year. Question: The problem is simple, the researchers are disproving always false null hypotheses and taking this disproof as near proof that their theory is correct. Fernando: Whereas it is probably true that researchers misuse NHT, the problem with tabloid science is broader and deeper. It is systemic. Question: I do not see how anything can be deeper than replacing careful description, prediction, falsification, and independent replication with dynamite plots, p-values, affirming the consequent, and peer review. From my own experience I am confident in saying that confusion caused by NHST is at the root of this problem. Fernando: Incentives? Impact factors? Publish or die? “Interesting” and “new” above quality and reliability, or actually answering a research question, and a silly and unbecoming obsession with being quoted in NYT, etc. . . . Giv

3 0.16442236 1826 andrew gelman stats-2013-04-26-“A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null”

Introduction: Erin Jonaitis points us to this article by Christopher Ferguson and Moritz Heene, who write: Publication bias remains a controversial issue in psychological science. . . . that the field often constructs arguments to block the publication and interpretation of null results and that null results may be further extinguished through questionable researcher practices. Given that science is dependent on the process of falsification, we argue that these problems reduce psychological science’s capability to have a proper mechanism for theory falsification, thus resulting in the promulgation of numerous “undead” theories that are ideologically popular but have little basis in fact. They mention the infamous Daryl Bem article. It is pretty much only because Bem’s claims are (presumably) false that they got published in a major research journal. Had the claims been true—that is, had Bem run identical experiments, analyzed his data more carefully and objectively, and reported that the r

4 0.15392223 256 andrew gelman stats-2010-09-04-Noooooooooooooooooooooooooooooooooooooooooooooooo!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Introduction: Masanao sends this one in, under the heading, “another incident of misunderstood p-value”: Warren Davies, a positive psychology MSc student at UEL, provides the latest in our ongoing series of guest features for students. Warren has just released a Psychology Study Guide, which covers information on statistics, research methods and study skills for psychology students. Despite the myriad rules and procedures of science, some research findings are pure flukes. Perhaps you’re testing a new drug, and by chance alone, a large number of people spontaneously get better. The better your study is conducted, the lower the chance that your result was a fluke – but still, there is always a certain probability that it was. Statistical significance testing gives you an idea of what this probability is. In science we’re always testing hypotheses. We never conduct a study to ‘see what happens’, because there’s always at least one way to make any useless set of data look important. We take

5 0.14354087 2263 andrew gelman stats-2014-03-24-Empirical implications of Empirical Implications of Theoretical Models

Introduction: Robert Bloomfield writes: Most of the people in my field (accounting, which is basically applied economics and finance, leavened with psychology and organizational behavior) use ‘positive research methods’, which are typically described as coming to the data with a predefined theory, and using hypothesis testing to accept or reject the theory’s predictions. But a substantial minority use ‘interpretive research methods’ (sometimes called qualitative methods, for those that call positive research ‘quantitative’). No one seems entirely happy with the definition of this method, but I’ve found it useful to think of it as an attempt to see the world through the eyes of your subjects, much as Jane Goodall lived with gorillas and tried to see the world through their eyes. Interpretive researchers often criticize positive researchers by noting that the latter don’t make the best use of their data, because they come to the data with a predetermined theory, and only test a narrow set of h

6 0.11267326 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

7 0.10782488 1168 andrew gelman stats-2012-02-14-The tabloids strike again

8 0.10678594 114 andrew gelman stats-2010-06-28-More on Bayesian deduction-induction

9 0.10172868 1626 andrew gelman stats-2012-12-16-The lamest, grudgingest, non-retraction retraction ever

10 0.099563435 524 andrew gelman stats-2011-01-19-Data exploration and multiple comparisons

11 0.096815832 2093 andrew gelman stats-2013-11-07-I’m negative on the expression “false positives”

12 0.094357654 2295 andrew gelman stats-2014-04-18-One-tailed or two-tailed?

13 0.091529146 2127 andrew gelman stats-2013-12-08-The never-ending (and often productive) race between theory and practice

14 0.090382755 2326 andrew gelman stats-2014-05-08-Discussion with Steven Pinker on research that is attached to data that are so noisy as to be essentially uninformative

15 0.085806355 1760 andrew gelman stats-2013-03-12-Misunderstanding the p-value

16 0.085279815 1607 andrew gelman stats-2012-12-05-The p-value is not . . .

17 0.081796765 171 andrew gelman stats-2010-07-30-Silly baseball example illustrates a couple of key ideas they don’t usually teach you in statistics class

18 0.080111094 1869 andrew gelman stats-2013-05-24-In which I side with Neyman over Fisher

19 0.078274399 696 andrew gelman stats-2011-05-04-Whassup with glm()?

20 0.076651856 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing


similar blog posts computed by the LSI model

LSI topics for this post:

topicId topicWeight

[(0, 0.063), (1, 0.011), (2, -0.011), (3, -0.062), (4, -0.052), (5, -0.036), (6, -0.003), (7, 0.022), (8, 0.034), (9, -0.062), (10, -0.066), (11, 0.016), (12, -0.005), (13, -0.079), (14, -0.02), (15, 0.004), (16, -0.018), (17, -0.061), (18, -0.014), (19, -0.041), (20, 0.024), (21, -0.013), (22, -0.051), (23, -0.006), (24, -0.08), (25, -0.025), (26, 0.073), (27, 0.036), (28, 0.035), (29, -0.037), (30, 0.02), (31, 0.022), (32, 0.067), (33, 0.012), (34, -0.068), (35, -0.049), (36, 0.06), (37, -0.032), (38, 0.013), (39, -0.033), (40, -0.069), (41, 0.022), (42, 0.031), (43, -0.003), (44, -0.004), (45, 0.018), (46, 0.015), (47, -0.026), (48, 0.054), (49, -0.008)]
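The 50 (topicId, topicWeight) pairs above are the post's coordinates in a latent semantic space, and the simValue column below again looks like cosine similarity, this time between those 50-dimensional vectors (the same-blog score of 0.989 rather than 1.0 suggests some extra transformation, so treat this as approximate). A sketch assuming the classic LSI construction, truncated SVD over the tf-idf matrix; the 50 components match the vector shown, but the real pipeline may differ.

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = ["raw text of every blog post ..."]   # placeholder

X = TfidfVectorizer(stop_words="english").fit_transform(posts)
lsi = TruncatedSVD(n_components=50, random_state=0)  # LSI = SVD of tf-idf
Z = lsi.fit_transform(X)              # one 50-dim topic vector per post

topic_weights = Z[0]                            # the pairs listed above
sims = cosine_similarity(Z[0:1], Z).ravel()     # the simValue column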

similar blog posts:

simIndex simValue blogId blogTitle

same-blog 1 0.98929334 2272 andrew gelman stats-2014-03-29-I agree with this comment

Introduction: The anonymous commenter puts it well: The problem is simple, the researchers are disproving always false null hypotheses and taking this disproof as near proof that their theory is correct.

2 0.76688206 2281 andrew gelman stats-2014-04-04-The Notorious N.H.S.T. presents: Mo P-values Mo Problems

Introduction: A recent discussion between commenters Question and Fernando captured one of the recurrent themes here from the past year. Question: The problem is simple, the researchers are disproving always false null hypotheses and taking this disproof as near proof that their theory is correct. Fernando: Whereas it is probably true that researchers misuse NHT, the problem with tabloid science is broader and deeper. It is systemic. Question: I do not see how anything can be deeper than replacing careful description, prediction, falsification, and independent replication with dynamite plots, p-values, affirming the consequent, and peer review. From my own experience I am confident in saying that confusion caused by NHST is at the root of this problem. Fernando: Incentives? Impact factors? Publish or die? “Interesting” and “new” above quality and reliability, or actually answering a research question, and a silly and unbecoming obsession with being quoted in NYT, etc. . . . Giv

3 0.71657705 1024 andrew gelman stats-2011-11-23-Of hypothesis tests and Unitarians

Introduction: Xian, Judith, and I read this line in a book by statistician Murray Aitkin in which he considered the following hypothetical example: A survey of 100 individuals expressing support (Yes/No) for the president, before and after a presidential address . . . The question of interest is whether there has been a change in support between the surveys . . . We want to assess the evidence for the hypothesis of equality H1 against the alternative hypothesis H2 of a change. Here is our response: Based on our experience in public opinion research, this is not a real question. Support for any political position is always changing. The real question is how much the support has changed, or perhaps how this change is distributed across the population. A defender of Aitkin (and of classical hypothesis testing) might respond at this point that, yes, everybody knows that changes are never exactly zero and that we should take a more “grown-up” view of the null hypothesis, not that the change

4 0.70694083 256 andrew gelman stats-2010-09-04-Noooooooooooooooooooooooooooooooooooooooooooooooo!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Introduction: Masanao sends this one in, under the heading, “another incident of misunderstood p-value”: Warren Davies, a positive psychology MSc student at UEL, provides the latest in our ongoing series of guest features for students. Warren has just released a Psychology Study Guide, which covers information on statistics, research methods and study skills for psychology students. Despite the myriad rules and procedures of science, some research findings are pure flukes. Perhaps you’re testing a new drug, and by chance alone, a large number of people spontaneously get better. The better your study is conducted, the lower the chance that your result was a fluke – but still, there is always a certain probability that it was. Statistical significance testing gives you an idea of what this probability is. In science we’re always testing hypotheses. We never conduct a study to ‘see what happens’, because there’s always at least one way to make any useless set of data look important. We take

5 0.69771677 2295 andrew gelman stats-2014-04-18-One-tailed or two-tailed?

Introduction: Someone writes: Suppose I have two groups of people, A and B, which differ on some characteristic of interest to me; and for each person I measure a single real-valued quantity X. I have a theory that group A has a higher mean value of X than group B. I test this theory by using a t-test. Am I entitled to use a *one-tailed* t-test? Or should I use a *two-tailed* one (thereby giving a p-value that is twice as large)? I know you will probably answer: Forget the t-test; you should use Bayesian methods instead. But what is the standard frequentist answer to this question? My reply: The quick answer here is that different people will do different things here. I would say the 2-tailed p-value is more standard but some people will insist on the one-tailed version, and it’s hard to make a big stand on this one, given all the other problems with p-values in practice: http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf http://www.stat.columbia.edu/~gelm

6 0.67330259 1355 andrew gelman stats-2012-05-31-Lindley’s paradox

7 0.66860402 1826 andrew gelman stats-2013-04-26-“A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null”

8 0.65790027 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

9 0.63543379 1869 andrew gelman stats-2013-05-24-In which I side with Neyman over Fisher

10 0.62611759 2263 andrew gelman stats-2014-03-24-Empirical implications of Empirical Implications of Theoretical Models

11 0.61673242 2102 andrew gelman stats-2013-11-15-“Are all significant p-values created equal?”

12 0.60251135 1883 andrew gelman stats-2013-06-04-Interrogating p-values

13 0.59203142 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing

14 0.58874887 331 andrew gelman stats-2010-10-10-Bayes jumps the shark

15 0.58728045 1760 andrew gelman stats-2013-03-12-Misunderstanding the p-value

16 0.57663774 2326 andrew gelman stats-2014-05-08-Discussion with Steven Pinker on research that is attached to data that are so noisy as to be essentially uninformative

17 0.55715883 2183 andrew gelman stats-2014-01-23-Discussion on preregistration of research studies

18 0.55605078 1101 andrew gelman stats-2012-01-05-What are the standards for reliability in experimental psychology?

19 0.54726046 2127 andrew gelman stats-2013-12-08-The never-ending (and often productive) race between theory and practice

20 0.52854055 1861 andrew gelman stats-2013-05-17-Where do theories come from?


similar blog posts computed by the LDA model

LDA topics for this post:

topicId topicWeight

[(9, 0.155), (16, 0.123), (21, 0.215), (76, 0.051), (86, 0.13), (99, 0.134)]
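Unlike the dense LSI vector, the LDA representation above is sparse: only topics carrying non-negligible weight are listed (the six weights shown sum to about 0.81, with the remainder spread thinly over the other topics). A sketch assuming scikit-learn's LatentDirichletAllocation over raw term counts; the 100-topic model size is a guess from the topic ids shown (the largest is 99), and the 0.05 cutoff is likewise illustrative.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = ["raw text of every blog post ..."]   # placeholder

counts = CountVectorizer(stop_words="english").fit_transform(posts)
lda = LatentDirichletAllocation(n_components=100, random_state=0)
theta = lda.fit_transform(counts)     # per-post topic mixture; rows sum to 1

# keep only topics with real weight, matching the sparse list above
doc = theta[0]
sparse_topics = [(k, round(w, 3)) for k, w in enumerate(doc) if w > 0.05]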

similar blog posts:

simIndex simValue blogId blogTitle

same-blog 1 0.93960124 2272 andrew gelman stats-2014-03-29-I agree with this comment

Introduction: The anonymous commenter puts it well: The problem is simple, the researchers are disproving always false null hypotheses and taking this disproof as near proof that their theory is correct.

2 0.73694193 1232 andrew gelman stats-2012-03-27-Banned in NYC school tests

Introduction: The list includes “hunting” but not “fishing,” so that’s cool. I wonder how they’d feel about a question involving different cuts of meat. In any case, I’m happy to see that “Bayes” is not on the banned list. P.S. Russell explains.

3 0.73224878 151 andrew gelman stats-2010-07-16-Wanted: Probability distributions for rank orderings

Introduction: Dietrich Stoyan writes: I asked the IMS people for an expert in statistics of voting/elections and they wrote me your name. I am a statistician, but never worked in the field voting/elections. It was my son-in-law who asked me for statistical theories in that field. He posed in particular the following problem: The aim of the voting is to come to a ranking of c candidates. Every vote is a permutation of these c candidates. The problem is to have probability distributions in the set of all permutations of c elements. Are there theories for such distributions? I should be very grateful for a fast answer with hints to literature. (I confess that I do not know your books.) My reply: Rather than trying to model the ranks directly, I’d recommend modeling a latent continuous outcome which then implies a distribution on ranks, if the ranks are of interest. There are lots of distributions of c-dimensional continuous outcomes. In political science, the usual way to start is

4 0.71963507 672 andrew gelman stats-2011-04-20-The R code for those time-use graphs

Introduction: By popular demand, here’s my R script for the time-use graphs:

# The data
a1 <- c(4.2,3.2,11.1,1.3,2.2,2.0)
a2 <- c(3.9,3.2,10.0,0.8,3.1,3.1)
a3 <- c(6.3,2.5,9.8,0.9,2.2,2.4)
a4 <- c(4.4,3.1,9.8,0.8,3.3,2.7)
a5 <- c(4.8,3.0,9.9,0.7,3.3,2.4)
a6 <- c(4.0,3.4,10.5,0.7,3.3,2.1)
a <- rbind(a1,a2,a3,a4,a5,a6)
avg <- colMeans(a)
avg.array <- t(array(avg, rev(dim(a))))
diff <- a - avg.array
country.name <- c("France", "Germany", "Japan", "Britain", "USA", "Turkey")

# The line plots
par(mfrow=c(2,3), mar=c(4,4,2,.5), mgp=c(2,.7,0), tck=-.02, oma=c(3,0,4,0), bg="gray96", fg="gray30")
for (i in 1:6){
  plot(c(1,6), c(-1,1.7), xlab="", ylab="", xaxt="n", yaxt="n", bty="l", type="n")
  lines(1:6, diff[i,], col="blue")
  points(1:6, diff[i,], pch=19, col="black")
  if (i>3){
    axis(1, c(1,3,5), c("Work,\nstudy", "Eat,\nsleep", "Leisure"), mgp=c(2,1.5,0), tck=0, cex.axis=1.2)
    axis(1, c(2,4,6), c("Unpaid\nwork", "Personal\nCare", "Other"), mgp=c(2,1.5,0),

5 0.68027616 2219 andrew gelman stats-2014-02-21-The world’s most popular languages that the Mac documentation hasn’t been translated into

Introduction: I was updating my Mac and noticed the following: Lots of obscure European languages there. That got me wondering: what’s the least obscure language not on the above list? Igbo? Swahili? Or maybe Tagalog? I did a quick google and found this list of languages by number of native speakers. Once you see the list, the answer is obvious: Hindi, first language of 295 million people, is not on Apple’s list. The next most popular languages not included: Bengali, Punjabi, Javanese, Wu, Telegu, Marathi, Tamil, Urdu. Wow: most of these are Indian! Then comes Persian and a bunch of others. It turns out that Tagalog, Igbo, and Swahili, are way down on this list with 28 million, 24 million, and 26 million native speakers, respectively. Only 26 million for Swahili? This made me want to check the list of languages by total number of speakers . The ranking of most of the languages isn’t much different, but Swahili is now #10, at 140 million. Hindi and Bengali are still th

6 0.67350852 1857 andrew gelman stats-2013-05-15-Does quantum uncertainty have a place in everyday applied statistics?

7 0.67160559 894 andrew gelman stats-2011-09-07-Hipmunk FAIL: Graphics without content is not enough

8 0.66668749 1275 andrew gelman stats-2012-04-22-Please stop me before I barf again

9 0.66538811 529 andrew gelman stats-2011-01-21-“City Opens Inquiry on Grading Practices at a Top-Scoring Bronx School”

10 0.66446882 62 andrew gelman stats-2010-06-01-Two Postdoc Positions Available on Bayesian Hierarchical Modeling

11 0.66202682 29 andrew gelman stats-2010-05-12-Probability of successive wins in baseball

12 0.66150343 1291 andrew gelman stats-2012-04-30-Systematic review of publication bias in studies on publication bias

13 0.6571058 1826 andrew gelman stats-2013-04-26-“A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null”

14 0.65190911 577 andrew gelman stats-2011-02-16-Annals of really really stupid spam

15 0.65041006 218 andrew gelman stats-2010-08-20-I think you knew this already

16 0.64360839 2306 andrew gelman stats-2014-04-26-Sleazy sock puppet can’t stop spamming our discussion of compressed sensing and promoting the work of Xiteng Liu

17 0.64143342 2298 andrew gelman stats-2014-04-21-On deck this week

18 0.63848394 1961 andrew gelman stats-2013-07-29-Postdocs in probabilistic modeling! With David Blei! And Stan!

19 0.63667548 432 andrew gelman stats-2010-11-27-Neumann update

20 0.63256049 1049 andrew gelman stats-2011-12-09-Today in the sister blog