hunch_net hunch_net-2008 hunch_net-2008-290 knowledge-graph by maker-knowledge-mining

290 hunch net-2008-02-27-The Stats Handicap


meta info for this blog

Source: html

Introduction: Graduating students in Statistics appear to be at a substantial handicap compared to graduating students in Machine Learning, despite being in substantially overlapping subjects. The problem seems to be cultural. Statistics comes from a mathematics background which emphasizes large publications slowly published under review at journals. Machine Learning comes from a Computer Science background which emphasizes quick publishing at reviewed conferences. This has a number of implications: Graduating statistics PhDs often have 0-2 publications while graduating machine learning PhDs might have 5-15. Graduating ML students have had a chance for others to build on their work. Stats students have had no such chance. Graduating ML students have attended a number of conferences and presented their work, giving them a chance to meet people. Stats students have had fewer chances of this sort. In short, Stats students have had relatively few chances to distinguish themselves and are heavily reliant on their advisors for jobs afterwards.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Graduating students in Statistics appear to be at a substantial handicap compared to graduating students in Machine Learning, despite being in substantially overlapping subjects. [sent-1, score-1.469]

2 Statistics comes from a mathematics background which emphasizes large publications slowly published under review at journals. [sent-3, score-0.642]

3 Machine Learning comes from a Computer Science background which emphasizes quick publishing at reviewed conferences. [sent-4, score-0.494]

4 This has a number of implications: Graduating statistics PhDs often have 0-2 publications while graduating machine learning PhDs might have 5-15. [sent-5, score-0.821]

5 Graduating ML students have had a chance for others to build on their work. [sent-6, score-0.509]

6 Graduating ML students have attended a number of conferences and presented their work, giving them a chance to meet people. [sent-8, score-0.629]

7 Stats students have had fewer chances of this sort. [sent-9, score-0.606]

8 In short, Stats students have had relatively few chances to distinguish themselves and are heavily reliant on their advisors for jobs afterwards. [sent-10, score-0.883]

9 This is a poor situation, because advisors have a strong incentive to place students well, implying that recommendation letters must always be considered with a grain of salt. [sent-11, score-0.918]

10 This problem is more or less prevalent depending on which Stats department students go to. [sent-12, score-0.566]

11 In some places the difference is substantial, and in other places not. [sent-13, score-0.2]

12 One practical implication of this is that when considering graduating stats PhDs for hire, some amount of affirmative action is in order. [sent-14, score-1.115]

13 At a minimum, this implies that spending extra time getting to know the candidate and what the candidate can do is in order. [sent-15, score-0.382]
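The scores above were produced by a tfidf model whose exact pipeline is not described on this page. As a rough sketch only, an extractive summary of this kind can be obtained by ranking sentences by the sum of their tfidf term weights; the abridged post text and the naive sentence splitting below are hypothetical stand-ins, not the actual pipeline.

```python
# Sketch (assumed approach, not the page's actual pipeline): rank the sentences
# of a post by the sum of their tfidf term weights.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Abridged, hypothetical stand-in for the full post text.
post = (
    "Graduating students in Statistics appear to be at a substantial handicap "
    "compared to graduating students in Machine Learning. "
    "The problem seems to be cultural. "
    "Statistics comes from a mathematics background which emphasizes journals. "
    "Machine Learning comes from a Computer Science background which emphasizes conferences."
)
sentences = [s.strip() for s in post.split(". ") if s.strip()]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)        # one tfidf row per sentence

scores = np.asarray(X.sum(axis=1)).ravel()     # sentence score = sum of its term weights
for score, sent in sorted(zip(scores, sentences), reverse=True)[:3]:
    print(f"{score:.3f}  {sent}")
```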


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('graduating', 0.541), ('stats', 0.417), ('students', 0.386), ('phds', 0.25), ('chances', 0.167), ('emphasizes', 0.146), ('advisors', 0.146), ('publications', 0.146), ('statistics', 0.134), ('candidate', 0.133), ('background', 0.11), ('places', 0.1), ('hire', 0.083), ('grain', 0.083), ('letters', 0.083), ('chance', 0.078), ('ml', 0.075), ('comes', 0.072), ('spending', 0.069), ('prevalent', 0.067), ('meet', 0.064), ('department', 0.064), ('jobs', 0.064), ('implication', 0.062), ('incentive', 0.062), ('slowly', 0.061), ('heavily', 0.061), ('reviewed', 0.059), ('distinguish', 0.059), ('recommendation', 0.059), ('substantial', 0.059), ('quick', 0.056), ('mathematics', 0.055), ('implications', 0.055), ('fewer', 0.053), ('published', 0.052), ('attended', 0.052), ('publishing', 0.051), ('despite', 0.051), ('action', 0.05), ('poor', 0.05), ('minimum', 0.05), ('presented', 0.049), ('depending', 0.049), ('considered', 0.049), ('situation', 0.048), ('extra', 0.047), ('compared', 0.046), ('build', 0.045), ('considering', 0.045)]
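The exact vectorizer behind these weights is not specified; below is a minimal scikit-learn sketch of how a topN word/weight list like this could be computed, with a tiny hypothetical corpus standing in for the full set of hunch.net posts.

```python
# Sketch: top tfidf terms for one post. The three-post corpus is a hypothetical
# stand-in; real weights depend on the full corpus and vectorizer settings.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "Graduating students in Statistics appear to be at a substantial handicap ...",
    "Carnegie Mellon School of Computer Science has the first Machine Learning department ...",
    "Graduate study is a mysterious and uncertain process ...",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)
terms = vectorizer.get_feature_names_out()

doc = 0                                        # index of "The Stats Handicap"
weights = X[doc].toarray().ravel()
top = np.argsort(weights)[::-1][:10]           # topN words by tfidf weight
print([(terms[i], float(round(weights[i], 3))) for i in top if weights[i] > 0])
```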

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 290 hunch net-2008-02-27-The Stats Handicap

Introduction: Graduating students in Statistics appear to be at a substantial handicap compared to graduating students in Machine Learning, despite being in substantially overlapping subjects. The problem seems to be cultural. Statistics comes from a mathematics background which emphasizes large publications slowly published under review at journals. Machine Learning comes from a Computer Science background which emphasizes quick publishing at reviewed conferences. This has a number of implications: Graduating statistics PhDs often have 0-2 publications while graduating machine learning PhDs might have 5-15. Graduating ML students have had a chance for others to build on their work. Stats students have had no such chance. Graduating ML students have attended a number of conferences and presented their work, giving them a chance to meet people. Stats students have had fewer chances of this sort. In short, Stats students have had relatively few chances to distinguish themselves and

2 0.14190316 228 hunch net-2007-01-15-The Machine Learning Department

Introduction: Carnegie Mellon School of Computer Science has the first academic Machine Learning department. This department already existed as the Center for Automated Learning and Discovery, but recently changed its name. The reason for changing the name is obvious: very few people think of themselves as “Automated Learner and Discoverers”, but there are a number of people who think of themselves as “Machine Learners”. Machine learning is both more succinct and recognizable—good properties for a name. A more interesting question is “Should there be a Machine Learning Department?”. Tom Mitchell has a relevant whitepaper claiming that machine learning is answering a different question than other fields or departments. The fundamental debate here is “Is machine learning different from statistics?” At a cultural level, there is no real debate: they are different. Machine learning is characterized by several very active large peer reviewed conferences, operating in a computer

3 0.1239116 73 hunch net-2005-05-17-A Short Guide to PhD Graduate Study

Introduction: Graduate study is a mysterious and uncertain process. The easiest way to see this is by noting that a very old advisor/student mechanism is preferred. There is no known successful mechanism for “mass producing” PhDs as is done (in some sense) for undergraduate and masters study. Here are a few hints that might be useful to prospective or current students based on my own experience. Masters or PhD (a) You want a PhD if you want to do research. (b) You want a masters if you want to make money. People wanting (b) will be manifestly unhappy with (a) because it typically means years of low pay. People wanting (a) should try to avoid (b) because it prolongs an already long process. Attitude. Many students struggle for a while with the wrong attitude towards research. Most students come into graduate school with 16-19 years of schooling where the principal means of success is proving that you know something via assignments, tests, etc… Research does not work this way. Re

4 0.12323233 316 hunch net-2008-09-04-Fall ML Conferences

Introduction: If you are in the New York area and interested in machine learning, consider submitting a 2 page abstract to the ML symposium by tomorrow (Sept 5th) midnight. It’s a fun one day affair on October 10 in an awesome location overlooking the world trade center site. A bit further off (but a real conference) is the AI and Stats deadline on November 5, to be held in Florida April 16-19.

5 0.11672579 110 hunch net-2005-09-10-“Failure” is an option

Introduction: This is about the hard choices that graduate students must make. The cultural definition of success in academic research is to: Produce good research which many other people appreciate. Produce many students who go on to do the same. There are fundamental reasons why this is success in the local culture. Good research appreciated by others means access to jobs. Many students successful in the same way implies that there are a number of people who think in a similar way and appreciate your work. In order to graduate, a PhD student must live in an academic culture for a period of several years. It is common to adopt the culture’s definition of success during this time. It’s also common for many PhD students to discover they are not suited to an academic research lifestyle. This collision of values and abilities naturally results in depression. The most fundamental advice when this happens is: change something. Pick a new advisor. Pick a new research topic. Or leave th

6 0.096487328 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

7 0.091793209 344 hunch net-2009-02-22-Effective Research Funding

8 0.084978893 313 hunch net-2008-08-18-Radford Neal starts a blog

9 0.08355955 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

10 0.080354765 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

11 0.073215455 378 hunch net-2009-11-15-The Other Online Learning

12 0.070188619 195 hunch net-2006-07-12-Who is having visa problems reaching US conferences?

13 0.06960503 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

14 0.064604327 339 hunch net-2009-01-27-Key Scientific Challenges

15 0.0640558 389 hunch net-2010-02-26-Yahoo! ML events

16 0.063464507 75 hunch net-2005-05-28-Running A Machine Learning Summer School

17 0.057192773 410 hunch net-2010-09-17-New York Area Machine Learning Events

18 0.055486221 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

19 0.054832667 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium

20 0.053211205 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.095), (1, -0.036), (2, -0.056), (3, 0.053), (4, -0.056), (5, -0.004), (6, -0.001), (7, 0.045), (8, -0.055), (9, -0.045), (10, 0.047), (11, 0.009), (12, 0.059), (13, -0.01), (14, 0.061), (15, 0.053), (16, 0.036), (17, 0.021), (18, 0.039), (19, 0.068), (20, 0.048), (21, -0.011), (22, 0.038), (23, 0.02), (24, 0.044), (25, -0.076), (26, 0.039), (27, -0.069), (28, 0.018), (29, 0.08), (30, -0.002), (31, 0.054), (32, 0.015), (33, 0.061), (34, -0.016), (35, 0.036), (36, -0.034), (37, -0.016), (38, 0.043), (39, 0.008), (40, -0.032), (41, 0.038), (42, 0.098), (43, 0.001), (44, 0.015), (45, -0.035), (46, 0.018), (47, -0.018), (48, -0.051), (49, 0.004)]
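These topic weights come from a latent semantic indexing (lsi) model; the number of components (apparently 50, judging by topic ids 0-49 above) and the training corpus are not given here. Below is a minimal sketch under those assumptions, using truncated SVD over tfidf vectors and cosine similarity to produce simValue-style rankings.

```python
# Sketch of an LSI similarity step: project tfidf vectors into a low-dimensional
# latent space, then rank neighbouring posts by cosine similarity.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-in corpus; the real model is trained on all hunch.net posts.
corpus = [
    "Graduating students in Statistics appear to be at a substantial handicap ...",
    "Carnegie Mellon School of Computer Science has the first Machine Learning department ...",
    "Graduate study is a mysterious and uncertain process ...",
    "Yahoo released the Key Scientific Challenges program ...",
]

X = TfidfVectorizer(stop_words="english").fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0)   # ~50 components on a real corpus
topic_weights = lsi.fit_transform(X)                 # one (topicId, topicWeight) vector per post

sims = cosine_similarity(topic_weights[:1], topic_weights).ravel()
print(sorted(enumerate(sims), key=lambda p: -p[1]))  # (blog index, simValue), best first
```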

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96418977 290 hunch net-2008-02-27-The Stats Handicap

Introduction: Graduating students in Statistics appear to be at a substantial handicap compared to graduating students in Machine Learning, despite being in substantially overlapping subjects. The problem seems to be cultural. Statistics comes from a mathematics background which emphasizes large publications slowly published under review at journals. Machine Learning comes from a Computer Science background which emphasizes quick publishing at reviewed conferences. This has a number of implications: Graduating statistics PhDs often have 0-2 publications while graduating machine learning PhDs might have 5-15. Graduating ML students have had a chance for others to build on their work. Stats students have had no such chance. Graduating ML students have attended a number of conferences and presented their work, giving them a chance to meet people. Stats students have had fewer chances of this sort. In short, Stats students have had relatively few chances to distinguish themselves and

2 0.60972923 339 hunch net-2009-01-27-Key Scientific Challenges

Introduction: Yahoo released the Key Scientific Challenges program. There is a Machine Learning list I worked on and a Statistics list which Deepak worked on. I’m hoping this is taken quite seriously by graduate students. The primary value is that it gave us a chance to sit down and publicly specify directions of research which would be valuable to make progress on. A good strategy for a beginning graduate student is to pick one of these directions, pursue it, and make substantial advances for a PhD. The directions are sufficiently general that I’m sure any serious advance has applications well beyond Yahoo. A secondary point (which I’m sure is primary for many) is that there is money for graduate students here. It’s unrestricted, so you can use it for any reasonable travel, supplies, etc…

3 0.57096422 313 hunch net-2008-08-18-Radford Neal starts a blog

Introduction: here on statistics, ML, CS, and other things he knows well.

4 0.56739902 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

Introduction: Since we last discussed the other online learning, Stanford has very visibly started pushing mass teaching in AI, Machine Learning, and Databases. In retrospect, it’s not too surprising that the next step up in serious online teaching experiments is occurring at the computer science department of a university embedded in the land of startups. Numbers on the order of 100000 are quite significant—similar in scale to the number of computer science undergraduate students/year in the US. Although these populations surely differ, the fact that they could overlap is worth considering for the future. It’s too soon to say how successful these classes will be and there are many easy criticisms to make: Registration != Learning … but if only 1/10th complete these classes, the scale of teaching still surpasses the scale of any traditional process. 1st year excitement != nth year routine … but if only 1/10th take future classes, the scale of teaching still surpass

5 0.56240416 228 hunch net-2007-01-15-The Machine Learning Department

Introduction: Carnegie Mellon School of Computer Science has the first academic Machine Learning department. This department already existed as the Center for Automated Learning and Discovery, but recently changed its name. The reason for changing the name is obvious: very few people think of themselves as “Automated Learner and Discoverers”, but there are a number of people who think of themselves as “Machine Learners”. Machine learning is both more succinct and recognizable—good properties for a name. A more interesting question is “Should there be a Machine Learning Department?”. Tom Mitchell has a relevant whitepaper claiming that machine learning is answering a different question than other fields or departments. The fundamental debate here is “Is machine learning different from statistics?” At a cultural level, there is no real debate: they are different. Machine learning is characterized by several very active large peer reviewed conferences, operating in a computer

6 0.54220897 69 hunch net-2005-05-11-Visa Casualties

7 0.53747052 110 hunch net-2005-09-10-“Failure” is an option

8 0.52822256 335 hunch net-2009-01-08-Predictive Analytics World

9 0.52796525 414 hunch net-2010-10-17-Partha Niyogi has died

10 0.50696343 73 hunch net-2005-05-17-A Short Guide to PhD Graduate Study

11 0.46632162 75 hunch net-2005-05-28-Running A Machine Learning Summer School

12 0.46551785 449 hunch net-2011-11-26-Giving Thanks

13 0.45820907 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

14 0.45393443 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium

15 0.44441023 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

16 0.44386998 389 hunch net-2010-02-26-Yahoo! ML events

17 0.43978363 13 hunch net-2005-02-04-JMLG

18 0.4382059 195 hunch net-2006-07-12-Who is having visa problems reaching US conferences?

19 0.43195578 448 hunch net-2011-10-24-2011 ML symposium and the bears

20 0.42358428 378 hunch net-2009-11-15-The Other Online Learning


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(10, 0.031), (27, 0.11), (38, 0.058), (53, 0.032), (55, 0.077), (56, 0.031), (83, 0.019), (94, 0.031), (95, 0.108), (97, 0.374)]
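The lda weights are likewise a per-post topic distribution from a latent Dirichlet allocation model whose topic count and training corpus are not given here. The sketch below is a minimal illustration under those assumptions; topic ids, corpus, and values are illustrative only.

```python
# Sketch of an LDA similarity step: fit a topic model on word counts, then
# compare posts by their topic distributions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-in corpus; the real model is trained on all hunch.net posts.
corpus = [
    "Graduating students in Statistics appear to be at a substantial handicap ...",
    "With a worldwide recession on, the carnage in research has not been severe ...",
    "Martin Pool and I recently discussed academia and open source programming ...",
    "The ICML paper deadline has passed, with around 900 submissions ...",
]

counts = CountVectorizer(stop_words="english").fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=3, random_state=0)  # many more topics in practice
doc_topics = lda.fit_transform(counts)         # rows sum to 1: (topicId, topicWeight) per post

print([(t, round(w, 3)) for t, w in enumerate(doc_topics[0]) if w > 0.01])
sims = cosine_similarity(doc_topics[:1], doc_topics).ravel()
print(sorted(enumerate(sims), key=lambda p: -p[1]))  # (blog index, simValue), best first
```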

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.89221585 290 hunch net-2008-02-27-The Stats Handicap

Introduction: Graduating students in Statistics appear to be at a substantial handicap compared to graduating students in Machine Learning, despite being in substantially overlapping subjects. The problem seems to be cultural. Statistics comes from a mathematics background which emphasizes large publications slowly published under review at journals. Machine Learning comes from a Computer Science background which emphasizes quick publishing at reviewed conferences. This has a number of implications: Graduating statistics PhDs often have 0-2 publications while graduating machine learning PhDs might have 5-15. Graduating ML students have had a chance for others to build on their work. Stats students have had no such chance. Graduating ML students have attended a number of conferences and presented their work, giving them a chance to meet people. Stats students have had fewer chances of this sort. In short, Stats students have had relatively few chances to distinguish themselves and

2 0.4307504 344 hunch net-2009-02-22-Effective Research Funding

Introduction: With a worldwide recession on, my impression is that the carnage in research has not been as severe as might be feared, at least in the United States. I know of two notable negative impacts: It’s quite difficult to get a job this year, as many companies and universities simply aren’t hiring. This is particularly tough on graduating students. Perhaps 10% of IBM research was fired. In contrast, around the time of the dot com bust, ATnT Research and Lucent had one or several 50% size firings wiping out much of the remainder of Bell Labs, triggering a notable diaspora for the respected machine learning group there. As the recession progresses, we may easily see more firings as companies in particular reach a point where they can no longer support research. There are a couple positives to the recession as well. Both the implosion of Wall Street (which siphoned off smart people) and the general difficulty of getting a job coming out of an undergraduate education s

3 0.42136857 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

Introduction: Martin Pool and I recently discussed the similarities and differences between academia and open source programming. Similarities: Cost profile Research and programming share approximately the same cost profile: A large upfront effort is required to produce something useful, and then “anyone” can use it. (The “anyone” is not quite right for either group because only sufficiently technical people could use it.) Wealth profile A “wealthy” academic or open source programmer is someone who has contributed a lot to other people in research or programs. Much of academia is a “gift culture”: whoever gives the most is most respected. Problems Both academia and open source programming suffer from similar problems. Whether or not (and which) open source program is used are perhaps too-often personality driven rather than driven by capability or usefulness. Similar phenomena can happen in academia with respect to directions of research. Funding is often a problem for

4 0.41872817 456 hunch net-2012-02-24-ICML+50%

Introduction: The ICML paper deadline has passed. Joelle and I were surprised to see the number of submissions jump from last year by about 50% to around 900 submissions. A tiny portion of these are immediate rejects(*), so this is a much larger set of papers than expected. The number of workshop submissions also doubled compared to last year, so ICML may grow significantly this year, if we can manage to handle the load well. The prospect of making 900 good decisions is fundamentally daunting, and success will rely heavily on the program committee and area chairs at this point. For those who want to rubberneck a bit more, here’s a breakdown of submissions by primary topic of submitted papers: 66 Reinforcement Learning 52 Supervised Learning 51 Clustering 46 Kernel Methods 40 Optimization Algorithms 39 Feature Selection and Dimensionality Reduction 33 Learning Theory 33 Graphical Models 33 Applications 29 Probabilistic Models 29 NN & Deep Learning 26 Transfer and Multi-Ta

5 0.41619581 373 hunch net-2009-10-03-Static vs. Dynamic multiclass prediction

Introduction: I have had interesting discussions about the distinction between static vs. dynamic classes with Kishore and Hal. The distinction arises in multiclass prediction settings. A static set of classes is given by a set of labels {1,…,k} and the goal is generally to choose the most likely label given features. The static approach is the one that we typically analyze and think about in machine learning. The dynamic setting is one that is often used in practice. The basic idea is that the number of classes is not fixed, varying on a per example basis. These different classes are generally defined by a choice of features. The distinction between these two settings as far as theory goes, appears to be very substantial. For example, in the static setting, in learning reductions land, we have techniques now for robust O(log(k)) time prediction in many multiclass setting variants. In the dynamic setting, the best techniques known are O(k), and furthermore this exponential

6 0.41040844 389 hunch net-2010-02-26-Yahoo! ML events

7 0.40988919 464 hunch net-2012-05-03-Microsoft Research, New York City

8 0.40579998 127 hunch net-2005-11-02-Progress in Active Learning

9 0.40440452 466 hunch net-2012-06-05-ICML acceptance statistics

10 0.40326661 30 hunch net-2005-02-25-Why Papers?

11 0.39911932 36 hunch net-2005-03-05-Funding Research

12 0.39213133 234 hunch net-2007-02-22-Create Your Own ICML Workshop

13 0.3882539 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

14 0.38791168 406 hunch net-2010-08-22-KDD 2010

15 0.38789004 343 hunch net-2009-02-18-Decision by Vetocracy

16 0.38603842 454 hunch net-2012-01-30-ICML Posters and Scope

17 0.38424137 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

18 0.3836574 301 hunch net-2008-05-23-Three levels of addressing the Netflix Prize

19 0.38313949 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

20 0.38308248 225 hunch net-2007-01-02-Retrospective