hunch_net hunch_net-2013 hunch_net-2013-491 knowledge-graph by maker-knowledge-mining

491 hunch net-2013-11-21-Ben Taskar is gone


meta info for this blog

Source: html

Introduction: I was not as personally close to Ben as Sam, but the level of tragedy is similar and I can’t help but be greatly saddened by the loss. Various news stories have coverage, but the synopsis is that he had a heart attack on Sunday and is survived by his wife Anat and daughter Aviv. There is discussion of creating a memorial fund for them, which I hope comes to fruition, and plan to contribute to. I will remember Ben as someone who thought carefully and comprehensively about new ways to do things, then fought hard and successfully for what he believed in. It is an ideal we strive for, that Ben accomplished. Edit: donations go here, and more information is here.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I was not as personally close to Ben as Sam, but the level of tragedy is similar and I can’t help but be greatly saddened by the loss. [sent-1, score-0.573]

2 Various news stories have coverage, but the synopsis is that he had a heart attack on Sunday and is survived by his wife Anat and daughter Aviv. [sent-2, score-0.795]

3 There is discussion of creating a memorial fund for them, which I hope comes to fruition, and plan to contribute to. [sent-3, score-0.975]

4 I will remember Ben as someone who thought carefully and comprehensively about new ways to do things, then fought hard and successfully for what he believed in. [sent-4, score-1.147]

5 It is an ideal we strive for, that Ben accomplished. [sent-5, score-0.337]

6 Edit: donations go here, and more information is here. [sent-6, score-0.337]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('ben', 0.527), ('sunday', 0.201), ('fought', 0.201), ('strive', 0.201), ('donations', 0.201), ('memorial', 0.201), ('wife', 0.201), ('believed', 0.201), ('heart', 0.176), ('coverage', 0.176), ('fund', 0.167), ('sam', 0.167), ('successfully', 0.167), ('stories', 0.161), ('edit', 0.155), ('contribute', 0.15), ('attack', 0.142), ('ideal', 0.136), ('remember', 0.125), ('plan', 0.121), ('personally', 0.119), ('close', 0.117), ('news', 0.115), ('carefully', 0.1), ('creating', 0.097), ('greatly', 0.097), ('someone', 0.093), ('comes', 0.087), ('level', 0.084), ('thought', 0.084), ('help', 0.081), ('discussion', 0.08), ('go', 0.077), ('similar', 0.075), ('various', 0.074), ('ways', 0.074), ('hope', 0.072), ('hard', 0.066), ('information', 0.059), ('things', 0.053), ('new', 0.036)]
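The word weights above come from a tf-idf model. As a rough illustration of how such weights arise (this is a minimal sketch, not the pipeline that generated this page; the toy corpus, tokenization, and normalization here are hypothetical), tf-idf can be computed as term frequency times inverse document frequency:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute tf-idf weights for each document in a small corpus.

    docs: list of token lists. Returns one {word: weight} dict per document.
    Uses raw term frequency and unsmoothed idf; the model behind this
    page may normalize differently.
    """
    n = len(docs)
    # document frequency: in how many documents each word appears
    df = Counter(w for doc in docs for w in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({
            w: (c / len(doc)) * math.log(n / df[w])
            for w, c in tf.items()
        })
    return weights

# hypothetical toy corpus echoing the posts listed on this page
docs = [
    "ben taskar memorial fund".split(),
    "sam roweis memorial".split(),
    "icml workshops tutorials".split(),
]
w = tfidf(docs)
# words unique to one document get a higher idf than shared words,
# which is why names like 'ben' dominate the list above
print(sorted(w[0], key=w[0].get, reverse=True))
```

Words appearing in every document get idf log(1) = 0, so boilerplate vocabulary drops out and distinctive words like names rise to the top, matching the ranking shown above.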

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 491 hunch net-2013-11-21-Ben Taskar is gone

Introduction: I was not as personally close to Ben as Sam, but the level of tragedy is similar and I can’t help but be greatly saddened by the loss. Various news stories have coverage, but the synopsis is that he had a heart attack on Sunday and is survived by his wife Anat and daughter Aviv. There is discussion of creating a memorial fund for them, which I hope comes to fruition, and plan to contribute to. I will remember Ben as someone who thought carefully and comprehensively about new ways to do things, then fought hard and successfully for what he believed in. It is an ideal we strive for, that Ben accomplished. Edit: donations go here, and more information is here.

2 0.20637819 386 hunch net-2010-01-13-Sam Roweis died

Introduction: and I can’t help but remember him. I first met Sam as an undergraduate at Caltech where he was TA for Hopfield’s class, and again when I visited Gatsby, when he invited me to visit Toronto, and at too many conferences to recount. His personality was a combination of enthusiastic and thoughtful, with a great ability to phrase a problem so its solution must be understood. With respect to my own work, Sam was the one who advised me to make my first tutorial, leading to others, and to other things, all of which I’m grateful to him for. In fact, my every interaction with Sam was positive, and that was his way. His death is being called a suicide, which is so incompatible with my understanding of Sam that it strains my credibility. But we know that his many responsibilities were great, and it is well understood that basically all sane researchers have legions of inner doubts. Having been depressed now and then myself, it’s helpful to understand at least intellectually

3 0.096162297 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

Introduction: I’m the workshops chair for ICML this year. As such, I would like to personally encourage people to consider running a workshop. My general view of workshops is that they are excellent as opportunities to discuss and develop research directions—some of my best work has come from collaborations at workshops and several workshops have substantially altered my thinking about various problems. My experience running workshops is that setting them up and making them fly often appears much harder than it actually is, and the workshops often come off much better than expected in the end. Submissions are due January 18, two weeks before papers. Similarly, Ben Taskar is looking for good tutorials, which is complementary. Workshops are about exploring a subject, while a tutorial is about distilling it down into an easily taught essence, a vital part of the research process. Tutorials are due February 13, two weeks after papers.

4 0.062182043 167 hunch net-2006-03-27-Gradients everywhere

Introduction: One of the basic observations from the atomic learning workshop is that gradient-based optimization is pervasive. For example, at least 7 (of 12) speakers used the word ‘gradient’ in their talk and several others may be approximating a gradient. The essential useful quality of a gradient is that it decouples local updates from global optimization. Restated: Given a gradient, we can determine how to change individual parameters of the system so as to improve overall performance. It’s easy to feel depressed about this and think “nothing has happened”, but that appears untrue. Many of the talks were about clever techniques for computing gradients where your calculus textbook breaks down. Sometimes there are clever approximations of the gradient. (Simon Osindero) Sometimes we can compute constrained gradients via iterated gradient/project steps. (Ben Taskar) Sometimes we can compute gradients anyways over mildly nondifferentiable functions. (Drew Bagnell) Even give

5 0.058695436 77 hunch net-2005-05-29-Maximum Margin Mismatch?

Introduction: John makes a fascinating point about structured classification (and slightly scooped my post!). Maximum Margin Markov Networks (M3N) are an interesting example of the second class of structured classifiers (where the classification of one label depends on the others), and one of my favorite papers. I’m not alone: the paper won the best student paper award at NIPS in 2003. There are some things I find odd about the paper. For instance, it says of probabilistic models “cannot handle high dimensional feature spaces and lack strong theoretical guarrantees.” I’m aware of no such limitations. Also: “Unfortunately, even probabilistic graphical models that are trained discriminatively do not achieve the same level of performance as SVMs, especially when kernel features are used.” This is quite interesting and contradicts my own experience as well as that of a number of people I greatly respect. I wonder what the root cause is: perhaps there is something different abo

6 0.050304465 400 hunch net-2010-06-13-The Good News on Exploration and Learning

7 0.04581967 367 hunch net-2009-08-16-Centmail comments

8 0.043899018 114 hunch net-2005-09-20-Workshop Proposal: Atomic Learning

9 0.040887482 161 hunch net-2006-03-05-“Structural” Learning

10 0.040386401 36 hunch net-2005-03-05-Funding Research

11 0.038065325 464 hunch net-2012-05-03-Microsoft Research, New York City

12 0.038061246 358 hunch net-2009-06-01-Multitask Poisoning

13 0.037618238 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

14 0.037212357 233 hunch net-2007-02-16-The Forgetting

15 0.036673535 222 hunch net-2006-12-05-Recruitment Conferences

16 0.036547069 134 hunch net-2005-12-01-The Webscience Future

17 0.034002036 301 hunch net-2008-05-23-Three levels of addressing the Netflix Prize

18 0.033747628 39 hunch net-2005-03-10-Breaking Abstractions

19 0.033383757 47 hunch net-2005-03-28-Open Problems for Colt

20 0.033351477 237 hunch net-2007-04-02-Contextual Scaling


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.066), (1, -0.025), (2, -0.024), (3, 0.037), (4, -0.018), (5, 0.013), (6, 0.017), (7, 0.022), (8, 0.002), (9, 0.004), (10, -0.013), (11, -0.014), (12, -0.005), (13, 0.019), (14, -0.008), (15, 0.009), (16, -0.033), (17, 0.015), (18, 0.019), (19, 0.015), (20, -0.008), (21, 0.007), (22, -0.032), (23, 0.0), (24, -0.011), (25, 0.016), (26, 0.008), (27, 0.029), (28, -0.029), (29, 0.015), (30, -0.019), (31, -0.059), (32, 0.012), (33, -0.001), (34, -0.039), (35, 0.04), (36, -0.002), (37, 0.008), (38, -0.044), (39, 0.096), (40, 0.043), (41, -0.001), (42, -0.027), (43, 0.059), (44, -0.023), (45, 0.009), (46, 0.015), (47, 0.047), (48, -0.007), (49, 0.037)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96977711 491 hunch net-2013-11-21-Ben Taskar is gone

Introduction: I was not as personally close to Ben as Sam, but the level of tragedy is similar and I can’t help but be greatly saddened by the loss. Various news stories have coverage, but the synopsis is that he had a heart attack on Sunday and is survived by his wife Anat and daughter Aviv. There is discussion of creating a memorial fund for them, which I hope comes to fruition, and plan to contribute to. I will remember Ben as someone who thought carefully and comprehensively about new ways to do things, then fought hard and successfully for what he believed in. It is an ideal we strive for, that Ben accomplished. Edit: donations go here, and more information is here.

2 0.73643446 386 hunch net-2010-01-13-Sam Roweis died

Introduction: and I can’t help but remember him. I first met Sam as an undergraduate at Caltech where he was TA for Hopfield’s class, and again when I visited Gatsby, when he invited me to visit Toronto, and at too many conferences to recount. His personality was a combination of enthusiastic and thoughtful, with a great ability to phrase a problem so its solution must be understood. With respect to my own work, Sam was the one who advised me to make my first tutorial, leading to others, and to other things, all of which I’m grateful to him for. In fact, my every interaction with Sam was positive, and that was his way. His death is being called a suicide, which is so incompatible with my understanding of Sam that it strains my credibility. But we know that his many responsibilities were great, and it is well understood that basically all sane researchers have legions of inner doubts. Having been depressed now and then myself, it’s helpful to understand at least intellectually

3 0.56834304 222 hunch net-2006-12-05-Recruitment Conferences

Introduction: One of the subsidiary roles of conferences is recruitment. NIPS is optimally placed in time for this because it falls right before the major recruitment season. I personally found job hunting embarrassing, and was relatively inept at it. I expect this is true of many people, because it is not something done often. The basic rule is: make the plausible hirers aware of your interest. Any corporate sponsor is a “plausible”, regardless of whether or not there is a booth. CRA and the acm job center are other reasonable sources. There are substantial differences between the different possibilities. Putting some effort into understanding the distinctions is a good idea, although you should always remember where the other person is coming from.

4 0.49476972 358 hunch net-2009-06-01-Multitask Poisoning

Introduction: There are many ways that interesting research gets done. For example, it’s common at a conference for someone to discuss a problem with a partial solution, and for someone else to know how to solve a piece of it, resulting in a paper. In some sense, these are the easiest results we can achieve, so we should ask: Can all research be this easy? The answer is certainly no for fields where research inherently requires experimentation to discover how the real world works. However, mathematics, including parts of physics, computer science, statistics, etc… which are effectively mathematics, don’t require experimentation. In effect, a paper can be simply a pure expression of thinking. Can all mathematical-style research be this easy? What’s going on here is research-by-communication. Someone knows something, someone knows something else, and as soon as someone knows both things, a problem is solved. The interesting thing about research-by-communication is that it is becoming radic

5 0.47376096 449 hunch net-2011-11-26-Giving Thanks

Introduction: Thanksgiving is perhaps my favorite holiday, because pausing your life and giving thanks provides a needed moment of perspective. As a researcher, I am most thankful for my education, without which I could not function. I want to share this, because it provides some sense of how a researcher starts. My long term memory seems to function particularly well, which makes any education I get particularly useful. I am naturally obsessive, which makes me chase down details until I fully understand things. Natural obsessiveness can go wrong, of course, but it’s a great ally when you absolutely must get things right. My childhood was all in one hometown, which was a conscious sacrifice on the part of my father, implying disruptions from moving around were eliminated. I’m not sure how important this was since travel has its own benefits, but it bears thought. I had several great teachers in grade school, and naturally gravitated towards teachers over classmates, as they seemed

6 0.47213835 22 hunch net-2005-02-18-What it means to do research.

7 0.46941668 91 hunch net-2005-07-10-Thinking the Unthought

8 0.46419859 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

9 0.46136898 376 hunch net-2009-11-06-Yisong Yue on Self-improving Systems

10 0.45521224 106 hunch net-2005-09-04-Science in the Government

11 0.45404974 464 hunch net-2012-05-03-Microsoft Research, New York City

12 0.45270675 140 hunch net-2005-12-14-More NIPS Papers II

13 0.44788396 231 hunch net-2007-02-10-Best Practices for Collaboration

14 0.43541837 39 hunch net-2005-03-10-Breaking Abstractions

15 0.433382 193 hunch net-2006-07-09-The Stock Prediction Machine Learning Problem

16 0.43126497 296 hunch net-2008-04-21-The Science 2.0 article

17 0.42649239 370 hunch net-2009-09-18-Necessary and Sufficient Research

18 0.42615291 323 hunch net-2008-11-04-Rise of the Machines

19 0.41707444 121 hunch net-2005-10-12-The unrealized potential of the research lab

20 0.41415772 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(19, 0.555), (27, 0.091), (53, 0.052), (55, 0.115), (94, 0.04)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.94552058 491 hunch net-2013-11-21-Ben Taskar is gone

Introduction: I was not as personally close to Ben as Sam, but the level of tragedy is similar and I can’t help but be greatly saddened by the loss. Various news stories have coverage, but the synopsis is that he had a heart attack on Sunday and is survived by his wife Anat and daughter Aviv. There is discussion of creating a memorial fund for them, which I hope comes to fruition, and plan to contribute to. I will remember Ben as someone who thought carefully and comprehensively about new ways to do things, then fought hard and successfully for what he believed in. It is an ideal we strive for, that Ben accomplished. Edit: donations go here, and more information is here.

2 0.50320423 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy

Introduction: I’ve released version 5.0 of the Vowpal Wabbit online learning software. The major number has changed since the last release because I regard all earlier versions as obsolete—there are several new algorithms & features including substantial changes and upgrades to the default learning algorithm. The biggest changes are new algorithms: Nikos and I improved the default algorithm. The basic update rule still uses gradient descent, but the size of the update is carefully controlled so that it’s impossible to overrun the label. In addition, the normalization has changed. Computationally, these changes are virtually free and yield better results, sometimes much better. Less careful updates can be reenabled with --loss_function classic, although results are still not identical to previous due to normalization changes. Nikos also implemented the per-feature learning rates as per these two papers. Often, this works better than the default algorithm. It isn’t the defa

3 0.49795988 306 hunch net-2008-07-02-Proprietary Data in Academic Research?

Introduction: Should results of experiments on proprietary datasets be in the academic research literature? The arguments I can imagine in the “against” column are: (1) Experiments are not repeatable. Repeatability in experiments is essential to science because it allows others to compare new methods with old and discover which is better. (2) It’s unfair. Academics who don’t have insider access to proprietary data are at a substantial disadvantage when competing with others who do. I’m unsympathetic to argument (2). To me, it looks like there are simply some resource constraints, and these should not prevent research progress. For example, we wouldn’t prevent publishing about particle accelerator experiments by physicists at CERN because physicists at CMU couldn’t run their own experiments. Argument (1) seems like a real issue. The argument for is: Yes, they are another form of evidence that an algorithm is good. The degree to which they are evidence is less than for public

4 0.27041101 452 hunch net-2012-01-04-Why ICML? and the summer conferences

Introduction: Here’s a quick reference for summer ML-related conferences sorted by due date:

Conference  Due date  Location  Reviewing
KDD  Feb 10  August 12-16, Beijing, China  Single Blind
COLT  Feb 14  June 25-June 27, Edinburgh, Scotland  Single Blind? (historically)
ICML  Feb 24  June 26-July 1, Edinburgh, Scotland  Double Blind, author response, zero SPOF
UAI  March 30  August 15-17, Catalina Islands, California  Double Blind, author response

Geographically, this is greatly dispersed and the UAI/KDD conflict is unfortunate. Machine Learning conferences are triannual now, between NIPS, AIStat, and ICML. This has not always been the case: the academic default is annual summer conferences, then NIPS started with a December conference, and now AIStat has grown into an April conference. However, the first claim is not quite correct. NIPS and AIStat have few competing venues while ICML implicitly competes with many other conf

5 0.26980773 116 hunch net-2005-09-30-Research in conferences

Introduction: Conferences exist as part of the process of doing research. They provide many roles including “announcing research”, “meeting people”, and “point of reference”. Not all conferences are alike, so a basic question is: “to what extent do individual conferences attempt to aid research?” This question is very difficult to answer in any satisfying way. What we can do is compare details of the process across multiple conferences. Comments: The average quality of comments across conferences can vary dramatically. At one extreme, the tradition in CS theory conferences is to provide essentially zero feedback. At the other extreme, some conferences have a strong tradition of providing detailed constructive feedback. Detailed feedback can give authors significant guidance about how to improve research. This is the most subjective entry. Blind: Virtually all conferences offer single blind review where authors do not know reviewers. Some also provide double blind review where rev

6 0.2695035 395 hunch net-2010-04-26-Compassionate Reviewing

7 0.26688293 270 hunch net-2007-11-02-The Machine Learning Award goes to …

8 0.26445362 40 hunch net-2005-03-13-Avoiding Bad Reviewing

9 0.26258728 453 hunch net-2012-01-28-Why COLT?

10 0.25955835 437 hunch net-2011-07-10-ICML 2011 and the future

11 0.25949237 90 hunch net-2005-07-07-The Limits of Learning Theory

12 0.25543803 454 hunch net-2012-01-30-ICML Posters and Scope

13 0.25458968 331 hunch net-2008-12-12-Summer Conferences

14 0.25420398 484 hunch net-2013-06-16-Representative Reviewing

15 0.25007173 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

16 0.2499817 96 hunch net-2005-07-21-Six Months

17 0.24920465 89 hunch net-2005-07-04-The Health of COLT

18 0.24806531 423 hunch net-2011-02-02-User preferences for search engines

19 0.24738757 448 hunch net-2011-10-24-2011 ML symposium and the bears

20 0.2459591 461 hunch net-2012-04-09-ICML author feedback is open