hunch_net hunch_net-2007 hunch_net-2007-264 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: Here. I’m particularly interested in the Web Search, Efficient ML, and (of course) Learning Problem Design workshops, but there are many others to check out as well. Workshops are a great chance to make progress on or learn about a topic. Relevance and interaction amongst diverse people can sometimes be magical.
sentIndex sentText sentNum sentScore
1 I’m particularly interested in the Web Search, Efficient ML, and (of course) Learning Problem Design workshops, but there are many others to check out as well. [sent-2, score-1.05]
2 Workshops are a great chance to make progress on or learn about a topic. [sent-3, score-0.695]
3 Relevance and interaction amongst diverse people can sometimes be magical. [sent-4, score-0.864]
wordName wordTfidf (topN-words)
[('magical', 0.382), ('workshops', 0.379), ('relevance', 0.318), ('interaction', 0.264), ('web', 0.238), ('diverse', 0.233), ('check', 0.226), ('search', 0.204), ('progress', 0.192), ('amongst', 0.188), ('efficient', 0.183), ('chance', 0.178), ('course', 0.176), ('ml', 0.172), ('design', 0.152), ('others', 0.143), ('interested', 0.133), ('learn', 0.125), ('particularly', 0.123), ('sometimes', 0.12), ('great', 0.116), ('make', 0.084), ('problem', 0.069), ('people', 0.059), ('many', 0.046), ('learning', 0.023)]
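The list above reads as TF-IDF weights over the post’s vocabulary. As a minimal sketch of how such top-N scores might be produced, assuming scikit-learn and a hypothetical two-post corpus (the mining pipeline’s actual code is not shown in the source):

```python
# Minimal sketch: top-N TF-IDF terms for one post, assuming scikit-learn.
# `posts` is a hypothetical stand-in corpus; the real pipeline is unspecified.
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "Workshops are a great chance to make progress on or learn about a topic.",
    "A good workshop is often far more interesting than the papers at a conference.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(posts)            # shape: (n_posts, n_terms)

terms = vectorizer.get_feature_names_out()
scores = tfidf[0].toarray().ravel()                # weights for the first post
top = sorted(zip(terms, scores), key=lambda t: -t[1])[:10]
print([(term, round(float(score), 3)) for term, score in top])
```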
simIndex simValue blogId blogTitle
same-blog 1 1.0 264 hunch net-2007-09-30-NIPS workshops are out.
2 0.23929788 46 hunch net-2005-03-24-The Role of Workshops
Introduction: A good workshop is often far more interesting than the papers at a conference. This happens because a workshop has a much tighter focus than a conference. Since you choose the workshops fitting your interest, the increased relevance can greatly enhance the level of your interest and attention. Roughly speaking, a workshop program consists of elements related to a subject of your interest. The main conference program consists of elements related to someone’s interest (which is rarely your own). Workshops are more about doing research while conferences are more about presenting research. Several conferences have associated workshop programs, some with deadlines due shortly: ICML workshops (due April 1), IJCAI workshops (deadlines vary), and KDD workshops (not yet finalized). Anyone going to these conferences should examine the workshops and see if any are of interest. (If none are, then maybe you should organize one next year.)
3 0.22883926 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)
Introduction: I’m the workshops chair for ICML this year. As such, I would like to personally encourage people to consider running a workshop. My general view of workshops is that they are excellent as opportunities to discuss and develop research directions—some of my best work has come from collaborations at workshops and several workshops have substantially altered my thinking about various problems. My experience running workshops is that setting them up and making them fly often appears much harder than it actually is, and the workshops often come off much better than expected in the end. Submissions are due January 18, two weeks before papers. Similarly, Ben Taskar is looking for good tutorials, which is complementary. Workshops are about exploring a subject, while a tutorial is about distilling it down into an easily taught essence, a vital part of the research process. Tutorials are due February 13, two weeks after papers.
4 0.21028064 216 hunch net-2006-11-02-2006 NIPS workshops
Introduction: I expect the NIPS 2006 workshops to be quite interesting, and recommend going for anyone interested in machine learning research. (Most or all of the workshop webpages can be found two links deep.)
5 0.19812435 285 hunch net-2008-01-23-Why Workshop?
Introduction: I second the call for workshops at ICML/COLT/UAI. Several times before, details of why and how to run a workshop have been mentioned. There is a simple reason to prefer workshops here: attendance. The Helsinki colocation has placed workshops directly between ICML and COLT/UAI, which is optimal for getting attendees from any conference. In addition, last year ICML had relatively few workshops and NIPS workshops were overloaded. In addition to those that happened, a similar number were rejected. The overload has strange consequences—for example, the best attended workshop wasn’t an official NIPS workshop. Aside from intrinsic interest, the Deep Learning workshop benefited greatly from being off schedule.
6 0.19113058 375 hunch net-2009-10-26-NIPS workshops
7 0.166596 293 hunch net-2008-03-23-Interactive Machine Learning
8 0.12561221 266 hunch net-2007-10-15-NIPS workshops extended to 3 days
9 0.12466256 71 hunch net-2005-05-14-NIPS
10 0.12017003 141 hunch net-2005-12-17-Workshops as Franchise Conferences
11 0.11046513 345 hunch net-2009-03-08-Prediction Science
12 0.11000026 113 hunch net-2005-09-19-NIPS Workshops
13 0.10425932 387 hunch net-2010-01-19-Deadline Season, 2010
14 0.10379176 146 hunch net-2006-01-06-MLTV
15 0.095700324 120 hunch net-2005-10-10-Predictive Search is Coming
16 0.094823897 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials
17 0.092272311 347 hunch net-2009-03-26-Machine Learning is too easy
18 0.087695494 423 hunch net-2011-02-02-User preferences for search engines
19 0.083266377 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS
20 0.082787216 483 hunch net-2013-06-10-The Large Scale Learning class notes
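The simValue column above is consistent with cosine similarity between per-post vectors, with the post matching itself at (or near) 1.0 in the same-blog row. A hedged sketch of such a ranking, assuming scikit-learn and TF-IDF vectors; whether the pipeline uses exactly this representation is an assumption:

```python
# Sketch: rank posts by cosine similarity of TF-IDF vectors (scikit-learn).
# The representation and similarity measure are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "NIPS workshops are out: web search, efficient ML, learning problem design.",
    "A good workshop is often far more interesting than the papers at a conference.",
    "I expect the NIPS 2006 workshops to be quite interesting.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)
sims = cosine_similarity(tfidf[0], tfidf).ravel()  # similarity to post 0

# Post 0 matches itself at 1.0 (the "same-blog" row); the rest rank below it.
for rank, doc in enumerate(sims.argsort()[::-1], start=1):
    print(rank, round(float(sims[doc]), 8), doc)
```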
topicId topicWeight
[(0, 0.137), (1, -0.088), (2, -0.168), (3, -0.112), (4, 0.018), (5, 0.182), (6, 0.167), (7, 0.068), (8, 0.112), (9, 0.059), (10, -0.029), (11, 0.119), (12, 0.0), (13, 0.061), (14, -0.084), (15, 0.098), (16, 0.021), (17, -0.032), (18, 0.134), (19, 0.036), (20, 0.151), (21, 0.112), (22, 0.04), (23, -0.044), (24, 0.044), (25, -0.042), (26, 0.021), (27, 0.009), (28, 0.035), (29, 0.051), (30, -0.047), (31, -0.049), (32, -0.023), (33, -0.044), (34, -0.007), (35, -0.026), (36, -0.058), (37, -0.053), (38, -0.007), (39, -0.048), (40, 0.009), (41, -0.033), (42, -0.033), (43, 0.042), (44, 0.012), (45, -0.015), (46, -0.057), (47, 0.091), (48, 0.063), (49, -0.003)]
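The dense (topicId, topicWeight) vector above, with components numbered 0 through 49, looks like a low-rank projection of the per-post representation. A sketch assuming truncated SVD (LSA) over TF-IDF; whether the pipeline actually uses SVD, LDA, or something else is a guess:

```python
# Sketch: dense per-post topic weights via truncated SVD / LSA (scikit-learn).
# The choice of SVD over, say, LDA is an illustrative assumption.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "Workshops are a great chance to make progress on or learn about a topic.",
    "Relevance and interaction amongst diverse people can sometimes be magical.",
    "A good workshop is often far more interesting than the papers at a conference.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)
svd = TruncatedSVD(n_components=2)        # the dump above uses 50 components
weights = svd.fit_transform(tfidf)        # shape: (n_posts, n_components)

# One (topicId, topicWeight) list per post, like the vector above.
print([(i, round(float(w), 3)) for i, w in enumerate(weights[0])])
```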
simIndex simValue blogId blogTitle
same-blog 1 0.98144233 264 hunch net-2007-09-30-NIPS workshops are out.
2 0.77712762 216 hunch net-2006-11-02-2006 NIPS workshops
3 0.71961272 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)
4 0.67771125 285 hunch net-2008-01-23-Why Workshop?
5 0.67654109 266 hunch net-2007-10-15-NIPS workshops extended to 3 days
Introduction: (Unofficially, at least.) The Deep Learning Workshop is being held the afternoon before the rest of the workshops in Vancouver, BC. Separate registration is needed, and open. What’s happening fundamentally here is that there are too many interesting workshops to fit into 2 days. Perhaps we can get it officially expanded to 3 days next year.
6 0.66435969 71 hunch net-2005-05-14-NIPS
7 0.66164786 113 hunch net-2005-09-19-NIPS Workshops
8 0.65297443 46 hunch net-2005-03-24-The Role of Workshops
9 0.59436595 375 hunch net-2009-10-26-NIPS workshops
10 0.45167261 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge
11 0.42378879 141 hunch net-2005-12-17-Workshops as Franchise Conferences
12 0.40407231 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS
13 0.38247034 345 hunch net-2009-03-08-Prediction Science
14 0.37355745 293 hunch net-2008-03-23-Interactive Machine Learning
15 0.36652628 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials
16 0.35588172 481 hunch net-2013-04-15-NEML II
17 0.35510445 156 hunch net-2006-02-11-Yahoo’s Learning Problems.
18 0.35111627 120 hunch net-2005-10-10-Predictive Search is Coming
19 0.34964278 146 hunch net-2006-01-06-MLTV
20 0.34566504 144 hunch net-2005-12-28-Yet more nips thoughts
topicId topicWeight
[(11, 0.241), (27, 0.213), (38, 0.106), (55, 0.267)]
simIndex simValue blogId blogTitle
same-blog 1 0.92675602 264 hunch net-2007-09-30-NIPS workshops are out.
2 0.83498263 487 hunch net-2013-07-24-ICML 2012 videos lost
Introduction: A big ouch—all the videos for ICML 2012 were lost in a shuffle. Rajnish sends the below, but if anyone can help that would be greatly appreciated. —————————————————————————— Sincere apologies to the ICML community for losing the 2012 archived videos. What happened: In order to publish the 2013 videos, we decided to move the 2012 videos to another server. We have a weekly backup service from the provider, but after removing the videos from the current server, when we tried to retrieve the 2012 videos from the backup service, the backup did not work because of provider-specific requirements that we had ignored while removing the data from the previous server. What are we doing about this: At this point, we are still looking into raw footage to find if we can retrieve some of the videos, but following are the steps we are taking to make sure this does not happen again in future: (1) We are going to create a channel on Vimeo (and potentially on YouTube) and we will publish there the p-in-p- or slide-vers
3 0.82154584 287 hunch net-2008-01-28-Sufficient Computation
Introduction: Do we have computer hardware sufficient for AI? This question is difficult to answer, but here’s a try: One way to achieve AI is by simulating a human brain. A human brain has about 10^15 synapses which operate at about 10^2 per second, implying about 10^17 bit ops per second. A modern computer runs at 10^9 cycles/second and operates on 10^2 bits per cycle, implying 10^11 bits processed per second. The gap here is only 6 orders of magnitude, which can be plausibly surpassed via cluster machines. For example, the BlueGene/L operates 10^5 nodes (one order of magnitude short). Its peak recorded performance is about 0.5*10^15 FLOPS, which translates to about 10^16 bit ops per second, which is nearly 10^17. (The arithmetic is worked through in the sketch after this list.) There are many criticisms (both positive and negative) for this argument. Simulation of a human brain might require substantially more detail. Perhaps an additional 10^2 is required per neuron. We may not need to simulate a human brain to achieve AI. Ther
4 0.79794478 402 hunch net-2010-07-02-MetaOptimize
Introduction: Joseph Turian creates MetaOptimize for discussion of NLP and ML on big datasets. This includes a blog, but perhaps more importantly a question and answer section. I’m hopeful it will take off.
5 0.7877661 270 hunch net-2007-11-02-The Machine Learning Award goes to …
Introduction: Perhaps the biggest CS prize for research is the Turing Award, which has a $0.25M cash prize associated with it. It appears none of the prizes so far have been for anything like machine learning (the closest are perhaps database awards). In CS theory, there is the Gödel Prize, which is smaller and newer, offering a $5K prize along with, perhaps more importantly, recognition. One such award has been given for Machine Learning, to Robert Schapire and Yoav Freund for Adaboost. In Machine Learning, there seems to be no equivalent of these sorts of prizes. There are several plausible reasons for this: There is no coherent community. People drift in and out of the central conferences all the time. Most of the author names from 10 years ago do not occur in the conferences of today. In addition, the entire subject area is fairly new. There are at least a core group of people who have stayed around. Machine Learning work doesn’t last: almost every paper is fo
6 0.78556412 395 hunch net-2010-04-26-Compassionate Reviewing
7 0.77936763 265 hunch net-2007-10-14-NIPS workshp: Learning Problem Design
8 0.77388209 90 hunch net-2005-07-07-The Limits of Learning Theory
9 0.76842844 453 hunch net-2012-01-28-Why COLT?
10 0.76295644 452 hunch net-2012-01-04-Why ICML? and the summer conferences
11 0.75097197 448 hunch net-2011-10-24-2011 ML symposium and the bears
12 0.75055391 20 hunch net-2005-02-15-ESPgame and image labeling
13 0.74557841 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning
14 0.74366856 65 hunch net-2005-05-02-Reviewing techniques for conferences
15 0.73409915 446 hunch net-2011-10-03-Monday announcements
16 0.73300111 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge
17 0.72986615 40 hunch net-2005-03-13-Avoiding Bad Reviewing
18 0.7261036 484 hunch net-2013-06-16-Representative Reviewing
19 0.72496426 454 hunch net-2012-01-30-ICML Posters and Scope
20 0.72438109 89 hunch net-2005-07-04-The Health of COLT
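Finally, the order-of-magnitude argument quoted in entry 3 above (Sufficient Computation) can be checked mechanically. A minimal sketch; all figures are the post’s own rough estimates, not measurements:

```python
# Check the arithmetic from "Sufficient Computation" (entry 3 above).
# All numbers are the post's own order-of-magnitude estimates.
import math

brain = 10**15 * 10**2      # ~1e15 synapses x ~1e2 ops/s = ~1e17 bit ops/s
machine = 10**9 * 10**2     # 1e9 cycles/s x ~1e2 bits/cycle = ~1e11 bit ops/s

print(math.log10(brain / machine))             # 6.0 -> "only 6 orders of magnitude"

# A ~1e5-node cluster (BlueGene/L scale) closes five of the six orders,
# leaving it "one order of magnitude short", as the post says.
print(math.log10(brain / (machine * 10**5)))   # 1.0
```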