hunch_net hunch_net-2013 hunch_net-2013-488 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: Manik and I are organizing the extreme classification workshop at NIPS this year. We have a number of good speakers lined up, but I would further encourage anyone working in the area to submit an abstract by October 9. I believe this is an idea whose time has now come. The NIPS website doesn’t have other workshops listed yet, but I expect several others to be of significant interest.
simIndex simValue blogId blogTitle
same-blog 1 1.0 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS
2 0.2063829 409 hunch net-2010-09-13-AIStats
Introduction: Geoff Gordon points out AIStats 2011 in Ft. Lauderdale, Florida. The call for papers is now out, due Nov. 1. The plan is to experiment with the review process to encourage quality in several ways. I expect to submit a paper and would encourage others with good research to do likewise.
3 0.20383562 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds
Introduction: Alekh , John , Ofer , and I are organizing a workshop at NIPS this year on learning in parallel and distributed environments. The general interest level in parallel learning seems to be growing rapidly, so I expect quite a bit of attendance. Please join us if you are parallel-interested. And, if you are working in the area of parallel learning, please consider submitting an abstract due Oct. 17 for presentation at the workshop.
4 0.19402702 216 hunch net-2006-11-02-2006 NIPS workshops
Introduction: I expect the NIPS 2006 workshops to be quite interesting, and recommend going for anyone interested in machine learning research. (Most or all of the workshops webpages can be found two links deep.)
5 0.18639858 285 hunch net-2008-01-23-Why Workshop?
Introduction: I second the call for workshops at ICML/COLT/UAI . Several times before , details of why and how to run a workshop have been mentioned. There is a simple reason to prefer workshops here: attendance. The Helsinki colocation has placed workshops directly between ICML and COLT/UAI , which is optimal for getting attendees from any conference. In addition, last year ICML had relatively few workshops and NIPS workshops were overloaded. In addition to those that happened a similar number were rejected. The overload has strange consequences—for example, the best attended workshop wasn’t an official NIPS workshop. Aside from intrinsic interest, the Deep Learning workshop benefited greatly from being off schedule.
6 0.17307028 198 hunch net-2006-07-25-Upcoming conference
7 0.16768134 46 hunch net-2005-03-24-The Role of Workshops
8 0.16498357 141 hunch net-2005-12-17-Workshops as Franchise Conferences
9 0.15101051 369 hunch net-2009-08-27-New York Area Machine Learning Events
10 0.14625829 234 hunch net-2007-02-22-Create Your Own ICML Workshop
11 0.14423376 375 hunch net-2009-10-26-NIPS workshops
12 0.14122836 437 hunch net-2011-07-10-ICML 2011 and the future
13 0.13518032 71 hunch net-2005-05-14-NIPS
14 0.13377763 265 hunch net-2007-10-14-NIPS workshop: Learning Problem Design
15 0.13288893 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)
16 0.13064414 113 hunch net-2005-09-19-NIPS Workshops
17 0.11738339 443 hunch net-2011-09-03-Fall Machine Learning Events
18 0.115345 462 hunch net-2012-04-20-Both new: STOC workshops and NEML
19 0.10473908 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013
20 0.10290992 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei
simIndex simValue blogId blogTitle
same-blog 1 0.99155188 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS
2 0.67488796 46 hunch net-2005-03-24-The Role of Workshops
Introduction: A good workshop is often far more interesting than the papers at a conference. This happens because a workshop has a much tighter focus than a conference. Since you choose the workshops fitting your interest, the increased relevance can greatly enhance the level of your interest and attention. Roughly speaking, a workshop program consists of elements related to a subject of your interest. The main conference program consists of elements related to someone’s interest (which is rarely your own). Workshops are more about doing research while conferences are more about presenting research. Several conferences have associated workshop programs, some with deadlines due shortly. ICML workshops Due April 1 IJCAI workshops Deadlines Vary KDD workshops Not yet finalized Anyone going to these conferences should examine the workshops and see if any are of interest. (If none are, then maybe you should organize one next year.)
3 0.65988338 113 hunch net-2005-09-19-NIPS Workshops
Introduction: Attendance at the NIPS workshops is highly recommended for both research and learning. Unfortunately, there does not yet appear to be a public list of workshops. However, I found the following workshop webpages of interest: Machine Learning in Finance Learning to Rank Foundations of Active Learning Machine Learning Based Robotics in Unstructured Environments There are many more workshops. In fact, there are so many that it is not plausible anyone can attend every workshop they are interested in. Maybe in future years the organizers can spread them out over more days to reduce overlap. Many of these workshops are accepting presentation proposals (due mid-October).
4 0.64672321 285 hunch net-2008-01-23-Why Workshop?
5 0.61003435 443 hunch net-2011-09-03-Fall Machine Learning Events
Introduction: Many Machine Learning related events are coming up this fall. September 9 , abstracts for the New York Machine Learning Symposium are due. Send a 2 page pdf, if interested, and note that we: widened submissions to be from anybody rather than students. set aside a larger fraction of time for contributed submissions. September 15 , there is a machine learning meetup , where I’ll be discussing terascale learning at AOL. September 16 , there is a CS&Econ; day at New York Academy of Sciences. This is not ML focused, but it’s easy to imagine interest. September 23 and later NIPS workshop submissions start coming due. As usual, there are too many good ones, so I won’t be able to attend all those that interest me. I do hope some workshop makers consider ICML this coming summer, as we are increasing to a 2 day format for you. Here are a few that interest me: Big Learning is about dealing with lots of data. Abstracts are due September 30 . The Bayes
6 0.59736699 198 hunch net-2006-07-25-Upcoming conference
7 0.59442687 141 hunch net-2005-12-17-Workshops as Franchise Conferences
8 0.59016359 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds
9 0.58783728 216 hunch net-2006-11-02-2006 NIPS workshops
10 0.55187267 234 hunch net-2007-02-22-Create Your Own ICML Workshop
11 0.55154568 124 hunch net-2005-10-19-Workshop: Atomic Learning
12 0.5453521 71 hunch net-2005-05-14-NIPS
13 0.53958744 321 hunch net-2008-10-19-NIPS 2008 workshop on Kernel Learning
14 0.515104 375 hunch net-2009-10-26-NIPS workshops
15 0.51314795 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)
16 0.50251073 266 hunch net-2007-10-15-NIPS workshops extended to 3 days
17 0.49143586 455 hunch net-2012-02-20-Berkeley Streaming Data Workshop
18 0.47353494 409 hunch net-2010-09-13-AIStats
19 0.47308064 180 hunch net-2006-05-21-NIPS paper evaluation criteria
20 0.44983682 462 hunch net-2012-04-20-Both new: STOC workshops and NEML
simIndex simValue blogId blogTitle
same-blog 1 0.97251219 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS
2 0.9653157 125 hunch net-2005-10-20-Machine Learning in the News
Introduction: The New York Times had a short interview about machine learning in datamining being used pervasively by the IRS and large corporations to predict who to audit and who to target for various marketing campaigns. This is a big application area of machine learning. It can be harmful (learning + databases = another way to invade privacy) or beneficial (as google demonstrates, better targeting of marketing campaigns is far less annoying). This is yet more evidence that we can not rely upon “I’m just another fish in the school” logic for our expectations about treatment by government and large corporations.
3 0.9088307 339 hunch net-2009-01-27-Key Scientific Challenges
Introduction: Yahoo released the Key Scientific Challenges program. There is a Machine Learning list I worked on and a Statistics list which Deepak worked on. I’m hoping this is taken quite seriously by graduate students. The primary value, is that it gave us a chance to sit down and publicly specify directions of research which would be valuable to make progress on. A good strategy for a beginning graduate student is to pick one of these directions, pursue it, and make substantial advances for a PhD. The directions are sufficiently general that I’m sure any serious advance has applications well beyond Yahoo. A secondary point, (which I’m sure is primary for many ) is that there is money for graduate students here. It’s unrestricted, so you can use it for any reasonable travel, supplies, etc…
4 0.8502863 181 hunch net-2006-05-23-What is the best regret transform reduction from multiclass to binary?
Introduction: This post is about an open problem in learning reductions. Background A reduction might transform a a multiclass prediction problem where there are k possible labels into a binary learning problem where there are only 2 possible labels. On this induced binary problem we might learn a binary classifier with some error rate e . After subtracting the minimum possible (Bayes) error rate b , we get a regret r = e – b . The PECOC (Probabilistic Error Correcting Output Code) reduction has the property that binary regret r implies multiclass regret at most 4r 0.5 . The problem This is not the “rightest” answer. Consider the k=2 case, where we reduce binary to binary. There exists a reduction (the identity) with the property that regret r implies regret r . This is substantially superior to the transform given by the PECOC reduction, which suggests that a better reduction may exist for general k . For example, we can not rule out the possibility that a reduction
5 0.84047788 83 hunch net-2005-06-18-Lower Bounds for Learning Reductions
Introduction: Learning reductions transform a solver of one type of learning problem into a solver of another type of learning problem. When we analyze these for robustness we can make statement of the form “Reduction R has the property that regret r (or loss) on subproblems of type A implies regret at most f ( r ) on the original problem of type B “. A lower bound for a learning reduction would have the form “for all reductions R , there exists a learning problem of type B and learning algorithm for problems of type A where regret r on induced problems implies at least regret f ( r ) for B “. The pursuit of lower bounds is often questionable because, unlike upper bounds, they do not yield practical algorithms. Nevertheless, they may be helpful as a tool for thinking about what is learnable and how learnable it is. This has already come up here and here . At the moment, there is no coherent theory of lower bounds for learning reductions, and we have little understa
6 0.8163023 170 hunch net-2006-04-06-Bounds greater than 1
7 0.71736979 233 hunch net-2007-02-16-The Forgetting
8 0.70525992 353 hunch net-2009-05-08-Computability in Artificial Intelligence
9 0.63110125 236 hunch net-2007-03-15-Alternative Machine Learning Reductions Definitions
10 0.60815126 251 hunch net-2007-06-24-Interesting Papers at ICML 2007
11 0.59969628 284 hunch net-2008-01-18-Datasets
12 0.58951169 239 hunch net-2007-04-18-$50K Spock Challenge
13 0.54706788 409 hunch net-2010-09-13-AIStats
14 0.50804096 72 hunch net-2005-05-16-Regret minimizing vs error limiting reductions
15 0.46430019 82 hunch net-2005-06-17-Reopening RL->Classification
16 0.45927042 264 hunch net-2007-09-30-NIPS workshops are out.
17 0.4536052 26 hunch net-2005-02-21-Problem: Cross Validation
18 0.42487881 439 hunch net-2011-08-01-Interesting papers at COLT 2011
19 0.41719896 49 hunch net-2005-03-30-What can Type Theory teach us about Machine Learning?
20 0.41333303 19 hunch net-2005-02-14-Clever Methods of Overfitting