hunch_net hunch_net-2007 hunch_net-2007-232 knowledge-graph by maker-knowledge-mining

232 hunch net-2007-02-11-24


meta info for this blog

Source: html

Introduction: To commemorate the Twenty Fourth Annual International Conference on Machine Learning (ICML-07), the FOX Network has decided to launch a new spin-off series in prime time. Through unofficial sources, I have obtained the story arc for the first season, which appears frighteningly realistic.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText [sentNum, sentScore]

1 To commemorate the Twenty Fourth Annual International Conference on Machine Learning (ICML-07), the FOX Network has decided to launch a new spin-off series in prime time. [sent-1, score-1.124]

2 Through unofficial sources, I have obtained the story arc for the first season, which appears frighteningly realistic. [sent-2, score-1.077]


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('launch', 0.321), ('frighteningly', 0.321), ('prime', 0.321), ('obtained', 0.298), ('realistic', 0.298), ('fourth', 0.281), ('season', 0.268), ('annual', 0.248), ('story', 0.233), ('series', 0.217), ('international', 0.217), ('decided', 0.208), ('sources', 0.184), ('network', 0.176), ('appears', 0.147), ('conference', 0.108), ('first', 0.078), ('new', 0.057), ('machine', 0.047), ('learning', 0.019)]
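As a concrete illustration of where numbers like these come from, here is a minimal sketch of computing per-word tfidf weights and pairwise post similarities. The report does not say which implementation produced its scores, so the use of scikit-learn's TfidfVectorizer and cosine similarity here is an assumption, and the three-post corpus is a toy stand-in.

```python
# Hypothetical sketch: per-word tfidf weights and post-to-post similarity.
# scikit-learn is assumed; the report's actual pipeline is unspecified.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "the FOX Network has decided to launch a new spin-off series in prime time",
    "the Second Annual Reinforcement Learning Competition is about to get started",
    "it is conference season and the smell of budding papers is in the air",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(posts)        # one row of weights per post

# Top-weighted words for the first post, analogous to the (wordName, wordTfidf) list.
words = vectorizer.get_feature_names_out()
weights = tfidf[0].toarray().ravel()
print(sorted(zip(words, weights), key=lambda p: -p[1])[:5])

# Cosine similarities, analogous to the simValue column in the lists below.
print(cosine_similarity(tfidf)[0])             # first row: post 0 vs. all posts
```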

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 232 hunch net-2007-02-11-24

Introduction: To commemorate the Twenty Fourth Annual International Conference on Machine Learning (ICML-07), the FOX Network has decided to launch a new spin-off series in prime time. Through unofficial sources, I have obtained the story arc for the first season, which appears frighteningly realistic.

2 0.088557482 268 hunch net-2007-10-19-Second Annual Reinforcement Learning Competition

Introduction: The Second Annual Reinforcement Learning Competition is about to get started. The aim of the competition is to facilitate direct comparisons between various learning methods on important and realistic domains. This year’s event will feature well-known benchmark domains as well as more challenging problems of real-world complexity, such as helicopter control and robot soccer keepaway. The competition begins on November 1st, 2007 when training software is released. Results must be submitted by July 1st, 2008. The competition will culminate in an event at ICML-08 in Helsinki, Finland, at which the winners will be announced. For more information, visit the competition website.

3 0.084702231 11 hunch net-2005-02-02-Paper Deadlines

Introduction: It’s conference season, and the smell of budding papers is in the air. IJCAI 2005, January 21; COLT 2005, February 2; KDD 2005, February 18; ICML 2005, March 8; UAI 2005, March 16; AAAI 2005, March 18.

4 0.059742708 255 hunch net-2007-07-13-The View From China

Introduction: I’m visiting Beijing for the Pao-Lu Hsu Statistics Conference on Machine Learning. I had several discussions about the state of Chinese research. Given the large population and economy, you might expect substantial research, more than has been observed at international conferences. The fundamental problem seems to be the Cultural Revolution, which lobotomized higher education and the research associated with it. There has been a process of slow recovery since then, which has begun to be felt in the research world via increased participation in international conferences and (now) conferences in China. The amount of effort going into construction in Beijing is very impressive: people are literally building a skyscraper at night outside the window of the hotel I’m staying at (and this is not unusual). If a small fraction of this effort is later focused onto supporting research, the effect could be very substantial. General growth in China’s research portfolio should be expected.

5 0.052125193 242 hunch net-2007-04-30-COLT 2007

Introduction: Registration for COLT 2007 is now open. The conference will take place on 13-15 June, 2007, in San Diego, California, as part of the 2007 Federated Computing Research Conference (FCRC), which includes STOC, Complexity, and EC. The website for COLT: http://www.learningtheory.org/colt2007/index.html The early registration deadline is May 11, and the cutoff date for discounted hotel rates is May 9. Before registering, take note that the fees are substantially lower for members of ACM and/or SIGACT than for nonmembers. If you’ve been contemplating joining either of these two societies (annual dues: $99 for ACM, $18 for SIGACT), now would be a good time!

6 0.051793631 16 hunch net-2005-02-09-Intuitions from applied learning

7 0.051270176 141 hunch net-2005-12-17-Workshops as Franchise Conferences

8 0.051003985 75 hunch net-2005-05-28-Running A Machine Learning Summer School

9 0.044610776 452 hunch net-2012-01-04-Why ICML? and the summer conferences

10 0.042630173 474 hunch net-2012-10-18-7th Annual Machine Learning Symposium

11 0.03956778 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

12 0.038643688 337 hunch net-2009-01-21-Nearly all natural problems require nonlinearity

13 0.038547322 297 hunch net-2008-04-22-Taking the next step

14 0.038214725 384 hunch net-2009-12-24-Top graduates this season

15 0.038158119 143 hunch net-2005-12-27-Automated Labeling

16 0.038012728 131 hunch net-2005-11-16-The Everything Ensemble Edge

17 0.037728496 226 hunch net-2007-01-04-2007 Summer Machine Learning Conferences

18 0.036916345 464 hunch net-2012-05-03-Microsoft Research, New York City

19 0.035475284 69 hunch net-2005-05-11-Visa Casualties

20 0.035441387 385 hunch net-2009-12-27-Interesting things at NIPS 2009


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.044), (1, -0.032), (2, -0.035), (3, -0.023), (4, -0.001), (5, -0.032), (6, -0.012), (7, 0.02), (8, 0.013), (9, -0.007), (10, -0.024), (11, 0.0), (12, 0.013), (13, -0.002), (14, 0.024), (15, 0.032), (16, 0.017), (17, 0.029), (18, 0.014), (19, 0.029), (20, 0.019), (21, -0.043), (22, 0.013), (23, 0.017), (24, -0.009), (25, 0.027), (26, -0.059), (27, -0.007), (28, 0.007), (29, 0.022), (30, 0.005), (31, 0.06), (32, -0.016), (33, -0.03), (34, 0.0), (35, -0.015), (36, 0.015), (37, -0.015), (38, 0.02), (39, 0.025), (40, -0.002), (41, -0.009), (42, 0.044), (43, -0.016), (44, -0.032), (45, -0.044), (46, 0.022), (47, 0.004), (48, 0.049), (49, -0.051)]
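The dense (topicId, topicWeight) vector above is the post projected onto 50 latent topics. Below is a minimal sketch of producing such a projection, assuming gensim's LsiModel (the report does not name its tooling, so this is an assumption) and a toy corpus and topic count.

```python
# Hypothetical sketch: projecting posts into an LSI topic space.
# gensim is assumed; the corpus and topic count are toy stand-ins.
from gensim import corpora, models

docs = [
    "fox network launches a new series in prime time".split(),
    "the reinforcement learning competition culminates at icml".split(),
    "registration for colt is now open in san diego".split(),
]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

lsi = models.LsiModel(bow, id2word=dictionary, num_topics=2)
print(lsi[bow[0]])   # [(topicId, topicWeight), ...], as in the list above
```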

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.90593952 232 hunch net-2007-02-11-24

Introduction: To commemorate the Twenty Fourth Annual International Conference on Machine Learning (ICML-07), the FOX Network has decided to launch a new spin-off series in prime time. Through unofficial sources, I have obtained the story arc for the first season, which appears frighteningly realistic.

2 0.65327811 93 hunch net-2005-07-13-“Sister Conference” presentations

Introduction: Some of the “sister conference” presentations at AAAI have been great. Roughly speaking, the conference organizers asked other conference organizers to come give a summary of their conference. Many different AI-related conferences accepted. The presenters typically discuss some of the background and goals of the conference, then mention the results from a few papers they liked. This is great because it provides a mechanism to get a digested overview of the work of several thousand researchers, something which is simply available nowhere else. Based on these presentations, it looks like there is a significant component of (and opportunity for) applied machine learning in AIIDE, IUI, and ACL. There was also some discussion of having a super-colocation event similar to FCRC, but centered on AI & Learning. This seems like a fine idea. The field is fractured across so many different conferences that the mixing of a super-colocation seems likely to be helpful for research.

3 0.4967626 146 hunch net-2006-01-06-MLTV

Introduction: As part of a PASCAL project, the Slovenians have been filming various machine learning events and placing them on the web here. This includes, for example, the Chicago 2005 Machine Learning Summer School as well as a number of other summer schools, workshops, and conferences. There are some significant caveats here: for example, I can’t access it from Linux. Based upon the webserver logs, I expect that is a problem for most people; computer scientists are particularly nonstandard in their choice of computing platform. Nevertheless, the core idea here is excellent and details of compatibility can be fixed later. With modern technology toys, there is no fundamental reason why the process of announcing new work at a conference should happen only once and only for the people who could make it to that room in that conference. The problems solved include: The multitrack vs. single-track debate. (“Sometimes the single track doesn’t interest me” vs. “When it’s multitrack I miss …”

4 0.48615959 416 hunch net-2010-10-29-To Vidoelecture or not

Introduction: (update: cross-posted on CACM) For the first time in several years, ICML 2010 did not have videolectures attending. Luckily, the tutorial on exploration and learning which Alina and I put together can be viewed, since we also presented at KDD 2010, which included videolecture support. ICML didn’t cover the cost of a videolecture, because PASCAL didn’t provide a grant for it this year. On the other hand, KDD covered it out of registration costs. The cost of videolectures isn’t cheap. For a workshop, the baseline quote we have is 270 euro per hour, plus a similar cost for the cameraman’s travel and accommodation. This can be reduced substantially by having a volunteer with a camera handle the cameraman duties, uploading the video and slides to be processed for a quoted 216 euro per hour. Youtube is the most predominant free video site with a cost of $0, but it turns out to be a poor alternative. 15-minute upload limits do not match typical talk lengths.

5 0.48014456 297 hunch net-2008-04-22-Taking the next step

Introduction: At the last ICML, Tom Dietterich asked me to look into systems for commenting on papers. I’ve been slow getting to this, but it’s relevant now. The essential observation is that we now have many tools for online collaboration, but they are not yet much used in academic research. If we can find the right way to use them, then perhaps great things might happen, with extra kudos to the first conference that manages to really create an online community. Various conferences have been poking at this. For example, UAI has set up a wiki, COLT has started using Joomla, with some dynamic content, and AAAI has been setting up a “student blog”. Similarly, Dinoj Surendran set up a twiki for the Chicago Machine Learning Summer School, which was quite useful for coordinating events and other things. I believe the most important thing is a willingness to experiment. A good place to start seems to be enhancing existing conference websites. For example, the ICML 2007 papers page …

6 0.47989464 268 hunch net-2007-10-19-Second Annual Reinforcement Learning Competition

7 0.45571166 141 hunch net-2005-12-17-Workshops as Franchise Conferences

8 0.44877476 21 hunch net-2005-02-17-Learning Research Programs

9 0.4402661 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials

10 0.42345923 66 hunch net-2005-05-03-Conference attendance is mandatory

11 0.40340382 270 hunch net-2007-11-02-The Machine Learning Award goes to …

12 0.40222394 335 hunch net-2009-01-08-Predictive Analytics World

13 0.39528227 377 hunch net-2009-11-09-NYAS ML Symposium this year.

14 0.38562155 372 hunch net-2009-09-29-Machine Learning Protests at the G20

15 0.38387197 276 hunch net-2007-12-10-Learning Track of International Planning Competition

16 0.38072318 80 hunch net-2005-06-10-Workshops are not Conferences

17 0.37934059 283 hunch net-2008-01-07-2008 Summer Machine Learning Conference Schedule

18 0.37441155 242 hunch net-2007-04-30-COLT 2007

19 0.37186816 171 hunch net-2006-04-09-Progress in Machine Translation

20 0.36388102 212 hunch net-2006-10-04-Health of Conferences Wiki


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(27, 0.036), (48, 0.769)]
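Unlike the dense LSI vector, the LDA vector above is sparse: topics below a probability threshold are dropped, which is why only topics 27 and 48 appear. Here is a minimal sketch, again assuming gensim (an assumption, since the report does not name its tooling) and a toy corpus.

```python
# Hypothetical sketch: per-post topic weights from an LDA model.
# gensim is assumed; get_document_topics omits low-probability topics,
# which matches the sparse [(topicId, topicWeight)] list above.
from gensim import corpora, models

docs = [
    "fox network launches a new series in prime time".split(),
    "the reinforcement learning competition culminates at icml".split(),
    "registration for colt is now open in san diego".split(),
]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

lda = models.LdaModel(bow, id2word=dictionary, num_topics=2, passes=10, random_state=0)
print(lda.get_document_topics(bow[0]))   # sparse [(topicId, topicWeight), ...]
```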

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95460808 232 hunch net-2007-02-11-24

Introduction: To commemorate the Twenty Fourth Annual International Conference on Machine Learning (ICML-07), the FOX Network has decided to launch a new spin-off series in prime time. Through unofficial sources, I have obtained the story arc for the first season, which appears frighteningly realistic.

2 0.54483128 468 hunch net-2012-06-29-ICML survey and comments

Introduction: Just about nothing could keep me from attending ICML, except for Dora, who arrived on Monday. Consequently, I have only secondhand reports that the conference is going well. For those who are remote (like me) or after the conference (like everyone), Mark Reid has set up the ICML discussion site where you can comment on any paper or subscribe to papers. Authors are automatically subscribed to their own papers, so it should be possible to have a discussion significantly after the fact, as people desire. We also conducted a survey before the conference and have the survey results now. This can be compared with the ICML 2010 survey results. Looking at the comparable questions, we can sometimes order the answers to have scores ranging from 0 to 3 or 0 to 4, with 3 or 4 being best and 0 worst, then compute the average difference between 2012 and 2010. Glancing through them, I see: Most people found the papers they reviewed a good fit for their expertise (-.037 w.r.t. 2010) …
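To make the comparison described in that excerpt concrete, here is a minimal sketch of the score-and-average computation. The answer scale and responses below are invented for illustration; the actual survey scales and data are not in this report.

```python
# Hypothetical sketch of the survey comparison: map ordinal answers onto
# 0..3 scores (3 best, 0 worst) and take the difference of yearly averages.
# The scale and responses are made up for illustration.
scale = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

answers_2010 = ["good", "fair", "excellent", "good"]
answers_2012 = ["good", "fair", "excellent", "fair"]

mean_2010 = sum(scale[a] for a in answers_2010) / len(answers_2010)
mean_2012 = sum(scale[a] for a in answers_2012) / len(answers_2012)

# A small negative value means 2012 scored slightly worse on this question,
# analogous to the "-.037 w.r.t. 2010" figure quoted above.
print(mean_2012 - mean_2010)
```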

3 0.50456381 303 hunch net-2008-06-09-The Minimum Sample Complexity of Importance Weighting

Introduction: This post is about a trick that I learned from Dale Schuurmans which has been repeatedly useful for me over time. The basic trick has to do with importance weighting for Monte Carlo integration. Consider the problem of finding N = E_{x ~ D} f(x), given samples from D and knowledge of f. Often, we don’t have samples from D available. Instead, we must make do with samples from some other distribution Q. In that case, we can still often solve the problem, as long as Q(x) isn’t 0 when D(x) is nonzero, using the importance weighting formula: E_{x ~ Q} f(x) D(x)/Q(x). A basic question is: how many samples from Q are required in order to estimate N to some precision? In general the convergence rate is not bounded, because f(x) D(x)/Q(x) is not bounded given the assumptions. Nevertheless, there is one special value, Q(x) = f(x) D(x) / N, where the sample complexity turns out to be 1, which is typically substantially better than the sample complexity of the original …
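The estimator in that excerpt is easy to check numerically. Below is a minimal sketch of importance-weighted Monte Carlo estimation of N = E_{x ~ D} f(x) from samples drawn from Q; the particular f, D, and Q are illustrative choices, not from the post.

```python
# Hypothetical sketch: estimate N = E_{x~D} f(x) using samples from Q
# via the importance weighting formula E_{x~Q} f(x) D(x)/Q(x).
import random

def f(x):            # quantity whose expectation under D we want
    return x * x

def D_pdf(x):        # target density: uniform on [0, 1]
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def Q_pdf(x):        # proposal density: uniform on [0, 2]
    return 0.5 if 0.0 <= x <= 2.0 else 0.0

def sample_Q():
    return random.uniform(0.0, 2.0)

n = 100_000
est = sum(f(x) * D_pdf(x) / Q_pdf(x) for x in (sample_Q() for _ in range(n))) / n
print(est)           # should be near E_{x~U[0,1]} x^2 = 1/3
```

Note that Q here is nonzero everywhere D is nonzero, which is exactly the condition the excerpt requires for the estimator to be valid.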

4 0.49743608 46 hunch net-2005-03-24-The Role of Workshops

Introduction: A good workshop is often far more interesting than the papers at a conference. This happens because a workshop has a much tighter focus than a conference. Since you choose the workshops fitting your interest, the increased relevance can greatly enhance the level of your interest and attention. Roughly speaking, a workshop program consists of elements related to a subject of your interest. The main conference program consists of elements related to someone’s interest (which is rarely your own). Workshops are more about doing research while conferences are more about presenting research. Several conferences have associated workshop programs, some with deadlines due shortly: ICML workshops, due April 1; IJCAI workshops, deadlines vary; KDD workshops, not yet finalized. Anyone going to these conferences should examine the workshops and see if any are of interest. (If none are, then maybe you should organize one next year.)

5 0.41683656 327 hunch net-2008-11-16-Observations on Linearity for Reductions to Regression

Introduction: Dean Foster and Daniel Hsu had a couple of observations about reductions to regression that I wanted to share. This will make the most sense for people familiar with error correcting output codes (see the tutorial, page 11). Many people are comfortable using linear regression in a one-against-all style, where you try to predict the probability of choice i vs. other classes, yet they are not comfortable with more complex error correcting codes because they fear that they create harder problems. This fear turns out to be mathematically incoherent under a linear representation: comfort in the linear case should imply comfort with more complex codes. In particular, if there exists a set of weight vectors w_i such that P(i|x) = ⟨w_i, x⟩, then for any invertible error correcting output code C, there exist weight vectors w_c which decode to perfectly predict the probability of each class. The proof is simple and constructive: the weight vector w_c can be constructed according …
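The constructive claim in that excerpt can be verified directly with linear algebra. Below is a minimal numpy sketch in which a random invertible matrix stands in for the error correcting output code C; real ECOC matrices are typically sign matrices, so this substitution is just for illustration, and the dimensions are toy choices.

```python
# Hypothetical sketch: if class probabilities are linear, P(i|x) = <w_i, x>,
# then coded predictors w_c = C w recover P exactly after decoding with C^{-1}.
import numpy as np

rng = np.random.default_rng(0)
k, d = 4, 6                      # classes, features
W = rng.normal(size=(k, d))      # rows: one-against-all weight vectors w_i
C = rng.normal(size=(k, k))      # stand-in "code" matrix, invertible w.h.p.

Wc = C @ W                       # weight vectors for the coded problems
x = rng.normal(size=d)

P_direct = W @ x                          # one-against-all predictions
P_decoded = np.linalg.inv(C) @ (Wc @ x)   # decode the coded predictions

assert np.allclose(P_direct, P_decoded)   # decoding recovers P exactly
print(P_direct, P_decoded)
```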

6 0.39815226 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

7 0.39586344 318 hunch net-2008-09-26-The SODA Program Committee

8 0.15620601 466 hunch net-2012-06-05-ICML acceptance statistics

9 0.12377405 116 hunch net-2005-09-30-Research in conferences

10 0.11549655 449 hunch net-2011-11-26-Giving Thanks

11 0.11498248 75 hunch net-2005-05-28-Running A Machine Learning Summer School

12 0.11333857 403 hunch net-2010-07-18-ICML & COLT 2010

13 0.1122718 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

14 0.098891951 463 hunch net-2012-05-02-ICML: Behind the Scenes

15 0.09856236 259 hunch net-2007-08-19-Choice of Metrics

16 0.09590216 461 hunch net-2012-04-09-ICML author feedback is open

17 0.095424414 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three

18 0.09422645 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

19 0.089891508 456 hunch net-2012-02-24-ICML+50%

20 0.089647993 430 hunch net-2011-04-11-The Heritage Health Prize