hunch_net hunch_net-2006 hunch_net-2006-146 knowledge-graph by maker-knowledge-mining

146 hunch net-2006-01-06-MLTV


meta info for this blog

Source: html

Introduction: As part of a PASCAL project, the Slovenians have been filming various machine learning events and placing them on the web here . This includes, for example, the Chicago 2005 Machine Learning Summer School as well as a number of other summer schools, workshops, and conferences. There are some significant caveats here—for example, I can’t access it from Linux. Based upon the webserver logs, I expect that is a problem for most people—Computer scientists are particularly nonstandard in their choice of computing platform. Nevertheless, the core idea here is excellent and details of compatibility can be fixed later. With modern technology toys, there is no fundamental reason why the process of announcing new work at a conference should happen only once and only for the people who could make it to that room in that conference. The problems solved include: The multitrack vs. single-track debate. (“Sometimes the single track doesn’t interest me” vs. “When it’s multitrack I mis


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 As part of a PASCAL project, the Slovenians have been filming various machine learning events and placing them on the web here . [sent-1, score-0.455]

2 This includes, for example, the Chicago 2005 Machine Learning Summer School as well as a number of other summer schools, workshops, and conferences. [sent-2, score-0.119]

3 There are some significant caveats here—for example, I can’t access it from Linux. [sent-3, score-0.083]

4 Based upon the webserver logs, I expect that is a problem for most people—Computer scientists are particularly nonstandard in their choice of computing platform. [sent-4, score-0.392]

5 Nevertheless, the core idea here is excellent and details of compatibility can be fixed later. [sent-5, score-0.103]

6 With modern technology toys, there is no fundamental reason why the process of announcing new work at a conference should happen only once and only for the people who could make it to that room in that conference. [sent-6, score-0.599]

7 The problems solved include: The multitrack vs. [sent-7, score-0.206]

8 “When it’s multitrack I miss good talks” “I couldn’t attend because I was giving birth/going to a funeral/a wedding” “What was that? [sent-10, score-0.285]

9 For example, maybe a shift towards recording and placing things on the web will result in lower attendance at a conference. [sent-13, score-0.677]

10 Such a fear is confused in a few ways: People go to conferences for many more reasons than just announcing new work. [sent-14, score-0.459]

11 Other goals include doing research, meeting old friends, worrying about job openings, skiing, and visiting new places. [sent-15, score-0.453]

12 There is also a subtle benefit of going to a conference: it represents a commitment of time to research. [sent-16, score-0.378]

13 It is this commitment which makes two people from the same place start working together at a conference. [sent-17, score-0.373]

14 Given all these benefits of going to a conference, there is plenty of reason for them to continue to exist. [sent-18, score-0.187]

15 It is important to remember that a conference is a process in aid of research. [sent-19, score-0.223]

16 Recording and making available for download the presentations at a conference makes research easier by solving all the problems listed above. [sent-20, score-0.809]

17 This is just another new information technology. [sent-21, score-0.079]

18 When the web came out, computer scientists and physicists quickly adopted a “place any paper on your webpage” style, even when journals forced them to sign away the rights to the paper in order to publish. [sent-22, score-1.086]

19 Doing this was simply healthy for the researcher because his papers were more easily readable. [sent-23, score-0.081]

20 The same logic applies to making presentations at a conference available on the web. [sent-24, score-0.544]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('web', 0.277), ('conference', 0.223), ('recording', 0.222), ('announcing', 0.206), ('multitrack', 0.206), ('commitment', 0.185), ('scientists', 0.178), ('placing', 0.178), ('presentations', 0.144), ('summer', 0.119), ('toys', 0.111), ('nonstandard', 0.111), ('openings', 0.111), ('place', 0.109), ('include', 0.107), ('compatibility', 0.103), ('couldn', 0.103), ('rights', 0.103), ('webserver', 0.103), ('computer', 0.099), ('worrying', 0.097), ('physicists', 0.097), ('adopted', 0.097), ('fears', 0.097), ('represents', 0.097), ('skiing', 0.097), ('available', 0.097), ('going', 0.096), ('confused', 0.093), ('download', 0.093), ('listed', 0.093), ('visiting', 0.093), ('reason', 0.091), ('pascal', 0.089), ('schools', 0.086), ('caveats', 0.083), ('journals', 0.081), ('friends', 0.081), ('healthy', 0.081), ('webpage', 0.081), ('fear', 0.081), ('making', 0.08), ('makes', 0.079), ('new', 0.079), ('miss', 0.079), ('wish', 0.079), ('forced', 0.077), ('meeting', 0.077), ('sign', 0.077), ('chicago', 0.075)]
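The word weights above come from a tf-idf model over the blog corpus. As a rough illustration only (the original maker-knowledge-mining pipeline is not available, so the corpus, tokenization, and the choice to score a sentence by summing its term weights are all assumptions), per-sentence scores of the kind shown in the summary section can be sketched with scikit-learn:

```python
# Hypothetical sketch of tf-idf sentence scoring; the exact weighting
# used to produce the sentScore values above is not known.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "As part of a PASCAL project, the Slovenians have been filming "
    "various machine learning events and placing them on the web.",
    "The same logic applies to making presentations at a conference "
    "available on the web.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(sentences)  # shape: (n_sentences, n_terms)

# One plausible scoring rule: sum of a sentence's term weights.
scores = matrix.sum(axis=1).A1
for sent, score in zip(sentences, scores):
    print(f"{score:.3f}  {sent[:60]}")
```

Ranking sentences by such a score and keeping the top N is a common extractive-summarization baseline, which matches the "most important sentences" list above in spirit.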

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 146 hunch net-2006-01-06-MLTV


2 0.17181104 141 hunch net-2005-12-17-Workshops as Franchise Conferences

Introduction: Founding a successful new conference is extraordinarily difficult. As a conference founder, you must manage to attract a significant number of good papers—enough to entice the participants into participating next year and (generally) to grow the conference. For someone choosing to participate in a new conference, there is a very significant decision to make: do you send a paper to some new conference with no guarantee that the conference will work out? Or do you send it to another (possibly less related) conference that you are sure will work? The conference founding problem is a joint agreement problem with a very significant barrier. Workshops are a way around this problem, and workshops attached to conferences are a particularly effective means for this. A workshop at a conference is sure to have people available to speak and attend and is sure to have a large audience available. Presenting work at a workshop is not generally exclusive: it can also be presented at a confe

3 0.16076131 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher , Cliff Lin , Andrew Y. Ng , and Christopher D. Manning Parsing Natural Scenes and Natural Language with Recursive Neural Networks . I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh , which I previously enjoyed visiting in 2005 . This is a huge responsibility that we hope to accomplish well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation . The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

4 0.13512428 93 hunch net-2005-07-13-“Sister Conference” presentations

Introduction: Some of the “sister conference” presentations at AAAI have been great. Roughly speaking, the conference organizers asked other conference organizers to come give a summary of their conference. Many different AI-related conferences accepted. The presenters typically discuss some of the background and goals of the conference then mention the results from a few papers they liked. This is great because it provides a mechanism to get a digested overview of the work of several thousand researchers—something which is simply available nowhere else. Based on these presentations, it looks like there is a significant component of (and opportunity for) applied machine learning in AIIDE , IUI , and ACL . There was also some discussion of having a super-colocation event similar to FCRC , but centered on AI & Learning. This seems like a fine idea. The field is fractured across so many different conferences that the mixing of a supercolocation seems likely helpful for research.

5 0.13033681 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials

Introduction: This is a reminder that many deadlines for summer conference registration are coming up, and attendance is a very good idea. It’s entirely reasonable for anyone to visit a conference once, even when they don’t have a paper. For students, visiting a conference is almost a ‘must’—there is nowhere else that a broad cross-section of research is on display. Workshops are also a very good idea. ICML has 11, KDD has 9, and AAAI has 19. Workshops provide an opportunity to get a good understanding of some current area of research. They are probably the forum most conducive to starting new lines of research because they are so interactive. Tutorials are a good way to gain some understanding of a long-standing direction of research. They are generally more coherent than workshops. ICML has 7 and AAAI has 15.

6 0.12449641 452 hunch net-2012-01-04-Why ICML? and the summer conferences

7 0.11948048 75 hunch net-2005-05-28-Running A Machine Learning Summer School

8 0.11293585 116 hunch net-2005-09-30-Research in conferences

9 0.10810716 134 hunch net-2005-12-01-The Webscience Future

10 0.10544439 42 hunch net-2005-03-17-Going all the Way, Sometimes

11 0.10473128 233 hunch net-2007-02-16-The Forgetting

12 0.10379176 264 hunch net-2007-09-30-NIPS workshops are out.

13 0.10172961 297 hunch net-2008-04-22-Taking the next step

14 0.10158391 449 hunch net-2011-11-26-Giving Thanks

15 0.098989688 22 hunch net-2005-02-18-What it means to do research.

16 0.098736793 4 hunch net-2005-01-26-Summer Schools

17 0.096954957 106 hunch net-2005-09-04-Science in the Government

18 0.095199279 454 hunch net-2012-01-30-ICML Posters and Scope

19 0.09336631 343 hunch net-2009-02-18-Decision by Vetocracy

20 0.092002034 132 hunch net-2005-11-26-The Design of an Optimal Research Environment


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.215), (1, -0.124), (2, -0.072), (3, 0.021), (4, -0.061), (5, 0.003), (6, 0.068), (7, 0.046), (8, 0.01), (9, 0.071), (10, -0.022), (11, 0.012), (12, 0.051), (13, 0.026), (14, 0.027), (15, -0.027), (16, 0.045), (17, 0.134), (18, 0.084), (19, 0.104), (20, 0.084), (21, -0.049), (22, 0.019), (23, -0.012), (24, -0.002), (25, 0.031), (26, -0.018), (27, 0.028), (28, -0.031), (29, -0.042), (30, -0.001), (31, 0.103), (32, 0.006), (33, -0.056), (34, 0.038), (35, -0.029), (36, -0.007), (37, -0.017), (38, 0.047), (39, 0.034), (40, 0.031), (41, -0.028), (42, 0.006), (43, 0.077), (44, -0.012), (45, 0.107), (46, 0.067), (47, 0.001), (48, 0.0), (49, -0.087)]
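The topicId/topicWeight pairs above are LSI coordinates for this post. As a minimal sketch (the original corpus, vocabulary, and number of topics are unknown; the documents below are illustrative stand-ins), document-topic weights of this form can be produced by truncated SVD of a tf-idf matrix:

```python
# Hedged sketch of LSI document-topic weights; hyperparameters assumed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "machine learning summer school lectures on the web",
    "conference workshops and tutorials for research",
    "recording conference presentations for download",
    "founding a new conference is difficult",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)

# Latent Semantic Indexing = truncated SVD of the tf-idf matrix.
lsi = TruncatedSVD(n_components=2, random_state=0)
topic_weights = lsi.fit_transform(tfidf)  # shape: (n_docs, n_topics)
```

Each row of `topic_weights` plays the role of one `[(topicId, topicWeight), …]` vector above; note that LSI weights can be negative, which is consistent with the negative entries in the list.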

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96541804 146 hunch net-2006-01-06-MLTV


2 0.69058043 93 hunch net-2005-07-13-“Sister Conference” presentations


3 0.64991772 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials


4 0.64510238 141 hunch net-2005-12-17-Workshops as Franchise Conferences


5 0.63766092 75 hunch net-2005-05-28-Running A Machine Learning Summer School

Introduction: We just finished the Chicago 2005 Machine Learning Summer School. The school was 2 weeks long with about 130 (or 140 counting the speakers) participants. For perspective, this is perhaps the largest graduate-level machine learning class I am aware of anywhere and anytime (previous MLSSs have been close). Overall, it seemed to go well, although the students are the real authority on this. For those who missed it, DVDs will be available from our Slovenian friends. Email Mrs Spela Sitar of the Jozsef Stefan Institute for details. The following are some notes for future planning and those interested. Good Decisions Acquiring the larger-than-necessary “Assembly Hall” at International House. Our attendance came in well above our expectations, so this was a critical early decision that made a huge difference. The invited speakers were key. They made a huge difference in the quality of the content. Delegating early and often was important. One key difficulty here

6 0.61807287 297 hunch net-2008-04-22-Taking the next step

7 0.60614115 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

8 0.5912779 232 hunch net-2007-02-11-24

9 0.58701885 231 hunch net-2007-02-10-Best Practices for Collaboration

10 0.57941788 449 hunch net-2011-11-26-Giving Thanks

11 0.57727545 437 hunch net-2011-07-10-ICML 2011 and the future

12 0.57610434 233 hunch net-2007-02-16-The Forgetting

13 0.57547742 416 hunch net-2010-10-29-To Vidoelecture or not

14 0.57413155 1 hunch net-2005-01-19-Why I decided to run a weblog.

15 0.54214978 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

16 0.53390932 249 hunch net-2007-06-21-Presentation Preparation

17 0.51360369 80 hunch net-2005-06-10-Workshops are not Conferences

18 0.51088631 172 hunch net-2006-04-14-JMLR is a success

19 0.51080555 69 hunch net-2005-05-11-Visa Casualties

20 0.50936472 98 hunch net-2005-07-27-Not goal metrics


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(10, 0.012), (27, 0.145), (38, 0.042), (42, 0.037), (53, 0.052), (55, 0.122), (80, 0.286), (94, 0.14), (95, 0.084)]
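Unlike the LSI weights, the LDA topicWeight entries above are all non-negative, because LDA topic mixtures are probability distributions. As a hedged sketch (corpus, topic count, and preprocessing are assumed, not recovered from the original pipeline), the simValue numbers in the lists below could be produced by comparing per-document topic mixtures:

```python
# Illustrative LDA topic mixtures and pairwise similarity, analogous to
# the simValue column; the real pipeline's settings are unknown.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "machine learning summer school lectures on the web",
    "conference workshops and tutorials for research",
    "recording conference presentations for download",
]

# LDA operates on raw term counts, not tf-idf.
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
weights = lda.fit_transform(counts)  # rows are per-document topic mixtures

# Pairwise similarity between topic mixtures, as in a simValue table.
sims = cosine_similarity(weights)
```

The diagonal of `sims` is 1 (each document is maximally similar to itself), matching the `same-blog` rows above, which sit at or near the top of each list.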

similar blogs list:

simIndex simValue blogId blogTitle

1 0.92434233 222 hunch net-2006-12-05-Recruitment Conferences

Introduction: One of the subsidiary roles of conferences is recruitment. NIPS is optimally placed in time for this because it falls right before the major recruitment season. I personally found job hunting embarrassing, and was relatively inept at it. I expect this is true of many people, because it is not something done often. The basic rule is: make the plausible hirers aware of your interest. Any corporate sponsor is a “plausible”, regardless of whether or not there is a booth. CRA and the acm job center are other reasonable sources. There are substantial differences between the different possibilities. Putting some effort into understanding the distinctions is a good idea, although you should always remember where the other person is coming from.

same-blog 2 0.86111104 146 hunch net-2006-01-06-MLTV


3 0.8067959 68 hunch net-2005-05-10-Learning Reductions are Reductionist

Introduction: This is about a fundamental motivation for the investigation of reductions in learning. It applies to many pieces of work other than my own. The reductionist approach to problem solving is characterized by taking a problem, decomposing it into as-small-as-possible subproblems, discovering how to solve the subproblems, and then discovering how to use the solutions to the subproblems to solve larger problems. The reductionist approach to solving problems has often paid off very well. Computer science related examples of the reductionist approach include: Reducing computation to the transistor. All of our CPUs are built from transistors. Reducing rendering of images to rendering a triangle (or other simple polygons). Computers can now render near-realistic scenes in real time. The big breakthrough came from learning how to render many triangles quickly. This approach to problem solving extends well beyond computer science. Many fields of science focus on theories mak

4 0.73892468 141 hunch net-2005-12-17-Workshops as Franchise Conferences


5 0.71541977 34 hunch net-2005-03-02-Prior, “Prior” and Bias

Introduction: Many different ways of reasoning about learning exist, and many of these suggest that some method of saying “I prefer this predictor to that predictor” is useful and necessary. Examples include Bayesian reasoning, prediction bounds, and online learning. One difficulty which arises is that the manner and meaning of saying “I prefer this predictor to that predictor” differs. Prior (Bayesian) A prior is a probability distribution over a set of distributions which expresses a belief in the probability that some distribution is the distribution generating the data. “Prior” (Prediction bounds & online learning) The “prior” is a measure over a set of classifiers which expresses the degree to which you hope the classifier will predict well. Bias (Regularization, Early termination of neural network training, etc…) The bias is some (often implicitly specified by an algorithm) way of preferring one predictor to another. This only scratches the surface—there are yet more subt

6 0.63775581 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

7 0.63323295 423 hunch net-2011-02-02-User preferences for search engines

8 0.61649078 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

9 0.61553633 136 hunch net-2005-12-07-Is the Google way the way for machine learning?

10 0.61384964 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

11 0.61382431 75 hunch net-2005-05-28-Running A Machine Learning Summer School

12 0.61211956 40 hunch net-2005-03-13-Avoiding Bad Reviewing

13 0.61186397 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

14 0.60874289 22 hunch net-2005-02-18-What it means to do research.

15 0.60751575 464 hunch net-2012-05-03-Microsoft Research, New York City

16 0.60740316 437 hunch net-2011-07-10-ICML 2011 and the future

17 0.60472137 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

18 0.60175949 95 hunch net-2005-07-14-What Learning Theory might do

19 0.60167462 301 hunch net-2008-05-23-Three levels of addressing the Netflix Prize

20 0.6008352 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy