hunch_net hunch_net-2010 hunch_net-2010-417 knowledge-graph by maker-knowledge-mining

417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials


meta info for this blog

Source: html

Introduction: I would like to encourage people to consider giving a tutorial at next year’s ICML. The ideal tutorial attracts a wide audience, provides a gentle and easily taught introduction to the chosen research area, and also covers the most important contributions in depth. Submissions are due January 14 (about two weeks before the paper deadline). http://www.icml-2011.org/tutorials.php Regards, Ulf


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 I would like to encourage people to consider giving a tutorial at next year’s ICML. [sent-1, score-1.233]

2 The ideal tutorial attracts a wide audience, provides a gentle and easily taught introduction to the chosen research area, and also covers the most important contributions in depth. [sent-2, score-2.383]

3 Submissions are due January 14 (about two weeks before the paper deadline). [sent-3, score-0.443]
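
The per-sentence scores above were presumably obtained by weighting each sentence’s terms with tfidf. A minimal sketch of that kind of scoring, assuming a scikit-learn vectorizer and a sum-of-weights ranking rule (both are assumptions, not the maker-knowledge-mining pipeline itself):

    # Hedged sketch: score each sentence by its total tfidf weight, then rank.
    # The vectorizer settings and the scoring rule are assumptions.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    sentences = [
        "I would like to encourage people to consider giving a tutorial "
        "at next year's ICML.",
        "The ideal tutorial attracts a wide audience, provides a gentle and "
        "easily taught introduction to the chosen research area, and also "
        "covers the most important contributions in depth.",
        "Submissions are due January 14 (about two weeks before the paper "
        "deadline).",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    weights = vectorizer.fit_transform(sentences)  # one tfidf row per sentence

    scores = np.asarray(weights.sum(axis=1)).ravel()  # total weight per sentence
    for rank, i in enumerate(scores.argsort()[::-1], start=1):
        print(rank, round(float(scores[i]), 3), sentences[i][:60])

The longest, most content-heavy sentence naturally scores highest, matching the ordering of the sentScore column above.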


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('tutorial', 0.342), ('regards', 0.298), ('covers', 0.298), ('contributions', 0.248), ('introduction', 0.239), ('http', 0.223), ('taught', 0.211), ('weeks', 0.206), ('january', 0.201), ('audience', 0.201), ('ideal', 0.201), ('chosen', 0.201), ('wide', 0.193), ('encourage', 0.186), ('submissions', 0.182), ('giving', 0.161), ('deadline', 0.152), ('years', 0.136), ('provides', 0.128), ('next', 0.126), ('area', 0.122), ('easily', 0.116), ('consider', 0.103), ('due', 0.098), ('important', 0.086), ('would', 0.074), ('paper', 0.071), ('two', 0.068), ('research', 0.066), ('like', 0.059), ('also', 0.054), ('people', 0.046)]
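
The (wordName, wordTfidf) pairs above are the post’s highest-weighted terms, and the similarity scores below are consistent with cosine similarity between tfidf vectors. A minimal sketch, assuming scikit-learn and a toy corpus (the actual pipeline and settings are not given in the source):

    # Hedged sketch: tfidf vectors per post, top terms, cosine similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    posts = [  # placeholder texts; the real corpus is the full set of posts
        "call for tutorials at next year's ICML",
        "ICML workshops and tutorials are due two weeks before papers",
        "a survey tutorial on large scale parallel learning for KDD",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(posts)

    # Top terms of post 0, analogous to the (wordName, wordTfidf) list above.
    terms = vectorizer.get_feature_names_out()
    row = tfidf[0].toarray().ravel()
    print([(terms[i], round(float(row[i]), 3)) for i in row.argsort()[::-1][:5]])

    # Cosine similarity of post 0 to every post; the same-blog entry is 1.0.
    print(cosine_similarity(tfidf[0], tfidf).ravel())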

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials

Introduction: I would like to encourage people to consider giving a tutorial at next year’s ICML. The ideal tutorial attracts a wide audience, provides a gentle and easily taught introduction to the chosen research area, and also covers the most important contributions in depth. Submissions are due January 14 (about two weeks before the paper deadline). http://www.icml-2011.org/tutorials.php Regards, Ulf

2 0.21280316 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

Introduction: I’m the workshops chair for ICML this year. As such, I would like to personally encourage people to consider running a workshop. My general view of workshops is that they are excellent as opportunities to discuss and develop research directions—some of my best work has come from collaborations at workshops and several workshops have substantially altered my thinking about various problems. My experience running workshops is that setting them up and making them fly often appears much harder than it actually is, and the workshops often come off much better than expected in the end. Submissions are due January 18, two weeks before papers. Similarly, Ben Taskar is looking for good tutorials, which is complementary. Workshops are about exploring a subject, while a tutorial is about distilling it down into an easily taught essence, a vital part of the research process. Tutorials are due February 13, two weeks after papers.

3 0.15891263 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

Introduction: Ron Bekkerman initiated an effort to create an edited book on parallel machine learning that Misha and I have been helping with. The breadth of efforts to parallelize machine learning surprised me: I was only aware of a small fraction initially. This put us in a unique position, with knowledge of a wide array of different efforts, so it is natural to put together a survey tutorial on the subject of parallel learning for KDD, tomorrow. This tutorial is not limited to the book itself however, as several interesting new algorithms have come out since we started inviting chapters. This tutorial should interest anyone trying to use machine learning on significant quantities of data, anyone interested in developing algorithms for such, and of course anyone who has bragging rights to the fastest learning algorithm on planet earth (Also note the Modeling with Hadoop tutorial just before ours, which deals with one way of trying to speed up learning algorithms. We have almost no

4 0.1170828 409 hunch net-2010-09-13-AIStats

Introduction: Geoff Gordon points out AIStats 2011 in Ft. Lauderdale, Florida. The call for papers is now out, due Nov. 1. The plan is to experiment with the review process to encourage quality in several ways. I expect to submit a paper and would encourage others with good research to do likewise.

5 0.11361447 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

Introduction: At NIPS I’m giving a tutorial on Learning to Interact. In essence this is about dealing with causality in a contextual bandit framework. Relative to previous tutorials, I’ll be covering several new results that changed my understanding of the nature of the problem. Note that Judea Pearl and Elias Bareinboim have a tutorial on causality. This might appear similar, but is quite different in practice. Pearl and Bareinboim’s tutorial will be about the general concepts while mine will be about total mastery of the simplest nontrivial case, including code. Luckily, they have the right order. I recommend going to both. I also just released version 7.4 of Vowpal Wabbit. When I was a frustrated learning theorist, I did not understand why people were not using learning reductions to solve problems. I’ve been slowly discovering why with VW, and addressing the issues. One of the issues is that machine learning itself was not automatic enough, while another is that creatin

6 0.0994533 433 hunch net-2011-04-23-ICML workshops due

7 0.099338353 304 hunch net-2008-06-27-Reviewing Horror Stories

8 0.090945266 145 hunch net-2005-12-29-Deadline Season

9 0.088758178 158 hunch net-2006-02-24-A Fundamentalist Organization of Machine Learning

10 0.086767748 476 hunch net-2012-12-29-Simons Institute Big Data Program

11 0.077879921 375 hunch net-2009-10-26-NIPS workshops

12 0.075845763 80 hunch net-2005-06-10-Workshops are not Conferences

13 0.074899554 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

14 0.074340031 387 hunch net-2010-01-19-Deadline Season, 2010

15 0.074289434 198 hunch net-2006-07-25-Upcoming conference

16 0.074127302 454 hunch net-2012-01-30-ICML Posters and Scope

17 0.069360606 484 hunch net-2013-06-16-Representative Reviewing

18 0.068067275 184 hunch net-2006-06-15-IJCAI is out of season

19 0.067631319 456 hunch net-2012-02-24-ICML+50%

20 0.064551286 452 hunch net-2012-01-04-Why ICML? and the summer conferences


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.11), (1, -0.105), (2, -0.053), (3, -0.05), (4, 0.001), (5, 0.027), (6, 0.026), (7, 0.007), (8, -0.035), (9, -0.003), (10, 0.028), (11, -0.074), (12, 0.003), (13, 0.005), (14, -0.022), (15, 0.022), (16, 0.022), (17, -0.026), (18, -0.084), (19, -0.098), (20, -0.041), (21, 0.037), (22, -0.036), (23, -0.031), (24, 0.025), (25, 0.039), (26, 0.026), (27, -0.151), (28, -0.082), (29, 0.006), (30, -0.001), (31, -0.153), (32, 0.064), (33, -0.048), (34, -0.021), (35, 0.019), (36, 0.075), (37, 0.058), (38, 0.06), (39, 0.113), (40, -0.006), (41, 0.111), (42, -0.043), (43, -0.022), (44, -0.137), (45, 0.007), (46, 0.093), (47, -0.046), (48, -0.019), (49, -0.006)]
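
The 50 topic weights above are the post’s coordinates in a latent semantic (LSI) space. A minimal sketch of the usual construction, assuming a truncated SVD of tfidf vectors; the component count and settings here are illustrative, not the original configuration:

    # Hedged sketch: LSI = low-rank SVD projection of tfidf vectors, with
    # post similarity measured by cosine in the latent space.
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    posts = [  # placeholder texts; the real corpus is the full set of posts
        "call for tutorials at next year's ICML",
        "the Simons Institute big data program and its fellowship deadline",
        "a NIPS workshop on kernel learning and feature selection",
    ]

    tfidf = TfidfVectorizer().fit_transform(posts)
    # The listing above has 50 topic weights; a toy corpus this small only
    # supports a couple of latent components.
    lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

    # Cosine similarity in the latent space; the same-blog entry is ~1.0.
    print(cosine_similarity(lsi[:1], lsi).ravel())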

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97761267 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials

Introduction: I would like to encourage people to consider giving a tutorial at next year’s ICML. The ideal tutorial attracts a wide audience, provides a gentle and easily taught introduction to the chosen research area, and also covers the most important contributions in depth. Submissions are due January 14 (about two weeks before the paper deadline). http://www.icml-2011.org/tutorials.php Regards, Ulf

2 0.57566589 476 hunch net-2012-12-29-Simons Institute Big Data Program

Introduction: Michael Jordan sends the below: The new Simons Institute for the Theory of Computing will begin organizing semester-long programs starting in 2013. One of our first programs, set for Fall 2013, will be on the “Theoretical Foundations of Big Data Analysis”. The organizers of this program are Michael Jordan (chair), Stephen Boyd, Peter Buehlmann, Ravi Kannan, Michael Mahoney, and Muthu Muthukrishnan. See http://simons.berkeley.edu/program_bigdata2013.html for more information on the program. The Simons Institute has created a number of “Research Fellowships” for young researchers (within at most six years of the award of their PhD) who wish to participate in Institute programs, including the Big Data program. Individuals who already hold postdoctoral positions or who are junior faculty are welcome to apply, as are finishing PhDs. Please note that the application deadline is January 15, 2013. Further details are available at http://simons.berkeley.edu/fellows.h

3 0.52400964 321 hunch net-2008-10-19-NIPS 2008 workshop on Kernel Learning

Introduction: We’d like to invite hunch.net readers to participate in the NIPS 2008 workshop on kernel learning. While the main focus is on automatically learning kernels from data, we are also looking at the broader questions of feature selection, multi-task learning and multi-view learning. There are no restrictions on the learning problem being addressed (regression, classification, etc), and both theoretical and applied work will be considered. The deadline for submissions is October 24. More detail can be found here. Corinna Cortes, Arthur Gretton, Gert Lanckriet, Mehryar Mohri, Afshin Rostamizadeh

4 0.51444769 409 hunch net-2010-09-13-AIStats

Introduction: Geoff Gordon points out AIStats 2011 in Ft. Lauderdale, Florida. The call for papers is now out, due Nov. 1. The plan is to experiment with the review process to encourage quality in several ways. I expect to submit a paper and would encourage others with good research to do likewise.

5 0.49872985 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

Introduction: Ron Bekkerman initiated an effort to create an edited book on parallel machine learning that Misha and I have been helping with. The breadth of efforts to parallelize machine learning surprised me: I was only aware of a small fraction initially. This put us in a unique position, with knowledge of a wide array of different efforts, so it is natural to put together a survey tutorial on the subject of parallel learning for KDD, tomorrow. This tutorial is not limited to the book itself however, as several interesting new algorithms have come out since we started inviting chapters. This tutorial should interest anyone trying to use machine learning on significant quantities of data, anyone interested in developing algorithms for such, and of course anyone who has bragging rights to the fastest learning algorithm on planet earth (Also note the Modeling with Hadoop tutorial just before ours, which deals with one way of trying to speed up learning algorithms. We have almost no

6 0.47269517 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

7 0.44837907 234 hunch net-2007-02-22-Create Your Own ICML Workshop

8 0.44569287 124 hunch net-2005-10-19-Workshop: Atomic Learning

9 0.43814251 180 hunch net-2006-05-21-NIPS paper evaluation criteria

10 0.4284313 198 hunch net-2006-07-25-Upcoming conference

11 0.40413055 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

12 0.40070975 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

13 0.39825746 184 hunch net-2006-06-15-IJCAI is out of season

14 0.39781123 142 hunch net-2005-12-22-Yes, I am applying

15 0.39544886 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS

16 0.37869468 323 hunch net-2008-11-04-Rise of the Machines

17 0.3747353 421 hunch net-2011-01-03-Herman Goldstine 2011

18 0.36822924 306 hunch net-2008-07-02-Proprietary Data in Academic Research?

19 0.36700344 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

20 0.36255535 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.126), (38, 0.014), (55, 0.126), (71, 0.577)]
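
The sparse (topicId, topicWeight) pairs above are the post’s LDA topic distribution (topic ids reach 71, so the model plainly has many more topics than the toy example below). A minimal sketch, assuming a scikit-learn LDA over bag-of-words counts; the topic count and hyperparameters are guesses, not the original setup:

    # Hedged sketch: fit LDA on word counts, read off per-post topic
    # weights, and keep only the heavy topics, as in the sparse list above.
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    posts = [  # placeholder texts; the real corpus is the full set of posts
        "call for tutorials at next year's ICML",
        "structural learning builds supervised subproblems from unlabeled data",
        "debugging your brain is analogous to debugging a computer program",
    ]

    counts = CountVectorizer(stop_words="english").fit_transform(posts)
    lda = LatentDirichletAllocation(n_components=4, random_state=0)
    theta = lda.fit_transform(counts)  # rows are per-post topic distributions

    # Sparse view of post 0's topic weights, like [(27, 0.126), ...] above.
    print([(t, round(float(w), 3)) for t, w in enumerate(theta[0]) if w > 0.05])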

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.88868022 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials

Introduction: I would like to encourage people to consider giving a tutorial at next year’s ICML. The ideal tutorial attracts a wide audience, provides a gentle and easily taught introduction to the chosen research area, and also covers the most important contributions in depth. Submissions are due January 14 (about two weeks before the paper deadline). http://www.icml-2011.org/tutorials.php Regards, Ulf

2 0.73725551 161 hunch net-2006-03-05-“Structural” Learning

Introduction: Fernando Pereira pointed out Ando and Zhang’s paper on “structural” learning. Structural learning is multitask learning on subproblems created from unlabeled data. The basic idea is to take a look at the unlabeled data and create many supervised problems. On text data, which they test on, these subproblems might be of the form “Given surrounding words predict the middle word”. The hope here is that successfully predicting on these subproblems is relevant to the prediction of your core problem. In the long run, the precise mechanism used (essentially, linear predictors with parameters tied by a common matrix) and the precise problems formed may not be critical. What seems critical is that the hope is realized: the technique provides a significant edge in practice. Some basic questions about this approach are: Are there effective automated mechanisms for creating the subproblems? Is it necessary to use a shared representation?

3 0.60122567 147 hunch net-2006-01-08-Debugging Your Brain

Introduction: One part of doing research is debugging your understanding of reality. This is hard work: How do you even discover where you misunderstand? If you discover a misunderstanding, how do you go about removing it? The process of debugging computer programs is quite analogous to debugging reality misunderstandings. This is natural—a bug in a computer program is a misunderstanding between you and the computer about what you said. Many of the familiar techniques from debugging have exact parallels. Details: When programming, there are often signs that some bug exists like: “the graph my program output is shifted a little bit” = maybe you have an indexing error. In debugging yourself, we often have some impression that something is “not right”. These impressions should be addressed directly and immediately. (Some people have the habit of suppressing worries in favor of excess certainty. That’s not healthy for research.) Corner Cases: A “corner case” is an input to a program wh

4 0.45715451 152 hunch net-2006-01-30-Should the Input Representation be a Vector?

Introduction: Let’s suppose that we are trying to create a general purpose machine learning box. The box is fed many examples of the function it is supposed to learn and (hopefully) succeeds. To date, most such attempts to produce a box of this form take a vector as input. The elements of the vector might be bits, real numbers, or ‘categorical’ data (a discrete set of values). On the other hand, there are a number of successful applications of machine learning which do not seem to use a vector representation as input. For example, in vision, convolutional neural networks have been used to solve several vision problems. The input to the convolutional neural network is essentially the raw camera image as a matrix. In learning for natural languages, several people have had success on problems like parts-of-speech tagging using predictors restricted to a window surrounding the word to be predicted. A vector window and a matrix both imply a notion of locality which is being actively and

5 0.44731686 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning

Introduction: Suppose you have a dataset with 2 terafeatures (we only count nonzero entries in a datamatrix), and want to learn a good linear predictor in a reasonable amount of time. How do you do it? As a learning theorist, the first thing you do is pray that this is too much data for the number of parameters—but that’s not the case, there are around 16 billion examples, 16 million parameters, and people really care about a high quality predictor, so subsampling is not a good strategy. Alekh visited us last summer, and we had a breakthrough (see here for details), coming up with the first learning algorithm I’ve seen that is provably faster than any future single-machine learning algorithm. The proof of this is simple: We can output an optimal-up-to-precision linear predictor faster than the data can be streamed through the network interface of any single machine involved in the computation. It is necessary but not sufficient to have an effective communication infrastructure. It is ne

6 0.34157786 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

7 0.28513938 270 hunch net-2007-11-02-The Machine Learning Award goes to …

8 0.28315884 395 hunch net-2010-04-26-Compassionate Reviewing

9 0.27878976 452 hunch net-2012-01-04-Why ICML? and the summer conferences

10 0.27818561 453 hunch net-2012-01-28-Why COLT?

11 0.2751458 90 hunch net-2005-07-07-The Limits of Learning Theory

12 0.27029961 484 hunch net-2013-06-16-Representative Reviewing

13 0.26976177 89 hunch net-2005-07-04-The Health of COLT

14 0.26925129 40 hunch net-2005-03-13-Avoiding Bad Reviewing

15 0.26853302 264 hunch net-2007-09-30-NIPS workshops are out.

16 0.2661185 116 hunch net-2005-09-30-Research in conferences

17 0.26514101 20 hunch net-2005-02-15-ESPgame and image labeling

18 0.2651062 375 hunch net-2009-10-26-NIPS workshops

19 0.26504904 65 hunch net-2005-05-02-Reviewing techniques for conferences

20 0.264842 448 hunch net-2011-10-24-2011 ML symposium and the bears