hunch_net hunch_net-2008 hunch_net-2008-321 knowledge-graph by maker-knowledge-mining

321 hunch net-2008-10-19-NIPS 2008 workshop on Kernel Learning


meta info for this blog

Source: html

Introduction: We’d like to invite hunch.net readers to participate in the NIPS 2008 workshop on kernel learning. While the main focus is on automatically learning kernels from data, we are also looking at the broader questions of feature selection, multi-task learning, and multi-view learning. There are no restrictions on the learning problem being addressed (regression, classification, etc.), and both theoretical and applied work will be considered. The deadline for submissions is October 24. More detail can be found here. Corinna Cortes, Arthur Gretton, Gert Lanckriet, Mehryar Mohri, Afshin Rostamizadeh


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 net readers to participate in the NIPS 2008 workshop on kernel learning. [sent-2, score-0.613]

2 While the main focus is on automatically learning kernels from data, we are also looking at the broader questions of feature selection, multi-task learning and multi-view learning. [sent-3, score-1.452]

3 There are no restrictions on the learning problem being addressed (regression, classification, etc), and both theoretical and applied work will be considered. [sent-4, score-0.782]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('restrictions', 0.251), ('arthur', 0.251), ('corinna', 0.251), ('cortes', 0.251), ('gretton', 0.251), ('mehryar', 0.232), ('mohri', 0.232), ('kernels', 0.201), ('readers', 0.201), ('broader', 0.194), ('invite', 0.182), ('detail', 0.177), ('participate', 0.177), ('october', 0.173), ('selection', 0.169), ('addressed', 0.169), ('submissions', 0.153), ('automatically', 0.151), ('main', 0.151), ('kernel', 0.138), ('focus', 0.138), ('regression', 0.136), ('looking', 0.129), ('deadline', 0.127), ('theoretical', 0.113), ('questions', 0.109), ('feature', 0.107), ('applied', 0.106), ('etc', 0.104), ('classification', 0.102), ('nips', 0.102), ('workshop', 0.097), ('found', 0.092), ('also', 0.091), ('data', 0.064), ('work', 0.053), ('like', 0.049), ('problem', 0.045), ('learning', 0.045)]
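The word weights above come from a standard term-frequency × inverse-document-frequency scheme, and the per-blog similarity scores in the lists that follow are comparisons of such weight vectors. As a minimal sketch only (the actual pipeline's tokenization, idf smoothing, and normalization are not stated here), tf-idf vectors and cosine similarity can be computed as:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute sparse tf-idf weight vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # tf-idf weight: term frequency scaled by log inverse document frequency
        vec = {t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()}
        vectors.append(vec)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# hypothetical toy documents, for illustration only
docs = [["kernel", "learning", "workshop"],
        ["kernel", "learning", "nips"],
        ["regression", "classification"]]
vecs = tfidf_vectors(docs)
```

A document is maximally similar to itself, and documents sharing no terms score 0, which matches the shape of the simValue rankings below.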

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 321 hunch net-2008-10-19-NIPS 2008 workshop on Kernel Learning


2 0.19883913 372 hunch net-2009-09-29-Machine Learning Protests at the G20

Introduction: The machine learning department at CMU turned out en masse to protest the G20 summit in Pittsburgh. Arthur Gretton uploaded some great photos covering the event.

3 0.1042513 443 hunch net-2011-09-03-Fall Machine Learning Events

Introduction: Many Machine Learning related events are coming up this fall. September 9, abstracts for the New York Machine Learning Symposium are due. Send a 2 page pdf, if interested, and note that we: widened submissions to be from anybody rather than students; set aside a larger fraction of time for contributed submissions. September 15, there is a machine learning meetup, where I’ll be discussing terascale learning at AOL. September 16, there is a CS & Econ day at New York Academy of Sciences. This is not ML focused, but it’s easy to imagine interest. September 23 and later, NIPS workshop submissions start coming due. As usual, there are too many good ones, so I won’t be able to attend all those that interest me. I do hope some workshop makers consider ICML this coming summer, as we are increasing to a 2 day format for you. Here are a few that interest me: Big Learning is about dealing with lots of data. Abstracts are due September 30. The Bayes

4 0.096964017 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS

Introduction: Manik and I are organizing the extreme classification workshop at NIPS this year. We have a number of good speakers lined up, but I would further encourage anyone working in the area to submit an abstract by October 9. I believe this is an idea whose time has now come. The NIPS website doesn’t have other workshops listed yet, but I expect several others to be of significant interest.

5 0.095444843 196 hunch net-2006-07-13-Regression vs. Classification as a Primitive

Introduction: For learning reductions we have been concentrating on reducing various complex learning problems to binary classification. This choice needs to be actively questioned, because it was not carefully considered. Binary classification is learning a classifier c: X -> {0,1} so as to minimize the probability of being wrong, Pr_{x,y~D}(c(x) != y). The primary alternative candidate seems to be squared error regression. In squared error regression, you learn a regressor s: X -> [0,1] so as to minimize squared error, E_{x,y~D}(s(x)-y)^2. It is difficult to judge one primitive against another. The judgement must at least partially be made on nontheoretical grounds because (essentially) we are evaluating a choice between two axioms/assumptions. These two primitives are significantly related. Classification can be reduced to regression in the obvious way: you use the regressor to predict D(y=1|x), then threshold at 0.5. For this simple reduction a squared error regret of r

6 0.093022093 265 hunch net-2007-10-14-NIPS workshp: Learning Problem Design

7 0.090903722 433 hunch net-2011-04-23-ICML workshops due

8 0.088106275 198 hunch net-2006-07-25-Upcoming conference

9 0.085905217 310 hunch net-2008-07-15-Interesting papers at COLT (and a bit of UAI & workshops)

10 0.084932961 456 hunch net-2012-02-24-ICML+50%

11 0.081212938 234 hunch net-2007-02-22-Create Your Own ICML Workshop

12 0.080507532 6 hunch net-2005-01-27-Learning Complete Problems

13 0.078428388 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

14 0.07293202 277 hunch net-2007-12-12-Workshop Summary—Principles of Learning Problem Design

15 0.071304679 187 hunch net-2006-06-25-Presentation of Proofs is Hard.

16 0.063189469 100 hunch net-2005-08-04-Why Reinforcement Learning is Important

17 0.060660906 369 hunch net-2009-08-27-New York Area Machine Learning Events

18 0.058018446 375 hunch net-2009-10-26-NIPS workshops

19 0.05785735 71 hunch net-2005-05-14-NIPS

20 0.05768754 328 hunch net-2008-11-26-Efficient Reinforcement Learning in MDPs


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.102), (1, -0.019), (2, -0.052), (3, -0.1), (4, 0.046), (5, 0.094), (6, 0.046), (7, 0.01), (8, -0.006), (9, -0.094), (10, 0.003), (11, -0.098), (12, -0.007), (13, 0.058), (14, -0.021), (15, -0.03), (16, -0.027), (17, 0.04), (18, -0.112), (19, -0.049), (20, -0.077), (21, -0.005), (22, -0.069), (23, 0.02), (24, 0.041), (25, -0.012), (26, 0.062), (27, -0.005), (28, -0.087), (29, -0.038), (30, -0.024), (31, -0.052), (32, -0.063), (33, -0.098), (34, -0.065), (35, 0.097), (36, 0.035), (37, -0.016), (38, 0.11), (39, 0.03), (40, 0.025), (41, 0.078), (42, 0.035), (43, -0.013), (44, -0.077), (45, -0.011), (46, -0.007), (47, -0.036), (48, 0.076), (49, -0.039)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95643681 321 hunch net-2008-10-19-NIPS 2008 workshop on Kernel Learning


2 0.70788509 198 hunch net-2006-07-25-Upcoming conference

Introduction: The Workshop for Women in Machine Learning will be held in San Diego on October 4, 2006. For details see the workshop website: http://www.seas.upenn.edu/~wiml/

3 0.64801151 319 hunch net-2008-10-01-NIPS 2008 workshop on ‘Learning over Empirical Hypothesis Spaces’

Introduction: This workshop asks for insights into how far we can push the theoretical boundary of using data in the design of learning machines. Can we express our classification rule in terms of the sample, or do we have to stick to a core assumption of classical statistical learning theory, namely that the hypothesis space is to be defined independently of the sample? This workshop is particularly interested in – but not restricted to – the ‘luckiness framework’ and the recently introduced notion of ‘compatibility functions’ in a semi-supervised learning context (more information can be found at http://www.kuleuven.be/wehys ).

4 0.61540192 124 hunch net-2005-10-19-Workshop: Atomic Learning

Introduction: We are planning to have a workshop on atomic learning Jan 7 & 8 at TTI-Chicago. Details are here . The earlier request for interest is here . The primary deadline is abstracts due Nov. 20 to jl@tti-c.org.

5 0.60677207 234 hunch net-2007-02-22-Create Your Own ICML Workshop

Introduction: As usual ICML 2007 will be hosting a workshop program to be held this year on June 24th. The success of the program depends on having researchers like you propose interesting workshop topics and then organize the workshops. I’d like to encourage all of you to consider sending a workshop proposal. The proposal deadline has been extended to March 5. See the workshop web-site for details. Organizing a workshop is a unique way to gather an international group of researchers together to focus for an entire day on a topic of your choosing. I’ve always found that the cost of organizing a workshop is not so large, and very low compared to the benefits. The topic and format of a workshop are limited only by your imagination (and the attractiveness to potential participants) and need not follow the usual model of a mini-conference on a particular ML sub-area. Hope to see some interesting proposals rolling in.

6 0.54033673 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials

7 0.52601802 455 hunch net-2012-02-20-Berkeley Streaming Data Workshop

8 0.51789331 265 hunch net-2007-10-14-NIPS workshp: Learning Problem Design

9 0.51229882 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS

10 0.50768226 443 hunch net-2011-09-03-Fall Machine Learning Events

11 0.49525914 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

12 0.4778612 456 hunch net-2012-02-24-ICML+50%

13 0.47153911 372 hunch net-2009-09-29-Machine Learning Protests at the G20

14 0.46797997 433 hunch net-2011-04-23-ICML workshops due

15 0.44916996 476 hunch net-2012-12-29-Simons Institute Big Data Program

16 0.42879087 277 hunch net-2007-12-12-Workshop Summary—Principles of Learning Problem Design

17 0.40973729 279 hunch net-2007-12-19-Cool and interesting things seen at NIPS

18 0.40652853 444 hunch net-2011-09-07-KDD and MUCMD 2011

19 0.40463421 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

20 0.39350098 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.076), (53, 0.164), (55, 0.061), (83, 0.464), (95, 0.093)]
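The line above is this post's sparse vector of LDA topic weights; the similarity scores that follow compare posts in that topic space. As a minimal sketch, assuming cosine similarity over sparse `{topicId: weight}` dicts (the dump does not state which similarity measure the pipeline actually used):

```python
import math

def topic_cosine(u, v):
    """Cosine similarity between two sparse {topicId: weight} vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# the lda topic weights listed above for this post
post = {27: 0.076, 53: 0.164, 55: 0.061, 83: 0.464, 95: 0.093}
```

Under this measure a post is maximally similar to itself, and posts with disjoint topic support score 0, matching the pattern of the simValue column below (where the same-blog entry scores highest).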

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9054302 321 hunch net-2008-10-19-NIPS 2008 workshop on Kernel Learning


2 0.90420735 261 hunch net-2007-08-28-Live ML Class

Introduction: Davor and Chunnan point out that MLSS 2007 in Tuebingen has live video for the majority of the world that is not there (heh).

3 0.67797607 52 hunch net-2005-04-04-Grounds for Rejection

Introduction: It’s reviewing season right now, so I thought I would list (at a high level) the sorts of problems which I see in papers. Hopefully, this will help us all write better papers. The following flaws are fatal to any paper: Incorrect theorem or lemma statements A typo might be “ok”, if it can be understood. Any theorem or lemma which indicates an incorrect understanding of reality must be rejected. Not doing so would severely harm the integrity of the conference. A paper rejected for this reason must be fixed. Lack of Understanding If a paper is understood by none of the (typically 3) reviewers then it must be rejected for the same reason. This is more controversial than it sounds because there are some people who maximize paper complexity in the hope of impressing the reviewer. The tactic sometimes succeeds with some reviewers (but not with me). As a reviewer, I sometimes get lost for stupid reasons. This is why an anonymized communication channel with the author can

4 0.62252241 135 hunch net-2005-12-04-Watchword: model

Introduction: In everyday use a model is a system which explains the behavior of some system, hopefully at the level where some alteration of the model predicts some alteration of the real-world system. In machine learning “model” has several variant definitions. Everyday . The common definition is sometimes used. Parameterized . Sometimes model is a short-hand for “parameterized model”. Here, it refers to a model with unspecified free parameters. In the Bayesian learning approach, you typically have a prior over (everyday) models. Predictive . Even further from everyday use is the predictive model. Examples of this are “my model is a decision tree” or “my model is a support vector machine”. Here, there is no real sense in which an SVM explains the underlying process. For example, an SVM tells us nothing in particular about how alterations to the real-world system would create a change. Which definition is being used at any particular time is important information. For examp

5 0.53430206 228 hunch net-2007-01-15-The Machine Learning Department

Introduction: Carnegie Mellon School of Computer Science has the first academic Machine Learning department . This department already existed as the Center for Automated Learning and Discovery , but recently changed its name. The reason for changing the name is obvious: very few people think of themselves as “Automated Learners and Discoverers”, but there are a number of people who think of themselves as “Machine Learners”. Machine learning is both more succinct and recognizable—good properties for a name. A more interesting question is “Should there be a Machine Learning Department?”. Tom Mitchell has a relevant whitepaper claiming that machine learning is answering a different question than other fields or departments. The fundamental debate here is “Is machine learning different from statistics?” At a cultural level, there is no real debate: they are different. Machine learning is characterized by several very active large peer reviewed conferences, operating in a computer

6 0.45250612 98 hunch net-2005-07-27-Not goal metrics

7 0.36965424 367 hunch net-2009-08-16-Centmail comments

8 0.36716998 91 hunch net-2005-07-10-Thinking the Unthought

9 0.36439714 2 hunch net-2005-01-24-Holy grails of machine learning?

10 0.36386776 16 hunch net-2005-02-09-Intuitions from applied learning

11 0.35644913 145 hunch net-2005-12-29-Deadline Season

12 0.35643455 6 hunch net-2005-01-27-Learning Complete Problems

13 0.35608542 107 hunch net-2005-09-05-Site Update

14 0.3524211 56 hunch net-2005-04-14-Families of Learning Theory Statements

15 0.34898019 151 hunch net-2006-01-25-1 year

16 0.33854043 21 hunch net-2005-02-17-Learning Research Programs

17 0.33434847 141 hunch net-2005-12-17-Workshops as Franchise Conferences

18 0.31530637 191 hunch net-2006-07-08-MaxEnt contradicts Bayes Rule?

19 0.3150295 456 hunch net-2012-02-24-ICML+50%

20 0.3137913 265 hunch net-2007-10-14-NIPS workshp: Learning Problem Design