hunch_net hunch_net-2008 hunch_net-2008-319 knowledge-graph by maker-knowledge-mining

319 hunch net-2008-10-01-NIPS 2008 workshop on ‘Learning over Empirical Hypothesis Spaces’


meta info for this blog

Source: html

Introduction: This workshop asks for insights into how far we can push the theoretical boundary of using data in the design of learning machines. Can we express our classification rule in terms of the sample, or do we have to stick to a core assumption of classical statistical learning theory, namely that the hypothesis space is to be defined independently of the sample? This workshop is particularly interested in – but not restricted to – the ‘luckiness framework’ and the recently introduced notion of ‘compatibility functions’ in a semi-supervised learning context (more information can be found at http://www.kuleuven.be/wehys).


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 This workshop asks for insights into how far we can push the theoretical boundary of using data in the design of learning machines. [sent-1, score-1.503]

2 Can we express our classification rule in terms of the sample, or do we have to stick to a core assumption of classical statistical learning theory, namely that the hypothesis space is to be defined independently of the sample? [sent-2, score-2.175]

3 This workshop is particularly interested in – but not restricted to – the ‘luckiness framework’ and the recently introduced notion of ‘compatibility functions’ in a semi-supervised learning context (more information can be found at http://www.kuleuven.be/wehys). [sent-3, score-1.344]
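The sentNum/sentScore columns above suggest that each sentence was ranked by the summed tfidf weight of its words. The following is a minimal sketch of that kind of scoring, not the mining pipeline's actual code; the tokenizer and the smoothed idf formula are assumptions.

```python
# Hypothetical tfidf sentence scorer: rank a document's sentences by the
# sum of tfidf weights of their words, mirroring the summary table above.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def tfidf_sentence_scores(document, corpus):
    """Return (score, sentence) pairs, best first. `corpus` is a list of
    documents used only to estimate document frequencies."""
    n_docs = len(corpus)
    # Document frequency: number of corpus documents containing each word.
    df = Counter()
    for doc in corpus:
        df.update(set(tokenize(doc)))
    tf = Counter(tokenize(document))  # term frequency within this document
    scores = []
    for sentence in re.split(r"(?<=[.?!])\s+", document):
        # Smoothed idf; the exact weighting used by the pipeline is unknown.
        score = sum(tf[w] * math.log(n_docs / (1 + df[w])) for w in tokenize(sentence))
        scores.append((score, sentence))
    return sorted(scores, reverse=True)
```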


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('namely', 0.25), ('luckiness', 0.25), ('compatibility', 0.232), ('sample', 0.228), ('boundary', 0.219), ('insights', 0.208), ('stick', 0.208), ('asks', 0.2), ('push', 0.2), ('workshop', 0.194), ('classical', 0.193), ('introduced', 0.193), ('http', 0.187), ('express', 0.182), ('restricted', 0.182), ('framework', 0.162), ('hypothesis', 0.15), ('statistical', 0.146), ('defined', 0.143), ('rule', 0.143), ('recently', 0.143), ('independent', 0.139), ('assumption', 0.135), ('notion', 0.129), ('context', 0.126), ('functions', 0.123), ('core', 0.119), ('theoretical', 0.113), ('space', 0.111), ('terms', 0.109), ('classification', 0.102), ('design', 0.1), ('far', 0.099), ('found', 0.092), ('interested', 0.087), ('particularly', 0.08), ('theory', 0.075), ('information', 0.073), ('data', 0.064), ('using', 0.061), ('learning', 0.045)]
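A plausible reading of the simValue numbers in the list below is cosine similarity between tfidf vectors of the posts. Here is a hedged sketch using scikit-learn; the library choice, stop-word handling, and weighting scheme are assumptions rather than the pipeline's actual configuration.

```python
# Hypothetical reconstruction of the tfidf similarity ranking below.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def most_similar_tfidf(posts, query_index, top_n=20):
    """Rank posts by cosine similarity to posts[query_index] in tfidf space."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
    sims = cosine_similarity(vectors[query_index], vectors).ravel()
    ranked = sims.argsort()[::-1][:top_n + 1]  # includes the query post itself
    return [(float(sims[i]), i) for i in ranked]
```

The query post itself should come out first with similarity 1.0, matching the same-blog entry at the top of each list below.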

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 319 hunch net-2008-10-01-NIPS 2008 workshop on ‘Learning over Empirical Hypothesis Spaces’

Introduction: This workshop asks for insights into how far we can push the theoretical boundary of using data in the design of learning machines. Can we express our classification rule in terms of the sample, or do we have to stick to a core assumption of classical statistical learning theory, namely that the hypothesis space is to be defined independently of the sample? This workshop is particularly interested in – but not restricted to – the ‘luckiness framework’ and the recently introduced notion of ‘compatibility functions’ in a semi-supervised learning context (more information can be found at http://www.kuleuven.be/wehys).

2 0.13078384 198 hunch net-2006-07-25-Upcoming conference

Introduction: The Workshop for Women in Machine Learning will be held in San Diego on October 4, 2006. For details see the workshop website: http://www.seas.upenn.edu/~wiml/

3 0.099626958 234 hunch net-2007-02-22-Create Your Own ICML Workshop

Introduction: As usual, ICML 2007 will be hosting a workshop program, to be held this year on June 24th. The success of the program depends on having researchers like you propose interesting workshop topics and then organize the workshops. I’d like to encourage all of you to consider sending a workshop proposal. The proposal deadline has been extended to March 5. See the workshop website for details. Organizing a workshop is a unique way to gather an international group of researchers together to focus for an entire day on a topic of your choosing. I’ve always found that the cost of organizing a workshop is not so large, and very low compared to the benefits. The topic and format of a workshop are limited only by your imagination (and the attractiveness to potential participants) and need not follow the usual model of a mini-conference on a particular ML sub-area. Hope to see some interesting proposals rolling in.

4 0.098871291 7 hunch net-2005-01-31-Watchword: Assumption

Introduction: “Assumption” is another word to be careful with in machine learning because it is used in several ways. Assumption = Bias. There are several ways to see that some form of ‘bias’ (= preferring one solution over another) is necessary. This is obvious in an adversarial setting. A good bit of work has been expended explaining this in other settings with “no free lunch” theorems. This is a usage specialized to learning which is particularly common when talking about priors for Bayesian Learning. Assumption = “if” of a theorem. The assumptions are the ‘if’ part of the ‘if-then’ in a theorem. This is a fairly common usage. Assumption = Axiom. The assumptions are the things that we assume are true, but which we cannot verify. Examples are “the IID assumption” or “my problem is a DNF on a small number of bits”. This is the usage which I prefer. One difficulty with any use of the word “assumption” is that you often encounter “if assumption then conclusion so if no

5 0.096307568 345 hunch net-2009-03-08-Prediction Science

Introduction: One view of machine learning is that it’s about how to program computers to predict well. This suggests a broader research program centered around the more pervasive goal of simply predicting well. There are many distinct strands of this broader research program which are only partially unified. Here are the ones that I know of: Learning Theory. Learning theory focuses on several topics related to the dynamics and process of prediction. Convergence bounds like the VC bound give an intellectual foundation to many learning algorithms. Online learning algorithms like Weighted Majority provide an alternate, purely game-theoretic foundation for learning. Boosting algorithms yield algorithms for purifying prediction ability. Reduction algorithms provide means for changing esoteric problems into well-known ones. Machine Learning. A great deal of experience has accumulated in practical algorithm design from a mixture of paradigms, including Bayesian, biological, opt

6 0.089446574 332 hunch net-2008-12-23-Use of Learning Theory

7 0.080482788 455 hunch net-2012-02-20-Berkeley Streaming Data Workshop

8 0.078255951 235 hunch net-2007-03-03-All Models of Learning have Flaws

9 0.078019291 136 hunch net-2005-12-07-Is the Google way the way for machine learning?

10 0.075562701 41 hunch net-2005-03-15-The State of Tight Bounds

11 0.071032278 45 hunch net-2005-03-22-Active learning

12 0.068548232 213 hunch net-2006-10-08-Incompatibilities between classical confidence intervals and learning.

13 0.067560822 310 hunch net-2008-07-15-Interesting papers at COLT (and a bit of UAI & workshops)

14 0.066521421 277 hunch net-2007-12-12-Workshop Summary—Principles of Learning Problem Design

15 0.066164419 265 hunch net-2007-10-14-NIPS workshp: Learning Problem Design

16 0.065588281 12 hunch net-2005-02-03-Learning Theory, by assumption

17 0.06552285 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”

18 0.064037144 127 hunch net-2005-11-02-Progress in Active Learning

19 0.063885763 289 hunch net-2008-02-17-The Meaning of Confidence

20 0.062499769 456 hunch net-2012-02-24-ICML+50%


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.117), (1, 0.034), (2, -0.044), (3, -0.075), (4, 0.068), (5, 0.064), (6, 0.045), (7, 0.023), (8, 0.048), (9, -0.066), (10, 0.041), (11, -0.033), (12, 0.001), (13, 0.059), (14, -0.044), (15, -0.092), (16, -0.071), (17, 0.084), (18, -0.104), (19, -0.093), (20, -0.033), (21, -0.058), (22, -0.035), (23, 0.074), (24, -0.018), (25, 0.02), (26, 0.053), (27, -0.037), (28, 0.041), (29, 0.021), (30, -0.017), (31, -0.008), (32, -0.013), (33, -0.064), (34, -0.063), (35, -0.048), (36, -0.044), (37, -0.018), (38, 0.138), (39, -0.067), (40, -0.031), (41, 0.039), (42, 0.012), (43, -0.041), (44, -0.034), (45, 0.019), (46, -0.053), (47, -0.025), (48, 0.015), (49, -0.001)]
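The 50 (topicId, topicWeight) pairs above look like a projection of the post onto 50 latent dimensions. A minimal sketch follows, assuming LSI here means truncated SVD of the tfidf matrix (the standard construction); the component count is read off the table, everything else is an assumption.

```python
# Hypothetical LSI step: project tfidf vectors onto latent topics, then
# rank posts by cosine similarity in the reduced space.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lsi_topic_weights(posts, n_topics=50):
    """Rows are per-post topic-weight vectors like the table above."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)
    return TruncatedSVD(n_components=n_topics).fit_transform(tfidf)

# Usage sketch: similarities of every post to post q.
# sims = cosine_similarity(lsi_topic_weights(posts))[q]
```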

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96869409 319 hunch net-2008-10-01-NIPS 2008 workshop on ‘Learning over Empirical Hypothesis Spaces’

Introduction: This workshop asks for insights into how far we can push the theoretical boundary of using data in the design of learning machines. Can we express our classification rule in terms of the sample, or do we have to stick to a core assumption of classical statistical learning theory, namely that the hypothesis space is to be defined independently of the sample? This workshop is particularly interested in – but not restricted to – the ‘luckiness framework’ and the recently introduced notion of ‘compatibility functions’ in a semi-supervised learning context (more information can be found at http://www.kuleuven.be/wehys).

2 0.68733287 321 hunch net-2008-10-19-NIPS 2008 workshop on Kernel Learning

Introduction: We’d like to invite hunch.net readers to participate in the NIPS 2008 workshop on kernel learning. While the main focus is on automatically learning kernels from data, we are also looking at the broader questions of feature selection, multi-task learning, and multi-view learning. There are no restrictions on the learning problem being addressed (regression, classification, etc.), and both theoretical and applied work will be considered. The deadline for submissions is October 24. More detail can be found here. Corinna Cortes, Arthur Gretton, Gert Lanckriet, Mehryar Mohri, Afshin Rostamizadeh

3 0.68281704 198 hunch net-2006-07-25-Upcoming conference

Introduction: The Workshop for Women in Machine Learning will be held in San Diego on October 4, 2006. For details see the workshop website: http://www.seas.upenn.edu/~wiml/

4 0.65508229 234 hunch net-2007-02-22-Create Your Own ICML Workshop

Introduction: As usual, ICML 2007 will be hosting a workshop program, to be held this year on June 24th. The success of the program depends on having researchers like you propose interesting workshop topics and then organize the workshops. I’d like to encourage all of you to consider sending a workshop proposal. The proposal deadline has been extended to March 5. See the workshop website for details. Organizing a workshop is a unique way to gather an international group of researchers together to focus for an entire day on a topic of your choosing. I’ve always found that the cost of organizing a workshop is not so large, and very low compared to the benefits. The topic and format of a workshop are limited only by your imagination (and the attractiveness to potential participants) and need not follow the usual model of a mini-conference on a particular ML sub-area. Hope to see some interesting proposals rolling in.

5 0.62546283 455 hunch net-2012-02-20-Berkeley Streaming Data Workshop

Introduction: The From Data to Knowledge workshop, May 7-11 at Berkeley, should be of interest to the many people encountering streaming data in different disciplines. It’s run by a group of astronomers who encounter streaming data all the time. I met Josh Bloom recently and he is broadly interested in a workshop covering all aspects of Machine Learning on streaming data. The hope here is that techniques developed in one area turn out useful in another, which seems quite plausible. Particularly if you are in the Bay Area, consider checking it out.

6 0.58541954 265 hunch net-2007-10-14-NIPS workshp: Learning Problem Design

7 0.55974185 277 hunch net-2007-12-12-Workshop Summary—Principles of Learning Problem Design

8 0.50480741 444 hunch net-2011-09-07-KDD and MUCMD 2011

9 0.48296916 124 hunch net-2005-10-19-Workshop: Atomic Learning

10 0.46990246 310 hunch net-2008-07-15-Interesting papers at COLT (and a bit of UAI & workshops)

11 0.45450017 476 hunch net-2012-12-29-Simons Institute Big Data Program

12 0.44638312 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

13 0.44271857 45 hunch net-2005-03-22-Active learning

14 0.44231778 420 hunch net-2010-12-26-NIPS 2010

15 0.43967003 433 hunch net-2011-04-23-ICML workshops due

16 0.43675217 113 hunch net-2005-09-19-NIPS Workshops

17 0.43635434 456 hunch net-2012-02-24-ICML+50%

18 0.42662328 136 hunch net-2005-12-07-Is the Google way the way for machine learning?

19 0.42092717 345 hunch net-2009-03-08-Prediction Science

20 0.41804472 80 hunch net-2005-06-10-Workshops are not Conferences


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.147), (95, 0.713)]
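Here the post loads on only two topics, 27 and 95, implying a model with at least 96 topics; the remaining weights are presumably near zero and were dropped. A sketch under the same caveats as above, with the topic count an arbitrary assumption:

```python
# Hypothetical LDA step: fit a topic model on word counts and represent
# each post as a topic mixture like the sparse (topicId, topicWeight)
# pairs above.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lda_topic_weights(posts, n_topics=100):
    """Rows are per-post topic distributions (each row sums to 1)."""
    counts = CountVectorizer(stop_words="english").fit_transform(posts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    return lda.fit_transform(counts)

# Usage sketch: sims = cosine_similarity(lda_topic_weights(posts))[q]
```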

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98358107 319 hunch net-2008-10-01-NIPS 2008 workshop on ‘Learning over Empirical Hypothesis Spaces’

Introduction: This workshop asks for insights into how far we can push the theoretical boundary of using data in the design of learning machines. Can we express our classification rule in terms of the sample, or do we have to stick to a core assumption of classical statistical learning theory, namely that the hypothesis space is to be defined independently of the sample? This workshop is particularly interested in – but not restricted to – the ‘luckiness framework’ and the recently introduced notion of ‘compatibility functions’ in a semi-supervised learning context (more information can be found at http://www.kuleuven.be/wehys).

2 0.97307742 479 hunch net-2013-01-31-Remote large scale learning class participation

Introduction: Yann and I have arranged things so that people who are interested in our large scale machine learning class and not able to attend in person can follow along via two methods. Videos will be posted with about a 1-day delay on techtalks. This is a side-by-side capture of video+slides from Weyond. We are experimenting with Piazza as a discussion forum. Anyone is welcome to subscribe to Piazza and ask questions there, where I will be monitoring things. Update 2: Sign up here. The first lecture is up now, including the revised version of the slides, which fixes a few typos and rounds out references.

3 0.96724743 390 hunch net-2010-03-12-Netflix Challenge 2 Canceled

Introduction: The second Netflix prize is canceled due to privacy problems. I continue to believe my original assessment of this paper, that the privacy break was somewhat overstated. I still haven’t seen any serious privacy failures on the scale of the AOL search log release. I expect privacy concerns to continue to be a big issue when dealing with data releases by companies or governments. The theory of maintaining privacy while using data is improving, but it is not yet in a state where the limits of what’s possible are clear, let alone how to achieve these limits in a manner friendly to a prediction competition.

4 0.90221971 30 hunch net-2005-02-25-Why Papers?

Introduction: Makc asked a good question in the comments: “Why bother to make a paper, at all?” There are several reasons for writing papers which may not be immediately obvious to people not in academia. The basic idea is that papers have considerably more utility than the obvious “present an idea”. Papers are formalized units of work. Academics (especially young ones) are often judged on the number of papers they produce. Papers have a formalized method of citing and crediting others: the bibliography. Academics (especially older ones) are often judged on the number of citations they receive. Papers enable a “more fair” anonymous review. Conferences receive many papers, from which a subset are selected. Discussion forums are inherently not anonymous for anyone who wants to build a reputation for good work. Papers are an excuse to meet your friends. Papers are the content of conferences, but much of what you do is talk to friends about interesting problems while there. Sometimes yo

5 0.8769297 389 hunch net-2010-02-26-Yahoo! ML events

Introduction: Yahoo! is sponsoring two machine learning events that might interest people. The Key Scientific Challenges program (due March 5) for Machine Learning and Statistics offers $5K (plus bonuses) for graduate students working on a core problem of interest to Y! If you are already working on one of these problems, there is no reason not to submit, and if you aren’t, you might want to think about it for next year, as I am confident they all press the boundary of the possible in Machine Learning. There are 7 days left. The Learning to Rank challenge (due May 31) offers an $8K first prize for the best ranking algorithm on a real (and really used) dataset for search ranking, with presentations at an ICML workshop. Unlike the Netflix competition, there are prizes for 2nd, 3rd, and 4th place, perhaps avoiding the heartbreak the Ensemble encountered. If you think you know how to rank, you should give it a try, and we might all learn something. There are 3 months left.

6 0.83693129 456 hunch net-2012-02-24-ICML+50%

7 0.83274263 127 hunch net-2005-11-02-Progress in Active Learning

8 0.75208515 344 hunch net-2009-02-22-Effective Research Funding

9 0.73864806 373 hunch net-2009-10-03-Static vs. Dynamic multiclass prediction

10 0.66279614 462 hunch net-2012-04-20-Both new: STOC workshops and NEML

11 0.61444539 234 hunch net-2007-02-22-Create Your Own ICML Workshop

12 0.59481567 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

13 0.52486122 7 hunch net-2005-01-31-Watchword: Assumption

14 0.4793357 275 hunch net-2007-11-29-The Netflix Crack

15 0.47620183 455 hunch net-2012-02-20-Berkeley Streaming Data Workshop

16 0.45039999 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

17 0.44936007 290 hunch net-2008-02-27-The Stats Handicap

18 0.43606278 464 hunch net-2012-05-03-Microsoft Research, New York City

19 0.43584001 36 hunch net-2005-03-05-Funding Research

20 0.4313575 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class