hunch_net hunch_net-2009 hunch_net-2009-384 knowledge-graph by maker-knowledge-mining

384 hunch net-2009-12-24-Top graduates this season


meta info for this blog

Source: html

Introduction: I would like to point out 3 graduates this season as having my confidence they are capable of doing great things. Daniel Hsu has diverse papers with diverse coauthors on {active learning, multilabeling, temporal learning, …} each covering new algorithms and methods of analysis. He is also a capable programmer, having helped me with some nitty-gritty details of cluster parallel Vowpal Wabbit this summer. He has an excellent tendency to just get things done. Nicolas Lambert doesn’t nominally work in machine learning, but I’ve found his work in elicitation relevant nevertheless. In essence, elicitable properties are closely related to learnable properties, and the elicitation complexity is related to a notion of learning complexity. See the Surrogate regret bounds paper for some related discussion. Few people successfully work at such a general level that it crosses fields, but he’s one of them. Yisong Yue is deeply focused on interactive learning, which he has attacked at all levels: theory, algorithm adaptation, programming, and popular description. I’ve seen a relentless multidimensional focus on a new real-world problem be an excellent strategy for research and expect he’ll succeed. The obvious caveat applies—I don’t know or haven’t fully appreciated everyone’s work so I’m sure I missed people. I’d like to particularly point out Percy Liang and David Sontag as plausibly such whom I’m sure others appreciate a great deal.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 I would like to point out 3 graduates this season as having my confidence they are capable of doing great things. [sent-1, score-0.603]

2 Daniel Hsu has diverse papers with diverse coauthors on {active learning, multilabeling, temporal learning, …} each covering new algorithms and methods of analysis. [sent-2, score-0.721]

3 He is also a capable programmer, having helped me with some nitty-gritty details of cluster parallel Vowpal Wabbit this summer. [sent-3, score-0.354]

4 He has an excellent tendency to just get things done. [sent-4, score-0.295]

5 Nicolas Lambert doesn’t nominally work in machine learning, but I’ve found his work in elicitation relevant nevertheless. [sent-5, score-0.664]

6 In essence, elicitable properties are closely related to learnable properties, and the elicitation complexity is related to a notion of learning complexity. [sent-6, score-1.039]

7 See the Surrogate regret bounds paper for some related discussion. [sent-7, score-0.17]

8 Few people successfully work at such a general level that it crosses fields, but he’s one of them. [sent-8, score-0.252]

9 Yisong Yue is deeply focused on interactive learning, which he has attacked at all levels: theory, algorithm adaptation, programming, and popular description. [sent-9, score-0.399]

10 I’ve seen a relentless multidimensional focus on a new real-world problem be an excellent strategy for research and expect he’ll succeed. [sent-10, score-0.439]

11 The obvious caveat applies—I don’t know or haven’t fully appreciated everyone’s work so I’m sure I missed people. [sent-11, score-0.598]

12 I’d like to particularly point out Percy Liang and David Sontag as plausibly such whom I’m sure others appreciate a great deal. [sent-12, score-0.312]
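One plausible way to produce sentence scores like those above (a sketch, not the mining pipeline's actual code) is to fit a tfidf model over the post's sentences and score each sentence by the total tfidf weight of its terms:

```python
# Hypothetical reconstruction of the sentence scoring above: a sentence's
# score is the sum of the tfidf weights of its terms. The function and
# variable names are illustrative, not taken from the pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer

def score_sentences(sentences):
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = tfidf.sum(axis=1).A1   # total tfidf weight per sentence
    return sorted(zip(scores, sentences), reverse=True)

post = [
    "I would like to point out 3 graduates this season as having my confidence they are capable of doing great things.",
    "He has an excellent tendency to just get things done.",
]
for score, sentence in score_sentences(post):
    print(f"{score:.3f}  {sentence}")
```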


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('elicitation', 0.261), ('properties', 0.206), ('excellent', 0.189), ('diverse', 0.183), ('related', 0.17), ('capable', 0.166), ('nominally', 0.149), ('adaptation', 0.149), ('sontag', 0.149), ('multidimensional', 0.149), ('percy', 0.149), ('nicolas', 0.138), ('surrogate', 0.138), ('lambert', 0.138), ('temporal', 0.138), ('sure', 0.131), ('yisong', 0.131), ('yue', 0.131), ('graduates', 0.131), ('work', 0.127), ('season', 0.125), ('programmer', 0.125), ('appreciated', 0.125), ('successfully', 0.125), ('coauthors', 0.12), ('learnable', 0.12), ('popular', 0.12), ('levels', 0.115), ('closely', 0.112), ('caveat', 0.112), ('tendency', 0.106), ('missed', 0.103), ('hsu', 0.103), ('strategy', 0.101), ('essence', 0.099), ('covering', 0.097), ('interactive', 0.097), ('cluster', 0.097), ('applies', 0.095), ('daniel', 0.093), ('description', 0.091), ('fields', 0.091), ('helped', 0.091), ('deeply', 0.091), ('vowpal', 0.091), ('wabbit', 0.091), ('great', 0.091), ('ve', 0.09), ('confidence', 0.09), ('appreciate', 0.09)]
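The (word, weight) pairs above could come from reading off the heaviest entries of this post's tfidf vector, fit against a corpus of posts. A minimal sketch under that assumption; the placeholder corpus and the top-50 cutoff are mine, and `get_feature_names_out` needs scikit-learn >= 1.0:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder corpus; the real pipeline presumably fits over all hunch.net posts.
corpus = ["full text of post 384 ...", "full text of post 376 ...", "..."]
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

weights = tfidf[0].toarray().ravel()      # this post's tfidf vector
terms = vectorizer.get_feature_names_out()
top = np.argsort(weights)[::-1][:50]      # indices of the 50 heaviest terms
print([(terms[i], round(weights[i], 3)) for i in top])
```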

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 384 hunch net-2009-12-24-Top graduates this season


2 0.15216003 376 hunch net-2009-11-06-Yisong Yue on Self-improving Systems

Introduction: I’d like to point out Yisong Yue’s post on Self-improving systems, which is a nicely readable description of the necessity and potential of interactive learning to deal with the information overload problem that is endemic to the modern internet.

3 0.11473976 361 hunch net-2009-06-24-Interesting papers at UAICMOLT 2009

Introduction: Here’s a list of papers that I found interesting at ICML / COLT / UAI in 2009. Elad Hazan and Comandur Seshadhri, Efficient learning algorithms for changing environments, at ICML. This paper shows how to adapt learning algorithms that compete with fixed predictors to compete with changing policies. The definition of regret they deal with seems particularly useful in many situations. Hal Daume, Unsupervised Search-based Structured Prediction, at ICML. This paper shows a technique for reducing unsupervised learning to supervised learning which (a) makes a fast unsupervised learning algorithm and (b) makes semisupervised learning both easy and highly effective. There were two papers with similar results on active learning in the KWIK framework for linear regression, both reducing the sample complexity to . One was Nicolo Cesa-Bianchi, Claudio Gentile, and Francesco Orabona, Robust Bounds for Classification via Selective Sampling, at ICML, and the other was Thoma

4 0.10119346 329 hunch net-2008-11-28-A Bumper Crop of Machine Learning Graduates

Introduction: My impression is that this is a particularly strong year for machine learning graduates. Here’s my short list of the strong graduates I know. Analpha (for perversity’s sake) by last name: Jenn Wortmann. When Jenn visited us for the summer, she had one, two, three, four papers. That is typical—she’s smart, capable, and follows up many directions of research. I believe approximately all of her many papers are on different subjects. Ruslan Salakhutdinov. A Science paper on bijective dimensionality reduction, mastered and improved on deep belief nets which seems like an important flavor of nonlinear learning, and in my experience he’s very fast, capable and creative at problem solving. Marc’Aurelio Ranzato. I haven’t spoken with Marc very much, but he had a great visit at Yahoo! this summer, and has an impressive portfolio of applications and improvements on convolutional neural networks and other deep learning algorithms. Lihong Li. Lihong developed the

5 0.095111214 432 hunch net-2011-04-20-The End of the Beginning of Active Learning

Introduction: This post is by Daniel Hsu and John Langford. In selective sampling style active learning, a learning algorithm chooses which examples to label. We now have an active learning algorithm that is: Efficient in label complexity, unlabeled complexity, and computational complexity. Competitive with supervised learning anywhere that supervised learning works. Compatible with online learning, with any optimization-based learning algorithm, with any loss function, with offline testing, and even with changing learning algorithms. Empirically effective. The basic idea is to combine disagreement region-based sampling with importance weighting: an example is selected to be labeled with probability proportional to how useful it is for distinguishing among near-optimal classifiers, and labeled examples are importance-weighted by the inverse of these probabilities. The combination of these simple ideas removes the sampling bias problem that has plagued many previous he
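The inverse-probability weighting described in that introduction is easy to sketch in isolation. This toy is mine, not the algorithm from that post; `query_prob` stands in for whatever probability the disagreement-based rule computes:

```python
import random

def maybe_query(example, query_prob):
    """Request a label with probability query_prob; if we do, weight the
    example by 1/query_prob so the labeled sample remains unbiased."""
    if random.random() < query_prob:
        return True, 1.0 / query_prob   # importance weight
    return False, 0.0                   # skipped, contributes nothing
```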

6 0.094920307 293 hunch net-2008-03-23-Interactive Machine Learning

7 0.090007983 347 hunch net-2009-03-26-Machine Learning is too easy

8 0.088029511 309 hunch net-2008-07-10-Interesting papers, ICML 2008

9 0.086665884 454 hunch net-2012-01-30-ICML Posters and Scope

10 0.085457131 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

11 0.083888575 41 hunch net-2005-03-15-The State of Tight Bounds

12 0.083308101 332 hunch net-2008-12-23-Use of Learning Theory

13 0.082631335 490 hunch net-2013-11-09-Graduates and Postdocs

14 0.081894711 279 hunch net-2007-12-19-Cool and interesting things seen at NIPS

15 0.078177914 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

16 0.078030817 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

17 0.076913297 346 hunch net-2009-03-18-Parallel ML primitives

18 0.074350193 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

19 0.072920315 215 hunch net-2006-10-22-Exemplar programming

20 0.071810387 213 hunch net-2006-10-08-Incompatibilities between classical confidence intervals and learning.
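The simValue numbers in this list are presumably cosine similarities between tfidf vectors (the same-blog entry scores 1.0, as the cosine similarity of a vector with itself would). A self-contained sketch under that assumption, with a placeholder corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder corpus standing in for the full set of posts.
corpus = ["text of post 384 ...", "text of post 376 ...", "text of post 361 ..."]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)

sims = cosine_similarity(tfidf[0], tfidf).ravel()  # this post vs. every post
for idx in sims.argsort()[::-1]:                   # highest similarity first
    print(f"{sims[idx]:.8f}  post index {idx}")
```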


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.184), (1, 0.001), (2, -0.052), (3, -0.01), (4, 0.063), (5, 0.014), (6, 0.015), (7, -0.054), (8, -0.047), (9, 0.002), (10, 0.049), (11, 0.005), (12, -0.037), (13, 0.059), (14, -0.027), (15, -0.048), (16, -0.009), (17, 0.072), (18, -0.004), (19, -0.058), (20, 0.062), (21, 0.111), (22, -0.042), (23, 0.058), (24, -0.022), (25, -0.013), (26, -0.044), (27, -0.001), (28, -0.039), (29, 0.071), (30, 0.031), (31, -0.028), (32, -0.069), (33, 0.008), (34, 0.058), (35, -0.086), (36, -0.014), (37, -0.152), (38, -0.057), (39, 0.096), (40, 0.048), (41, 0.043), (42, -0.041), (43, 0.108), (44, -0.093), (45, -0.018), (46, -0.129), (47, 0.035), (48, -0.009), (49, 0.019)]
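These 50 topic weights look like an LSI projection of the post's tfidf vector. A sketch using truncated SVD; the corpus is a placeholder, and the component count is kept tiny so the toy example runs (the dump itself shows 50 dimensions):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["text of post 384 ...", "text of post 376 ...", "text of post 361 ..."]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)

# The dump shows 50 lsi dimensions; 2 keeps this tiny example runnable.
lsi = TruncatedSVD(n_components=2, random_state=0)
topic_weights = lsi.fit_transform(tfidf)   # one row of weights per post
print(topic_weights[0].round(3))           # this post's lsi vector
```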

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95036185 384 hunch net-2009-12-24-Top graduates this season


2 0.6170128 376 hunch net-2009-11-06-Yisong Yue on Self-improving Systems


3 0.5995729 329 hunch net-2008-11-28-A Bumper Crop of Machine Learning Graduates


4 0.59614909 386 hunch net-2010-01-13-Sam Roweis died

Introduction: and I can’t help but remember him. I first met Sam as an undergraduate at Caltech where he was TA for Hopfield’s class, and again when I visited Gatsby, when he invited me to visit Toronto, and at too many conferences to recount. His personality was a combination of enthusiastic and thoughtful, with a great ability to phrase a problem so its solution must be understood. With respect to my own work, Sam was the one who advised me to make my first tutorial, leading to others, and to other things, all of which I’m grateful to him for. In fact, my every interaction with Sam was positive, and that was his way. His death is being called a suicide which is so incompatible with my understanding of Sam that it strains my credibility. But we know that his many responsibilities were great, and it is well understood that basically all sane researchers have legions of inner doubts. Having been depressed now and then myself, it’s helpful to understand at least intellectually

5 0.53604323 279 hunch net-2007-12-19-Cool and interesting things seen at NIPS

Introduction: I learned a number of things at NIPS. The financial people were there in greater force than previously. Two Sigma sponsored NIPS while DRW Trading had a booth. The adversarial machine learning workshop had a number of talks about interesting applications where an adversary really is out to try and mess up your learning algorithm. This is very different from the situation we often think of where the world is oblivious to our learning. This may present new and convincing applications for the learning-against-an-adversary work common at COLT. There were several interesting papers. Sanjoy Dasgupta, Daniel Hsu, and Claire Monteleoni had a paper on General Agnostic Active Learning. The basic idea is that active learning can be done via reduction to a form of supervised learning problem. This is great, because we have many supervised learning algorithms from which the benefits of active learning may be derived. Joseph Bradley and Robert Schapire had a P

6 0.51452434 361 hunch net-2009-06-24-Interesting papers at UAICMOLT 2009

7 0.50511819 412 hunch net-2010-09-28-Machined Learnings

8 0.5005002 366 hunch net-2009-08-03-Carbon in Computer Science Research

9 0.49685436 293 hunch net-2008-03-23-Interactive Machine Learning

10 0.4950493 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

11 0.47609529 406 hunch net-2010-08-22-KDD 2010

12 0.46478933 368 hunch net-2009-08-26-Another 10-year paper in Machine Learning

13 0.45757449 199 hunch net-2006-07-26-Two more UAI papers of interest

14 0.45714059 250 hunch net-2007-06-23-Machine Learning Jobs are Growing on Trees

15 0.44781917 421 hunch net-2011-01-03-Herman Goldstine 2011

16 0.44402552 432 hunch net-2011-04-20-The End of the Beginning of Active Learning

17 0.43705824 21 hunch net-2005-02-17-Learning Research Programs

18 0.43664822 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three

19 0.42654198 281 hunch net-2007-12-21-Vowpal Wabbit Code Release

20 0.42628562 310 hunch net-2008-07-15-Interesting papers at COLT (and a bit of UAI & workshops)


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.194), (33, 0.408), (38, 0.068), (53, 0.041), (54, 0.046), (55, 0.051), (94, 0.076), (95, 0.023)]
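The sparse (topicId, topicWeight) pairs above are consistent with an LDA topic mixture printed with the smallest weights dropped. A sketch along those lines; the corpus and topic count are assumptions, not the pipeline's settings:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["text of post 384 ...", "text of post 376 ...", "text of post 361 ..."]
counts = CountVectorizer(stop_words="english").fit_transform(corpus)

# Topic count is a guess; the topic ids above run at least to 95.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(counts)   # per-post topic mixtures; rows sum to 1
print(doc_topics[0].round(3))            # this post's topic weights
```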

similar blogs list:

simIndex simValue blogId blogTitle

1 0.90435165 61 hunch net-2005-04-25-Embeddings: what are they good for?

Introduction: I’ve been looking at some recent embeddings work, and am struck by how beautiful the theory and algorithms are. It also makes me wonder, what are embeddings good for? A few things immediately come to mind: (1) For visualization of high-dimensional data sets. In this case, one would like good algorithms for embedding specifically into 2- and 3-dimensional Euclidean spaces. (2) For nonparametric modeling. The usual nonparametric models (histograms, nearest neighbor) often require resources which are exponential in the dimension. So if the data actually lie close to some low-dimensional surface, it might be a good idea to first identify this surface and embed the data before applying the model. Incidentally, for applications like these, it’s important to have a functional mapping from high to low dimension, which some techniques do not yield up. (3) As a prelude to classifier learning. The hope here is presumably that learning will be easier in the low-dimensional space,

2 0.88050771 350 hunch net-2009-04-23-Jonathan Chang at Slycoder

Introduction: Jonathan Chang has a research blog on aspects of machine learning.

same-blog 3 0.84424645 384 hunch net-2009-12-24-Top graduates this season


4 0.73334706 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

Introduction: In recent years, there’s been an explosion of free educational resources that make high-level knowledge and skills accessible to an ever-wider group of people. In your own field, you probably have a good idea of where to look for the answer to any particular question. But outside your areas of expertise, sifting through textbooks, Wikipedia articles, research papers, and online lectures can be bewildering (unless you’re fortunate enough to have a knowledgeable colleague to consult). What are the key concepts in the field, how do they relate to each other, which ones should you learn, and where should you learn them? Courses are a major vehicle for packaging educational materials for a broad audience. The trouble is that they’re typically meant to be consumed linearly, regardless of your specific background or goals. Also, unless thousands of other people have had the same background and learning goals, there may not even be a course that fits your needs. Recently, we ( Roger Grosse

5 0.62350273 234 hunch net-2007-02-22-Create Your Own ICML Workshop

Introduction: As usual ICML 2007 will be hosting a workshop program to be held this year on June 24th. The success of the program depends on having researchers like you propose interesting workshop topics and then organize the workshops. I’d like to encourage all of you to consider sending a workshop proposal. The proposal deadline has been extended to March 5. See the workshop web-site for details. Organizing a workshop is a unique way to gather an international group of researchers together to focus for an entire day on a topic of your choosing. I’ve always found that the cost of organizing a workshop is not so large, and very low compared to the benefits. The topic and format of a workshop are limited only by your imagination (and the attractiveness to potential participants) and need not follow the usual model of a mini-conference on a particular ML sub-area. Hope to see some interesting proposals rolling in.

6 0.52044201 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

7 0.50756347 248 hunch net-2007-06-19-How is Compressed Sensing going to change Machine Learning ?

8 0.47806805 33 hunch net-2005-02-28-Regularization

9 0.47760424 14 hunch net-2005-02-07-The State of the Reduction

10 0.47723618 95 hunch net-2005-07-14-What Learning Theory might do

11 0.47549838 351 hunch net-2009-05-02-Wielding a New Abstraction

12 0.47507873 235 hunch net-2007-03-03-All Models of Learning have Flaws

13 0.47506639 131 hunch net-2005-11-16-The Everything Ensemble Edge

14 0.47406316 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”

15 0.47328436 259 hunch net-2007-08-19-Choice of Metrics

16 0.47282144 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

17 0.47269392 360 hunch net-2009-06-15-In Active Learning, the question changes

18 0.47269088 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

19 0.47234946 79 hunch net-2005-06-08-Question: “When is the right time to insert the loss function?”

20 0.47197869 41 hunch net-2005-03-15-The State of Tight Bounds