hunch_net hunch_net-2010 hunch_net-2010-412 knowledge-graph by maker-knowledge-mining

412 hunch net-2010-09-28-Machined Learnings


meta info for this blog

Source: html

Introduction: Paul Mineiro has started Machined Learnings where he’s seriously attempting to do ML research in public. I personally need to read through in greater detail, as much of it is learning reduction related, trying to deal with the sorts of complex source problems that come up in practice.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Paul Mineiro has started Machined Learnings where he’s seriously attempting to do ML research in public. [sent-1, score-0.933]

2 I personally need to read through in greater detail, as much of it is learning reduction related, trying to deal with the sorts of complex source problems that come up in practice. [sent-2, score-2.393]
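The scores above are tf-idf sentence weights. A minimal sketch of this style of scoring, assuming scikit-learn (the page does not specify the actual implementation):

```python
# A minimal sketch of tf-idf sentence scoring for an extractive summary.
# Assumes scikit-learn; the real scoring pipeline behind the list above
# is not included in this page.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "Paul Mineiro has started Machined Learnings where he's seriously "
    "attempting to do ML research in public.",
    "I personally need to read through in greater detail, as much of it is "
    "learning reduction related, trying to deal with the sorts of complex "
    "source problems that come up in practice.",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(sentences)

# Score each sentence by the sum of its tf-idf weights.
scores = matrix.sum(axis=1).A1
for i, score in enumerate(scores, start=1):
    print(f"sent-{i} score {score:.3f}")
```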


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('attempting', 0.334), ('paul', 0.304), ('seriously', 0.304), ('detail', 0.296), ('sorts', 0.26), ('personally', 0.247), ('greater', 0.239), ('read', 0.208), ('complex', 0.203), ('started', 0.203), ('reduction', 0.201), ('practice', 0.197), ('ml', 0.189), ('source', 0.182), ('trying', 0.178), ('deal', 0.169), ('related', 0.159), ('need', 0.155), ('come', 0.15), ('research', 0.092), ('problems', 0.091), ('much', 0.085), ('learning', 0.025)]
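A minimal sketch of producing such (wordName, wordTfidf) pairs, assuming scikit-learn; the posts below are hypothetical stand-ins for the hunch.net archive:

```python
# Sketch of extracting a post's top tf-idf words. Assumes scikit-learn;
# `posts` is a tiny stand-in for the full blog archive.
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "Paul Mineiro has started Machined Learnings, doing ML research in public.",
    "Presentation of proofs is hard: too much or too little detail.",
    "Researchers decompose big problems into subproblems, sometimes badly.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(posts)
words = vectorizer.get_feature_names_out()

# (wordName, wordTfidf) pairs for the first post, highest weight first.
row = matrix[0].toarray().ravel()
top = row.argsort()[::-1]
print([(words[i], round(row[i], 3)) for i in top if row[i] > 0])
```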

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 412 hunch net-2010-09-28-Machined Learnings

Introduction: Paul Mineiro has started Machined Learnings where he’s seriously attempting to do ML research in public. I personally need to read through in greater detail, as much of it is learning reduction related, trying to deal with the sorts of complex source problems that come up in practice.

2 0.13750234 187 hunch net-2006-06-25-Presentation of Proofs is Hard.

Introduction: When presenting part of the Reinforcement Learning theory tutorial at ICML 2006, I was forcibly reminded of this. There are several difficulties. When creating the presentation, the correct level of detail is tricky. With too much detail, the proof takes too much time and people may be lost to boredom. With too little detail, the steps of the proof involve too-great a jump. This is very difficult to judge. What may be an easy step in the careful thought of a quiet room is not so easy when you are occupied by the process of presentation. What may be easy after having gone over this (and other) proofs is not so easy to follow in the first pass by a viewer. These problems seem only correctable by process of repeated test-and-revise. When presenting the proof, simply speaking with sufficient precision is substantially harder than in normal conversation (where precision is not so critical). Practice can help here. When presenting the proof, going at the right p

3 0.12247958 370 hunch net-2009-09-18-Necessary and Sufficient Research

Introduction: Researchers are typically confronted with big problems that they have no idea how to solve. In trying to come up with a solution, a natural approach is to decompose the big problem into a set of subproblems whose solution yields a solution to the larger problem. This approach can go wrong in several ways. Decomposition failure. The solution to the decomposition does not in fact yield a solution to the overall problem. Artificial hardness. The subproblems created are sufficient if solved to solve the overall problem, but they are harder than necessary. As you can see, computational complexity forms a relatively new (in research-history) razor by which to judge an approach sufficient but not necessary. In my experience, the artificial hardness problem is very common. Many researchers abdicate the responsibility of choosing a problem to work on to other people. This process starts very naturally as a graduate student, when an incoming student might have relatively l

4 0.10725865 328 hunch net-2008-11-26-Efficient Reinforcement Learning in MDPs

Introduction: Claude Sammut is attempting to put together an Encyclopedia of Machine Learning. I volunteered to write one article on Efficient RL in MDPs, which I would like to invite comment on. Is something critical missing?

5 0.10276197 351 hunch net-2009-05-02-Wielding a New Abstraction

Introduction: This post is partly meant as an advertisement for the reductions tutorial Alina, Bianca, and I are planning to do at ICML. Please come, if you are interested. Many research programs can be thought of as finding and building new useful abstractions. The running example I’ll use is learning reductions where I have experience. The basic abstraction here is that we can build a learning algorithm capable of solving classification problems up to a small expected regret. This is used repeatedly to solve more complex problems. In working on a new abstraction, I think you typically run into many substantial problems of understanding, which make publishing particularly difficult. It is difficult to seriously discuss the reason behind or mechanism for abstraction in a conference paper with small page limits. People rarely see such discussions and hence have little basis on which to think about new abstractions. Another difficulty is that when building an abstraction, yo

6 0.087962419 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

7 0.083962545 181 hunch net-2006-05-23-What is the best regret transform reduction from multiclass to binary?

8 0.081453986 22 hunch net-2005-02-18-What it means to do research.

9 0.080861919 166 hunch net-2006-03-24-NLPers

10 0.074096441 14 hunch net-2005-02-07-The State of the Reduction

11 0.073305458 59 hunch net-2005-04-22-New Blog: [Lowerbounds,Upperbounds]

12 0.070472024 170 hunch net-2006-04-06-Bounds greater than 1

13 0.070324764 142 hunch net-2005-12-22-Yes, I am applying

14 0.070260853 10 hunch net-2005-02-02-Kolmogorov Complexity and Googling

15 0.066739708 313 hunch net-2008-08-18-Radford Neal starts a blog

16 0.066417493 448 hunch net-2011-10-24-2011 ML symposium and the bears

17 0.066003658 441 hunch net-2011-08-15-Vowpal Wabbit 6.0

18 0.065528348 49 hunch net-2005-03-30-What can Type Theory teach us about Machine Learning?

19 0.062233903 164 hunch net-2006-03-17-Multitask learning is Black-Boxable

20 0.058845095 51 hunch net-2005-04-01-The Producer-Consumer Model of Research
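The simIndex/simValue rankings above are consistent with cosine similarity between per-post tf-idf vectors. A hedged sketch of that ranking step, again assuming scikit-learn and a stand-in corpus:

```python
# Sketch of ranking posts by cosine similarity of their tf-idf vectors.
# Assumes scikit-learn; whether the original pipeline used exactly this
# similarity measure is an assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "Machined Learnings: doing ML research in public.",
    "Presentation of proofs is hard.",
    "Necessary and sufficient research on decomposing big problems.",
]

matrix = TfidfVectorizer().fit_transform(posts)

# Similarity of every post to post 0; post 0 itself scores ~1.0,
# matching the same-blog row above.
sims = cosine_similarity(matrix[0], matrix).ravel()
for idx in sims.argsort()[::-1]:
    print(f"simIndex {idx} simValue {sims[idx]:.8f}")
```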


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.114), (1, 0.002), (2, -0.069), (3, 0.048), (4, -0.045), (5, -0.002), (6, 0.063), (7, -0.085), (8, -0.052), (9, -0.01), (10, -0.026), (11, -0.033), (12, 0.02), (13, 0.057), (14, -0.026), (15, 0.026), (16, 0.044), (17, 0.058), (18, -0.059), (19, -0.065), (20, 0.03), (21, 0.018), (22, 0.014), (23, -0.007), (24, 0.027), (25, -0.119), (26, -0.07), (27, 0.014), (28, -0.036), (29, 0.069), (30, 0.022), (31, -0.03), (32, -0.03), (33, 0.02), (34, -0.051), (35, -0.007), (36, 0.013), (37, -0.0), (38, 0.001), (39, 0.063), (40, 0.014), (41, 0.048), (42, -0.012), (43, 0.032), (44, -0.01), (45, -0.102), (46, -0.029), (47, 0.147), (48, 0.039), (49, 0.064)]
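The fifty (topicId, topicWeight) pairs above suggest a 50-dimensional LSI space. A minimal sketch of fitting one, assuming scikit-learn's TruncatedSVD over tf-idf vectors; the component count and corpus here are stand-ins:

```python
# Sketch of LSI-style document topic weights via truncated SVD of tf-idf
# vectors. Assumes scikit-learn; the real model above appears to use 50
# components, but this tiny stand-in corpus only supports 2.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "Machined Learnings: doing ML research in public.",
    "Presentation of proofs is hard at conferences.",
    "Necessary and sufficient research on decomposing big problems.",
]

matrix = TfidfVectorizer().fit_transform(posts)
lsi = TruncatedSVD(n_components=2, random_state=0)
doc_topics = lsi.fit_transform(matrix)

# (topicId, topicWeight) pairs for the first post.
print([(t, round(w, 3)) for t, w in enumerate(doc_topics[0])])
```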

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97950464 412 hunch net-2010-09-28-Machined Learnings

Introduction: Paul Mineiro has started Machined Learnings where he’s seriously attempting to do ML research in public. I personally need to read through in greater detail, as much of it is learning reduction related, trying to deal with the sorts of complex source problems that come up in practice.

2 0.59273344 29 hunch net-2005-02-25-Solution: Reinforcement Learning with Classification

Introduction: I realized that the tools needed to solve the problem just posted were just created. I tried to sketch out the solution here (also in .lyx and .tex). It is still quite sketchy (and probably only the few people who understand reductions well can follow). One of the reasons why I started this weblog was to experiment with “research in the open”, and this is an opportunity to do so. Over the next few days, I’ll be filling in details and trying to get things to make sense. If you have additions or ideas, please propose them.

3 0.56425107 313 hunch net-2008-08-18-Radford Neal starts a blog

Introduction: here on statistics, ML, CS, and other things he knows well.

4 0.51575553 384 hunch net-2009-12-24-Top graduates this season

Introduction: I would like to point out 3 graduates this season as having my confidence they are capable of doing great things. Daniel Hsu has diverse papers with diverse coauthors on {active learning, multilabeling, temporal learning, …} each covering new algorithms and methods of analysis. He is also a capable programmer, having helped me with some nitty-gritty details of cluster parallel Vowpal Wabbit this summer. He has an excellent tendency to just get things done. Nicolas Lambert doesn’t nominally work in machine learning, but I’ve found his work in elicitation relevant nevertheless. In essence, elicitable properties are closely related to learnable properties, and the elicitation complexity is related to a notion of learning complexity. See the Surrogate regret bounds paper for some related discussion. Few people successfully work at such a general level that it crosses fields, but he’s one of them. Yisong Yue is deeply focused on interactive learning, which he has a

5 0.51118106 296 hunch net-2008-04-21-The Science 2.0 article

Introduction: I found the article about science using modern tools interesting, especially the part about ‘blogophobia’, which in my experience is often a substantial issue: many potential guest posters aren’t quite ready, because of the fear of a permanent public mistake, because it is particularly hard to write about the unknown (the essence of research), and because the system for public credit doesn’t yet really handle blog posts. So far, science has been relatively resistant to discussing research on blogs. Some things need to change to get there. Public tolerance of the occasional mistake is essential, as is a willingness to cite (and credit) blogs as freely as papers. I’ve often run into another reason for holding back myself: I don’t want to overtalk my own research. Nevertheless, I’m slowly changing to the opinion that I’m holding back too much: the real power of a blog in research is that it can be used to confer with many people, and that just makes research work better.

6 0.48553166 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

7 0.48068538 49 hunch net-2005-03-30-What can Type Theory teach us about Machine Learning?

8 0.47762501 22 hunch net-2005-02-18-What it means to do research.

9 0.46652943 76 hunch net-2005-05-29-Bad ideas

10 0.45328993 370 hunch net-2009-09-18-Necessary and Sufficient Research

11 0.44401401 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

12 0.44061792 270 hunch net-2007-11-02-The Machine Learning Award goes to …

13 0.44033274 2 hunch net-2005-01-24-Holy grails of machine learning?

14 0.43790674 187 hunch net-2006-06-25-Presentation of Proofs is Hard.

15 0.43488777 464 hunch net-2012-05-03-Microsoft Research, New York City

16 0.4340812 255 hunch net-2007-07-13-The View From China

17 0.43007812 351 hunch net-2009-05-02-Wielding a New Abstraction

18 0.42904359 386 hunch net-2010-01-13-Sam Roweis died

19 0.42798069 414 hunch net-2010-10-17-Partha Niyogi has died

20 0.42014125 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.195), (53, 0.127), (61, 0.5)]
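LDA gives each post a sparse distribution over topics, which is why only a few (topicId, topicWeight) pairs appear above. A minimal sketch, assuming scikit-learn's LatentDirichletAllocation; the topic count and corpus are stand-ins:

```python
# Sketch of per-post LDA topic weights. Assumes scikit-learn; the real model
# evidently has at least 62 topics (topicId 61 above), this stand-in uses 3.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "Machined Learnings: doing ML research in public.",
    "Presentation of proofs is hard at conferences.",
    "Necessary and sufficient research on decomposing big problems.",
]

counts = CountVectorizer().fit_transform(posts)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # each row sums to 1

# Report only topics above a small threshold, as in the list above.
print([(t, round(w, 3)) for t, w in enumerate(doc_topics[0]) if w > 0.05])
```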

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.77871245 412 hunch net-2010-09-28-Machined Learnings

Introduction: Paul Mineiro has started Machined Learnings where he’s seriously attempting to do ML research in public. I personally need to read through in greater detail, as much of it is learning reduction related, trying to deal with the sorts of complex source problems that come up in practice.

2 0.67136896 106 hunch net-2005-09-04-Science in the Government

Introduction: I found the article on “Political Science” at the New York Times interesting. Essentially the article is about allegations that the US government has been systematically distorting scientific views. With a petition by some 7000+ scientists alleging such behavior this is clearly a significant concern. One thing not mentioned explicitly in this discussion is that there are fundamental cultural differences between academic research and the rest of the world. In academic research, careful, clear thought is valued. This value is achieved by both formal and informal mechanisms. One example of a formal mechanism is peer review. In contrast, in the land of politics, the basic value is agreement. It is only with some amount of agreement that a new law can be passed or other actions can be taken. Since Science (with a capital ‘S’) has accomplished many things, it can be a significant tool in persuading people. This makes it compelling for a politician to use science as a mec

3 0.62714112 416 hunch net-2010-10-29-To Vidoelecture or not

Introduction: (update: cross-posted on CACM) For the first time in several years, ICML 2010 did not have videolectures attending. Luckily, the tutorial on exploration and learning which Alina and I put together can be viewed, since we also presented at KDD 2010, which included videolecture support. ICML didn’t cover the cost of a videolecture, because PASCAL didn’t provide a grant for it this year. On the other hand, KDD covered it out of registration costs. The cost of videolectures isn’t cheap. For a workshop the baseline quote we have is 270 euro per hour, plus a similar cost for the cameraman’s travel and accommodation. This can be reduced substantially by having a volunteer with a camera handle the cameraman duties, uploading the video and slides to be processed for a quoted 216 euro per hour. Youtube is the most predominant free video site with a cost of $0, but it turns out to be a poor alternative. 15 minute upload limits do not match typical talk lengths.

4 0.54899186 201 hunch net-2006-08-07-The Call of the Deep

Introduction: Many learning algorithms used in practice are fairly simple. Viewed representationally, many prediction algorithms either compute a linear separator of basic features (perceptron, winnow, weighted majority, SVM) or perhaps a linear separator of slightly more complex features (2-layer neural networks or kernelized SVMs). Should we go beyond this, and start using “deep” representations? What is deep learning? Intuitively, deep learning is about learning to predict in ways which can involve complex dependencies between the input (observed) features. Specifying this more rigorously turns out to be rather difficult. Consider the following cases: SVM with Gaussian Kernel. This is not considered deep learning, because an SVM with a gaussian kernel can’t succinctly represent certain decision surfaces. One of Yann LeCun’s examples is recognizing objects based on pixel values. An SVM will need a new support vector for each significantly different background. Since the number

5 0.39810497 6 hunch net-2005-01-27-Learning Complete Problems

Introduction: Let’s define a learning problem as making predictions given past data. There are several ways to attack the learning problem which seem to be equivalent to solving the learning problem. Find the Invariant This viewpoint says that learning is all about learning (or incorporating) transformations of objects that do not change the correct prediction. The best possible invariant is the one which says “all things of the same class are the same”. Finding this is equivalent to learning. This viewpoint is particularly common when working with image features. Feature Selection This viewpoint says that the way to learn is by finding the right features to input to a learning algorithm. The best feature is the one which is the class to predict. Finding this is equivalent to learning for all reasonable learning algorithms. This viewpoint is common in several applications of machine learning. See Gilad’s and Bianca’s comments. Find the Representation This is almost the same a

6 0.39135665 483 hunch net-2013-06-10-The Large Scale Learning class notes

7 0.38414311 227 hunch net-2007-01-10-A Deep Belief Net Learning Problem

8 0.38279158 152 hunch net-2006-01-30-Should the Input Representation be a Vector?

9 0.38216224 60 hunch net-2005-04-23-Advantages and Disadvantages of Bayesian Learning

10 0.37976512 9 hunch net-2005-02-01-Watchword: Loss

11 0.37938532 158 hunch net-2006-02-24-A Fundamentalist Organization of Machine Learning

12 0.377749 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

13 0.37691477 67 hunch net-2005-05-06-Don’t mix the solution into the problem

14 0.37472695 41 hunch net-2005-03-15-The State of Tight Bounds

15 0.37458235 27 hunch net-2005-02-23-Problem: Reinforcement Learning with Classification

16 0.37439787 12 hunch net-2005-02-03-Learning Theory, by assumption

17 0.37418139 347 hunch net-2009-03-26-Machine Learning is too easy

18 0.37415677 131 hunch net-2005-11-16-The Everything Ensemble Edge

19 0.37278551 367 hunch net-2009-08-16-Centmail comments

20 0.37153113 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning