hunch_net hunch_net-2005 hunch_net-2005-13 knowledge-graph by maker-knowledge-mining

13 hunch net-2005-02-04-JMLG


meta info for this blog

Source: html

Introduction: The Journal of Machine Learning Gossip has some fine satire about learning research. In particular, the guides are amusing and remarkably true. As in all things, it’s easy to criticize the way things are and harder to make them better.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The Journal of Machine Learning Gossip has some fine satire about learning research. [sent-1, score-0.381]

2 In particular, the guides are amusing and remarkably true. [sent-2, score-1.275]

3 As in all things, it’s easy to criticize the way things are and harder to make them better. [sent-3, score-0.891]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('amusing', 0.489), ('guides', 0.408), ('remarkably', 0.378), ('journal', 0.346), ('fine', 0.323), ('harder', 0.258), ('things', 0.256), ('particular', 0.156), ('easy', 0.146), ('way', 0.123), ('better', 0.121), ('make', 0.108), ('machine', 0.071), ('learning', 0.058)]
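The similarity scores in the lists below are presumably cosine similarities between sparse tf-idf vectors like the one above. A minimal sketch of that computation (not the actual maker-knowledge-mining code; the dict representation and the toy `other` vector are illustrative assumptions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse tf-idf vectors,
    represented as {word: weight} dicts. Missing words count as 0."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)

# Top tf-idf weights for this post, taken from the list above (truncated).
post = {'amusing': 0.489, 'guides': 0.408, 'remarkably': 0.378,
        'journal': 0.346, 'fine': 0.323, 'machine': 0.071, 'learning': 0.058}

# A hypothetical other post sharing only a few terms -> low similarity,
# in line with the small simValue scores (~0.04-0.08) in the list below.
other = {'journal': 0.346, 'machine': 0.2, 'learning': 0.3, 'review': 0.5}
sim = cosine_similarity(post, other)
```

Because this post is only three sentences, it shares few high-weight terms with any other post, which would explain why even the closest matches below score under 0.1.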

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 13 hunch net-2005-02-04-JMLG

Introduction: The Journal of Machine Learning Gossip has some fine satire about learning research. In particular, the guides are amusing and remarkably true. As in all things, it’s easy to criticize the way things are and harder to make them better.

2 0.080907039 382 hunch net-2009-12-09-Future Publication Models @ NIPS

Introduction: Yesterday, there was a discussion about future publication models at NIPS . Yann and Zoubin have specific detailed proposals which I’ll add links to when I get them ( Yann’s proposal and Zoubin’s proposal ). What struck me about the discussion is that there are many simultaneous concerns as well as many simultaneous proposals, which makes it difficult to keep all the distinctions straight in a verbal conversation. It also seemed like people were serious enough about this that we may see some real movement. Certainly, my personal experience motivates that as I’ve posted many times about the substantial flaws in our review process, including some very poor personal experiences. Concerns include the following: (Several) Reviewers are overloaded, boosting the noise in decision making. ( Yann ) A new system should run with as little built-in delay and friction to the process of research as possible. ( Hanna Wallach (updated)) Double-blind review is particularly impor

3 0.078303352 172 hunch net-2006-04-14-JMLR is a success

Introduction: In 2001, the “ Journal of Machine Learning Research ” was created in reaction to unadaptive publisher policies at MLJ . Essentially, with the creation of the internet, the bottleneck in publishing research shifted from publishing to research. The declaration of independence accompanying this move expresses the reasons why in greater detail. MLJ has strongly changed its policy in reaction to this. In particular, there is no longer an assignment of copyright to the publisher (*), and MLJ regularly sponsors many student “best paper awards” across several conferences with cash prizes. This is an advantage of MLJ over JMLR: MLJ can afford to sponsor cash prizes for the machine learning community. The remaining disadvantage is that reading papers in MLJ sometimes requires searching for the author’s website where the free version is available. In contrast, JMLR articles are freely available to everyone off the JMLR website. Whether or not this disadvantage cancels the advantage i

4 0.076708794 48 hunch net-2005-03-29-Academic Mechanism Design

Introduction: From game theory, there is a notion of “mechanism design”: setting up the structure of the world so that participants have some incentive to do sane things (rather than obviously counterproductive things). Application of this principle to academic research may be fruitful. What is misdesigned about academic research? The JMLG guides give many hints. The common nature of bad reviewing also suggests the system isn’t working optimally. There are many ways to experimentally “cheat” in machine learning . Funding Prisoner’s Dilemma. Good researchers often write grant proposals for funding rather than doing research. Since the pool of grant money is finite, this means that grant proposals are often rejected, implying that more must be written. This is essentially a “prisoner’s dilemma”: anyone not writing grant proposals loses, but the entire process of doing research is slowed by distraction. If everyone wrote 1/2 as many grant proposals, roughly the same distribution

5 0.064099386 224 hunch net-2006-12-12-Interesting Papers at NIPS 2006

Introduction: Here are some papers that I found surprisingly interesting. Yoshua Bengio , Pascal Lamblin, Dan Popovici, Hugo Larochelle, Greedy Layer-wise Training of Deep Networks . Empirically investigates some of the design choices behind deep belief networks. Long Zhu , Yuanhao Chen, Alan Yuille Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing. An unsupervised method for detecting objects using simple feature filters that works remarkably well on the (supervised) caltech-101 dataset . Shai Ben-David , John Blitzer , Koby Crammer , and Fernando Pereira , Analysis of Representations for Domain Adaptation . This is the first analysis I’ve seen of learning with respect to samples drawn differently from the evaluation distribution which depends on reasonable measurable quantities. All of these papers turn out to have a common theme—the power of unlabeled data to do generically useful things.

6 0.062267121 437 hunch net-2011-07-10-ICML 2011 and the future

7 0.060925901 202 hunch net-2006-08-10-Precision is not accuracy

8 0.059656315 288 hunch net-2008-02-10-Complexity Illness

9 0.057168517 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

10 0.055660203 347 hunch net-2009-03-26-Machine Learning is too easy

11 0.053307466 60 hunch net-2005-04-23-Advantages and Disadvantages of Bayesian Learning

12 0.052671246 420 hunch net-2010-12-26-NIPS 2010

13 0.052400596 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

14 0.052319355 313 hunch net-2008-08-18-Radford Neal starts a blog

15 0.049508959 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

16 0.049162269 75 hunch net-2005-05-28-Running A Machine Learning Summer School

17 0.047448188 368 hunch net-2009-08-26-Another 10-year paper in Machine Learning

18 0.047137409 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control

19 0.045804393 93 hunch net-2005-07-13-“Sister Conference” presentations

20 0.043154258 22 hunch net-2005-02-18-What it means to do research.


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.075), (1, -0.006), (2, -0.02), (3, 0.035), (4, -0.014), (5, -0.011), (6, 0.018), (7, 0.014), (8, 0.003), (9, 0.0), (10, 0.012), (11, -0.015), (12, 0.044), (13, -0.025), (14, 0.031), (15, -0.005), (16, -0.008), (17, 0.024), (18, 0.014), (19, 0.006), (20, 0.028), (21, -0.035), (22, -0.006), (23, 0.004), (24, 0.017), (25, -0.002), (26, -0.002), (27, 0.027), (28, -0.015), (29, 0.023), (30, -0.024), (31, -0.012), (32, 0.029), (33, 0.024), (34, -0.06), (35, -0.042), (36, -0.0), (37, -0.014), (38, 0.016), (39, 0.029), (40, 0.004), (41, -0.039), (42, -0.004), (43, 0.014), (44, 0.062), (45, 0.021), (46, 0.082), (47, -0.042), (48, 0.015), (49, 0.032)]
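LSI (latent semantic indexing) projects each post's term vector onto a small number of latent "topic" dimensions via a truncated SVD of the term-document matrix; the 50 signed weights above are presumably this post's coordinates in that reduced space, with similarity again taken as cosine there. A toy sketch of the idea, with a made-up 4-term, 3-post matrix (none of these numbers come from the source):

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = posts (illustrative values).
X = np.array([
    [0.5, 0.1, 0.0],
    [0.4, 0.0, 0.2],
    [0.0, 0.3, 0.6],
    [0.1, 0.5, 0.4],
])

# Truncated SVD: keep only the top-k singular directions as "topics".
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T  # row i = post i's topic-weight vector

def cos(u, v):
    """Cosine similarity of two dense topic-weight vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim = cos(doc_topics[0], doc_topics[1])
```

Unlike raw tf-idf overlap, two posts can score as similar here without sharing any literal words, provided they load on the same latent directions, which is consistent with the LSI simValues below being much larger than the tfidf ones.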

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.80551505 13 hunch net-2005-02-04-JMLG

Introduction: The Journal of Machine Learning Gossip has some fine satire about learning research. In particular, the guides are amusing and remarkably true. As in all things, it’s easy to criticize the way things are and harder to make them better.

2 0.54619914 382 hunch net-2009-12-09-Future Publication Models @ NIPS

Introduction: Yesterday, there was a discussion about future publication models at NIPS . Yann and Zoubin have specific detailed proposals which I’ll add links to when I get them ( Yann’s proposal and Zoubin’s proposal ). What struck me about the discussion is that there are many simultaneous concerns as well as many simultaneous proposals, which makes it difficult to keep all the distinctions straight in a verbal conversation. It also seemed like people were serious enough about this that we may see some real movement. Certainly, my personal experience motivates that as I’ve posted many times about the substantial flaws in our review process, including some very poor personal experiences. Concerns include the following: (Several) Reviewers are overloaded, boosting the noise in decision making. ( Yann ) A new system should run with as little built-in delay and friction to the process of research as possible. ( Hanna Wallach (updated)) Double-blind review is particularly impor

3 0.51762658 39 hunch net-2005-03-10-Breaking Abstractions

Introduction: Sam Roweis ‘s comment reminds me of a more general issue that comes up in doing research: abstractions always break. Real numbers aren’t. Most real numbers cannot be represented with any machine. One implication of this is that many real-number based algorithms have difficulties when implemented with floating point numbers. The box on your desk is not a Turing machine. A Turing machine can compute anything computable, given sufficient time. A typical computer fails terribly when the state required for the computation exceeds some limit. Nash equilibria aren’t equilibria. This comes up when trying to predict human behavior based on the result of the equilibria computation. Often, it doesn’t work. The probability isn’t. Probability is an abstraction expressing either our lack of knowledge (the Bayesian viewpoint) or fundamental randomization (the frequentist viewpoint). From the frequentist viewpoint the lack of knowledge typically precludes actually knowing the fu

4 0.49541506 479 hunch net-2013-01-31-Remote large scale learning class participation

Introduction: Yann and I have arranged so that people who are interested in our large scale machine learning class and not able to attend in person can follow along via two methods. Videos will be posted with about a 1 day delay on techtalks . This is a side-by-side capture of video+slides from Weyond . We are experimenting with Piazza as a discussion forum. Anyone is welcome to subscribe to Piazza and ask questions there, where I will be monitoring things. update2 : Sign up here . The first lecture is up now, including the revised version of the slides which fixes a few typos and rounds out references.

5 0.48040068 75 hunch net-2005-05-28-Running A Machine Learning Summer School

Introduction: We just finished the Chicago 2005 Machine Learning Summer School . The school was 2 weeks long with about 130 (or 140 counting the speakers) participants. For perspective, this is perhaps the largest graduate level machine learning class I am aware of anywhere and anytime (previous MLSS s have been close). Overall, it seemed to go well, although the students are the real authority on this. For those who missed it, DVDs will be available from our Slovenian friends. Email Mrs Spela Sitar of the Jozsef Stefan Institute for details. The following are some notes for future planning and those interested. Good Decisions Acquiring the larger-than-necessary “Assembly Hall” at International House . Our attendance came in well above our expectations, so this was a critical early decision that made a huge difference. The invited speakers were key. They made a huge difference in the quality of the content. Delegating early and often was important. One key difficulty here

6 0.47405535 449 hunch net-2011-11-26-Giving Thanks

7 0.47305498 257 hunch net-2007-07-28-Asking questions

8 0.44913375 60 hunch net-2005-04-23-Advantages and Disadvantages of Bayesian Learning

9 0.43693087 42 hunch net-2005-03-17-Going all the Way, Sometimes

10 0.43454537 146 hunch net-2006-01-06-MLTV

11 0.42543492 37 hunch net-2005-03-08-Fast Physics for Learning

12 0.42444518 123 hunch net-2005-10-16-Complexity: It’s all in your head

13 0.41944581 48 hunch net-2005-03-29-Academic Mechanism Design

14 0.41739401 172 hunch net-2006-04-14-JMLR is a success

15 0.41165915 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

16 0.40984282 414 hunch net-2010-10-17-Partha Niyogi has died

17 0.39849153 73 hunch net-2005-05-17-A Short Guide to PhD Graduate Study

18 0.39456236 469 hunch net-2012-07-09-Videolectures

19 0.39404494 395 hunch net-2010-04-26-Compassionate Reviewing

20 0.39400983 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.033), (88, 0.558), (94, 0.181)]
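The LDA weights above are a topic distribution: this post puts most of its mass (0.558) on topic 88, with smaller shares on topics 94 and 27. One standard way to compare two such distributions (a guess at what the pipeline does; the source doesn't say) is one minus the Hellinger distance, sketched here with this post's weights and a hypothetical second post:

```python
import math

def hellinger_sim(p, q):
    """Similarity between two sparse topic distributions ({topicId: weight} dicts),
    computed as 1 - Hellinger distance. Identical distributions score 1.0.
    Assumes each distribution's weights sum to at most 1 (missing mass is
    treated as spread over topics neither dict mentions)."""
    topics = set(p) | set(q)
    d = math.sqrt(0.5 * sum(
        (math.sqrt(p.get(t, 0.0)) - math.sqrt(q.get(t, 0.0))) ** 2
        for t in topics))
    return 1.0 - d

# This post's LDA topic weights, from the list above.
this_post = {27: 0.033, 88: 0.558, 94: 0.181}

# A hypothetical post concentrated on the same dominant topic 88.
other_post = {88: 0.6, 94: 0.2, 12: 0.1}
sim = hellinger_sim(this_post, other_post)
```

A post dominated by the same topic 88 scores high under any such measure, which would explain why the top LDA matches below (0.66, 0.57, …) are so much stronger than the tfidf matches.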

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.75432014 13 hunch net-2005-02-04-JMLG

Introduction: The Journal of Machine Learning Gossip has some fine satire about learning research. In particular, the guides are amusing and remarkably true. As in all things, it’s easy to criticize the way things are and harder to make them better.

2 0.66664493 469 hunch net-2012-07-09-Videolectures

Introduction: Yaser points out some nicely videotaped machine learning lectures at Caltech . Yaser taught me machine learning, and I always found the lectures clear and interesting, so I expect many people can benefit from watching. Relative to Andrew Ng ‘s ML class there are somewhat different areas of emphasis but the topic is the same, so picking and choosing the union may be helpful.

3 0.57668227 93 hunch net-2005-07-13-“Sister Conference” presentations

Introduction: Some of the “sister conference” presentations at AAAI have been great. Roughly speaking, the conference organizers asked other conference organizers to come give a summary of their conference. Many different AI-related conferences accepted. The presenters typically discuss some of the background and goals of the conference then mention the results from a few papers they liked. This is great because it provides a mechanism to get a digested overview of the work of several thousand researchers—something which is simply available nowhere else. Based on these presentations, it looks like there is a significant component of (and opportunity for) applied machine learning in AIIDE , IUI , and ACL . There was also some discussion of having a super-colocation event similar to FCRC , but centered on AI & Learning. This seems like a fine idea. The field is fractured across so many different conferences that the mixing of a supercolocation seems likely helpful for research.

4 0.55592573 168 hunch net-2006-04-02-Mad (Neuro)science

Introduction: One of the questions facing machine learning as a field is “Can we produce a generalized learning system that can solve a wide array of standard learning problems?” The answer is trivial: “yes, just have children”. Of course, that wasn’t really the question. The refined question is “Are there simple-to-implement generalized learning systems that can solve a wide array of standard learning problems?” The answer to this is less clear. The ability of animals (and people ) to learn might be due to megabytes encoded in the DNA. If this algorithmic complexity is necessary to solve machine learning, the field faces a daunting task in replicating it on a computer. This observation suggests a possibility: if you can show that few bits of DNA are needed for learning in animals, then this provides evidence that machine learning (as a field) has a hope of big success with relatively little effort. It is well known that specific portions of the brain have specific functionality across

5 0.49997637 295 hunch net-2008-04-12-It Doesn’t Stop

Introduction: I’ve enjoyed the Terminator movies and show. Neglecting the whacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)? In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy for writers to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard. The United States has been experimenting with trying to stop research on stem cells . It hasn’t worked very well—the net effect has been retarding research programs a bit, and exporting some research to other countries. Another less recent example was encryption technology, for which the United States generally did not encourage early public research and even discouraged as a mu

6 0.33082119 371 hunch net-2009-09-21-Netflix finishes (and starts)

7 0.30451426 42 hunch net-2005-03-17-Going all the Way, Sometimes

8 0.29510599 35 hunch net-2005-03-04-The Big O and Constants in Learning

9 0.29195118 346 hunch net-2009-03-18-Parallel ML primitives

10 0.29111663 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

11 0.29032478 115 hunch net-2005-09-26-Prediction Bounds as the Mathematics of Science

12 0.26987413 120 hunch net-2005-10-10-Predictive Search is Coming

13 0.25502264 276 hunch net-2007-12-10-Learning Track of International Planning Competition

14 0.23374656 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

15 0.21282522 136 hunch net-2005-12-07-Is the Google way the way for machine learning?

16 0.21090272 229 hunch net-2007-01-26-Parallel Machine Learning Problems

17 0.21039948 441 hunch net-2011-08-15-Vowpal Wabbit 6.0

18 0.20192555 408 hunch net-2010-08-24-Alex Smola starts a blog

19 0.19004768 200 hunch net-2006-08-03-AOL’s data drop

20 0.18896851 471 hunch net-2012-08-24-Patterns for research in machine learning