hunch_net hunch_net-2009 hunch_net-2009-340 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: I wanted to point to Michael Nielsen’s talk about blogging science, which I found interesting.
sentIndex sentText sentNum sentScore
1 I wanted to point to Michael Nielsen’s talk about blogging science, which I found interesting. [sent-1, score-1.467]
wordName wordTfidf (topN-words)
[('nielsen', 0.547), ('blogging', 0.547), ('wanted', 0.318), ('michael', 0.313), ('talk', 0.238), ('science', 0.231), ('found', 0.201), ('point', 0.163), ('interesting', 0.161)]
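The mining pipeline that produced these weights is not described here. As a hedged illustration only, the following Python sketch shows one plausible way such per-word TF-IDF weights could be computed with scikit-learn; the corpus variable is a hypothetical stand-in for the full set of hunch.net post introductions.

# Minimal TF-IDF sketch (illustrative, not the actual maker-knowledge-mining code).
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "I wanted to point to Michael Nielsen's talk about blogging science, which I found interesting.",
    "I read through some of the essays of Michael Nielsen today, and recommend them.",
    "Mark Reid did a post on ICML trends that I found interesting.",
]  # hypothetical stand-in for all post introductions

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)      # rows = posts, columns = vocabulary terms
vocab = vectorizer.get_feature_names_out()

# Top-weighted words for the first post, analogous to the (word, weight) pairs above.
weights = tfidf[0].toarray().ravel()
top = sorted(zip(vocab, weights), key=lambda p: p[1], reverse=True)[:10]
print([(w, round(v, 3)) for w, v in top])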
simIndex simValue blogId blogTitle
same-blog 1 0.99999994 340 hunch net-2009-01-28-Nielsen’s talk
Introduction: I wanted to point to Michael Nielsen’s talk about blogging science, which I found interesting.
2 0.30598086 108 hunch net-2005-09-06-A link
Introduction: I read through some of the essays of Michael Nielsen today, and recommend them. Principles of Effective Research and Extreme Thinking are both relevant to several discussions here.
3 0.089465827 64 hunch net-2005-04-28-Science Fiction and Research
Introduction: A big part of doing research is imagining how things could be different, and then trying to figure out how to get there. A big part of science fiction is imagining how things could be different, and then working through the implications. Because of the similarity here, reading science fiction can sometimes be helpful in understanding and doing research. (And, hey, it’s fun.) Here’s a list of science fiction books I enjoyed which seem particularly relevant to computer science and (sometimes) learning systems: Vernor Vinge, “True Names”, “A Fire Upon the Deep”; Marc Stiegler, “David’s Sling”, “Earthweb”; Charles Stross, “Singularity Sky”; Greg Egan, “Diaspora”; Joe Haldeman, “Forever Peace”. (There are surely many others.) Incidentally, the nature of science fiction itself has changed. Decades ago, science fiction projected great increases in the power humans control (example: E.E. Smith’s Lensman series). That didn’t really happen in the last 50 years. Inste
4 0.081264623 254 hunch net-2007-07-12-ICML Trends
Introduction: Mark Reid did a post on ICML trends that I found interesting.
5 0.0784183 310 hunch net-2008-07-15-Interesting papers at COLT (and a bit of UAI & workshops)
Introduction: Here are a few papers from COLT 2008 that I found interesting. Maria-Florina Balcan, Steve Hanneke, and Jenn Wortman, The True Sample Complexity of Active Learning. This paper shows that in an asymptotic setting, active learning is always better than supervised learning (although the gap may be small). This is evidence that the only thing in the way of universal active learning is us knowing how to do it properly. Nir Ailon and Mehryar Mohri, An Efficient Reduction of Ranking to Classification. This paper shows how to robustly rank n objects with n log(n) classifications using a quicksort-based algorithm. The result is applicable to many ranking loss functions and has implications for others. Michael Kearns and Jennifer Wortman. Learning from Collective Behavior. This is about learning in a new model, where the goal is to predict how a collection of interacting agents behave. One claim is that learning in this setting can be reduced to IID lear
6 0.075053237 476 hunch net-2012-12-29-Simons Institute Big Data Program
7 0.072572485 325 hunch net-2008-11-10-ICML Reviewing Criteria
8 0.066307798 444 hunch net-2011-09-07-KDD and MUCMD 2011
9 0.059998631 216 hunch net-2006-11-02-2006 NIPS workshops
10 0.054528255 296 hunch net-2008-04-21-The Science 2.0 article
11 0.053739119 282 hunch net-2008-01-06-Research Political Issues
12 0.053429507 330 hunch net-2008-12-07-A NIPS paper
13 0.053334702 228 hunch net-2007-01-15-The Machine Learning Department
14 0.047946706 106 hunch net-2005-09-04-Science in the Government
15 0.043461725 17 hunch net-2005-02-10-Conferences, Dates, Locations
16 0.042314634 177 hunch net-2006-05-05-An ICML reject
17 0.041989781 265 hunch net-2007-10-14-NIPS workshop: Learning Problem Design
18 0.040430248 112 hunch net-2005-09-14-The Predictionist Viewpoint
19 0.040408123 354 hunch net-2009-05-17-Server Update
20 0.039228745 144 hunch net-2005-12-28-Yet more nips thoughts
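The simValue column above is not explained in the source. Assuming it is a cosine similarity between vector representations of post introductions (an assumption, not something stated here), a ranking like the one above could be produced with a sketch along these lines; corpus and titles are hypothetical placeholders.

# Cosine-similarity ranking sketch (illustrative; the real similarity metric is unspecified).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "I wanted to point to Michael Nielsen's talk about blogging science.",
    "I read through some of the essays of Michael Nielsen today, and recommend them.",
    "Here are a few papers from COLT 2008 that I found interesting.",
]  # hypothetical stand-in for all post introductions
titles = ["Nielsen's talk", "A link", "Interesting papers at COLT"]

X = TfidfVectorizer(stop_words="english").fit_transform(corpus)
sims = cosine_similarity(X[0], X).ravel()     # similarity of post 0 to every post

# Rank posts by similarity, mirroring the simIndex/simValue listing above.
for value, title in sorted(zip(sims, titles), reverse=True):
    print(f"{value:.8f}  {title}")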
topicId topicWeight
[(0, 0.051), (1, -0.024), (2, -0.033), (3, 0.014), (4, 0.007), (5, 0.02), (6, 0.007), (7, -0.016), (8, 0.0), (9, -0.016), (10, 0.052), (11, 0.006), (12, -0.09), (13, -0.048), (14, 0.018), (15, -0.079), (16, -0.058), (17, 0.093), (18, 0.029), (19, 0.03), (20, -0.034), (21, -0.032), (22, -0.047), (23, 0.018), (24, -0.002), (25, -0.096), (26, -0.003), (27, 0.013), (28, 0.008), (29, -0.001), (30, -0.11), (31, 0.05), (32, -0.026), (33, -0.052), (34, 0.038), (35, 0.07), (36, -0.117), (37, 0.078), (38, -0.047), (39, -0.05), (40, 0.039), (41, 0.18), (42, -0.016), (43, 0.095), (44, 0.092), (45, 0.099), (46, 0.026), (47, -0.035), (48, 0.053), (49, 0.067)]
simIndex simValue blogId blogTitle
same-blog 1 0.98966426 340 hunch net-2009-01-28-Nielsen’s talk
Introduction: I wanted to point to Michael Nielsen’s talk about blogging science, which I found interesting.
2 0.74219579 108 hunch net-2005-09-06-A link
Introduction: I read through some of the essays of Michael Nielsen today, and recommend them. Principles of Effective Research and Extreme Thinking are both relevant to several discussions here.
3 0.5318169 64 hunch net-2005-04-28-Science Fiction and Research
Introduction: A big part of doing research is imagining how things could be different, and then trying to figure out how to get there. A big part of science fiction is imagining how things could be different, and then working through the implications. Because of the similarity here, reading science fiction can sometimes be helpful in understanding and doing research. (And, hey, it’s fun.) Here’s a list of science fiction books I enjoyed which seem particularly relevant to computer science and (sometimes) learning systems: Vernor Vinge, “True Names”, “A Fire Upon the Deep”; Marc Stiegler, “David’s Sling”, “Earthweb”; Charles Stross, “Singularity Sky”; Greg Egan, “Diaspora”; Joe Haldeman, “Forever Peace”. (There are surely many others.) Incidentally, the nature of science fiction itself has changed. Decades ago, science fiction projected great increases in the power humans control (example: E.E. Smith’s Lensman series). That didn’t really happen in the last 50 years. Inste
4 0.45479208 282 hunch net-2008-01-06-Research Political Issues
Introduction: I’ve avoided discussing politics here, although not for lack of interest. The problem with discussing politics is that it’s customary for people to say much based upon little information. Nevertheless, politics can have a substantial impact on science (and we might hope for the vice-versa). It’s primary election time in the United States, so the topic is timely, although the issues are not. There are several policy decisions which substantially affect the development of science and technology in the US. Education: The US has great contrasts in education. The top universities are very good places, yet the grade school education system produces mediocre results. For me, the contrast between a public education and Caltech was bracing. For many others attending Caltech, it clearly was not. Upgrading the K-12 education system in the US is a long-standing chronic problem which I know relatively little about. My own experience is that a basic attitude of “no child unrealized” i
5 0.40273938 144 hunch net-2005-12-28-Yet more nips thoughts
Introduction: I only managed to make it out to the NIPS workshops this year, so I’ll give my comments on what I saw there. The Learning and Robotics workshop lives again. I hope it continues and gets more high-quality papers in the future. The most interesting talk for me was Larry Jackel’s on the LAGR program (see John’s previous post on said program). I got some ideas as to what progress has been made. Larry really explained the types of benchmarks and the tradeoffs that had to be made to make the goals achievable but challenging. Hal Daume gave a very interesting talk about structured prediction using RL techniques, something near and dear to my own heart. He achieved rather impressive results using only a very greedy search. The non-parametric Bayes workshop was great. I enjoyed the entire morning session I spent there, and particularly (the usually desultory) discussion periods. One interesting topic was the Gibbs/Variational inference divide. I won’t try to summarize espe
6 0.38258246 296 hunch net-2008-04-21-The Science 2.0 article
7 0.37696451 106 hunch net-2005-09-04-Science in the Government
8 0.37360567 112 hunch net-2005-09-14-The Predictionist Viewpoint
9 0.36347461 487 hunch net-2013-07-24-ICML 2012 videos lost
10 0.34379774 107 hunch net-2005-09-05-Site Update
11 0.34072289 140 hunch net-2005-12-14-More NIPS Papers II
12 0.33259374 265 hunch net-2007-10-14-NIPS workshop: Learning Problem Design
13 0.31367338 254 hunch net-2007-07-12-ICML Trends
14 0.30593613 246 hunch net-2007-06-13-Not Posting
15 0.30423242 306 hunch net-2008-07-02-Proprietary Data in Academic Research?
16 0.29538 411 hunch net-2010-09-21-Regretting the dead
17 0.29270023 330 hunch net-2008-12-07-A NIPS paper
18 0.28184295 248 hunch net-2007-06-19-How is Compressed Sensing going to change Machine Learning ?
19 0.27855292 354 hunch net-2009-05-17-Server Update
20 0.2683447 88 hunch net-2005-07-01-The Role of Impromptu Talks
topicId topicWeight
[(90, 0.734)]
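How the (topicId, topicWeight) pairs were obtained is likewise unspecified; the signed weights in the earlier topicWeight vector suggest an SVD/LSA-style decomposition, while the sparse nonnegative pair above is more consistent with a topic model such as LDA. The sketch below illustrates the LDA reading only, under that assumption; corpus is again a hypothetical placeholder.

# LDA topic-weight sketch (illustrative; the actual topic model is not stated).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "I wanted to point to Michael Nielsen's talk about blogging science.",
    "Here are a few papers from COLT 2008 that I found interesting.",
    "Mark Reid did a post on ICML trends that I found interesting.",
]  # hypothetical stand-in for all post introductions

counts = CountVectorizer(stop_words="english").fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(counts)
topic_weights = lda.transform(counts)[0]      # topic distribution for the first post

# Keep only topics carrying noticeable weight, as in the sparse listing above.
print([(i, round(w, 3)) for i, w in enumerate(topic_weights) if w > 0.05])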
simIndex simValue blogId blogTitle
1 0.92294157 323 hunch net-2008-11-04-Rise of the Machines
Introduction: On the enduring topic of how people deal with intelligent machines, we have this important election bulletin.
same-blog 2 0.89859974 340 hunch net-2009-01-28-Nielsen’s talk
Introduction: I wanted to point to Michael Nielsen’s talk about blogging science, which I found interesting.
3 0.78501713 284 hunch net-2008-01-18-Datasets
Introduction: David Pennock notes the impressive set of datasets at datawrangling.
4 0.7049492 32 hunch net-2005-02-27-Antilearning: When proximity goes bad
Introduction: Joel Predd mentioned “Antilearning” by Adam Kowalczyk, which is interesting from a foundational intuitions viewpoint. There is a pervasive intuition that “nearby things tend to have the same label”. This intuition is instantiated in SVMs, nearest neighbor classifiers, decision trees, and neural networks. It turns out there are natural problems where this intuition is opposite of the truth. One natural situation where this occurs is in competition. For example, when Intel fails to meet its earnings estimate, is this evidence that AMD is doing badly also? Or evidence that AMD is doing well? This violation of the proximity intuition means that when the number of examples is few, negating a classifier which attempts to exploit proximity can provide predictive power (thus, the term “antilearning”).
5 0.68387479 239 hunch net-2007-04-18-$50K Spock Challenge
Introduction: Apparently, the company Spock is setting up a $50k entity resolution challenge. $50k is much less than the Netflix challenge, but it’s effectively the same as Netflix until someone reaches 10%. It’s also nice that the Spock challenge has a short duration. The (visible) test set is of size 25k and the training set has size 75k.
6 0.52578551 139 hunch net-2005-12-11-More NIPS Papers
7 0.48276007 144 hunch net-2005-12-28-Yet more nips thoughts
8 0.28467268 333 hunch net-2008-12-27-Adversarial Academia
9 0.037052784 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season
10 0.032866511 134 hunch net-2005-12-01-The Webscience Future
11 0.024599608 95 hunch net-2005-07-14-What Learning Theory might do
12 0.0 1 hunch net-2005-01-19-Why I decided to run a weblog.
13 0.0 2 hunch net-2005-01-24-Holy grails of machine learning?
14 0.0 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning
15 0.0 4 hunch net-2005-01-26-Summer Schools
16 0.0 5 hunch net-2005-01-26-Watchword: Probability
17 0.0 6 hunch net-2005-01-27-Learning Complete Problems
18 0.0 7 hunch net-2005-01-31-Watchword: Assumption
19 0.0 8 hunch net-2005-02-01-NIPS: Online Bayes
20 0.0 9 hunch net-2005-02-01-Watchword: Loss