hunch_net hunch_net-2008 hunch_net-2008-323 knowledge-graph by maker-knowledge-mining

323 hunch net-2008-11-04-Rise of the Machines


meta info for this blog

Source: html

Introduction: On the enduring topic of how people deal with intelligent machines, we have this important election bulletin.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 On the enduring topic of how people deal with intelligent machines, we have this important election bulletin. [sent-1, score-2.72]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('bulletin', 0.497), ('election', 0.461), ('enduring', 0.461), ('intelligent', 0.336), ('machines', 0.29), ('topic', 0.253), ('deal', 0.202), ('important', 0.143), ('people', 0.077)]
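The word weights above come from tf-idf scoring. A minimal sketch of how such weights and the resulting blog-to-blog similarities can be computed, using a toy corpus and a plain tf-idf formula (both are illustrative assumptions, not the actual pipeline that produced the numbers above):

```python
from math import log, sqrt

def tfidf(term_counts, doc_freq, n_docs):
    """Plain tf-idf: term frequency times log inverse document frequency.
    (Illustrative formula; real pipelines often normalize differently.)"""
    return {t: tf * log(n_docs / doc_freq[t]) for t, tf in term_counts.items()}

def cosine(u, v):
    """Cosine similarity between two sparse term->weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = sqrt(sum(w * w for w in u.values()))
    nv = sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical toy corpus: three "blogs" as bags of words.
docs = [
    {"election": 1, "bulletin": 1, "machines": 1},
    {"machines": 2, "learning": 1},
    {"learning": 1, "papers": 2},
]
n = len(docs)
df = {}
for d in docs:
    for t in d:
        df[t] = df.get(t, 0) + 1

vecs = [tfidf(d, df, n) for d in docs]
print(round(cosine(vecs[0], vecs[1]), 4))  # shares only "machines": low but nonzero
```

The "simValue" columns in the lists below are exactly this kind of pairwise similarity score, ranked in decreasing order.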

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 323 hunch net-2008-11-04-Rise of the Machines

Introduction: On the enduring topic of how people deal with intelligent machines, we have this important election bulletin.

2 0.1137107 295 hunch net-2008-04-12-It Doesn’t Stop

Introduction: I’ve enjoyed the Terminator movies and show. Neglecting the wacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)? In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy for writers to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard. The United States has been experimenting with trying to stop research on stem cells. It hasn’t worked very well—the net effect has been retarding research programs a bit, and exporting some research to other countries. Another less recent example was encryption technology, for which the United States generally did not encourage early public research and even discouraged as a mu

3 0.06171684 134 hunch net-2005-12-01-The Webscience Future

Introduction: The internet has significantly affected the way we do research, but its capabilities have not yet been fully realized. First, let’s acknowledge some known effects. Self-publishing By default, all researchers in machine learning (and more generally computer science and physics) place their papers online for anyone to download. The exact mechanism differs—physicists tend to use a central repository (Arxiv) while computer scientists tend to place the papers on their webpage. Arxiv has been slowly growing in subject breadth so it is now sometimes used by computer scientists. Collaboration Email has enabled working remotely with coauthors. This has allowed collaborations which would not otherwise have been possible and generally speeds research. Now, let’s look at attempts to go further. Blogs (like this one) allow public discussion about topics which are not easily categorized as “a new idea in machine learning” (like this topic). Organization of some subfield

4 0.060114227 288 hunch net-2008-02-10-Complexity Illness

Introduction: One of the enduring stereotypes of academia is that people spend a great deal of intelligence, time, and effort finding complexity rather than simplicity. This is at least anecdotally true in my experience. Math++ Several people have found that adding useless math makes their paper more publishable, as evidenced by a reject-add-accept sequence. 8 page minimum Who submitted a paper to ICML violating the 8 page minimum? Every author fears that the reviewers won’t take their work seriously unless the allowed length is fully used. The best minimum violation I know is Adam’s paper at SODA on generating random factored numbers, but this is deeply exceptional. It’s a fair bet that 90% of papers submitted are exactly at the page limit. We could imagine that this is because papers naturally take more space, but few people seem to be clamoring for more space. Journalong Has anyone been asked to review a 100 page journal paper? I have. Journal papers can be nice, becaus

5 0.057870008 173 hunch net-2006-04-17-Rexa is live

Introduction: Rexa is now publicly available. Anyone can create an account and login. Rexa is similar to Citeseer and Google Scholar in functionality with more emphasis on the use of machine learning for intelligent information extraction. For example, Rexa can automatically display a picture on an author’s homepage when the author is searched for.

6 0.05684704 436 hunch net-2011-06-22-Ultra LDA

7 0.049313232 4 hunch net-2005-01-26-Summer Schools

8 0.045524463 234 hunch net-2007-02-22-Create Your Own ICML Workshop

9 0.043466326 136 hunch net-2005-12-07-Is the Google way the way for machine learning?

10 0.040677402 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

11 0.039725058 212 hunch net-2006-10-04-Health of Conferences Wiki

12 0.038538612 37 hunch net-2005-03-08-Fast Physics for Learning

13 0.036324911 106 hunch net-2005-09-04-Science in the Government

14 0.035963163 352 hunch net-2009-05-06-Machine Learning to AI

15 0.034178387 469 hunch net-2012-07-09-Videolectures

16 0.034122992 412 hunch net-2010-09-28-Machined Learnings

17 0.031893346 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

18 0.029823158 376 hunch net-2009-11-06-Yisong Yue on Self-improving Systems

19 0.029160727 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

20 0.029115023 219 hunch net-2006-11-22-Explicit Randomization in Learning algorithms


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.035), (1, -0.008), (2, -0.017), (3, 0.031), (4, 0.002), (5, -0.004), (6, 0.004), (7, -0.006), (8, 0.016), (9, 0.002), (10, -0.031), (11, -0.018), (12, -0.002), (13, -0.012), (14, -0.002), (15, -0.014), (16, 0.008), (17, 0.025), (18, -0.004), (19, 0.001), (20, 0.01), (21, 0.031), (22, -0.056), (23, 0.04), (24, 0.024), (25, -0.034), (26, 0.02), (27, 0.032), (28, -0.023), (29, 0.026), (30, 0.008), (31, -0.044), (32, 0.068), (33, -0.06), (34, -0.01), (35, 0.028), (36, 0.017), (37, -0.014), (38, -0.009), (39, 0.033), (40, 0.019), (41, 0.044), (42, -0.057), (43, -0.071), (44, -0.046), (45, -0.008), (46, -0.028), (47, 0.009), (48, 0.001), (49, 0.009)]
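The topic weights above place this blog as a dense vector in a reduced LSI space; similarity between two blogs is then just cosine similarity between their topic vectors. A minimal sketch, where `query` reuses the first five weights from the list above and the other two vectors are hypothetical:

```python
from math import sqrt

def cosine_dense(u, v):
    """Cosine similarity between two dense topic-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# First five LSI weights of this blog (from the list above), truncated
# to 5 dimensions for illustration; the other two vectors are made up.
query = [0.035, -0.008, -0.017, 0.031, 0.002]
other = [0.030, -0.010, -0.015, 0.028, 0.001]
unrelated = [-0.020, 0.040, 0.010, -0.005, 0.030]

print(cosine_dense(query, other))      # high: nearly parallel vectors
print(cosine_dense(query, unrelated))  # low or negative
```

Ranking all blogs by this score against the query blog yields a "similar blogs list" like the one below.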

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96231192 323 hunch net-2008-11-04-Rise of the Machines

Introduction: On the enduring topic of how people deal with intelligent machines, we have this important election bulletin.

2 0.44913691 295 hunch net-2008-04-12-It Doesn’t Stop

Introduction: I’ve enjoyed the Terminator movies and show. Neglecting the wacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)? In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy for writers to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard. The United States has been experimenting with trying to stop research on stem cells. It hasn’t worked very well—the net effect has been retarding research programs a bit, and exporting some research to other countries. Another less recent example was encryption technology, for which the United States generally did not encourage early public research and even discouraged as a mu

3 0.44365627 134 hunch net-2005-12-01-The Webscience Future

Introduction: The internet has significantly affected the way we do research, but its capabilities have not yet been fully realized. First, let’s acknowledge some known effects. Self-publishing By default, all researchers in machine learning (and more generally computer science and physics) place their papers online for anyone to download. The exact mechanism differs—physicists tend to use a central repository (Arxiv) while computer scientists tend to place the papers on their webpage. Arxiv has been slowly growing in subject breadth so it is now sometimes used by computer scientists. Collaboration Email has enabled working remotely with coauthors. This has allowed collaborations which would not otherwise have been possible and generally speeds research. Now, let’s look at attempts to go further. Blogs (like this one) allow public discussion about topics which are not easily categorized as “a new idea in machine learning” (like this topic). Organization of some subfield

4 0.42199686 208 hunch net-2006-09-18-What is missing for online collaborative research?

Introduction: The internet has recently made the research process much smoother: papers are easy to obtain, citations are easy to follow, and unpublished “tutorials” are often available. Yet, new research fields can look very complicated to outsiders or newcomers. Every paper is like a small piece of an unfinished jigsaw puzzle: to understand just one publication, a researcher without experience in the field will typically have to follow several layers of citations, and many of the papers he encounters have a great deal of repeated information. Furthermore, from one publication to the next, notation and terminology may not be consistent, which can further confuse the reader. But the internet is now proving to be an extremely useful medium for collaboration and knowledge aggregation. Online forums allow users to ask and answer questions and to share ideas. The recent phenomenon of Wikipedia provides a proof-of-concept for the “anyone can edit” system. Can such models be used to facilitate research a

5 0.40336493 173 hunch net-2006-04-17-Rexa is live

Introduction: Rexa is now publicly available. Anyone can create an account and login. Rexa is similar to Citeseer and Google Scholar in functionality with more emphasis on the use of machine learning for intelligent information extraction. For example, Rexa can automatically display a picture on an author’s homepage when the author is searched for.

6 0.37299716 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

7 0.36661723 491 hunch net-2013-11-21-Ben Taskar is gone

8 0.36445281 386 hunch net-2010-01-13-Sam Roweis died

9 0.35648474 380 hunch net-2009-11-29-AI Safety

10 0.34477943 231 hunch net-2007-02-10-Best Practices for Collaboration

11 0.34254947 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials

12 0.33986628 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

13 0.33523062 376 hunch net-2009-11-06-Yisong Yue on Self-improving Systems

14 0.32493916 328 hunch net-2008-11-26-Efficient Reinforcement Learning in MDPs

15 0.31191692 193 hunch net-2006-07-09-The Stock Prediction Machine Learning Problem

16 0.31090775 2 hunch net-2005-01-24-Holy grails of machine learning?

17 0.31033927 353 hunch net-2009-05-08-Computability in Artificial Intelligence

18 0.31029767 288 hunch net-2008-02-10-Complexity Illness

19 0.30905777 139 hunch net-2005-12-11-More NIPS Papers

20 0.30631351 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(90, 0.734)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.92294157 323 hunch net-2008-11-04-Rise of the Machines

Introduction: On the enduring topic of how people deal with intelligent machines, we have this important election bulletin.

2 0.89859974 340 hunch net-2009-01-28-Nielsen’s talk

Introduction: I wanted to point to Michael Nielsen’s talk about blogging science, which I found interesting.

3 0.78501713 284 hunch net-2008-01-18-Datasets

Introduction: David Pennock notes the impressive set of datasets at datawrangling.

4 0.7049492 32 hunch net-2005-02-27-Antilearning: When proximity goes bad

Introduction: Joel Predd mentioned “Antilearning” by Adam Kowalczyk, which is interesting from a foundational intuitions viewpoint. There is a pervasive intuition that “nearby things tend to have the same label”. This intuition is instantiated in SVMs, nearest neighbor classifiers, decision trees, and neural networks. It turns out there are natural problems where this intuition is the opposite of the truth. One natural situation where this occurs is in competition. For example, when Intel fails to meet its earnings estimate, is this evidence that AMD is doing badly also? Or evidence that AMD is doing well? This violation of the proximity intuition means that when the number of examples is few, negating a classifier which attempts to exploit proximity can provide predictive power (thus, the term “antilearning”).

5 0.68387479 239 hunch net-2007-04-18-$50K Spock Challenge

Introduction: Apparently, the company Spock is setting up a $50k entity resolution challenge. $50k is much less than the Netflix challenge, but it’s effectively the same as Netflix until someone reaches 10%. It’s also nice that the Spock challenge has a short duration. The (visible) test set is of size 25k and the training set has size 75k.

6 0.52578551 139 hunch net-2005-12-11-More NIPS Papers

7 0.48276007 144 hunch net-2005-12-28-Yet more nips thoughts

8 0.28467268 333 hunch net-2008-12-27-Adversarial Academia

9 0.037052784 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season

10 0.032866511 134 hunch net-2005-12-01-The Webscience Future

11 0.024599608 95 hunch net-2005-07-14-What Learning Theory might do

12 0.0 1 hunch net-2005-01-19-Why I decided to run a weblog.

13 0.0 2 hunch net-2005-01-24-Holy grails of machine learning?

14 0.0 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning

15 0.0 4 hunch net-2005-01-26-Summer Schools

16 0.0 5 hunch net-2005-01-26-Watchword: Probability

17 0.0 6 hunch net-2005-01-27-Learning Complete Problems

18 0.0 7 hunch net-2005-01-31-Watchword: Assumption

19 0.0 8 hunch net-2005-02-01-NIPS: Online Bayes

20 0.0 9 hunch net-2005-02-01-Watchword: Loss