hunch_net hunch_net-2009 hunch_net-2009-342 knowledge-graph by maker-knowledge-mining

342 hunch net-2009-02-16-KDNuggets


meta info for this blog

Source: html

Introduction: Eric Zaetsch points out KDNuggets which is a well-developed mailing list/news site with a KDD flavor. This might particularly interest people looking for industrial jobs in machine learning, as the mailing list has many such.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Eric Zaetsch points out KDNuggets which is a well-developed mailing list/news site with a KDD flavor. [sent-1, score-1.048]

2 This might particularly interest people looking for industrial jobs in machine learning, as the mailing list has many such. [sent-2, score-2.205]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('mailing', 0.637), ('eric', 0.36), ('jobs', 0.318), ('industrial', 0.308), ('site', 0.244), ('looking', 0.212), ('kdd', 0.21), ('list', 0.179), ('points', 0.167), ('interest', 0.154), ('particularly', 0.133), ('might', 0.09), ('people', 0.064), ('machine', 0.06), ('many', 0.05), ('learning', 0.025)]
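The simValue scores in the lists below can be reproduced, in outline, as cosine similarity between sparse word-weight vectors like the one above. A minimal sketch, assuming each post is represented as a {word: tfidf} dict; the second post's vector here is purely illustrative:

```python
from math import sqrt

def cosine_sim(a, b):
    # Cosine similarity between two sparse tfidf vectors stored as dicts.
    dot = sum(w * b.get(term, 0.0) for term, w in a.items())
    norm_a = sqrt(sum(w * w for w in a.values()))
    norm_b = sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Top tfidf weights for this post, taken from the table above.
this_post = {
    'mailing': 0.637, 'eric': 0.36, 'jobs': 0.318, 'industrial': 0.308,
    'site': 0.244, 'looking': 0.212, 'kdd': 0.21, 'list': 0.179,
}

# A hypothetical other post sharing a few terms.
other_post = {'mailing': 0.5, 'list': 0.3, 'conference': 0.6}

print(round(cosine_sim(this_post, this_post), 2))  # a post matches itself: 1.0
print(round(cosine_sim(this_post, other_post), 2))
```

This is a sketch, not the dataset's exact pipeline; the actual simValue figures also depend on how the full vocabulary was weighted and truncated.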

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 342 hunch net-2009-02-16-KDNuggets

Introduction: Eric Zaetsch points out KDNuggets which is a well-developed mailing list/news site with a KDD flavor. This might particularly interest people looking for industrial jobs in machine learning, as the mailing list has many such.

2 0.23878744 278 hunch net-2007-12-17-New Machine Learning mailing list

Introduction: IMLS (which is the nonprofit running ICML) has set up a new mailing list for Machine Learning News . The list address is ML-news@googlegroups.com, and signup requires a google account (which you can create). Only members can send messages.

3 0.12523831 10 hunch net-2005-02-02-Kolmogorov Complexity and Googling

Introduction: Machine learning makes the New Scientist . From the article: COMPUTERS can learn the meaning of words simply by plugging into Google. The finding could bring forward the day that true artificial intelligence is developed. But Paul Vitanyi and Rudi Cilibrasi of the National Institute for Mathematics and Computer Science in Amsterdam, the Netherlands, realised that a Google search can be used to measure how closely two words relate to each other. For instance, imagine a computer needs to understand what a hat is. You can read the paper at KC Google . Hat tip: Kolmogorov Mailing List. Any thoughts on the paper?

4 0.10869544 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

Introduction: I just created version 5.1 of vowpal wabbit . This is almost entirely a bugfix release, so it’s an easy upgrade from v5.0. In addition: There is now a mailing list , which I and several other developers are subscribed to. The main website has shifted to the wiki on github. This means that anyone with a github account can now edit it. I’m planning to give a tutorial tomorrow on it at eHarmony / the LA machine learning meetup at 10am. Drop by if you’re interested. The status of VW amongst other open source projects has changed. When VW first came out, it was relatively unique amongst existing projects in terms of features. At this point, many other projects have started to appreciate the value of the design choices here. This includes: Mahout , which now has an SGD implementation. Shogun , where Soeren is keen on incorporating features . LibLinear , where they won the KDD best paper award for out-of-core learning . This is expected—any open sourc

5 0.090447612 335 hunch net-2009-01-08-Predictive Analytics World

Introduction: Carla Vicens and Eric Siegel contacted me about Predictive Analytics World in San Francisco February 18&19, which I wasn’t familiar with. A quick look at the agenda reveals several people I know working on applications of machine learning in businesses, covering deployed applications topics. It’s interesting to see a business-focused machine learning conference, as it says that we are succeeding as a field. If you are interested in deployed applications, you might attend. Eric and I did a quick interview by email. John > I’ve mostly published and participated in academic machine learning conferences like ICML, COLT, and NIPS. When I look at the set of speakers and subjects for your conference I think “machine learning for business”. Is that your understanding of things? What I’m trying to ask is: what do you view as the primary goal for this conference? Eric > You got it. This is the business event focused on the commercial deployment of technology developed at

6 0.089577205 363 hunch net-2009-07-09-The Machine Learning Forum

7 0.085070267 1 hunch net-2005-01-19-Why I decided to run a weblog.

8 0.084662125 250 hunch net-2007-06-23-Machine Learning Jobs are Growing on Trees

9 0.081050448 107 hunch net-2005-09-05-Site Update

10 0.06873329 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

11 0.063836835 441 hunch net-2011-08-15-Vowpal Wabbit 6.0

12 0.063353173 452 hunch net-2012-01-04-Why ICML? and the summer conferences

13 0.062369037 406 hunch net-2010-08-22-KDD 2010

14 0.056583982 326 hunch net-2008-11-11-COLT CFP

15 0.05285297 240 hunch net-2007-04-21-Videolectures.net

16 0.052057549 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

17 0.050550826 24 hunch net-2005-02-19-Machine learning reading groups

18 0.050169736 464 hunch net-2012-05-03-Microsoft Research, New York City

19 0.047875986 46 hunch net-2005-03-24-The Role of Workshops

20 0.047342181 416 hunch net-2010-10-29-To Vidoelecture or not


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.073), (1, -0.059), (2, -0.064), (3, -0.025), (4, -0.018), (5, 0.008), (6, -0.029), (7, -0.018), (8, -0.026), (9, -0.022), (10, -0.02), (11, 0.017), (12, -0.048), (13, 0.018), (14, 0.005), (15, -0.027), (16, -0.039), (17, -0.045), (18, 0.01), (19, 0.02), (20, 0.077), (21, 0.003), (22, -0.071), (23, -0.008), (24, -0.12), (25, -0.043), (26, 0.057), (27, 0.066), (28, -0.141), (29, 0.017), (30, 0.013), (31, 0.08), (32, 0.015), (33, 0.039), (34, 0.094), (35, -0.017), (36, -0.101), (37, 0.057), (38, 0.083), (39, -0.049), (40, -0.188), (41, -0.03), (42, 0.001), (43, -0.098), (44, -0.032), (45, -0.036), (46, -0.111), (47, -0.01), (48, -0.028), (49, -0.065)]
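In the LSI case each post is a dense 50-dimensional topic vector like the one above, and the simValue column is again, in outline, a cosine between such vectors. A minimal sketch using only the first five topic weights for brevity; the second vector is a hypothetical neighbor:

```python
from math import sqrt

def cosine_sim_dense(u, v):
    # Cosine similarity between two dense topic-weight vectors.
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = sqrt(sum(x * x for x in u))
    norm_v = sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# First few LSI topic weights for this post, from the list above
# (truncated to five dimensions purely for illustration).
this_post = [0.073, -0.059, -0.064, -0.025, -0.018]
other_post = [0.065, -0.050, -0.070, 0.010, -0.020]  # hypothetical

print(round(cosine_sim_dense(this_post, other_post), 3))
```

In the full computation all 50 topic weights would be used, which is why the same-blog entry below scores near, but not exactly, 1.0 after any re-normalization in the pipeline.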

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95724201 342 hunch net-2009-02-16-KDNuggets

Introduction: Eric Zaetsch points out KDNuggets which is a well-developed mailing list/news site with a KDD flavor. This might particularly interest people looking for industrial jobs in machine learning, as the mailing list has many such.

2 0.80562955 278 hunch net-2007-12-17-New Machine Learning mailing list

Introduction: IMLS (which is the nonprofit running ICML) has set up a new mailing list for Machine Learning News . The list address is ML-news@googlegroups.com, and signup requires a google account (which you can create). Only members can send messages.

3 0.66659957 24 hunch net-2005-02-19-Machine learning reading groups

Introduction: Yaroslav collected an extensive list of machine learning reading groups .

4 0.58611858 10 hunch net-2005-02-02-Kolmogorov Complexity and Googling

Introduction: Machine learning makes the New Scientist . From the article: COMPUTERS can learn the meaning of words simply by plugging into Google. The finding could bring forward the day that true artificial intelligence is developed. But Paul Vitanyi and Rudi Cilibrasi of the National Institute for Mathematics and Computer Science in Amsterdam, the Netherlands, realised that a Google search can be used to measure how closely two words relate to each other. For instance, imagine a computer needs to understand what a hat is. You can read the paper at KC Google . Hat tip: Kolmogorov Mailing List. Any thoughts on the paper?

5 0.48902345 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

Introduction: I just created version 5.1 of vowpal wabbit . This is almost entirely a bugfix release, so it’s an easy upgrade from v5.0. In addition: There is now a mailing list , which I and several other developers are subscribed to. The main website has shifted to the wiki on github. This means that anyone with a github account can now edit it. I’m planning to give a tutorial tomorrow on it at eHarmony / the LA machine learning meetup at 10am. Drop by if you’re interested. The status of VW amongst other open source projects has changed. When VW first came out, it was relatively unique amongst existing projects in terms of features. At this point, many other projects have started to appreciate the value of the design choices here. This includes: Mahout , which now has an SGD implementation. Shogun , where Soeren is keen on incorporating features . LibLinear , where they won the KDD best paper award for out-of-core learning . This is expected—any open sourc

6 0.46551165 240 hunch net-2007-04-21-Videolectures.net

7 0.44132853 354 hunch net-2009-05-17-Server Update

8 0.42861384 173 hunch net-2006-04-17-Rexa is live

9 0.41811362 212 hunch net-2006-10-04-Health of Conferences Wiki

10 0.41675192 335 hunch net-2009-01-08-Predictive Analytics World

11 0.41129583 446 hunch net-2011-10-03-Monday announcements

12 0.4068175 481 hunch net-2013-04-15-NEML II

13 0.40472403 137 hunch net-2005-12-09-Machine Learning Thoughts

14 0.40213183 1 hunch net-2005-01-19-Why I decided to run a weblog.

15 0.37729076 418 hunch net-2010-12-02-Traffic Prediction Problem

16 0.36731476 250 hunch net-2007-06-23-Machine Learning Jobs are Growing on Trees

17 0.36232615 433 hunch net-2011-04-23-ICML workshops due

18 0.35395807 15 hunch net-2005-02-08-Some Links

19 0.3508631 113 hunch net-2005-09-19-NIPS Workshops

20 0.34782398 328 hunch net-2008-11-26-Efficient Reinforcement Learning in MDPs


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.221), (53, 0.16), (58, 0.388)]
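The LDA representation above is a sparse list of (topicId, weight) pairs; weights below the model's display threshold are omitted, so they need not sum to 1. The simValue column reports a similarity, but a common way to compare LDA topic distributions is Hellinger distance. A minimal sketch, assuming a hypothetical 60-topic model (topic ids above reach 58) and an illustrative second post:

```python
from math import sqrt

def hellinger(p, q, num_topics=60):
    # Hellinger distance between two sparse LDA topic distributions,
    # each given as a list of (topicId, weight) pairs; topics not
    # listed are treated as zero weight.
    dp, dq = dict(p), dict(q)
    s = sum((sqrt(dp.get(t, 0.0)) - sqrt(dq.get(t, 0.0))) ** 2
            for t in range(num_topics))
    return sqrt(s) / sqrt(2)

# Topic weights for this post, from the list above.
this_post = [(27, 0.221), (53, 0.16), (58, 0.388)]
other_post = [(27, 0.2), (58, 0.4), (12, 0.3)]  # hypothetical neighbor

print(round(hellinger(this_post, this_post), 3))  # identical posts: 0.0
print(round(hellinger(this_post, other_post), 3))
```

Smaller distances mean more similar topic mixtures; a distance could be turned into a similarity by, for example, 1 minus the distance.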

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.86702657 342 hunch net-2009-02-16-KDNuggets

Introduction: Eric Zaetsch points out KDNuggets which is a well-developed mailing list/news site with a KDD flavor. This might particularly interest people looking for industrial jobs in machine learning, as the mailing list has many such.

2 0.8385796 349 hunch net-2009-04-21-Interesting Presentations at Snowbird

Introduction: Here are a few of the presentations interesting me at the snowbird learning workshop (which, amusingly, was in Florida with AIStat ). Thomas Breuel described machine learning problems within OCR and an open source OCR software/research platform with modular learning components as well has a 60Million size dataset derived from Google ‘s scanned books. Kristen Grauman and Fei-Fei Li discussed using active learning with different cost labels and large datasets for image ontology . Both of them used Mechanical Turk as a labeling system , which looks to become routine, at least for vision problems. Russ Tedrake discussed using machine learning for control, with a basic claim that it was the way to go for problems involving a medium Reynold’s number such as in bird flight, where simulation is extremely intense. Yann LeCun presented a poster on an FPGA for convolutional neural networks yielding a factor of 100 speedup in processing. In addition to the graphi

3 0.80821526 470 hunch net-2012-07-17-MUCMD and BayLearn

Introduction: The workshop on the Meaningful Use of Complex Medical Data is happening again, August 9-12 in LA, near UAI on Catalina Island August 15-17. I enjoyed my visit last year, and expect this year to be interesting also. The first Bay Area Machine Learning Symposium is August 30 at Google . Abstracts are due July 30.

4 0.68118417 149 hunch net-2006-01-18-Is Multitask Learning Black-Boxable?

Introduction: Multitask learning is learning to predict multiple outputs given the same input. Mathematically, we might think of this as trying to learn a function f: X -> {0,1}^n . Structured learning is similar at this level of abstraction. Many people have worked on solving multitask learning (for example Rich Caruana ) using methods which share an internal representation. In other words, the computation and learning of the i-th prediction is shared with the computation and learning of the j-th prediction. Another way to ask this question is: can we avoid sharing the internal representation? For example, it might be feasible to solve multitask learning by some process feeding the i-th prediction f(x)_i into the j-th predictor f(x, f(x)_i)_j . If the answer is “no”, then it implies we can not take binary classification as a basic primitive in the process of solving prediction problems. If the answer is “yes”, then we can reuse binary classification algorithms to

5 0.5511148 6 hunch net-2005-01-27-Learning Complete Problems

Introduction: Let’s define a learning problem as making predictions given past data. There are several ways to attack the learning problem which seem to be equivalent to solving the learning problem. Find the Invariant This viewpoint says that learning is all about learning (or incorporating) transformations of objects that do not change the correct prediction. The best possible invariant is the one which says “all things of the same class are the same”. Finding this is equivalent to learning. This viewpoint is particularly common when working with image features. Feature Selection This viewpoint says that the way to learn is by finding the right features to input to a learning algorithm. The best feature is the one which is the class to predict. Finding this is equivalent to learning for all reasonable learning algorithms. This viewpoint is common in several applications of machine learning. See Gilad’s and Bianca’s comments . Find the Representation This is almost the same a

6 0.53520131 201 hunch net-2006-08-07-The Call of the Deep

7 0.52342534 483 hunch net-2013-06-10-The Large Scale Learning class notes

8 0.52012831 367 hunch net-2009-08-16-Centmail comments

9 0.51872838 60 hunch net-2005-04-23-Advantages and Disadvantages of Bayesian Learning

10 0.51776701 152 hunch net-2006-01-30-Should the Input Representation be a Vector?

11 0.51711434 227 hunch net-2007-01-10-A Deep Belief Net Learning Problem

12 0.51202869 2 hunch net-2005-01-24-Holy grails of machine learning?

13 0.50938863 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

14 0.50855255 158 hunch net-2006-02-24-A Fundamentalist Organization of Machine Learning

15 0.50479466 9 hunch net-2005-02-01-Watchword: Loss

16 0.50375229 131 hunch net-2005-11-16-The Everything Ensemble Edge

17 0.50344348 91 hunch net-2005-07-10-Thinking the Unthought

18 0.50332135 27 hunch net-2005-02-23-Problem: Reinforcement Learning with Classification

19 0.50307399 67 hunch net-2005-05-06-Don’t mix the solution into the problem

20 0.5023241 12 hunch net-2005-02-03-Learning Theory, by assumption