hunch_net-2005-24 knowledge-graph by maker-knowledge-mining

24 hunch net-2005-02-19-Machine learning reading groups


meta info for this blog

Source: html

Introduction: Yaroslav collected an extensive list of machine learning reading groups.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Yaroslav collected an extensive list of machine learning reading groups. [sent-1, score-1.987]
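As an aside, here is a minimal sketch of how such sentence scores could arise, assuming each sentence is scored by the sum of its words' tfidf weights against a corpus-wide model; the toy corpus below is a hypothetical stand-in for the hunch.net archive, not the pipeline's actual code.

```python
# Minimal sketch: score sentences by summed tfidf weight (assumed scoring rule).
# The corpus is a hypothetical stand-in; the real pipeline's details are unknown.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "Yaroslav collected an extensive list of machine learning reading groups.",
    "Yaroslav Bulatov collects some links to other technical blogs.",
    "IMLS has set up a new mailing list for Machine Learning News.",
]
sentences = [corpus[0]]  # sentences of the post being summarized (just one here)

vectorizer = TfidfVectorizer().fit(corpus)        # idf estimated on the whole corpus
tfidf = vectorizer.transform(sentences)           # tfidf weights per sentence
scores = np.asarray(tfidf.sum(axis=1)).ravel()    # sentence score = summed weight
for idx, (sent, score) in enumerate(zip(sentences, scores), start=1):
    print(idx, sent, round(float(score), 3))
```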


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('yaroslav', 0.502), ('collected', 0.479), ('extensive', 0.479), ('groups', 0.351), ('reading', 0.311), ('list', 0.25), ('machine', 0.083), ('learning', 0.034)]
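The simValue scores in the lists below are plausibly cosine similarities between tfidf vectors of this kind. A hedged sketch under that assumption; the three-post corpus is an illustrative stand-in for the full archive:

```python
# Minimal sketch: tfidf cosine similarity between post introductions.
# The toy corpus and blogIds stand in for the full hunch.net archive (assumption).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {  # blogId -> introduction text
    24: "Yaroslav collected an extensive list of machine learning reading groups.",
    15: "Yaroslav Bulatov collects some links to other technical blogs.",
    278: "IMLS has set up a new mailing list for Machine Learning News.",
}
ids = list(posts)
X = TfidfVectorizer().fit_transform(posts.values())
sims = cosine_similarity(X[ids.index(24)], X)[0]  # post 24 against every post
for blog_id, sim in sorted(zip(ids, sims), key=lambda t: -t[1]):
    print(blog_id, round(sim, 3))                 # post 24 itself comes out at 1.0
```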

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 24 hunch net-2005-02-19-Machine learning reading groups

Introduction: Yaroslav collected an extensive list of machine learning reading groups.

2 0.21243118 15 hunch net-2005-02-08-Some Links

Introduction: Yaroslav Bulatov collects some links to other technical blogs.

3 0.083916813 278 hunch net-2007-12-17-New Machine Learning mailing list

Introduction: IMLS (which is the nonprofit running ICML) has set up a new mailing list for Machine Learning News. The list address is ML-news@googlegroups.com, and signup requires a Google account (which you can create). Only members can send messages.

4 0.066110864 415 hunch net-2010-10-28-NY ML Symposium 2010

Introduction: About 200 people attended the 2010 NYAS ML Symposium this year. (It was about 170 last year.) I particularly enjoyed several talks. Yann has a new live demo of (limited) real-time object recognition learning. Sanjoy gave a fairly convincing and comprehensible explanation of why a modified form of single-linkage clustering is consistent in higher dimensions, and why consistency is a critical feature for clustering algorithms. I’m curious how well this algorithm works in practice. Matt Hoffman’s poster covering online LDA seemed pretty convincing to me as an algorithmic improvement. This year, we allocated more time towards posters & poster spotlights. For next year, we are considering some further changes. The format has traditionally been 4 invited Professor speakers, with posters and poster spotlight for students. Demand from other parties to participate is growing, for example from postdocs and startups in the area. Another growing concern is the fa…

5 0.060634878 50 hunch net-2005-04-01-Basic computer science research takes a hit

Introduction: The New York Times has an interesting article about how DARPA has dropped funding for computer science to universities by about a factor of 2 over the last 5 years and become less directed towards basic research. Partially in response, the number of grant submissions to NSF has grown by a factor of 3 (with the NSF budget staying approximately constant in the interim). This is the sort of policy decision which may make sense for the defense department, but which means a large hit for basic research on information technology development in the US. For example “darpa funded the invention of the internet” is reasonably correct. This policy decision is particularly painful in the context of NSF budget cuts and the end of extensive phone monopoly funded research at Bell labs. The good news from a learning perspective is that (based on anecdotal evidence) much of the remaining funding is aimed at learning and learning-related fields. Methods of making good automated predictions obv…

6 0.058099672 463 hunch net-2012-05-02-ICML: Behind the Scenes

7 0.050550826 342 hunch net-2009-02-16-KDNuggets

8 0.049642306 207 hunch net-2006-09-12-Incentive Compatible Reviewing

9 0.048937511 33 hunch net-2005-02-28-Regularization

10 0.044292618 339 hunch net-2009-01-27-Key Scientific Challenges

11 0.042879198 260 hunch net-2007-08-25-The Privacy Problem

12 0.041281443 202 hunch net-2006-08-10-Precision is not accuracy

13 0.040643379 134 hunch net-2005-12-01-The Webscience Future

14 0.039517168 192 hunch net-2006-07-08-Some recent papers

15 0.039350417 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

16 0.036760911 210 hunch net-2006-09-28-Programming Languages for Machine Learning Implementations

17 0.036423817 12 hunch net-2005-02-03-Learning Theory, by assumption

18 0.034759112 113 hunch net-2005-09-19-NIPS Workshops

19 0.03186284 64 hunch net-2005-04-28-Science Fiction and Research

20 0.028878309 262 hunch net-2007-09-16-Optimizing Machine Learning Programs


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.033), (1, -0.011), (2, -0.011), (3, 0.016), (4, -0.009), (5, 0.02), (6, -0.004), (7, 0.014), (8, -0.016), (9, -0.019), (10, 0.011), (11, 0.005), (12, 0.011), (13, -0.01), (14, -0.005), (15, -0.013), (16, -0.033), (17, -0.003), (18, 0.018), (19, -0.004), (20, 0.032), (21, -0.028), (22, -0.025), (23, -0.034), (24, -0.061), (25, -0.05), (26, 0.004), (27, 0.039), (28, -0.019), (29, 0.056), (30, 0.039), (31, 0.061), (32, 0.056), (33, 0.054), (34, -0.039), (35, -0.006), (36, 0.047), (37, 0.024), (38, -0.017), (39, -0.015), (40, -0.102), (41, -0.065), (42, -0.035), (43, -0.098), (44, -0.029), (45, 0.001), (46, -0.124), (47, -0.002), (48, -0.024), (49, -0.023)]
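These topic weights are consistent with LSI computed as a truncated SVD of the tfidf matrix. A minimal sketch under that assumption; the toy corpus and the 2-component cut (versus the 50 topicIds above) are for illustration only:

```python
# Minimal sketch: LSI document-topic weights via truncated SVD of tfidf vectors.
# Assumption: the pipeline's "lsi model" is an SVD of this kind; toy corpus only.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [  # hypothetical stand-in for the full post archive
    "Yaroslav collected an extensive list of machine learning reading groups.",
    "Yaroslav Bulatov collects some links to other technical blogs.",
    "IMLS has set up a new mailing list for Machine Learning News.",
    "Eric Zaetsch points out KDNuggets, a mailing list with a KDD flavor.",
]
X = TfidfVectorizer().fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0)  # 50 components in the run above
doc_topics = lsi.fit_transform(X)                   # rows: posts, cols: topics
print([(t, round(w, 3)) for t, w in enumerate(doc_topics[0])])
```

Similar posts would then be ranked by cosine similarity in this reduced topic space.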

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.90323293 24 hunch net-2005-02-19-Machine learning reading groups

Introduction: Yaroslav collected an extensive list of machine learning reading groups.

2 0.65335023 15 hunch net-2005-02-08-Some Links

Introduction: Yaroslav Bulatov collects some links to other technical blogs.

3 0.64411795 278 hunch net-2007-12-17-New Machine Learning mailing list

Introduction: IMLS (which is the nonprofit running ICML) has set up a new mailing list for Machine Learning News. The list address is ML-news@googlegroups.com, and signup requires a Google account (which you can create). Only members can send messages.

4 0.57876384 342 hunch net-2009-02-16-KDNuggets

Introduction: Eric Zaetsch points out KDNuggets, which is a well-developed mailing list/news site with a KDD flavor. This might particularly interest people looking for industrial jobs in machine learning, as the mailing list has many such postings.

5 0.42288595 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

Introduction: I just created version 5.1 of vowpal wabbit. This is almost entirely a bugfix release, so it’s an easy upgrade from v5.0. In addition: There is now a mailing list, which I and several other developers are subscribed to. The main website has shifted to the wiki on github. This means that anyone with a github account can now edit it. I’m planning to give a tutorial tomorrow on it at eHarmony / the LA machine learning meetup at 10am. Drop by if you’re interested. The status of VW amongst other open source projects has changed. When VW first came out, it was relatively unique amongst existing projects in terms of features. At this point, many other projects have started to appreciate the value of the design choices here. This includes: Mahout, which now has an SGD implementation. Shogun, where Soeren is keen on incorporating features. LibLinear, where they won the KDD best paper award for out-of-core learning. This is expected—any open sourc…

6 0.38242009 33 hunch net-2005-02-28-Regularization

7 0.36143374 173 hunch net-2006-04-17-Rexa is live

8 0.36105385 10 hunch net-2005-02-02-Kolmogorov Complexity and Googling

9 0.33890784 193 hunch net-2006-07-09-The Stock Prediction Machine Learning Problem

10 0.32381785 463 hunch net-2012-05-02-ICML: Behind the Scenes

11 0.29786339 207 hunch net-2006-09-12-Incentive Compatible Reviewing

12 0.29743281 1 hunch net-2005-01-19-Why I decided to run a weblog.

13 0.29289564 210 hunch net-2006-09-28-Programming Languages for Machine Learning Implementations

14 0.28902832 84 hunch net-2005-06-22-Languages of Learning

15 0.2836692 338 hunch net-2009-01-23-An Active Learning Survey

16 0.28364757 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

17 0.27787468 50 hunch net-2005-04-01-Basic computer science research takes a hit

18 0.27739295 190 hunch net-2006-07-06-Branch Prediction Competition

19 0.27632862 113 hunch net-2005-09-19-NIPS Workshops

20 0.27295265 55 hunch net-2005-04-10-Is the Goal Understanding or Prediction?


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(12, 0.682), (27, 0.037)]
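The sparse weights above match standard LDA inference, where each post receives a distribution over topics and only weights above a small threshold are listed. A minimal sketch under that assumption; the toy corpus, 2 topics, and 0.01 threshold are illustrative, not the real settings:

```python
# Minimal sketch: LDA document-topic weights from bag-of-words counts.
# Assumption: the pipeline's "lda model" is standard LDA; toy corpus only.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = [  # hypothetical stand-in for the full post archive
    "Yaroslav collected an extensive list of machine learning reading groups.",
    "Yaroslav Bulatov collects some links to other technical blogs.",
    "IMLS has set up a new mailing list for Machine Learning News.",
    "Eric Zaetsch points out KDNuggets, a mailing list with a KDD flavor.",
]
X = CountVectorizer().fit_transform(corpus)         # LDA expects raw word counts
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)                   # each row sums to 1
print([(t, round(w, 3)) for t, w in enumerate(doc_topics[0]) if w > 0.01])
```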

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.89977211 24 hunch net-2005-02-19-Machine learning reading groups

Introduction: Yaroslav collected an extensive list of machine learning reading groups.

2 0.81445467 421 hunch net-2011-01-03-Herman Goldstine 2011

Introduction: Vikas points out the Herman Goldstine Fellowship at IBM. I was a Herman Goldstine Fellow, and benefited from the experience a great deal—that’s where work on learning reductions started. If you can do research independently, it’s recommended. Applications are due January 6.

3 0.63799077 482 hunch net-2013-05-04-COLT and ICML registration

Introduction: Sebastien Bubeck points out COLT registration with a May 13 early registration deadline. The local organizers have done an admirable job of containing costs with a $300 registration fee. ICML registration is also available, at about 3x the cost. My understanding is that this is partly due to the costs of a larger conference being harder to contain, partly due to ICML lasting twice as long with tutorials and workshops, and partly because the conference organizers were a bit over-conservative in various ways.

4 0.45194098 438 hunch net-2011-07-11-Interesting Neural Network Papers at ICML 2011

Introduction: Maybe it’s too early to call, but with four separate Neural Network sessions at this year’s ICML, it looks like Neural Networks are making a comeback. Here are my highlights of these sessions. In general, my feeling is that these papers both demystify deep learning and show its broader applicability. The first observation I made is that the once disreputable “Neural” nomenclature is being used again in lieu of “deep learning”. Maybe it’s because Adam Coates et al. showed that single layer networks can work surprisingly well. An Analysis of Single-Layer Networks in Unsupervised Feature Learning, Adam Coates, Honglak Lee, Andrew Y. Ng (AISTATS 2011) The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization, Adam Coates, Andrew Y. Ng (ICML 2011) Another surprising result out of Andrew Ng’s group comes from Andrew Saxe et al. who show that certain convolutional pooling architectures can obtain close to state-of-the-art pe…

5 0.21049249 311 hunch net-2008-07-26-Compositional Machine Learning Algorithm Design

Introduction: There were two papers at ICML presenting learning algorithms for a contextual bandit-style setting, where the loss for all labels is not known, but the loss for one label is known. (The first might require an exploration scavenging viewpoint to understand if the experimental assignment was nonrandom.) I strongly approve of these papers and further work in this setting and its variants, because I expect it to become more important than supervised learning. As a quick review, we are thinking about situations where repeatedly: The world reveals feature values (aka context information). A policy chooses an action. The world provides a reward. Sometimes this is done in an online fashion where the policy can change based on immediate feedback and sometimes it’s done in a batch setting where many samples are collected before the policy can change. If you haven’t spent time thinking about the setting, you might want to because there are many natural applications. I’m g…

6 0.16491111 259 hunch net-2007-08-19-Choice of Metrics

7 0.075059421 18 hunch net-2005-02-12-ROC vs. Accuracy vs. AROC

8 0.070559144 74 hunch net-2005-05-21-What is the right form of modularity in structured prediction?

9 0.070453569 449 hunch net-2011-11-26-Giving Thanks

10 0.06957908 343 hunch net-2009-02-18-Decision by Vetocracy

11 0.066711351 411 hunch net-2010-09-21-Regretting the dead

12 0.053609975 166 hunch net-2006-03-24-NLPers

13 0.053609975 246 hunch net-2007-06-13-Not Posting

14 0.053609975 418 hunch net-2010-12-02-Traffic Prediction Problem

15 0.053554732 274 hunch net-2007-11-28-Computational Consequences of Classification

16 0.05346638 247 hunch net-2007-06-14-Interesting Papers at COLT 2007

17 0.053346187 308 hunch net-2008-07-06-To Dual or Not

18 0.053240072 400 hunch net-2010-06-13-The Good News on Exploration and Learning

19 0.053217154 245 hunch net-2007-05-12-Loss Function Semantics

20 0.053207465 172 hunch net-2006-04-14-JMLR is a success