hunch_net hunch_net-2006 hunch_net-2006-166 knowledge-graph by maker-knowledge-mining

166 hunch net-2006-03-24-NLPers


meta info for this blog

Source: html

Introduction: Hal Daume has started the NLPers blog to discuss learning for language problems.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Hal Daume has started the NLPers blog to discuss learning for language problems. [sent-1, score-1.445]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('daume', 0.539), ('hal', 0.436), ('discuss', 0.386), ('blog', 0.365), ('started', 0.327), ('language', 0.327), ('problems', 0.147), ('learning', 0.04)]
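Per-word tf-idf weights like those above are typically compared across posts with cosine similarity over the sparse word-weight vectors. A minimal sketch (the weights are copied from the topN-words list above; the function name is illustrative, not part of this pipeline):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse word -> tf-idf weight maps."""
    dot = sum(w * b.get(term, 0.0) for term, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

# tf-idf vector for this post, taken from the topN-words list above
post = {'daume': 0.539, 'hal': 0.436, 'discuss': 0.386, 'blog': 0.365,
        'started': 0.327, 'language': 0.327, 'problems': 0.147,
        'learning': 0.04}

# comparing the post with itself gives ~1.0, consistent with the
# near-1 same-blog simValue in the list below
print(round(cosine_similarity(post, post), 6))
```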

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 166 hunch net-2006-03-24-NLPers

Introduction: Hal Daume has started the NLPers blog to discuss learning for language problems.

2 0.26505208 59 hunch net-2005-04-22-New Blog: [Lowerbounds,Upperbounds]

Introduction: Maverick Woo and the Aladdin group at CMU have started a CS theory-related blog here.

3 0.17824109 225 hunch net-2007-01-02-Retrospective

Introduction: It’s been almost two years since this blog began. In that time, I’ve learned enough to shift my expectations in several ways. Initially, the idea was for a general purpose ML blog where different people could contribute posts. What has actually happened is most posts come from me, with a few guest posts that I greatly value. There are a few reasons I see for this. Overload . A couple years ago, I had not fully appreciated just how busy life gets for a researcher. Making a post is not simply a matter of getting to it, but rather of prioritizing between {writing a grant, finishing an overdue review, writing a paper, teaching a class, writing a program, etc…}. This is a substantial transition away from what life as a graduate student is like. At some point the question is not “when will I get to it?” but rather “will I get to it?” and the answer starts to become “no” most of the time. Feedback failure . This blog currently receives about 3K unique visitors per day from

4 0.16848539 214 hunch net-2006-10-13-David Pennock starts Oddhead

Introduction: His blog on information markets and other research topics.

5 0.15164636 486 hunch net-2013-07-10-Thoughts on Artificial Intelligence

Introduction: David McAllester starts a blog .

6 0.14091074 96 hunch net-2005-07-21-Six Months

7 0.13807607 383 hunch net-2009-12-09-Inherent Uncertainty

8 0.12555657 92 hunch net-2005-07-11-AAAI blog

9 0.12369435 350 hunch net-2009-04-23-Jonathan Chang at Slycoder

10 0.12179583 467 hunch net-2012-06-15-Normal Deviate and the UCSC Machine Learning Summer School

11 0.11438545 114 hunch net-2005-09-20-Workshop Proposal: Atomic Learning

12 0.10721032 434 hunch net-2011-05-09-CI Fellows, again

13 0.10495805 84 hunch net-2005-06-22-Languages of Learning

14 0.099308193 480 hunch net-2013-03-22-I’m a bandit

15 0.084521383 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

16 0.080861919 412 hunch net-2010-09-28-Machined Learnings

17 0.079869211 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three

18 0.078478113 402 hunch net-2010-07-02-MetaOptimize

19 0.075630233 361 hunch net-2009-06-24-Interesting papers at UAICMOLT 2009

20 0.07526169 8 hunch net-2005-02-01-NIPS: Online Bayes


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.06), (1, -0.003), (2, -0.072), (3, 0.05), (4, -0.068), (5, -0.013), (6, 0.048), (7, -0.378), (8, 0.167), (9, -0.023), (10, -0.01), (11, -0.036), (12, 0.01), (13, -0.026), (14, -0.065), (15, -0.041), (16, 0.059), (17, -0.0), (18, -0.079), (19, -0.095), (20, 0.046), (21, -0.003), (22, 0.043), (23, -0.008), (24, -0.012), (25, 0.034), (26, -0.067), (27, -0.05), (28, 0.024), (29, 0.1), (30, -0.011), (31, 0.046), (32, 0.091), (33, 0.09), (34, -0.056), (35, 0.051), (36, 0.013), (37, 0.027), (38, 0.073), (39, 0.022), (40, 0.065), (41, -0.018), (42, -0.048), (43, -0.086), (44, -0.001), (45, -0.004), (46, -0.028), (47, 0.001), (48, 0.11), (49, -0.017)]
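The LSI topic weights above come from projecting documents into a low-dimensional latent space, classically via a truncated SVD of the term-document matrix; documents are then compared by cosine similarity in that space. A toy sketch (the matrix and the choice of k are made up for illustration and are not taken from this pipeline):

```python
import numpy as np

# hypothetical tiny term-document count matrix (rows: terms, cols: docs)
X = np.array([[2.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])

# LSI = truncated SVD: keep only the top-k latent topics
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T  # one row per doc, k topic weights

def cos(u, v):
    """Cosine similarity of two dense topic-weight vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# similarity of document 0 to every document in the latent topic space
sims = [cos(doc_topics[0], doc_topics[j]) for j in range(X.shape[1])]
```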

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97710603 166 hunch net-2006-03-24-NLPers

Introduction: Hal Daume has started the NLPers blog to discuss learning for language problems.

2 0.84275663 486 hunch net-2013-07-10-Thoughts on Artificial Intelligence

Introduction: David McAllester starts a blog .

3 0.82272083 59 hunch net-2005-04-22-New Blog: [Lowerbounds,Upperbounds]

Introduction: Maverick Woo and the Aladdin group at CMU have started a CS theory-related blog here.

4 0.76676565 350 hunch net-2009-04-23-Jonathan Chang at Slycoder

Introduction: Jonathan Chang has a research blog on aspects of machine learning.

5 0.74170184 214 hunch net-2006-10-13-David Pennock starts Oddhead

Introduction: His blog on information markets and other research topics.

6 0.63628292 383 hunch net-2009-12-09-Inherent Uncertainty

7 0.61922139 225 hunch net-2007-01-02-Retrospective

8 0.57262814 480 hunch net-2013-03-22-I’m a bandit

9 0.55234957 96 hunch net-2005-07-21-Six Months

10 0.54171282 92 hunch net-2005-07-11-AAAI blog

11 0.50356269 402 hunch net-2010-07-02-MetaOptimize

12 0.49795124 467 hunch net-2012-06-15-Normal Deviate and the UCSC Machine Learning Summer School

13 0.4209533 296 hunch net-2008-04-21-The Science 2.0 article

14 0.4070777 182 hunch net-2006-06-05-Server Shift, Site Tweaks, Suggestions?

15 0.34849924 84 hunch net-2005-06-22-Languages of Learning

16 0.34805003 412 hunch net-2010-09-28-Machined Learnings

17 0.27443725 151 hunch net-2006-01-25-1 year

18 0.26277748 210 hunch net-2006-09-28-Programming Languages for Machine Learning Implementations

19 0.26186937 70 hunch net-2005-05-12-Math on the Web

20 0.25776258 122 hunch net-2005-10-13-Site tweak


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.722)]
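LDA represents each post as a sparse distribution over topics; here nearly all of this post's mass sits on topic 27. How this pipeline compares topic mixtures is not stated; one common choice is Hellinger distance, sketched below with illustrative names:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two sparse topicId -> weight maps.
    0.0 means identical mixtures; 1.0 means fully disjoint ones."""
    topics = set(p) | set(q)
    s = sum((math.sqrt(p.get(t, 0.0)) - math.sqrt(q.get(t, 0.0))) ** 2
            for t in topics)
    return math.sqrt(s / 2.0)

this_post = {27: 0.722}  # LDA topic weight from the list above

# a post is at distance 0 from itself; a similarity score can be
# taken as 1 - distance
print(hellinger(this_post, this_post))
```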

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 166 hunch net-2006-03-24-NLPers

Introduction: Hal Daume has started the NLPers blog to discuss learning for language problems.

2 1.0 246 hunch net-2007-06-13-Not Posting

Introduction: If you have been disappointed by the lack of a post for the last month, consider contributing your own (I’ve been busy+uninspired). Also, keep in mind that there is a community of machine learning blogs (see the sidebar).

3 1.0 418 hunch net-2010-12-02-Traffic Prediction Problem

Introduction: Slashdot points out the Traffic Prediction Challenge which looks pretty fun. The temporal aspect seems to be very common in many real-world problems and somewhat understudied.

4 0.99896955 274 hunch net-2007-11-28-Computational Consequences of Classification

Introduction: In the regression vs classification debate, I’m adding a new “pro” to classification. It seems there are computational shortcuts available for classification which simply aren’t available for regression. This arises in several situations. In active learning it is sometimes possible to find an e error classifier with just log(e) labeled samples. Only much more modest improvements appear to be achievable for squared loss regression. The essential reason is that the loss function on many examples is flat with respect to large variations in the parameter spaces of a learned classifier, which implies that many of these classifiers do not need to be considered. In contrast, for squared loss regression, most substantial variations in the parameter space influence the loss at most points. In budgeted learning, where there is either a computational time constraint or a feature cost constraint, a classifier can sometimes be learned to very high accuracy under the constraints

5 0.99732149 247 hunch net-2007-06-14-Interesting Papers at COLT 2007

Introduction: Here are two papers that seem particularly interesting at this year’s COLT. Gilles Blanchard and François Fleuret, Occam’s Hammer. When we are interested in very tight bounds on the true error rate of a classifier, it is tempting to use a PAC-Bayes bound which can (empirically) be quite tight. A disadvantage of the PAC-Bayes bound is that it applies to a classifier which is randomized over a set of base classifiers rather than a single classifier. This paper shows that a similar bound can be proved which holds for a single classifier drawn from the set. The ability to safely use a single classifier is very nice. This technique applies generically to any base bound, so it has other applications covered in the paper. Adam Tauman Kalai. Learning Nested Halfspaces and Uphill Decision Trees. Classification PAC-learning, where you prove that any problem amongst some set is polytime learnable with respect to any distribution over the input X is extraordinarily ch

6 0.99507952 308 hunch net-2008-07-06-To Dual or Not

7 0.99310011 400 hunch net-2010-06-13-The Good News on Exploration and Learning

8 0.99267262 245 hunch net-2007-05-12-Loss Function Semantics

9 0.9924919 172 hunch net-2006-04-14-JMLR is a success

10 0.99159312 288 hunch net-2008-02-10-Complexity Illness

11 0.98636121 45 hunch net-2005-03-22-Active learning

12 0.97499895 9 hunch net-2005-02-01-Watchword: Loss

13 0.96963954 341 hunch net-2009-02-04-Optimal Proxy Loss for Classification

14 0.96788663 352 hunch net-2009-05-06-Machine Learning to AI

15 0.95890832 304 hunch net-2008-06-27-Reviewing Horror Stories

16 0.95277888 196 hunch net-2006-07-13-Regression vs. Classification as a Primitive

17 0.94472241 483 hunch net-2013-06-10-The Large Scale Learning class notes

18 0.94137728 244 hunch net-2007-05-09-The Missing Bound

19 0.93332106 293 hunch net-2008-03-23-Interactive Machine Learning

20 0.92975235 8 hunch net-2005-02-01-NIPS: Online Bayes