hunch_net-2007-246 knowledge-graph by maker-knowledge-mining

246 hunch net-2007-06-13-Not Posting


meta information for this blog post

Source: html

Introduction: If you have been disappointed by the lack of a post for the last month, consider contributing your own (I’ve been busy+uninspired). Also, keep in mind that there is a community of machine learning blogs (see the sidebar).


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText [sentNum, sentScore]

1 If you have been disappointed by the lack of a post for the last month, consider contributing your own (I’ve been busy+uninspired). [sent-1, score-1.506]

2 Also, keep in mind that there is a community of machine learning blogs (see the sidebar). [sent-2, score-1.161]
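A plausible reading of these scores (an assumption; the page does not document its pipeline) is the standard tf-idf weight

\[
\mathrm{tfidf}(t,d) = \mathrm{tf}(t,d) \cdot \log\frac{N}{\mathrm{df}(t)},
\]

where tf(t,d) is the count of term t in document d, df(t) is the number of documents containing t, and N is the number of documents; each sentence's score would then be the accumulated tf-idf weight of its terms.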


similar blog posts computed by the tf-idf model

tf-idf weights for this blog post:

wordName wordTfidf (topN-words)

[('contributing', 0.417), ('disappointed', 0.394), ('blogs', 0.327), ('month', 0.327), ('busy', 0.319), ('mind', 0.271), ('keep', 0.266), ('lack', 0.221), ('community', 0.205), ('post', 0.167), ('consider', 0.156), ('last', 0.151), ('ve', 0.136), ('see', 0.119), ('also', 0.082), ('machine', 0.065), ('learning', 0.027)]
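The following is a minimal sketch of how weights and sentence scores of this shape could be reproduced, assuming scikit-learn's TfidfVectorizer and a sum-of-weights scoring rule; the two-sentence corpus comes from the post above, and none of this is the actual maker-knowledge-mining pipeline.

```python
# Minimal sketch, assuming scikit-learn and a toy two-sentence corpus;
# not the pipeline that generated the numbers above.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "If you have been disappointed by the lack of a post for the last "
    "month, consider contributing your own (I've been busy+uninspired).",
    "Also, keep in mind that there is a community of machine learning "
    "blogs (see the sidebar).",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)  # one tf-idf row per sentence

# Top-weighted words, analogous to the (wordName, wordTfidf) list above.
vocab = vectorizer.get_feature_names_out()
weights = np.asarray(X.sum(axis=0)).ravel()
print(sorted(zip(vocab, weights), key=lambda p: -p[1])[:10])

# Score each sentence by its total tf-idf mass, as in the summary table.
for i in range(X.shape[0]):
    print(i + 1, round(X[i].sum(), 3))
```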

similar blog posts list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 246 hunch net-2007-06-13-Not Posting

Introduction: If you have been disappointed by the lack of a post for the last month, consider contributing your own (I’ve been busy+uninspired). Also, keep in mind that there is a community of machine learning blogs (see the sidebar).

2 0.11797123 225 hunch net-2007-01-02-Retrospective

Introduction: It’s been almost two years since this blog began. In that time, I’ve learned enough to shift my expectations in several ways. Initially, the idea was for a general-purpose ML blog where different people could contribute posts. What has actually happened is most posts come from me, with a few guest posts that I greatly value. There are a few reasons I see for this. Overload. A couple years ago, I had not fully appreciated just how busy life gets for a researcher. Making a post is not simply a matter of getting to it, but rather of prioritizing between {writing a grant, finishing an overdue review, writing a paper, teaching a class, writing a program, etc…}. This is a substantial transition away from what life as a graduate student is like. At some point the question is not “when will I get to it?” but rather “will I get to it?” and the answer starts to become “no” most of the time. Feedback failure. This blog currently receives about 3K unique visitors per day from…

3 0.11527989 15 hunch net-2005-02-08-Some Links

Introduction: Yaroslav Bulatov collects some links to other technical blogs.

4 0.072171785 354 hunch net-2009-05-17-Server Update

Introduction: The hunch.net server has been updated. I’ve taken the opportunity to upgrade the version of WordPress, which caused cascading changes. Old threaded comments are now flattened. The system we used to use (Brian’s threaded comments) appears incompatible with the new threading system built into WordPress. I haven’t yet figured out a workaround. I set up a FeedBurner account. I added an RSS aggregator for both machine learning and other research blogs that I like to follow. This is something that I’ve wanted to do for a while. Many other minor changes in font and format, with some help from Alina. If you have any suggestions for site tweaks, please speak up.

5 0.069161452 39 hunch net-2005-03-10-Breaking Abstractions

Introduction: Sam Roweis’s comment reminds me of a more general issue that comes up in doing research: abstractions always break. Real numbers aren’t. Most real numbers cannot be represented with any machine. One implication of this is that many real-number-based algorithms have difficulties when implemented with floating-point numbers. The box on your desk is not a Turing machine. A Turing machine can compute anything computable, given sufficient time. A typical computer fails terribly when the state required for the computation exceeds some limit. Nash equilibria aren’t equilibria. This comes up when trying to predict human behavior based on the result of the equilibria computation. Often, it doesn’t work. The probability isn’t. Probability is an abstraction expressing either our lack of knowledge (the Bayesian viewpoint) or fundamental randomization (the frequentist viewpoint). From the frequentist viewpoint, the lack of knowledge typically precludes actually knowing the fu…

6 0.066026017 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

7 0.064820088 25 hunch net-2005-02-20-At One Month

8 0.05968713 89 hunch net-2005-07-04-The Health of COLT

9 0.059684306 96 hunch net-2005-07-21-Six Months

10 0.059272237 151 hunch net-2006-01-25-1 year

11 0.058872584 468 hunch net-2012-06-29-ICML survey and comments

12 0.057841484 395 hunch net-2010-04-26-Compassionate Reviewing

13 0.053872876 461 hunch net-2012-04-09-ICML author feedback is open

14 0.047971368 296 hunch net-2008-04-21-The Science 2.0 article

15 0.047062725 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

16 0.046648413 22 hunch net-2005-02-18-What it means to do research.

17 0.046430171 116 hunch net-2005-09-30-Research in conferences

18 0.046144385 282 hunch net-2008-01-06-Research Political Issues

19 0.046049379 305 hunch net-2008-06-30-ICML has a comment system

20 0.045089878 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three
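The simValue column above is consistent with cosine similarity between tf-idf vectors of whole posts. Below is a minimal sketch under that assumption; the post texts are stand-ins for the real hunch.net archive.

```python
# Minimal sketch, assuming simValue = cosine similarity of tf-idf vectors;
# the corpus below is a stand-in, not the real archive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    246: "lack of a post for the last month, consider contributing your own",
    225: "busy life gets for a researcher, most posts come from me",
    15: "collects some links to other technical blogs",
}

ids = list(posts)
X = TfidfVectorizer(stop_words="english").fit_transform(posts.values())
sims = cosine_similarity(X[ids.index(246)], X).ravel()

# Rank every post against 246, like the simIndex/simValue list above.
for blog_id, s in sorted(zip(ids, sims), key=lambda p: -p[1]):
    print(f"{s:.3f}  {blog_id}")
```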


similar blog posts computed by the LSI model

LSI topic weights for this blog post:

topicId topicWeight

[(0, 0.074), (1, -0.037), (2, -0.01), (3, 0.032), (4, -0.011), (5, 0.002), (6, 0.007), (7, -0.075), (8, 0.005), (9, -0.027), (10, 0.012), (11, -0.034), (12, -0.013), (13, 0.015), (14, 0.024), (15, -0.024), (16, -0.064), (17, -0.069), (18, 0.021), (19, -0.004), (20, -0.035), (21, 0.002), (22, -0.05), (23, -0.008), (24, -0.045), (25, -0.026), (26, 0.01), (27, -0.016), (28, 0.041), (29, 0.021), (30, -0.017), (31, -0.017), (32, -0.029), (33, -0.026), (34, -0.004), (35, 0.089), (36, 0.067), (37, -0.039), (38, -0.078), (39, 0.038), (40, -0.042), (41, 0.075), (42, 0.007), (43, -0.009), (44, 0.037), (45, 0.125), (46, -0.001), (47, -0.029), (48, 0.029), (49, 0.011)]
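An embedding of this shape, 50 latent topic weights per post, is what tf-idf followed by a truncated SVD produces. Below is a minimal sketch under that assumption; the toy corpus forces a smaller dimensionality than the 50 components the model above apparently uses.

```python
# Minimal LSI sketch: tf-idf followed by truncated SVD. The model above
# apparently uses 50 components; 2 suffice for this toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "lack of a post for the last month, consider contributing your own",
    "a community of machine learning blogs on the sidebar",
    "the hunch.net server has been updated to a new wordpress version",
    "links to other technical machine learning blogs",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# Each row of Z is a (topicId, topicWeight) vector like the one above;
# post similarity is then cosine similarity in this latent space.
print(cosine_similarity(Z[:1], Z).ravel())
```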

similar blog posts list:

simIndex simValue blogId blogTitle

same-blog 1 0.93077564 246 hunch net-2007-06-13-Not Posting

Introduction: If you have been disappointed by the lack of a post for the last month, consider contributing your own (I’ve been busy+uninspired). Also, keep in mind that there is a community of machine learning blogs (see the sidebar).

2 0.51719832 354 hunch net-2009-05-17-Server Update

Introduction: The hunch.net server has been updated. I’ve taken the opportunity to upgrade the version of WordPress, which caused cascading changes. Old threaded comments are now flattened. The system we used to use (Brian’s threaded comments) appears incompatible with the new threading system built into WordPress. I haven’t yet figured out a workaround. I set up a FeedBurner account. I added an RSS aggregator for both machine learning and other research blogs that I like to follow. This is something that I’ve wanted to do for a while. Many other minor changes in font and format, with some help from Alina. If you have any suggestions for site tweaks, please speak up.

3 0.49307626 107 hunch net-2005-09-05-Site Update

Introduction: I tweaked the site in a number of ways today, including: Updating to WordPress 1.5. Installing and heavily tweaking the Geekniche theme. Update: I switched back to a tweaked version of the old theme. Adding the Customizable Post Listings plugin. Installing the StatTraq plugin. Updating some of the links. I particularly recommend looking at the computer research policy blog. Adding threaded comments. This doesn’t thread old comments obviously, but the extra structure may be helpful for new ones. Overall, I think this is an improvement, and it addresses a few of my earlier problems. If you have any difficulties or anything seems “not quite right”, please speak up. A few other tweaks to the site may happen in the near future.

4 0.47694132 151 hunch net-2006-01-25-1 year

Introduction: At the one-year (+5 days) anniversary, the natural question is: “Was it helpful for research?” Answer: Yes, and so it shall continue. Some evidence is provided by noticing that I am about a factor of 2 more overloaded with paper ideas than I’ve ever previously been. It is always hard to estimate counterfactual worlds, but I expect that this is also a factor of 2 more than “What if I had not started the blog?” As for “Why?”, there seem to be two primary effects. A blog is a mechanism for connecting with people who either think like you or are interested in the same problems. This allows for concentration of thinking which is very helpful in solving problems. The process of stating things you don’t understand publicly is very helpful in understanding them. Sometimes you are simply forced to express them in a way which aids understanding. Sometimes someone else says something which helps. And sometimes you discover that someone else has already solved the problem. The…

5 0.47165972 225 hunch net-2007-01-02-Retrospective

Introduction: It’s been almost two years since this blog began. In that time, I’ve learned enough to shift my expectations in several ways. Initially, the idea was for a general-purpose ML blog where different people could contribute posts. What has actually happened is most posts come from me, with a few guest posts that I greatly value. There are a few reasons I see for this. Overload. A couple years ago, I had not fully appreciated just how busy life gets for a researcher. Making a post is not simply a matter of getting to it, but rather of prioritizing between {writing a grant, finishing an overdue review, writing a paper, teaching a class, writing a program, etc…}. This is a substantial transition away from what life as a graduate student is like. At some point the question is not “when will I get to it?” but rather “will I get to it?” and the answer starts to become “no” most of the time. Feedback failure. This blog currently receives about 3K unique visitors per day from…

6 0.46157646 25 hunch net-2005-02-20-At One Month

7 0.44015604 15 hunch net-2005-02-08-Some Links

8 0.4260588 487 hunch net-2013-07-24-ICML 2012 videos lost

9 0.41814402 401 hunch net-2010-06-20-2010 ICML discussion site

10 0.40590835 96 hunch net-2005-07-21-Six Months

11 0.39786822 182 hunch net-2006-06-05-Server Shift, Site Tweaks, Suggestions?

12 0.39663789 231 hunch net-2007-02-10-Best Practices for Collaboration

13 0.38717198 254 hunch net-2007-07-12-ICML Trends

14 0.37914389 340 hunch net-2009-01-28-Nielsen’s talk

15 0.35531226 296 hunch net-2008-04-21-The Science 2.0 article

16 0.35007215 39 hunch net-2005-03-10-Breaking Abstractions

17 0.34571373 305 hunch net-2008-06-30-ICML has a comment system

18 0.34218615 490 hunch net-2013-11-09-Graduates and Postdocs

19 0.33533496 383 hunch net-2009-12-09-Inherent Uncertainty

20 0.33041364 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three


similar blog posts computed by the LDA model

LDA topic weights for this blog post:

topicId topicWeight

[(27, 0.787)]
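A document-topic vector that collapses to a single dominant pair like (27, 0.787) is typical LDA output once near-zero topics are dropped. Below is a minimal sketch under that assumption, with an illustrative corpus and topic count.

```python
# Minimal LDA sketch: fit on raw term counts and keep only topics with
# non-trivial weight, like the single (27, 0.787) entry above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "lack of a post for the last month, consider contributing your own",
    "a community of machine learning blogs on the sidebar",
    "the hunch.net server has been updated to a new wordpress version",
    "links to other technical machine learning blogs",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(counts)  # document-topic weights, rows sum to 1

# (topicId, topicWeight) pairs for the first post, thresholded.
print([(t, round(w, 3)) for t, w in enumerate(theta[0]) if w > 0.1])
```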

similar blog posts list:

simIndex simValue blogId blogTitle

1 1.0 166 hunch net-2006-03-24-NLPers

Introduction: Hal Daume has started the NLPers blog to discuss learning for language problems.

same-blog 2 1.0 246 hunch net-2007-06-13-Not Posting

Introduction: If you have been disappointed by the lack of a post for the last month, consider contributing your own (I’ve been busy+uninspired). Also, keep in mind that there is a community of machine learning blogs (see the sidebar).

3 1.0 418 hunch net-2010-12-02-Traffic Prediction Problem

Introduction: Slashdot points out the Traffic Prediction Challenge which looks pretty fun. The temporal aspect seems to be very common in many real-world problems and somewhat understudied.

4 0.99896955 274 hunch net-2007-11-28-Computational Consequences of Classification

Introduction: In the regression vs. classification debate, I’m adding a new “pro” to classification. It seems there are computational shortcuts available for classification which simply aren’t available for regression. This arises in several situations. In active learning it is sometimes possible to find an ε error classifier with just log(1/ε) labeled samples. Only much more modest improvements appear to be achievable for squared loss regression. The essential reason is that the loss function on many examples is flat with respect to large variations in the parameter spaces of a learned classifier, which implies that many of these classifiers do not need to be considered. In contrast, for squared loss regression, most substantial variations in the parameter space influence the loss at most points. In budgeted learning, where there is either a computational time constraint or a feature cost constraint, a classifier can sometimes be learned to very high accuracy under the constraints…

5 0.99732149 247 hunch net-2007-06-14-Interesting Papers at COLT 2007

Introduction: Here are two papers that seem particularly interesting at this year’s COLT. Gilles Blanchard and François Fleuret, Occam’s Hammer. When we are interested in very tight bounds on the true error rate of a classifier, it is tempting to use a PAC-Bayes bound which can (empirically) be quite tight. A disadvantage of the PAC-Bayes bound is that it applies to a classifier which is randomized over a set of base classifiers rather than a single classifier. This paper shows that a similar bound can be proved which holds for a single classifier drawn from the set. The ability to safely use a single classifier is very nice. This technique applies generically to any base bound, so it has other applications covered in the paper. Adam Tauman Kalai, Learning Nested Halfspaces and Uphill Decision Trees. Classification PAC-learning, where you prove that any problem amongst some set is polytime learnable with respect to any distribution over the input X, is extraordinarily ch…

6 0.99507952 308 hunch net-2008-07-06-To Dual or Not

7 0.99310011 400 hunch net-2010-06-13-The Good News on Exploration and Learning

8 0.99267262 245 hunch net-2007-05-12-Loss Function Semantics

9 0.9924919 172 hunch net-2006-04-14-JMLR is a success

10 0.99159312 288 hunch net-2008-02-10-Complexity Illness

11 0.98636121 45 hunch net-2005-03-22-Active learning

12 0.97499895 9 hunch net-2005-02-01-Watchword: Loss

13 0.96963954 341 hunch net-2009-02-04-Optimal Proxy Loss for Classification

14 0.96788663 352 hunch net-2009-05-06-Machine Learning to AI

15 0.95890832 304 hunch net-2008-06-27-Reviewing Horror Stories

16 0.95277888 196 hunch net-2006-07-13-Regression vs. Classification as a Primitive

17 0.94472241 483 hunch net-2013-06-10-The Large Scale Learning class notes

18 0.94137728 244 hunch net-2007-05-09-The Missing Bound

19 0.93332106 293 hunch net-2008-03-23-Interactive Machine Learning

20 0.92975235 8 hunch net-2005-02-01-NIPS: Online Bayes