hunch_net-2005-122 knowledge-graph by maker-knowledge-mining

122 hunch net-2005-10-13-Site tweak


meta info for this blog

Source: html

Introduction: Several people have had difficulty with comments which seem to have an allowed language significantly poorer than posts. The set of allowed html tags has been increased and the markdown filter has been put in place to try to make commenting easier. I’ll put some examples into the comments of this post.
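The markdown filter itself is a WordPress plugin and isn’t shown on this page; as a rough, hedged illustration of what such a filter does to comment text, the sketch below uses the third-party Python markdown package (an assumption for illustration only, not the plugin the site actually runs):

```python
# Minimal sketch of what a markdown filter does to a comment,
# using the third-party "markdown" package (pip install markdown).
# Illustrative only; the site itself uses a WordPress plugin, not this library.
import markdown

comment = "Some *emphasis*, a [link](http://hunch.net/), and `inline code`."
print(markdown.markdown(comment))
# -> <p>Some <em>emphasis</em>, a <a href="http://hunch.net/">link</a>, and <code>inline code</code>.</p>
```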


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Several people have had difficulty with comments which seem to have an allowed language significantly poorer than posts. [sent-1, score-1.433]

2 The set of allowed html tags has been increased and the markdown filter has been put in place to try to make commenting easier. [sent-2, score-2.433]

3 I’ll put some examples into the comments of this post. [sent-3, score-0.848]
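The exact scoring pipeline isn’t documented on this page; one plausible reading of the scores above is summed tf-idf weight per sentence. A minimal sketch under that assumption, using scikit-learn, follows:

```python
# Sketch: rank a post's sentences by summed tf-idf weight.
# Assumes scikit-learn; the pipeline that produced the scores above is not shown here.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "Several people have had difficulty with comments which seem to have an "
    "allowed language significantly poorer than posts.",
    "The set of allowed html tags has been increased and the markdown filter "
    "has been put in place to try to make commenting easier.",
    "I'll put some examples into the comments of this post.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
scores = tfidf.sum(axis=1).A1  # one summed tf-idf score per sentence

for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[idx]), 3), sentences[idx])
```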


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('allowed', 0.421), ('put', 0.389), ('comments', 0.359), ('commenting', 0.344), ('html', 0.319), ('filter', 0.244), ('increased', 0.238), ('place', 0.169), ('language', 0.167), ('significantly', 0.164), ('difficulty', 0.157), ('try', 0.142), ('ll', 0.136), ('post', 0.128), ('seem', 0.112), ('examples', 0.1), ('set', 0.091), ('make', 0.076), ('several', 0.066), ('people', 0.053)]
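The similarity numbers below come from the site’s own pipeline; as a hedged sketch of how such a list could be produced, tf-idf vectors plus cosine similarity (via scikit-learn, with placeholder post texts) look like this:

```python
# Sketch: a "similar blogs" ranking by cosine similarity over tf-idf vectors.
# Assumes scikit-learn; the post texts are placeholders, and this is not the
# pipeline that produced the numbers below.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    "122 Site tweak": "Several people have had difficulty with comments ...",
    "297 Taking the next step": "Tom Dietterich asked me to look into systems for commenting on papers ...",
    "354 Server Update": "The hunch.net server has been updated ...",
}

ids = list(posts)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts.values())
sims = cosine_similarity(tfidf[0], tfidf).ravel()  # similarity of post 122 to every post

for idx in sims.argsort()[::-1]:
    print(round(float(sims[idx]), 3), ids[idx])
```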

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 122 hunch net-2005-10-13-Site tweak

Introduction: Several people have had difficulty with comments which seem to have an allowed language significantly poorer than posts. The set of allowed html tags has been increased and the markdown filter has been put in place to try to make commenting easier. I’ll put some examples into the comments of this post.

2 0.13421284 297 hunch net-2008-04-22-Taking the next step

Introduction: At the last ICML, Tom Dietterich asked me to look into systems for commenting on papers. I’ve been slow getting to this, but it’s relevant now. The essential observation is that we now have many tools for online collaboration, but they are not yet much used in academic research. If we can find the right way to use them, then perhaps great things might happen, with extra kudos to the first conference that manages to really create an online community. Various conferences have been poking at this. For example, UAI has setup a wiki, COLT has started using Joomla, with some dynamic content, and AAAI has been setting up a “student blog”. Similarly, Dinoj Surendran setup a twiki for the Chicago Machine Learning Summer School, which was quite useful for coordinating events and other things. I believe the most important thing is a willingness to experiment. A good place to start seems to be enhancing existing conference websites. For example, the ICML 2007 papers pag

3 0.11789598 401 hunch net-2010-06-20-2010 ICML discussion site

Introduction: A substantial difficulty with the 2009 and 2008 ICML discussion system was a communication vacuum, where authors were not informed of comments, and commenters were not informed of responses to their comments without explicit monitoring. Mark Reid has setup a new discussion system for 2010 with the goal of addressing this. Mark didn’t want to make it too intrusive, so you must opt-in. As an author, find your paper and “Subscribe by email” to the comments. As a commenter, you have the option of providing an email for follow-up notification.

4 0.1062701 70 hunch net-2005-05-12-Math on the Web

Introduction: Andrej Bauer has setup a Mathematics and Computation Blog. As a first step he has tried to address the persistent and annoying problem of math on the web. As a basic tool for precisely stating and transferring understanding of technical subjects, mathematics is very necessary. Despite this necessity, every mechanism for expressing mathematics on the web seems unnaturally clumsy. Here are some of the methods and their drawbacks: MathML This was supposed to be the answer, but it has two severe drawbacks: “Internet Explorer” doesn’t read it and the language is an example of push-XML-to-the-limit which no one would ever consider writing in. (In contrast, html is easy to write in.) It’s also very annoying that math fonts must be installed independent of the browser, even for mozilla based browsers. Create inline images. This has several big drawbacks: font size is fixed for all viewers, you can’t cut & paste inside the images, and you can’t hyperlink from (say) symbol to de

5 0.090856783 25 hunch net-2005-02-20-At One Month

Introduction: This is near the one month point, so it seems appropriate to consider meta-issues for the moment. The number of posts is a bit over 20. The number of people speaking up in discussions is about 10. The number of people viewing the site is somewhat more than 100. I am (naturally) dissatisfied with many things. Many of the potential uses haven’t been realized. This is partly a matter of opportunity (no conferences in the last month), partly a matter of will (no open problems because it’s hard to give them up), and partly a matter of tradition. In academia, there is a strong tradition of trying to get everything perfectly right before presentation. This is somewhat contradictory to the nature of making many posts, and it’s definitely contradictory to the idea of doing “public research”. If that sort of idea is to pay off, it must be significantly more successful than previous methods. In an effort to continue experimenting, I’m going to use the next week as “open problems we

6 0.087474056 328 hunch net-2008-11-26-Efficient Reinforcement Learning in MDPs

7 0.081540063 354 hunch net-2009-05-17-Server Update

8 0.081149131 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

9 0.08083079 225 hunch net-2007-01-02-Retrospective

10 0.078225806 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

11 0.077581808 107 hunch net-2005-09-05-Site Update

12 0.076452926 142 hunch net-2005-12-22-Yes, I am applying

13 0.062969588 116 hunch net-2005-09-30-Research in conferences

14 0.062048212 314 hunch net-2008-08-24-Mass Customized Medicine in the Future?

15 0.056506034 294 hunch net-2008-04-12-Blog compromised

16 0.055997718 84 hunch net-2005-06-22-Languages of Learning

17 0.055437401 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

18 0.054992273 403 hunch net-2010-07-18-ICML & COLT 2010

19 0.054708336 166 hunch net-2006-03-24-NLPers

20 0.053520482 454 hunch net-2012-01-30-ICML Posters and Scope


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.086), (1, -0.011), (2, -0.016), (3, 0.048), (4, -0.026), (5, -0.012), (6, 0.008), (7, -0.062), (8, 0.022), (9, 0.006), (10, -0.073), (11, -0.019), (12, -0.043), (13, 0.068), (14, 0.067), (15, -0.032), (16, -0.094), (17, -0.029), (18, -0.025), (19, 0.059), (20, -0.016), (21, -0.017), (22, -0.036), (23, -0.085), (24, -0.038), (25, 0.004), (26, 0.005), (27, 0.002), (28, 0.007), (29, 0.062), (30, -0.008), (31, 0.052), (32, 0.067), (33, 0.035), (34, 0.083), (35, 0.095), (36, 0.054), (37, 0.025), (38, -0.013), (39, 0.041), (40, 0.008), (41, -0.068), (42, 0.064), (43, -0.069), (44, -0.087), (45, -0.029), (46, 0.14), (47, 0.085), (48, 0.041), (49, 0.018)]
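These 50 topic weights come from the site’s LSI model, which isn’t published here. A minimal sketch of an LSI-style pipeline (truncated SVD over tf-idf features, then cosine similarity), assuming scikit-learn and placeholder texts, is:

```python
# Sketch: LSI-style topic weights via truncated SVD over tf-idf features.
# Assumes scikit-learn; placeholder texts and a tiny topic count stand in
# for the real model behind the 50 weights listed above.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "comments markdown html tags commenting",       # placeholder for post 122
    "server wordpress upgrade threaded comments",   # placeholder for post 354
    "wordpress theme plugin site update",           # placeholder for post 107
]

tfidf = TfidfVectorizer().fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0)  # the listing above uses 50 topics
topic_weights = lsi.fit_transform(tfidf)            # one topic-weight row per post

print(topic_weights[0])                             # this post's topic-weight vector
sims = cosine_similarity(topic_weights[:1], topic_weights).ravel()
print(sims)                                         # similarity of post 122 to every post
```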

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98674399 122 hunch net-2005-10-13-Site tweak

Introduction: Several people have had difficulty with comments which seem to have an allowed language significantly poorer than posts. The set of allowed html tags has been increased and the markdown filter has been put in place to try to make commenting easier. I’ll put some examples into the comments of this post.

2 0.61211312 354 hunch net-2009-05-17-Server Update

Introduction: The hunch.net server has been updated. I’ve taken the opportunity to upgrade the version of wordpress which caused cascading changes. Old threaded comments are now flattened. The system we used to use (Brian’s threaded comments) appears incompatible with the new threading system built into wordpress. I haven’t yet figured out a workaround. I setup a feedburner account. I added an RSS aggregator for both Machine Learning and other research blogs that I like to follow. This is something that I’ve wanted to do for awhile. Many other minor changes in font and format, with some help from Alina. If you have any suggestions for site tweaks, please speak up.

3 0.58095819 107 hunch net-2005-09-05-Site Update

Introduction: I tweaked the site in a number of ways today, including: Updating to WordPress 1.5. Installing and heavily tweaking the Geekniche theme. Update: I switched back to a tweaked version of the old theme. Adding the Customizable Post Listings plugin. Installing the StatTraq plugin. Updating some of the links. I particularly recommend looking at the computer research policy blog. Adding threaded comments. This doesn’t thread old comments obviously, but the extra structure may be helpful for new ones. Overall, I think this is an improvement, and it addresses a few of my earlier problems. If you have any difficulties or anything seems “not quite right”, please speak up. A few other tweaks to the site may happen in the near future.

4 0.56072652 297 hunch net-2008-04-22-Taking the next step

Introduction: At the last ICML, Tom Dietterich asked me to look into systems for commenting on papers. I’ve been slow getting to this, but it’s relevant now. The essential observation is that we now have many tools for online collaboration, but they are not yet much used in academic research. If we can find the right way to use them, then perhaps great things might happen, with extra kudos to the first conference that manages to really create an online community. Various conferences have been poking at this. For example, UAI has setup a wiki, COLT has started using Joomla, with some dynamic content, and AAAI has been setting up a “student blog”. Similarly, Dinoj Surendran setup a twiki for the Chicago Machine Learning Summer School, which was quite useful for coordinating events and other things. I believe the most important thing is a willingness to experiment. A good place to start seems to be enhancing existing conference websites. For example, the ICML 2007 papers pag

5 0.5252949 294 hunch net-2008-04-12-Blog compromised

Introduction: Iain noticed that hunch.net had zero width divs hiding spammy URLs. Some investigation reveals that the wordpress version being used (2.0.3) had security flaws. I’ve upgraded to the latest, rotated passwords, and removed the spammy URLs. I don’t believe any content was lost. You can check your own and other sites for a similar problem by grepping for “width:0” or “width: 0” in the delivered html source.

6 0.49862677 25 hunch net-2005-02-20-At One Month

7 0.43677735 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

8 0.43117741 401 hunch net-2010-06-20-2010 ICML discussion site

9 0.42717075 328 hunch net-2008-11-26-Efficient Reinforcement Learning in MDPs

10 0.41437986 70 hunch net-2005-05-12-Math on the Web

11 0.40630713 210 hunch net-2006-09-28-Programming Languages for Machine Learning Implementations

12 0.37138575 84 hunch net-2005-06-22-Languages of Learning

13 0.36751601 367 hunch net-2009-08-16-Centmail comments

14 0.36663443 128 hunch net-2005-11-05-The design of a computing cluster

15 0.35235557 225 hunch net-2007-01-02-Retrospective

16 0.34868678 151 hunch net-2006-01-25-1 year

17 0.32014588 223 hunch net-2006-12-06-The Spam Problem

18 0.31766275 363 hunch net-2009-07-09-The Machine Learning Forum

19 0.3152698 173 hunch net-2006-04-17-Rexa is live

20 0.3133809 191 hunch net-2006-07-08-MaxEnt contradicts Bayes Rule?


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.202), (49, 0.517), (55, 0.082)]
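Again, the LDA model itself isn’t published on this page; a minimal sketch of producing sparse (topicId, topicWeight) pairs like those above, assuming scikit-learn and placeholder texts, is:

```python
# Sketch: per-post LDA topic weights over bag-of-words counts.
# Assumes scikit-learn; placeholder texts and a tiny topic count stand in
# for the real model behind the (topicId, topicWeight) pairs above.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "comments markdown html tags commenting",      # placeholder for post 122
    "active learning survey suggestions",          # placeholder for post 338
    "papers nips deep networks unsupervised",      # placeholder for post 224
]

counts = CountVectorizer().fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows sum to 1: per-post topic weights

# Keep only the topics with non-negligible weight, as in the listing above.
pairs = [(t, round(float(w), 3)) for t, w in enumerate(doc_topics[0]) if w > 0.05]
print(pairs)
```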

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98396283 338 hunch net-2009-01-23-An Active Learning Survey

Introduction: Burr Settles wrote a fairly comprehensive survey of active learning. He intends to maintain and update the survey, so send him any suggestions you have.

2 0.95435226 224 hunch net-2006-12-12-Interesting Papers at NIPS 2006

Introduction: Here are some papers that I found surprisingly interesting. Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, Greedy Layer-wise Training of Deep Networks. Empirically investigates some of the design choices behind deep belief networks. Long Zhu, Yuanhao Chen, Alan Yuille, Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing. An unsupervised method for detecting objects using simple feature filters that works remarkably well on the (supervised) caltech-101 dataset. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira, Analysis of Representations for Domain Adaptation. This is the first analysis I’ve seen of learning with respect to samples drawn differently from the evaluation distribution which depends on reasonable measurable quantities. All of these papers turn out to have a common theme—the power of unlabeled data to do generically useful things.

same-blog 3 0.95326978 122 hunch net-2005-10-13-Site tweak

Introduction: Several people have had difficulty with comments which seem to have an allowed language significantly poorer than posts. The set of allowed html tags has been increased and the markdown filter has been put in place to try to make commenting easier. I’ll put some examples into the comments of this post.

4 0.69326866 365 hunch net-2009-07-31-Vowpal Wabbit Open Source Project

Introduction: Today brings a new release of the Vowpal Wabbit fast online learning software. This time, unlike the previous release, the project itself is going open source, developing via github. For example, the latest and greatest can be downloaded via: git clone git://github.com/JohnLangford/vowpal_wabbit.git If you aren’t familiar with git, it’s a distributed version control system which supports quick and easy branching, as well as reconciliation. This version of the code is confirmed to compile without complaint on at least some flavors of OSX as well as Linux boxes. As much of the point of this project is pushing the limits of fast and effective machine learning, let me mention a few datapoints from my experience. The program can effectively scale up to batch-style training on sparse terafeature (i.e. 10^12 sparse feature) size datasets. The limiting factor is typically i/o. I started using the real datasets from the large-scale learning workshop as a conve

5 0.68683761 37 hunch net-2005-03-08-Fast Physics for Learning

Introduction: While everyone is silently working on ICML submissions, I found this discussion about a fast physics simulator chip interesting from a learning viewpoint. In many cases, learning attempts to predict the outcome of physical processes. Access to a fast simulator for these processes might be quite helpful in predicting the outcome. Bayesian learning in particular may directly benefit while many other algorithms (like support vector machines) might have their speed greatly increased. The biggest drawback is that writing software for these odd architectures is always difficult and time consuming, but a several-orders-of-magnitude speedup might make that worthwhile.

6 0.67220098 23 hunch net-2005-02-19-Loss Functions for Discriminative Training of Energy-Based Models

7 0.64144015 348 hunch net-2009-04-02-Asymmophobia

8 0.47635457 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

9 0.40727809 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

10 0.40612704 438 hunch net-2011-07-11-Interesting Neural Network Papers at ICML 2011

11 0.40607274 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three

12 0.39342979 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

13 0.38307336 194 hunch net-2006-07-11-New Models

14 0.37986314 304 hunch net-2008-06-27-Reviewing Horror Stories

15 0.37569201 385 hunch net-2009-12-27-Interesting things at NIPS 2009

16 0.37560999 288 hunch net-2008-02-10-Complexity Illness

17 0.37386626 148 hunch net-2006-01-13-Benchmarks for RL

18 0.37372148 227 hunch net-2007-01-10-A Deep Belief Net Learning Problem

19 0.37228745 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

20 0.37144893 172 hunch net-2006-04-14-JMLR is a success