hunch_net-2009-350 knowledge graph by maker-knowledge-mining

350 hunch net-2009-04-23-Jonathan Chang at Slycoder


meta info for this blog

Source: html

Introduction: Jonathan Chang has a research blog on aspects of machine learning.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Jonathan Chang has a research blog on aspects of machine learning. [sent-1, score-0.946]
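
A minimal sketch of how a "most important sentences by tfidf" summary like the row above could be produced: each sentence is scored by the total tf-idf weight of its terms, with idf statistics fit on the wider corpus. The function name, the toy corpus, and the use of scikit-learn are illustrative assumptions, not the actual pipeline behind the sentScore numbers.

```python
# Hypothetical sketch (not the maker-knowledge-mining pipeline):
# rank the sentences of a post by summed tf-idf weight of their terms.
from sklearn.feature_extraction.text import TfidfVectorizer

def top_sentences(sentences, corpus_docs, k=3):
    """Score each sentence by the sum of tf-idf weights of its terms,
    with idf statistics estimated from the whole corpus."""
    vectorizer = TfidfVectorizer(stop_words="english")
    vectorizer.fit(corpus_docs)              # idf from all posts
    sent_vecs = vectorizer.transform(sentences)
    scores = sent_vecs.sum(axis=1).A1        # one score per sentence
    return sorted(zip(scores, sentences), reverse=True)[:k]

# Toy usage with the single-sentence post above:
post = ["Jonathan Chang has a research blog on aspects of machine learning."]
corpus = post + ["David McAllester starts a blog.",
                 "Hal Daume has started the NLPers blog."]
for score, sent in top_sentences(post, corpus, k=1):
    print(round(score, 3), sent)
```

For a one-sentence post such as this one, the top-ranked sentence is trivially the whole introduction; for longer posts the same scoring yields a ranking analogous to the sentNum/sentScore rows above.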


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('chang', 0.619), ('jonathan', 0.573), ('aspects', 0.385), ('blog', 0.335), ('research', 0.136), ('machine', 0.09), ('learning', 0.037)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 350 hunch net-2009-04-23-Jonathan Chang at Slycoder

Introduction: Jonathan Chang has a research blog on aspects of machine learning.

2 0.18032509 214 hunch net-2006-10-13-David Pennock starts Oddhead

Introduction: his blog on information markets and other research topics.

3 0.13931128 486 hunch net-2013-07-10-Thoughts on Artificial Intelligence

Introduction: David McAllester starts a blog.

4 0.13869309 225 hunch net-2007-01-02-Retrospective

Introduction: It’s been almost two years since this blog began. In that time, I’ve learned enough to shift my expectations in several ways. Initially, the idea was for a general purpose ML blog where different people could contribute posts. What has actually happened is most posts come from me, with a few guest posts that I greatly value. There are a few reasons I see for this. Overload. A couple years ago, I had not fully appreciated just how busy life gets for a researcher. Making a post is not simply a matter of getting to it, but rather of prioritizing between {writing a grant, finishing an overdue review, writing a paper, teaching a class, writing a program, etc…}. This is a substantial transition away from what life as a graduate student is like. At some point the question is not “when will I get to it?” but rather “will I get to it?” and the answer starts to become “no” most of the time. Feedback failure. This blog currently receives about 3K unique visitors per day from

5 0.13493563 59 hunch net-2005-04-22-New Blog: [Lowerbounds,Upperbounds]

Introduction: Maverick Woo and the Aladdin group at CMU have started a CS theory-related blog here.

6 0.13449712 96 hunch net-2005-07-21-Six Months

7 0.12684482 383 hunch net-2009-12-09-Inherent Uncertainty

8 0.12369435 166 hunch net-2006-03-24-NLPers

9 0.11534367 92 hunch net-2005-07-11-AAAI blog

10 0.091230355 480 hunch net-2013-03-22-I’m a bandit

11 0.083033741 296 hunch net-2008-04-21-The Science 2.0 article

12 0.072094619 402 hunch net-2010-07-02-MetaOptimize

13 0.067981526 182 hunch net-2006-06-05-Server Shift, Site Tweaks, Suggestions?

14 0.06676276 467 hunch net-2012-06-15-Normal Deviate and the UCSC Machine Learning Summer School

15 0.060375676 353 hunch net-2009-05-08-Computability in Artificial Intelligence

16 0.060150906 455 hunch net-2012-02-20-Berkeley Streaming Data Workshop

17 0.055634417 377 hunch net-2009-11-09-NYAS ML Symposium this year.

18 0.051112346 151 hunch net-2006-01-25-1 year

19 0.049645532 444 hunch net-2011-09-07-KDD and MUCMD 2011

20 0.049548164 36 hunch net-2005-03-05-Funding Research
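
For orientation, a hedged sketch of the kind of computation behind the tfidf similarity list above: represent each post as a tf-idf vector and rank the other posts by cosine similarity to this one. The three-post toy corpus and the use of scikit-learn are assumptions; the real simValue numbers come from the maker-knowledge-mining pipeline, whose exact weighting is not shown here.

```python
# Illustrative sketch: tf-idf document vectors + cosine similarity ranking.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus, keyed by blogId; not the real hunch.net archive.
posts = {
    350: "Jonathan Chang has a research blog on aspects of machine learning.",
    214: "his blog on information markets and other research topics.",
    486: "David McAllester starts a blog.",
}
ids = list(posts)
vectors = TfidfVectorizer(stop_words="english").fit_transform(posts.values())

# Similarity of post 350 to every post (including itself, which scores 1.0).
sims = cosine_similarity(vectors[ids.index(350)], vectors).ravel()
for sim, blog_id in sorted(zip(sims, ids), reverse=True):
    print(f"{sim:.3f}  hunch net post {blog_id}")
```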


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.047), (1, -0.028), (2, -0.09), (3, 0.074), (4, -0.1), (5, -0.016), (6, 0.034), (7, -0.285), (8, 0.123), (9, -0.007), (10, 0.01), (11, 0.006), (12, 0.038), (13, -0.061), (14, -0.045), (15, 0.034), (16, 0.027), (17, 0.014), (18, -0.05), (19, -0.085), (20, 0.013), (21, -0.036), (22, 0.048), (23, 0.042), (24, -0.002), (25, 0.025), (26, -0.019), (27, -0.005), (28, 0.025), (29, -0.006), (30, 0.018), (31, 0.026), (32, -0.009), (33, 0.046), (34, -0.046), (35, 0.044), (36, 0.004), (37, -0.031), (38, 0.018), (39, 0.003), (40, 0.064), (41, -0.072), (42, -0.04), (43, -0.038), (44, -0.021), (45, 0.02), (46, -0.036), (47, -0.004), (48, 0.062), (49, -0.069)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.92635041 350 hunch net-2009-04-23-Jonathan Chang at Slycoder

Introduction: Jonathan Chang has a research blog on aspects of machine learning.

2 0.87587577 486 hunch net-2013-07-10-Thoughts on Artificial Intelligence

Introduction: David McAllester starts a blog.

3 0.87486917 214 hunch net-2006-10-13-David Pennock starts Oddhead

Introduction: his blog on information markets and other research topics.

4 0.83554566 166 hunch net-2006-03-24-NLPers

Introduction: Hal Daume has started the NLPers blog to discuss learning for language problems.

5 0.79657173 59 hunch net-2005-04-22-New Blog: [Lowerbounds,Upperbounds]

Introduction: Maverick Woo and the Aladdin group at CMU have started a CS theory-related blog here.

6 0.70199478 480 hunch net-2013-03-22-I’m a bandit

7 0.67761242 383 hunch net-2009-12-09-Inherent Uncertainty

8 0.65941137 96 hunch net-2005-07-21-Six Months

9 0.64430219 225 hunch net-2007-01-02-Retrospective

10 0.62412584 92 hunch net-2005-07-11-AAAI blog

11 0.57242429 402 hunch net-2010-07-02-MetaOptimize

12 0.54372907 467 hunch net-2012-06-15-Normal Deviate and the UCSC Machine Learning Summer School

13 0.50191027 296 hunch net-2008-04-21-The Science 2.0 article

14 0.49913195 182 hunch net-2006-06-05-Server Shift, Site Tweaks, Suggestions?

15 0.36816812 151 hunch net-2006-01-25-1 year

16 0.30359325 412 hunch net-2010-09-28-Machined Learnings

17 0.26649696 51 hunch net-2005-04-01-The Producer-Consumer Model of Research

18 0.26114762 36 hunch net-2005-03-05-Funding Research

19 0.25345787 171 hunch net-2006-04-09-Progress in Machine Translation

20 0.24885833 312 hunch net-2008-08-04-Electoralmarkets.com
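
The LSI list above compares posts in a low-dimensional latent semantic space rather than on raw tf-idf vectors. A minimal sketch, assuming a truncated-SVD implementation of LSI over tf-idf features (scikit-learn's TruncatedSVD) and a toy corpus; the 50-entry topicWeight vector above corresponds to a 50-component projection, while the toy example uses 2 components only because the corpus is tiny.

```python
# Illustrative LSI sketch: project tf-idf vectors with truncated SVD,
# then compare posts by cosine similarity in the reduced space.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Jonathan Chang has a research blog on aspects of machine learning.",
    "David McAllester starts a blog.",
    "Hal Daume has started the NLPers blog to discuss learning for language problems.",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
# n_components must be less than the number of features: 2 here, ~50 for a real corpus.
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
print(cosine_similarity(lsi[:1], lsi).ravel())  # similarity of doc 0 to all docs
```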


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.09), (33, 0.601)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.76301122 350 hunch net-2009-04-23-Jonathan Chang at Slycoder

Introduction: Jonathan Chang has a research blog on aspects of machine learning.

2 0.70910561 61 hunch net-2005-04-25-Embeddings: what are they good for?

Introduction: I’ve been looking at some recent embeddings work, and am struck by how beautiful the theory and algorithms are. It also makes me wonder, what are embeddings good for? A few things immediately come to mind: (1) For visualization of high-dimensional data sets. In this case, one would like good algorithms for embedding specifically into 2- and 3-dimensional Euclidean spaces. (2) For nonparametric modeling. The usual nonparametric models (histograms, nearest neighbor) often require resources which are exponential in the dimension. So if the data actually lie close to some low-dimensional surface, it might be a good idea to first identify this surface and embed the data before applying the model. Incidentally, for applications like these, it’s important to have a functional mapping from high to low dimension, which some techniques do not yield up. (3) As a prelude to classifier learning. The hope here is presumably that learning will be easier in the low-dimensional space,

3 0.59314644 384 hunch net-2009-12-24-Top graduates this season

Introduction: I would like to point out 3 graduates this season as having my confidence they are capable of doing great things. Daniel Hsu has diverse papers with diverse coauthors on {active learning, multilabeling, temporal learning, …} each covering new algorithms and methods of analysis. He is also a capable programmer, having helped me with some nitty-gritty details of cluster parallel Vowpal Wabbit this summer. He has an excellent tendency to just get things done. Nicolas Lambert doesn’t nominally work in machine learning, but I’ve found his work in elicitation relevant nevertheless. In essence, elicitable properties are closely related to learnable properties, and the elicitation complexity is related to a notion of learning complexity. See the Surrogate regret bounds paper for some related discussion. Few people successfully work at such a general level that it crosses fields, but he’s one of them. Yisong Yue is deeply focused on interactive learning, which he has a

4 0.45276946 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

Introduction: In recent years, there’s been an explosion of free educational resources that make high-level knowledge and skills accessible to an ever-wider group of people. In your own field, you probably have a good idea of where to look for the answer to any particular question. But outside your areas of expertise, sifting through textbooks, Wikipedia articles, research papers, and online lectures can be bewildering (unless you’re fortunate enough to have a knowledgeable colleague to consult). What are the key concepts in the field, how do they relate to each other, which ones should you learn, and where should you learn them? Courses are a major vehicle for packaging educational materials for a broad audience. The trouble is that they’re typically meant to be consumed linearly, regardless of your specific background or goals. Also, unless thousands of other people have had the same background and learning goals, there may not even be a course that fits your needs. Recently, we (Roger Grosse

5 0.41560259 234 hunch net-2007-02-22-Create Your Own ICML Workshop

Introduction: As usual ICML 2007 will be hosting a workshop program to be held this year on June 24th. The success of the program depends on having researchers like you propose interesting workshop topics and then organize the workshops. I’d like to encourage all of you to consider sending a workshop proposal. The proposal deadline has been extended to March 5. See the workshop web-site for details. Organizing a workshop is a unique way to gather an international group of researchers together to focus for an entire day on a topic of your choosing. I’ve always found that the cost of organizing a workshop is not so large, and very low compared to the benefits. The topic and format of a workshop are limited only by your imagination (and the attractiveness to potential participants) and need not follow the usual model of a mini-conference on a particular ML sub-area. Hope to see some interesting proposals rolling in.

6 0.20345798 248 hunch net-2007-06-19-How is Compressed Sensing going to change Machine Learning ?

7 0.19703805 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

8 0.16723283 45 hunch net-2005-03-22-Active learning

9 0.1658631 140 hunch net-2005-12-14-More NIPS Papers II

10 0.14941044 166 hunch net-2006-03-24-NLPers

11 0.14941044 246 hunch net-2007-06-13-Not Posting

12 0.14941044 418 hunch net-2010-12-02-Traffic Prediction Problem

13 0.14925648 274 hunch net-2007-11-28-Computational Consequences of Classification

14 0.14901024 247 hunch net-2007-06-14-Interesting Papers at COLT 2007

15 0.14867526 308 hunch net-2008-07-06-To Dual or Not

16 0.14837952 400 hunch net-2010-06-13-The Good News on Exploration and Learning

17 0.14831565 245 hunch net-2007-05-12-Loss Function Semantics

18 0.14828865 172 hunch net-2006-04-14-JMLR is a success

19 0.14815436 288 hunch net-2008-02-10-Complexity Illness

20 0.14567502 9 hunch net-2005-02-01-Watchword: Loss
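
Finally, the LDA list compares posts by their per-document topic distributions rather than an SVD projection. A hedged sketch, assuming scikit-learn's LatentDirichletAllocation over raw term counts; the topic ids (27, 33) and weights in the topicWeight line above come from a much larger model than this toy one.

```python
# Illustrative LDA sketch: fit a topic model on term counts and compare
# posts by the similarity of their topic-weight vectors.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Jonathan Chang has a research blog on aspects of machine learning.",
    "David McAllester starts a blog.",
    "I've been looking at some recent embeddings work.",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(counts)           # per-document topic weights
print(theta[0].round(3))                    # topic weights for the first post
print(cosine_similarity(theta[:1], theta))  # similarity of post 0 to all posts
```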