hunch_net hunch_net-2014 hunch_net-2014-493 knowledge-graph by maker-knowledge-mining

493 hunch net-2014-02-16-Metacademy: a package manager for knowledge


meta info for this blog

Source: html

Introduction: In recent years, there’s been an explosion of free educational resources that make high-level knowledge and skills accessible to an ever-wider group of people. In your own field, you probably have a good idea of where to look for the answer to any particular question. But outside your areas of expertise, sifting through textbooks, Wikipedia articles, research papers, and online lectures can be bewildering (unless you’re fortunate enough to have a knowledgeable colleague to consult). What are the key concepts in the field, how do they relate to each other, which ones should you learn, and where should you learn them? Courses are a major vehicle for packaging educational materials for a broad audience. The trouble is that they’re typically meant to be consumed linearly, regardless of your specific background or goals. Also, unless thousands of other people have had the same background and learning goals, there may not even be a course that fits your needs. Recently, we (Roger Grosse and Colorado Reed) have been working on Metacademy, an open-source project to make the structure of a field more explicit and help students formulate personal learning plans.


Summary: the most important sentences generated by the tfidf model (a rough sketch of such scoring follows the list below)

sentIndex sentText sentNum sentScore

1 In recent years, there’s been an explosion of free educational resources that make high-level knowledge and skills accessible to an ever-wider group of people. [sent-1, score-0.25]

2 But outside your areas of expertise, sifting through textbooks, Wikipedia articles, research papers, and online lectures can be bewildering (unless you’re fortunate enough to have a knowledgeable colleague to consult). [sent-3, score-0.064]

3 What are the key concepts in the field, how do they relate to each other, which ones should you learn, and where should you learn them? [sent-4, score-0.491]

4 Courses are a major vehicle for packaging educational materials for a broad audience. [sent-5, score-0.191]

5 The trouble is that they’re typically meant to be consumed linearly, regardless of your specific background or goals. [sent-6, score-0.143]

6 Also, unless thousands of other people have had the same background and learning goals, there may not even be a course that fits your needs. [sent-7, score-0.285]

7 Recently, we (Roger Grosse and Colorado Reed) have been working on Metacademy, an open-source project to make the structure of a field more explicit and help students formulate personal learning plans. [sent-8, score-0.395]

8 Metacademy is built around an interconnected web of concepts, each one annotated with a short description, a set of learning goals, a (very rough) time estimate, and pointers to learning resources. [sent-9, score-0.112]

9 The concepts are arranged in a prerequisite graph, which is used to generate a learning plan for a concept. [sent-10, score-0.506] (a minimal learning-plan sketch follows this summary list)

10 Currently, most of our content is related to machine learning and probabilistic AI; for instance, here are the learning plan and graph for deep belief nets. [sent-12, score-0.273]

11 Metacademy also has wiki-like documents called roadmaps, which briefly overview key concepts in a field and explain why you might want to learn about them; here’s one we wrote for Bayesian machine learning. [sent-13, score-0.677]

12 We’re not trying to be the first to do any particular thing; rather, we’re trying to build a tool that we personally wanted to exist, and we hope others will find it useful as well. [sent-16, score-0.175]

13 Granted, if you’re reading this blog, you probably have a decent grasp of most of the concepts we’ve annotated. [sent-17, score-0.424]

14 If you’re teaching an applied course and don’t want to re-explain Gibbs sampling, you can simply point your students to the concept on Metacademy. [sent-19, score-0.295]

15 Or, if you’re writing a textbook or teaching a MOOC, Metacademy can help potential students find their way there. [sent-20, score-0.453]

16 Don’t worry about self-promotion: if you’ve written something you think people will find useful, feel free to add a pointer! [sent-21, score-0.173]

17 We are hoping to expand the content beyond machine learning, and we welcome contributions. [sent-22, score-0.072]

18 You can create a roadmap to help people find their way around a field. [sent-23, score-0.273]

19 We are currently working on a GUI for editing the concepts and the graph connecting them (our current system is based on GitHub pull requests), and we’ll send an email to our registered users once this system is online. [sent-24, score-0.683]

20 If you find Metacademy useful or want to contribute, let us know at feedback _at_ metacademy _dot_ org. [sent-25, score-0.876]
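The exact scoring used by this mining pipeline is not documented here, but sentence scores of this kind are typically obtained by averaging the tfidf weights of a sentence’s terms. The following is a minimal, hypothetical sketch of that idea using scikit-learn; the sentences are stand-ins and the vectorizer settings are assumptions, not the pipeline’s actual configuration.

```python
# Hypothetical sketch: score sentences by the mean tfidf weight of their terms.
# This approximates, but is not, the scoring used by the mining pipeline above.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "In recent years, there's been an explosion of free educational resources.",
    "Courses are a major vehicle for packaging educational materials.",
    "Metacademy is built around an interconnected web of concepts.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)          # one tfidf row per sentence

# Mean tfidf weight over each sentence's non-zero terms.
scores = X.sum(axis=1).A1 / (X != 0).sum(axis=1).A1

for score, sentence in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sentence}")
```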
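Sentences 8–9 above describe Metacademy’s core data structure: concepts arranged in a prerequisite graph, from which a learning plan for a target concept is generated. As a hypothetical illustration of that idea (not Metacademy’s actual code), a plan can be read off as a topological ordering of the target’s prerequisites, assuming the graph is acyclic; the concept names below are invented for the example.

```python
# Hypothetical sketch: generate a linear learning plan from a prerequisite graph.
# `prereqs` maps each concept to the concepts that should be learned first.
def learning_plan(prereqs, target):
    plan, visited = [], set()

    def visit(concept):
        if concept in visited:
            return
        visited.add(concept)
        for dep in prereqs.get(concept, []):
            visit(dep)            # learn prerequisites first
        plan.append(concept)      # then the concept itself

    visit(target)
    return plan

# Toy example graph (invented for illustration):
prereqs = {
    "deep belief nets": ["restricted Boltzmann machines", "backpropagation"],
    "restricted Boltzmann machines": ["Gibbs sampling"],
    "Gibbs sampling": ["Markov chains"],
}
print(learning_plan(prereqs, "deep belief nets"))
# ['Markov chains', 'Gibbs sampling', 'restricted Boltzmann machines',
#  'backpropagation', 'deep belief nets']
```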


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('metacademy', 0.645), ('concepts', 0.306), ('re', 0.294), ('field', 0.13), ('educational', 0.127), ('graph', 0.124), ('find', 0.114), ('help', 0.103), ('students', 0.098), ('unless', 0.086), ('background', 0.084), ('teaching', 0.082), ('goals', 0.078), ('plan', 0.077), ('content', 0.072), ('currently', 0.071), ('key', 0.066), ('vehicle', 0.064), ('prerequisite', 0.064), ('formulate', 0.064), ('editing', 0.064), ('sifting', 0.064), ('textbooks', 0.064), ('package', 0.064), ('accessible', 0.064), ('mooc', 0.064), ('learn', 0.063), ('useful', 0.061), ('probably', 0.059), ('granted', 0.059), ('connecting', 0.059), ('courses', 0.059), ('manager', 0.059), ('gibbs', 0.059), ('trouble', 0.059), ('pull', 0.059), ('arranged', 0.059), ('decent', 0.059), ('course', 0.059), ('free', 0.059), ('around', 0.056), ('want', 0.056), ('relate', 0.056), ('thousands', 0.056), ('wrote', 0.056), ('linearly', 0.056), ('articles', 0.056), ('ingredients', 0.056), ('textbook', 0.056), ('pointers', 0.056)]
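The simValue column in the list below is presumably the cosine similarity between tfidf document vectors (hence the ~1.0 self-similarity for the same-blog entry). A hedged sketch of that computation follows; the documents are invented stand-ins, not the actual hunch.net archive.

```python
# Hypothetical sketch: blog-to-blog similarity as cosine similarity of tfidf vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    "493 Metacademy": "metacademy concepts prerequisite graph learning plan roadmap",
    "445 Somebody's Eating Your Lunch": "stanford online teaching machine learning classes scale",
    "249 Presentation Preparation": "conference presentation slides audience prioritize motivate",
}

ids = list(docs)
X = TfidfVectorizer().fit_transform(docs.values())

sims = cosine_similarity(X[0], X)[0]     # similarity of post 493 to every document
for doc_id, sim in sorted(zip(ids, sims), key=lambda p: -p[1]):
    print(f"{sim:.4f}  {doc_id}")
```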

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

2 0.094470002 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

Introduction: Since we last discussed the other online learning, Stanford has very visibly started pushing mass teaching in AI, Machine Learning, and Databases. In retrospect, it’s not too surprising that the next step up in serious online teaching experiments are occurring at the computer science department of a university embedded in the land of startups. Numbers on the order of 100000 are quite significant—similar in scale to the number of computer science undergraduate students/year in the US. Although these populations surely differ, the fact that they could overlap is worth considering for the future. It’s too soon to say how successful these classes will be and there are many easy criticisms to make: Registration != Learning … but if only 1/10th complete these classes, the scale of teaching still surpasses the scale of any traditional process. 1st year excitement != nth year routine … but if only 1/10th take future classes, the scale of teaching still surpass

3 0.092237346 249 hunch net-2007-06-21-Presentation Preparation

Introduction: A big part of doing research is presenting it at a conference. Since many people start out shy of public presentations, this can be a substantial challenge. Here are a few notes which might be helpful when thinking about preparing a presentation on research. Motivate. Talks which don’t start by describing the problem to solve cause many people to zone out. Prioritize. It is typical that you have more things to say than time to say them, and many presenters fall into the failure mode of trying to say too much. This is an easy-to-understand failure mode as it’s very natural to want to include everything. A basic fact is: you can’t. Examples of this are: Your slides are so densely full of equations and words that you can’t cover them. Your talk runs over and a moderator prioritizes for you by cutting you off. You motor-mouth through the presentation, and the information absorption rate of the audience prioritizes in some uncontrolled fashion. The rate of flow of c

4 0.081355289 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

Introduction: How do you create an optimal environment for research? Here are some essential ingredients that I see. Stability. University-based research is relatively good at this. On any particular day, researchers face choices in what they will work on. A very common tradeoff is between: ‘easy small’ and ‘difficult big’. For researchers without stability, the ‘easy small’ option wins. This is often “ok”—a series of incremental improvements on the state of the art can add up to something very beneficial. However, it misses one of the big potentials of research: finding entirely new and better ways of doing things. Stability comes in many forms. The prototypical example is tenure at a university—a tenured professor is almost impossible to fire which means that the professor has the freedom to consider far horizon activities. An iron-clad guarantee of a paycheck is not necessary—industrial research labs have succeeded well with research positions of indefinite duration. Atnt rese

5 0.068619244 225 hunch net-2007-01-02-Retrospective

Introduction: It’s been almost two years since this blog began. In that time, I’ve learned enough to shift my expectations in several ways. Initially, the idea was for a general purpose ML blog where different people could contribute posts. What has actually happened is most posts come from me, with a few guest posts that I greatly value. There are a few reasons I see for this. Overload. A couple years ago, I had not fully appreciated just how busy life gets for a researcher. Making a post is not simply a matter of getting to it, but rather of prioritizing between {writing a grant, finishing an overdue review, writing a paper, teaching a class, writing a program, etc…}. This is a substantial transition away from what life as a graduate student is like. At some point the question is not “when will I get to it?” but rather “will I get to it?” and the answer starts to become “no” most of the time. Feedback failure. This blog currently receives about 3K unique visitors per day from

6 0.067704543 378 hunch net-2009-11-15-The Other Online Learning

7 0.066333607 454 hunch net-2012-01-30-ICML Posters and Scope

8 0.064965263 353 hunch net-2009-05-08-Computability in Artificial Intelligence

9 0.064554185 483 hunch net-2013-06-10-The Large Scale Learning class notes

10 0.063692242 134 hunch net-2005-12-01-The Webscience Future

11 0.063518018 208 hunch net-2006-09-18-What is missing for online collaborative research?

12 0.062868364 185 hunch net-2006-06-16-Regularization = Robustness

13 0.059380796 93 hunch net-2005-07-13-“Sister Conference” presentations

14 0.058849387 95 hunch net-2005-07-14-What Learning Theory might do

15 0.056827787 151 hunch net-2006-01-25-1 year

16 0.056403294 168 hunch net-2006-04-02-Mad (Neuro)science

17 0.056008786 110 hunch net-2005-09-10-“Failure” is an option

18 0.055789914 393 hunch net-2010-04-14-MLcomp: a website for objectively comparing ML algorithms

19 0.054606497 73 hunch net-2005-05-17-A Short Guide to PhD Graduate Study

20 0.054110803 75 hunch net-2005-05-28-Running A Machine Learning Summer School


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.14), (1, -0.018), (2, -0.054), (3, 0.069), (4, -0.018), (5, -0.011), (6, -0.014), (7, -0.017), (8, -0.014), (9, -0.03), (10, -0.021), (11, -0.032), (12, 0.004), (13, -0.015), (14, 0.058), (15, -0.001), (16, -0.027), (17, 0.035), (18, 0.026), (19, 0.048), (20, 0.062), (21, 0.069), (22, -0.006), (23, -0.056), (24, 0.104), (25, -0.021), (26, 0.007), (27, -0.052), (28, 0.019), (29, 0.025), (30, 0.004), (31, -0.007), (32, -0.006), (33, -0.04), (34, 0.01), (35, -0.02), (36, 0.029), (37, -0.045), (38, 0.011), (39, -0.044), (40, -0.04), (41, -0.032), (42, -0.006), (43, -0.085), (44, 0.021), (45, -0.014), (46, -0.038), (47, -0.014), (48, 0.011), (49, 0.02)]
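The topic weights above look like the post’s coordinates in a roughly 50-dimensional latent semantic (LSI) space, with similar blogs then ranked by similarity in that space. A minimal, hypothetical gensim sketch of this kind of pipeline on a toy corpus (the documents and num_topics are invented; this is not the pipeline’s actual code):

```python
# Hypothetical sketch: LSI document similarity with gensim on a toy corpus.
from gensim import corpora, models, similarities

texts = [
    ["metacademy", "concepts", "prerequisite", "graph", "learning", "plan"],
    ["stanford", "online", "teaching", "machine", "learning", "classes"],
    ["conference", "presentation", "slides", "audience", "prioritize"],
]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

tfidf = models.TfidfModel(corpus)                      # tfidf weighting first
lsi = models.LsiModel(tfidf[corpus], id2word=dictionary, num_topics=2)

index = similarities.MatrixSimilarity(lsi[tfidf[corpus]])
query = lsi[tfidf[dictionary.doc2bow(texts[0])]]       # the "same blog" document
print(list(index[query]))                              # similarity to every document
```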

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.92929554 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

2 0.65148616 424 hunch net-2011-02-17-What does Watson mean?

Introduction: Watson convincingly beat the best champion Jeopardy! players. The apparent significance of this varies hugely, depending on your background knowledge about the related machine learning, NLP, and search technology. For a random person, this might seem evidence of serious machine intelligence, while for people working on the system itself, it probably seems like a reasonably good assemblage of existing technologies with several twists to make the entire system work. Above all, I think we should congratulate the people who managed to put together and execute this project—many years of effort by a diverse set of highly skilled people were needed to make this happen. In academia, it’s pretty difficult for one professor to assemble that quantity of talent, and in industry it’s rarely the case that such a capable group has both a worthwhile project and the support needed to pursue something like this for several years before success. Alina invited me to the Jeopardy watching party

3 0.58375305 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

Introduction: Chicago ’05 ended a couple of weeks ago. This was the sixth Machine Learning Summer School, and the second one that used a wiki. (The first was Berder ’04, thanks to Gunnar Raetsch.) Wikis are relatively easy to set up, greatly aid social interaction, and should be used a lot more at summer schools and workshops. They can even be used as the meeting’s webpage, as a permanent record of its participants’ collaborations — see for example the wiki/website for last year’s NVO Summer School. A basic wiki is a collection of editable webpages, maintained by software called a wiki engine. The engine used at both Berder and Chicago was TikiWiki — it is well documented and gets you something running fast. It uses PHP and MySQL, but doesn’t require you to know either. Tikiwiki has far more features than most wikis, as it is really a full Content Management System. (My thanks to Sebastian Stark for pointing this out.) Here are the features we found most useful: Bulletin boa

4 0.58315611 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

Introduction: Since we last discussed the other online learning, Stanford has very visibly started pushing mass teaching in AI, Machine Learning, and Databases. In retrospect, it’s not too surprising that the next step up in serious online teaching experiments are occurring at the computer science department of a university embedded in the land of startups. Numbers on the order of 100000 are quite significant—similar in scale to the number of computer science undergraduate students/year in the US. Although these populations surely differ, the fact that they could overlap is worth considering for the future. It’s too soon to say how successful these classes will be and there are many easy criticisms to make: Registration != Learning … but if only 1/10th complete these classes, the scale of teaching still surpasses the scale of any traditional process. 1st year excitement != nth year routine … but if only 1/10th take future classes, the scale of teaching still surpass

5 0.58160639 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

Introduction: Yann LeCun and I are coteaching a class on Large Scale Machine Learning starting late January at NYU. This class will cover many tricks to get machine learning working well on datasets with many features, examples, and classes, along with several elements of deep learning and support systems enabling the previous. This is not a beginning class—you really need to have taken a basic machine learning class previously to follow along. Students will be able to run and experiment with large scale learning algorithms since Yahoo! has donated servers which are being configured into a small scale Hadoop cluster. We are planning to cover the frontier of research in scalable learning algorithms, so good class projects could easily lead to papers. For me, this is a chance to teach on many topics of past research. In general, it seems like researchers should engage in at least occasional teaching of research, both as a proof of teachability and to see their own research through th

6 0.57633048 352 hunch net-2009-05-06-Machine Learning to AI

7 0.55749524 75 hunch net-2005-05-28-Running A Machine Learning Summer School

8 0.54172748 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

9 0.53669232 479 hunch net-2013-01-31-Remote large scale learning class participation

10 0.53612667 483 hunch net-2013-06-10-The Large Scale Learning class notes

11 0.53207868 448 hunch net-2011-10-24-2011 ML symposium and the bears

12 0.52478063 378 hunch net-2009-11-15-The Other Online Learning

13 0.51742941 257 hunch net-2007-07-28-Asking questions

14 0.51487356 69 hunch net-2005-05-11-Visa Casualties

15 0.51209378 353 hunch net-2009-05-08-Computability in Artificial Intelligence

16 0.51144344 110 hunch net-2005-09-10-“Failure” is an option

17 0.50811994 322 hunch net-2008-10-20-New York’s ML Day

18 0.50287342 231 hunch net-2007-02-10-Best Practices for Collaboration

19 0.50019068 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

20 0.49822897 153 hunch net-2006-02-02-Introspectionism as a Disease


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(3, 0.033), (27, 0.194), (33, 0.313), (38, 0.046), (48, 0.013), (49, 0.031), (53, 0.064), (55, 0.08), (80, 0.019), (94, 0.038), (95, 0.06)]
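Similarly, the weights above are presumably a per-document topic distribution from an LDA model trained on the blog archive. A hedged gensim sketch of producing such (topicId, topicWeight) pairs on a toy corpus; the corpus, num_topics, and passes are invented for illustration.

```python
# Hypothetical sketch: per-document LDA topic weights with gensim on a toy corpus.
from gensim import corpora, models

texts = [
    ["metacademy", "concepts", "prerequisite", "graph", "learning", "plan"],
    ["stanford", "online", "teaching", "machine", "learning", "classes"],
    ["conference", "presentation", "slides", "audience", "prioritize"],
]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2,
                      passes=10, random_state=0)

# Per-document (topicId, topicWeight) pairs, analogous to the list above.
for doc_id, bow in enumerate(corpus):
    print(doc_id, lda.get_document_topics(bow, minimum_probability=0.0))
```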

similar blogs list:

simIndex simValue blogId blogTitle

1 0.91125268 61 hunch net-2005-04-25-Embeddings: what are they good for?

Introduction: I’ve been looking at some recent embeddings work, and am struck by how beautiful the theory and algorithms are. It also makes me wonder, what are embeddings good for? A few things immediately come to mind: (1) For visualization of high-dimensional data sets. In this case, one would like good algorithms for embedding specifically into 2- and 3-dimensional Euclidean spaces. (2) For nonparametric modeling. The usual nonparametric models (histograms, nearest neighbor) often require resources which are exponential in the dimension. So if the data actually lie close to some low-dimensional surface, it might be a good idea to first identify this surface and embed the data before applying the model. Incidentally, for applications like these, it’s important to have a functional mapping from high to low dimension, which some techniques do not yield up. (3) As a prelude to classifier learning. The hope here is presumably that learning will be easier in the low-dimensional space,

2 0.8918159 350 hunch net-2009-04-23-Jonathan Chang at Slycoder

Introduction: Jonathan Chang has a research blog on aspects of machine learning.

3 0.88585007 384 hunch net-2009-12-24-Top graduates this season

Introduction: I would like to point out 3 graduates this season as having my confidence they are capable of doing great things. Daniel Hsu has diverse papers with diverse coauthors on {active learning, multilabeling, temporal learning, …} each covering new algorithms and methods of analysis. He is also a capable programmer, having helped me with some nitty-gritty details of cluster parallel Vowpal Wabbit this summer. He has an excellent tendency to just get things done. Nicolas Lambert doesn’t nominally work in machine learning, but I’ve found his work in elicitation relevant nevertheless. In essence, elicitable properties are closely related to learnable properties, and the elicitation complexity is related to a notion of learning complexity. See the Surrogate regret bounds paper for some related discussion. Few people successfully work at such a general level that it crosses fields, but he’s one of them. Yisong Yue is deeply focused on interactive learning, which he has a

same-blog 4 0.83214706 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

5 0.73678732 234 hunch net-2007-02-22-Create Your Own ICML Workshop

Introduction: As usual ICML 2007 will be hosting a workshop program to be held this year on June 24th. The success of the program depends on having researchers like you propose interesting workshop topics and then organize the workshops. I’d like to encourage all of you to consider sending a workshop proposal. The proposal deadline has been extended to March 5. See the workshop web-site for details. Organizing a workshop is a unique way to gather an international group of researchers together to focus for an entire day on a topic of your choosing. I’ve always found that the cost of organizing a workshop is not so large, and very low compared to the benefits. The topic and format of a workshop are limited only by your imagination (and the attractiveness to potential participants) and need not follow the usual model of a mini-conference on a particular ML sub-area. Hope to see some interesting proposals rolling in.

6 0.64436126 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class

7 0.60343736 248 hunch net-2007-06-19-How is Compressed Sensing going to change Machine Learning ?

8 0.59190637 194 hunch net-2006-07-11-New Models

9 0.5848282 12 hunch net-2005-02-03-Learning Theory, by assumption

10 0.58261585 370 hunch net-2009-09-18-Necessary and Sufficient Research

11 0.58248103 343 hunch net-2009-02-18-Decision by Vetocracy

12 0.58127367 225 hunch net-2007-01-02-Retrospective

13 0.58111894 466 hunch net-2012-06-05-ICML acceptance statistics

14 0.5810402 360 hunch net-2009-06-15-In Active Learning, the question changes

15 0.58037287 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”

16 0.58030576 227 hunch net-2007-01-10-A Deep Belief Net Learning Problem

17 0.57966334 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

18 0.57854688 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

19 0.57767916 134 hunch net-2005-12-01-The Webscience Future

20 0.57622612 36 hunch net-2005-03-05-Funding Research