hunch_net hunch_net-2011 hunch_net-2011-421 knowledge-graph by maker-knowledge-mining

421 hunch net-2011-01-03-Herman Goldstine 2011


metadata for this blog

Source: html

Introduction: Vikas points out the Herman Goldstine Fellowship at IBM. I was a Herman Goldstine Fellow, and benefited from the experience a great deal—that’s where work on learning reductions started. If you can do research independently, it’s recommended. Applications are due January 6.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Vikas points out the Herman Goldstine Fellowship at IBM. [sent-1, score-0.113]

2 I was a Herman Goldstine Fellow, and benefited from the experience a great deal—that’s where work on learning reductions started. [sent-2, score-0.66]

3 If you can do research independently, it’s recommended. [sent-3, score-0.062]
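The per-sentence scores above come from weighting each sentence by tfidf. Below is a minimal sketch of how such scores could be produced, assuming scikit-learn; the pipeline's actual sentence splitting, weighting scheme, and normalization are not documented here, so the details are assumptions.

# Rank the sentences of one post by summed tfidf weight (illustrative only;
# a real pipeline would fit the vectorizer on the whole blog corpus).
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "Vikas points out the Herman Goldstine Fellowship at IBM.",
    "I was a Herman Goldstine Fellow, and benefited from the experience a great deal.",
    "If you can do research independently, it's recommended.",
]

vec = TfidfVectorizer()
X = vec.fit_transform(sentences)   # one tfidf row per sentence
scores = X.sum(axis=1).A1          # summed weight as a crude importance score

for rank, i in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[i]), 3), sentences[i])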


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('goldstine', 0.559), ('herman', 0.518), ('vikas', 0.28), ('benefited', 0.259), ('fellow', 0.244), ('independently', 0.233), ('january', 0.189), ('ibm', 0.185), ('reductions', 0.13), ('deal', 0.113), ('points', 0.113), ('experience', 0.11), ('applications', 0.11), ('due', 0.092), ('great', 0.085), ('research', 0.062), ('work', 0.059), ('learning', 0.017)]
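The word weights above and the ranked list that follows are both tfidf-based. Here is a minimal sketch of how per-post tfidf weights and cosine similarities between posts could be computed, assuming scikit-learn; the toy corpus and stop-word handling are assumptions, not the pipeline's actual settings.

# Per-post tfidf weights plus tfidf cosine similarity between posts
# (toy corpus standing in for the full collection of hunch.net posts).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {
    "421": "Vikas points out the Herman Goldstine Fellowship at IBM ...",
    "449": "Thanksgiving is perhaps my favorite holiday ...",
    "121": "I attended the IBM research 60th anniversary ...",
}

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(corpus.values())
terms = vec.get_feature_names_out()

# top-weighted words for the first post (blogId 421)
row = X[0].toarray().ravel()
top = [(terms[i], round(row[i], 3)) for i in row.argsort()[::-1] if row[i] > 0]
print(top[:10])

# similarity of post 421 to every post, highest first
sims = cosine_similarity(X[0], X).ravel()
print(sorted(zip(corpus.keys(), sims), key=lambda p: -p[1]))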

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 421 hunch net-2011-01-03-Herman Goldstine 2011

Introduction: Vikas points out the Herman Goldstine Fellowship at IBM. I was a Herman Goldstine Fellow, and benefited from the experience a great deal—that’s where work on learning reductions started. If you can do research independently, it’s recommended. Applications are due January 6.

2 0.12532382 449 hunch net-2011-11-26-Giving Thanks

Introduction: Thanksgiving is perhaps my favorite holiday, because pausing your life and giving thanks provides a needed moment of perspective. As a researcher, I am most thankful for my education, without which I could not function. I want to share this, because it provides some sense of how a researcher starts. My long term memory seems to function particularly well, which makes any education I get particularly useful. I am naturally obsessive, which makes me chase down details until I fully understand things. Natural obsessiveness can go wrong, of course, but it’s a great ally when you absolutely must get things right. My childhood was all in one hometown, which was a conscious sacrifice on the part of my father, implying disruptions from moving around were eliminated. I’m not sure how important this was since travel has its own benefits, but it bears thought. I had several great teachers in grade school, and naturally gravitated towards teachers over classmates, as they seemed

3 0.093490355 121 hunch net-2005-10-12-The unrealized potential of the research lab

Introduction: I attended the IBM research 60th anniversary. IBM research is, by any reasonable account, the industrial research lab which has managed to bring the most value to its parent company over the long term. This can be seen by simply counting the survivors: IBM research is the only older research lab which has not gone through a period of massive firing. (Note that there are also new research labs.) Despite this impressive record, IBM research has failed, by far, to achieve its potential. Examples which came up in this meeting include: It took about a decade to produce DRAM after it was invented in the lab. (In fact, Intel produced it first.) Relational databases and SQL were invented and then languished. It was only under external competition that IBM released its own relational database. Why didn’t IBM grow an Oracle division? An early lead in IP networking hardware did not result in IBM growing a Cisco division. Why not? And remember … IBM research is a s

4 0.057299763 212 hunch net-2006-10-04-Health of Conferences Wiki

Introduction: Aaron Hertzmann points out the health of conferences wiki, which has a great deal of information about how many different conferences function.

5 0.052778587 433 hunch net-2011-04-23-ICML workshops due

Introduction: Lihong points out that ICML workshop submissions are due April 29.

6 0.051153336 326 hunch net-2008-11-11-COLT CFP

7 0.051061533 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials

8 0.05059148 285 hunch net-2008-01-23-Why Workshop?

9 0.049916662 145 hunch net-2005-12-29-Deadline Season

10 0.046816118 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

11 0.044869985 236 hunch net-2007-03-15-Alternative Machine Learning Reductions Definitions

12 0.043586574 406 hunch net-2010-08-22-KDD 2010

13 0.043507207 14 hunch net-2005-02-07-The State of the Reduction

14 0.043180414 375 hunch net-2009-10-26-NIPS workshops

15 0.041585311 11 hunch net-2005-02-02-Paper Deadlines

16 0.04139927 82 hunch net-2005-06-17-Reopening RL->Classification

17 0.040462289 387 hunch net-2010-01-19-Deadline Season, 2010

18 0.038671985 473 hunch net-2012-09-29-Vowpal Wabbit, version 7.0

19 0.038469769 425 hunch net-2011-02-25-Yahoo! Machine Learning grant due March 11

20 0.0378461 83 hunch net-2005-06-18-Lower Bounds for Learning Reductions


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.051), (1, -0.024), (2, -0.039), (3, -0.028), (4, -0.031), (5, -0.025), (6, 0.041), (7, 0.023), (8, -0.022), (9, 0.015), (10, 0.012), (11, -0.015), (12, -0.045), (13, -0.029), (14, -0.045), (15, 0.025), (16, 0.029), (17, -0.03), (18, -0.07), (19, -0.059), (20, -0.002), (21, 0.036), (22, -0.044), (23, -0.007), (24, 0.014), (25, 0.017), (26, 0.069), (27, 0.01), (28, -0.055), (29, 0.026), (30, 0.017), (31, -0.042), (32, 0.071), (33, 0.022), (34, -0.033), (35, -0.056), (36, -0.06), (37, 0.006), (38, -0.05), (39, 0.013), (40, -0.004), (41, 0.014), (42, -0.009), (43, 0.058), (44, -0.035), (45, -0.006), (46, -0.052), (47, 0.066), (48, -0.053), (49, 0.016)]
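The fifty (topicId, topicWeight) pairs above are LSI coordinates for this post. A minimal sketch of how such weights could be derived, assuming scikit-learn and a truncated SVD over tfidf vectors; the tiny corpus and the choice of 2 components are purely illustrative, since the listing above evidently uses 50 topics.

# LSI-style topic weights via truncated SVD over tfidf vectors,
# then similarity in the reduced topic space (toy corpus, 2 components).
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Herman Goldstine Fellowship at IBM, learning reductions research",
    "ICML workshop submissions are due in April",
    "IBM research lab history and relational databases",
    "NIPS workshops and machine learning deadlines",
]

X = TfidfVectorizer(stop_words="english").fit_transform(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0)
Z = lsi.fit_transform(X)                     # one topic-weight row per post

print(list(enumerate(Z[0].round(3))))        # (topicId, topicWeight) pairs for post 0
print(cosine_similarity(Z[0:1], Z).ravel())  # LSI-space similarity to all posts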

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95560825 421 hunch net-2011-01-03-Herman Goldstine 2011

Introduction: Vikas points out the Herman Goldstine Fellowship at IBM. I was a Herman Goldstine Fellow, and benefited from the experience a great deal—that’s where work on learning reductions started. If you can do research independently, it’s recommended. Applications are due January 6.

2 0.53413683 459 hunch net-2012-03-13-The Submodularity workshop and Lucca Professorship

Introduction: Nina points out the Submodularity Workshop March 19-20 next week at Georgia Tech. Many people want to make Submodularity the new Convexity in machine learning, and it certainly seems worth exploring. Sara Olson also points out a tenured faculty position at IMT Lucca with a deadline of May 15th. Lucca happens to be the ancestral home of 1/4 of my heritage.

3 0.53239012 409 hunch net-2010-09-13-AIStats

Introduction: Geoff Gordon points out AIStats 2011 in Ft. Lauderdale, Florida. The call for papers is now out, due Nov. 1. The plan is to experiment with the review process to encourage quality in several ways. I expect to submit a paper and would encourage others with good research to do likewise.

4 0.46697626 433 hunch net-2011-04-23-ICML workshops due

Introduction: Lihong points out that ICML workshop submissions are due April 29.

5 0.45476043 375 hunch net-2009-10-26-NIPS workshops

Introduction: Many of the NIPS workshops have a deadline about now, and the NIPS early registration deadline is Nov. 6. Several interest me: Adaptive Sensing, Active Learning, and Experimental Design due 10/27. Discrete Optimization in Machine Learning: Submodularity, Sparsity & Polyhedra, due Nov. 6. Large-Scale Machine Learning: Parallelism and Massive Datasets, due 10/23 (i.e. past). Analysis and Design of Algorithms for Interactive Machine Learning, due 10/30. And I’m sure many of the others interest others. Workshops are great as a mechanism for research, so take a look if there is any chance you might be interested.

6 0.45109597 326 hunch net-2008-11-11-COLT CFP

7 0.41729978 446 hunch net-2011-10-03-Monday announcements

8 0.40190944 190 hunch net-2006-07-06-Branch Prediction Competition

9 0.39895099 83 hunch net-2005-06-18-Lower Bounds for Learning Reductions

10 0.39427117 121 hunch net-2005-10-12-The unrealized potential of the research lab

11 0.3832145 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials

12 0.37370604 36 hunch net-2005-03-05-Funding Research

13 0.35848817 384 hunch net-2009-12-24-Top graduates this season

14 0.35187748 154 hunch net-2006-02-04-Research Budget Changes

15 0.35092783 412 hunch net-2010-09-28-Machined Learnings

16 0.35090145 82 hunch net-2005-06-17-Reopening RL->Classification

17 0.34774417 344 hunch net-2009-02-22-Effective Research Funding

18 0.34736502 425 hunch net-2011-02-25-Yahoo! Machine Learning grant due March 11

19 0.34414455 399 hunch net-2010-05-20-Google Predict

20 0.33923668 357 hunch net-2009-05-30-Many ways to Learn this summer


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(12, 0.619), (27, 0.156)]
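The pairs above are LDA topic weights; only two topics carry noticeable mass for this short post. A minimal sketch of how such weights could be produced, assuming scikit-learn's LatentDirichletAllocation over raw term counts; the toy corpus, the 5-topic setting, and the 0.05 threshold are assumptions, and the listing above evidently uses a larger topic count since topicId 27 appears.

# LDA topic weights per post (LDA works on raw counts, not tfidf),
# keeping only topics with non-negligible weight, as in the listing above.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "Herman Goldstine Fellowship at IBM, learning reductions research",
    "machine learning reading groups list",
    "COLT and ICML registration deadlines and workshop costs",
    "neural network papers at ICML, deep learning sessions",
]

counts = CountVectorizer(stop_words="english").fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(counts)   # per-post topic distribution, rows sum to 1

doc0 = [(t, round(w, 3)) for t, w in enumerate(theta[0]) if w > 0.05]
print(doc0)                         # (topicId, topicWeight) pairs for post 0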

similar blogs list:

simIndex simValue blogId blogTitle

1 0.96676344 24 hunch net-2005-02-19-Machine learning reading groups

Introduction: Yaroslav collected an extensive list of machine learning reading groups.

same-blog 2 0.90813154 421 hunch net-2011-01-03-Herman Goldstine 2011

Introduction: Vikas points out the Herman Goldstine Fellowship at IBM. I was a Herman Goldstine Fellow, and benefited from the experience a great deal—that’s where work on learning reductions started. If you can do research independently, it’s recommended. Applications are due January 6.

3 0.64990997 482 hunch net-2013-05-04-COLT and ICML registration

Introduction: Sebastien Bubeck points out COLT registration with a May 13 early registration deadline. The local organizers have done an admirable job of containing costs with a $300 registration fee. ICML registration is also available, at about a 3x higher cost. My understanding is that this is partly due to the costs of a larger conference being harder to contain, partly due to ICML lasting twice as long with tutorials and workshops, and partly because the conference organizers were a bit over-conservative in various ways.

4 0.56721485 438 hunch net-2011-07-11-Interesting Neural Network Papers at ICML 2011

Introduction: Maybe it’s too early to call, but with four separate Neural Network sessions at this year’s ICML, it looks like Neural Networks are making a comeback. Here are my highlights of these sessions. In general, my feeling is that these papers both demystify deep learning and show its broader applicability. The first observation I made is that the once disreputable “Neural” nomenclature is being used again in lieu of “deep learning”. Maybe it’s because Adam Coates et al. showed that single layer networks can work surprisingly well. An Analysis of Single-Layer Networks in Unsupervised Feature Learning, Adam Coates, Honglak Lee, Andrew Y. Ng (AISTATS 2011) The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization, Adam Coates, Andrew Y. Ng (ICML 2011) Another surprising result out of Andrew Ng’s group comes from Andrew Saxe et al. who show that certain convolutional pooling architectures can obtain close to state-of-the-art pe

5 0.37958634 311 hunch net-2008-07-26-Compositional Machine Learning Algorithm Design

Introduction: There were two papers at ICML presenting learning algorithms for a contextual bandit-style setting, where the loss for all labels is not known, but the loss for one label is known. (The first might require an exploration scavenging viewpoint to understand if the experimental assignment was nonrandom.) I strongly approve of these papers and further work in this setting and its variants, because I expect it to become more important than supervised learning. As a quick review, we are thinking about situations where repeatedly: The world reveals feature values (aka context information). A policy chooses an action. The world provides a reward. Sometimes this is done in an online fashion where the policy can change based on immediate feedback and sometimes it’s done in a batch setting where many samples are collected before the policy can change. If you haven’t spent time thinking about the setting, you might want to because there are many natural applications. I’m g

6 0.3285293 259 hunch net-2007-08-19-Choice of Metrics

7 0.24414885 166 hunch net-2006-03-24-NLPers

8 0.24414885 246 hunch net-2007-06-13-Not Posting

9 0.24414885 418 hunch net-2010-12-02-Traffic Prediction Problem

10 0.24389727 274 hunch net-2007-11-28-Computational Consequences of Classification

11 0.2434949 247 hunch net-2007-06-14-Interesting Papers at COLT 2007

12 0.24294752 308 hunch net-2008-07-06-To Dual or Not

13 0.24246424 400 hunch net-2010-06-13-The Good News on Exploration and Learning

14 0.24235988 245 hunch net-2007-05-12-Loss Function Semantics

15 0.24231575 172 hunch net-2006-04-14-JMLR is a success

16 0.24209632 288 hunch net-2008-02-10-Complexity Illness

17 0.24081895 45 hunch net-2005-03-22-Active learning

18 0.23804487 9 hunch net-2005-02-01-Watchword: Loss

19 0.23673637 341 hunch net-2009-02-04-Optimal Proxy Loss for Classification

20 0.23630841 352 hunch net-2009-05-06-Machine Learning to AI