knowledge-graph by maker-knowledge-mining

4 hunch net-2005-01-26-Summer Schools


meta info for this blog

Source: html

Introduction: There are several summer schools related to machine learning. We are running a two week machine learning summer school in Chicago, USA May 16-27. IPAM is running a more focused three week summer school on Intelligent Extraction of Information from Graphs and High Dimensional Data in Los Angeles, USA July 11-29. A broad one-week school on analysis of patterns will be held in Erice, Italy, Oct. 28-Nov 6.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 There are several summer schools related to machine learning. [sent-1, score-0.705]

2 We are running a two week machine learning summer school in Chicago, USA May 16-27. [sent-2, score-1.392]

3 IPAM is running a more focused three week summer school on Intelligent Extraction of Information from Graphs and High Dimensional Data in Los Angeles, USA July 11-29. [sent-3, score-1.518]

4 A broad one-week school on analysis of patterns will be held in Erice, Italy, Oct. 28-Nov 6. [sent-4, score-0.984]
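The scores above rank the post's sentences by a tf-idf model. The exact pipeline used by maker-knowledge-mining is not shown on this page, so the following is only a minimal sketch of one common way to produce such scores, assuming scikit-learn's TfidfVectorizer and a sum-of-weights sentence score; the resulting values will not match the corpus-level scores listed above.

```python
# Minimal sketch (assumed approach, not the original pipeline):
# score each sentence by the sum of its tf-idf term weights.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "There are several summer schools related to machine learning.",
    "We are running a two week machine learning summer school in Chicago, USA May 16-27.",
    "IPAM is running a more focused three week summer school on Intelligent Extraction of "
    "Information from Graphs and High Dimensional Data in Los Angeles, USA July 11-29.",
    "A broad one-week school on analysis of patterns will be held in Erice, Italy, Oct. 28-Nov 6.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(sentences)   # one row per sentence

scores = tfidf.sum(axis=1).A1                 # sum of tf-idf weights per sentence
for num, (text, score) in enumerate(zip(sentences, scores), start=1):
    print(f"{num}  {score:.3f}  {text}")
```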


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('school', 0.393), ('usa', 0.381), ('summer', 0.349), ('week', 0.308), ('angeles', 0.218), ('los', 0.218), ('patterns', 0.218), ('running', 0.216), ('italy', 0.202), ('schools', 0.168), ('graphs', 0.163), ('dimensional', 0.154), ('held', 0.15), ('chicago', 0.147), ('intelligent', 0.147), ('july', 0.144), ('broad', 0.138), ('focused', 0.129), ('three', 0.123), ('high', 0.086), ('analysis', 0.085), ('related', 0.083), ('information', 0.064), ('machine', 0.063), ('data', 0.056), ('two', 0.05), ('may', 0.043), ('several', 0.042), ('learning', 0.013)]
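These are per-word tf-idf weights for this post, highest first. As a rough illustration only (the real vocabulary, preprocessing, and corpus are not given here), the toy corpus and scikit-learn settings below are assumptions:

```python
# Minimal sketch (assumptions: scikit-learn TfidfVectorizer, toy stand-in corpus):
# compute per-word tf-idf weights for one post and list the top-weighted words.
from sklearn.feature_extraction.text import TfidfVectorizer

all_posts = [  # hypothetical stand-in for the full hunch_net collection
    "There are several summer schools related to machine learning. We are running "
    "a two week machine learning summer school in Chicago, USA May 16-27.",
    "Machine learning always welcomes the new year with paper deadlines for summer conferences.",
    "Many people in machine learning take advantage of the notion of a proxy loss.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(all_posts)        # rows = posts, columns = terms

weights = doc_term[0].toarray().ravel()               # weights for the first post
terms = vectorizer.get_feature_names_out()
top_words = sorted(zip(terms, weights), key=lambda pair: pair[1], reverse=True)
print([(word, round(w, 3)) for word, w in top_words if w > 0][:10])
```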

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 4 hunch net-2005-01-26-Summer Schools

Introduction: There are several summer schools related to machine learning. We are running a two week machine learning summer school in Chicago, USA May 16-27. IPAM is running a more focused three week summer school on Intelligent Extraction of Information from Graphs and High Dimensional Data in Los Angeles, USA July 11-29. A broad one-week school on analysis of patterns will be held in Erice, Italy, Oct. 28-Nov 6.

2 0.23962089 66 hunch net-2005-05-03-Conference attendance is mandatory

Introduction: For anyone planning to do research, conference attendance is virtually mandatory for success. Aside from exposing yourself to a large collection of different ideas, many interesting conversations leading to new research happen at conferences. If you are a student, you should plan to go to at least one summer conference. Your advisor should cover the costs.
Conference | Location | Early registration deadline | Normal/student cost in US dollars
AAAI | Pittsburgh, PA, USA | May 13 | 590/170
IJCAI | Edinburgh, Scotland | May 21 | 663/351
COLT | Bertinoro, Italy | May 30 | 256/178
KDD | Chicago, IL, USA | July 15 | 590/260
ICML | Bonn, Germany | July 1 | 448
UAI | Edinburgh, Scotland | not ready yet | ???

3 0.21838312 467 hunch net-2012-06-15-Normal Deviate and the UCSC Machine Learning Summer School

Introduction: Larry Wasserman has started the Normal Deviate blog which I added to the blogroll on the right. Manfred Warmuth points out the UCSC machine learning summer school running July 9-20 which may be of particular interest to those in silicon valley.

4 0.2083139 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season

Introduction: Machine learning always welcomes the new year with paper deadlines for summer conferences. This year, we have:
Conference | Paper deadline | When/Where | Double blind? | Author feedback? | Notes
ICML | February 1 | June 28-July 2, Bellevue, Washington, USA | Y | Y | Weak colocation with ACL
COLT | February 11 | July 9-July 11, Budapest, Hungary | N | N | colocated with FOCM
KDD | February 11/18 | August 21-24, San Diego, California, USA | N | N |
UAI | March 18 | July 14-17, Barcelona, Spain | Y | N |
The larger conferences are on the west coast in the United States, while the smaller ones are in Europe.

5 0.18836798 75 hunch net-2005-05-28-Running A Machine Learning Summer School

Introduction: We just finished the Chicago 2005 Machine Learning Summer School. The school was 2 weeks long with about 130 (or 140 counting the speakers) participants. For perspective, this is perhaps the largest graduate level machine learning class I am aware of anywhere and anytime (previous MLSSs have been close). Overall, it seemed to go well, although the students are the real authority on this. For those who missed it, DVDs will be available from our Slovenian friends. Email Mrs Spela Sitar of the Jozsef Stefan Institute for details. The following are some notes for future planning and those interested. Good Decisions: Acquiring the larger-than-necessary “Assembly Hall” at International House. Our attendance came in well above our expectations, so this was a critical early decision that made a huge difference. The invited speakers were key. They made a huge difference in the quality of the content. Delegating early and often was important. One key difficulty here

6 0.18292955 17 hunch net-2005-02-10-Conferences, Dates, Locations

7 0.16196001 130 hunch net-2005-11-16-MLSS 2006

8 0.14025059 273 hunch net-2007-11-16-MLSS 2008

9 0.12706335 69 hunch net-2005-05-11-Visa Casualties

10 0.12236945 357 hunch net-2009-05-30-Many ways to Learn this summer

11 0.11882015 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

12 0.11828247 449 hunch net-2011-11-26-Giving Thanks

13 0.098736793 146 hunch net-2006-01-06-MLTV

14 0.091096401 471 hunch net-2012-08-24-Patterns for research in machine learning

15 0.084493473 184 hunch net-2006-06-15-IJCAI is out of season

16 0.075240076 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

17 0.068621047 283 hunch net-2008-01-07-2008 Summer Machine Learning Conference Schedule

18 0.068290107 92 hunch net-2005-07-11-AAAI blog

19 0.067962147 174 hunch net-2006-04-27-Conferences, Workshops, and Tutorials

20 0.067372583 490 hunch net-2013-11-09-Graduates and Postdocs
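In the list above, simValue measures how close each post is to this one. With tf-idf document vectors this is typically cosine similarity, although the exact metric used here is not stated; the sketch below assumes cosine similarity and a toy corpus:

```python
# Minimal sketch (assumption: cosine similarity over tf-idf vectors).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

all_posts = [  # toy stand-in corpus; post 0 plays the role of this post
    "There are several summer schools related to machine learning.",
    "We just finished the Chicago 2005 Machine Learning Summer School.",
    "Conference attendance is virtually mandatory for research success.",
    "Many people in machine learning take advantage of a proxy loss.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(all_posts)
sims = cosine_similarity(tfidf[0], tfidf).ravel()     # similarity of post 0 to every post

ranking = sorted(enumerate(sims), key=lambda pair: pair[1], reverse=True)
for post_id, value in ranking:
    print(f"{post_id}  {value:.3f}")                  # post 0 itself comes first with 1.0
```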


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.076), (1, -0.078), (2, -0.099), (3, -0.099), (4, -0.063), (5, -0.198), (6, -0.047), (7, -0.045), (8, 0.017), (9, 0.017), (10, -0.004), (11, -0.058), (12, 0.059), (13, -0.022), (14, -0.057), (15, -0.024), (16, -0.03), (17, 0.235), (18, 0.079), (19, 0.181), (20, 0.059), (21, -0.1), (22, -0.032), (23, -0.186), (24, -0.097), (25, 0.073), (26, 0.129), (27, -0.104), (28, 0.048), (29, 0.045), (30, 0.087), (31, -0.076), (32, -0.019), (33, 0.107), (34, -0.067), (35, 0.073), (36, -0.069), (37, -0.112), (38, -0.001), (39, -0.022), (40, -0.039), (41, -0.039), (42, -0.01), (43, 0.111), (44, 0.037), (45, 0.067), (46, 0.06), (47, 0.006), (48, -0.057), (49, -0.044)]
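The 50 numbers above are this post's weights on the LSI topics. LSI is usually implemented as a truncated SVD of the tf-idf document-term matrix; the following is a minimal sketch under that assumption, with a toy corpus and far fewer components than the 50 used on this page:

```python
# Minimal sketch (assumption: LSI = truncated SVD of the tf-idf matrix).
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

all_posts = [  # toy stand-in for the full hunch_net collection
    "There are several summer schools related to machine learning.",
    "We just finished the Chicago 2005 Machine Learning Summer School.",
    "Conference attendance is virtually mandatory for research success.",
    "Machine learning always welcomes the new year with paper deadlines.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(all_posts)

# The page shows 50 topic weights; the toy corpus is too small for that,
# so only 2 components are extracted here.
lsi = TruncatedSVD(n_components=2, random_state=0)
topic_weights = lsi.fit_transform(tfidf)              # one weight vector per post

print([(i, round(w, 3)) for i, w in enumerate(topic_weights[0])])
```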

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97990161 4 hunch net-2005-01-26-Summer Schools

Introduction: There are several summer schools related to machine learning. We are running a two week machine learning summer school in Chicago, USA May 16-27. IPAM is running a more focused three week summer school on Intelligent Extraction of Information from Graphs and High Dimensional Data in Los Angeles, USA July 11-29. A broad one-week school on analysis of patterns will be held in Erice, Italy, Oct. 28-Nov 6.

2 0.70929128 467 hunch net-2012-06-15-Normal Deviate and the UCSC Machine Learning Summer School

Introduction: Larry Wasserman has started the Normal Deviate blog which I added to the blogroll on the right. Manfred Warmuth points out the UCSC machine learning summer school running July 9-20 which may be of particular interest to those in silicon valley.

3 0.65711051 357 hunch net-2009-05-30-Many ways to Learn this summer

Introduction: There are at least 3 summer schools related to machine learning this summer. The first is at University of Chicago June 1-11 organized by Misha Belkin, Partha Niyogi, and Steve Smale. Registration is closed for this one, meaning they met their capacity limit. The format is essentially an extended Tutorial/Workshop. I was particularly interested to see Valiant amongst the speakers. I’m also presenting Saturday June 6, on logarithmic time prediction. Praveen Srinivasan points out the second at Peking University in Beijing, China, July 20-27. This one differs substantially, as it is about vision, machine learning, and their intersection. The deadline for applications is June 10 or 15. This is also another example of the growth of research in China, with active support from NSF. The third one is at Cambridge, England, August 29-September 10. It’s in the MLSS series. Compared to the Chicago one, this one is more about the Bayesian side of ML, alth

4 0.65437996 75 hunch net-2005-05-28-Running A Machine Learning Summer School

Introduction: We just finished the Chicago 2005 Machine Learning Summer School. The school was 2 weeks long with about 130 (or 140 counting the speakers) participants. For perspective, this is perhaps the largest graduate level machine learning class I am aware of anywhere and anytime (previous MLSSs have been close). Overall, it seemed to go well, although the students are the real authority on this. For those who missed it, DVDs will be available from our Slovenian friends. Email Mrs Spela Sitar of the Jozsef Stefan Institute for details. The following are some notes for future planning and those interested. Good Decisions: Acquiring the larger-than-necessary “Assembly Hall” at International House. Our attendance came in well above our expectations, so this was a critical early decision that made a huge difference. The invited speakers were key. They made a huge difference in the quality of the content. Delegating early and often was important. One key difficulty here

5 0.65158385 130 hunch net-2005-11-16-MLSS 2006

Introduction: There will be two machine learning summer schools in 2006. One is in Canberra, Australia from February 6 to February 17 (Aussie summer). The webpage is fully ‘live’ so you should actively consider it now. The other is in Taipei, Taiwan from July 24 to August 4. This one is still in the planning phase, but that should be settled soon. Attending an MLSS is probably the quickest and easiest way to bootstrap yourself into a reasonable initial understanding of the field of machine learning.

6 0.63110107 69 hunch net-2005-05-11-Visa Casualties

7 0.61239392 66 hunch net-2005-05-03-Conference attendance is mandatory

8 0.55089623 17 hunch net-2005-02-10-Conferences, Dates, Locations

9 0.54134041 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

10 0.41203606 184 hunch net-2006-06-15-IJCAI is out of season

11 0.39422327 283 hunch net-2008-01-07-2008 Summer Machine Learning Conference Schedule

12 0.38575676 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

13 0.38095272 273 hunch net-2007-11-16-MLSS 2008

14 0.37232727 449 hunch net-2011-11-26-Giving Thanks

15 0.35752633 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season

16 0.32896671 470 hunch net-2012-07-17-MUCMD and BayLearn

17 0.29919231 405 hunch net-2010-08-21-Rob Schapire at NYC ML Meetup

18 0.28341082 13 hunch net-2005-02-04-JMLG

19 0.28329146 146 hunch net-2006-01-06-MLTV

20 0.2830534 414 hunch net-2010-10-17-Partha Niyogi has died


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(24, 0.746), (27, 0.046), (94, 0.025)]
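The LDA entries above give this post's topic proportions: topic 24 dominates, and only topics with non-trivial weight are listed. Below is a minimal sketch of producing such proportions, assuming scikit-learn's LatentDirichletAllocation over raw term counts; the topic count and corpus are illustrative guesses, since topic ids up to 94 appear on this page.

```python
# Minimal sketch (assumptions: scikit-learn LDA on raw term counts, toy corpus,
# small topic count; the real model appears to use on the order of 100 topics).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

all_posts = [  # toy stand-in for the full hunch_net collection
    "There are several summer schools related to machine learning.",
    "We just finished the Chicago 2005 Machine Learning Summer School.",
    "Conference attendance is virtually mandatory for research success.",
    "Many people in machine learning take advantage of a proxy loss.",
]

counts = CountVectorizer(stop_words="english").fit_transform(all_posts)

lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(counts)                     # each row sums to 1

# Report only topics with non-negligible weight, as on this page.
print([(topic, round(w, 3)) for topic, w in enumerate(theta[0]) if w > 0.02])
```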

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93425059 4 hunch net-2005-01-26-Summer Schools

Introduction: There are several summer schools related to machine learning. We are running a two week machine learning summer school in Chicago, USA May 16-27. IPAM is running a more focused three week summer school on Intelligent Extraction of Information from Graphs and High Dimensional Data in Los Angeles, USA July 11-29. A broad one-week school on analysis of patterns will be held in Erice, Italy, Oct. 28-Nov 6.

2 0.27577481 96 hunch net-2005-07-21-Six Months

Introduction: This is the 6 month point in the “run a research blog” experiment, so it seems like a good point to take stock and assess. One fundamental question is: “Is it worth it?” The idea of running a research blog will never become widely popular and useful unless it actually aids research. On the negative side, composing ideas for a post and maintaining a blog takes a significant amount of time. On the positive side, the process might yield better research because there is an opportunity for better, faster feedback implying better, faster thinking. My answer at the moment is a provisional “yes”. Running the blog has been incidentally helpful in several ways: It is sometimes educational (example). More often, the process of composing thoughts well enough to post simply aids thinking. This has resulted in a couple solutions to problems of interest (and perhaps more over time). If you really want to solve a problem, letting the world know is helpful. This isn’t necessarily

3 0.19100344 95 hunch net-2005-07-14-What Learning Theory might do

Introduction: I wanted to expand on this post and some of the previous problems/research directions about where learning theory might make large strides. Why theory? The essential reason for theory is “intuition extension”. A very good applied learning person can master some particular application domain yielding the best computer algorithms for solving that problem. A very good theory can take the intuitions discovered by this and other applied learning people and extend them to new domains in a relatively automatic fashion. To do this, we take these basic intuitions and try to find a mathematical model that: Explains the basic intuitions. Makes new testable predictions about how to learn. Succeeds in so learning. This is “intuition extension”: taking what we have learned somewhere else and applying it in new domains. It is fundamentally useful to everyone because it increases the level of automation in solving problems. Where next for learning theory? I like the a

4 0.17743833 341 hunch net-2009-02-04-Optimal Proxy Loss for Classification

Introduction: Many people in machine learning take advantage of the notion of a proxy loss: A loss function which is much easier to optimize computationally than the loss function imposed by the world. A canonical example is when we want to learn a weight vector w and predict according to a dot product f_w(x) = sum_i w_i x_i, where optimizing squared loss (y - f_w(x))^2 over many samples is much more tractable than optimizing 0-1 loss I(y ≠ Threshold(f_w(x) - 0.5)). While the computational advantages of optimizing a proxy loss are substantial, we are curious: which proxy loss is best? The answer of course depends on what the real loss imposed by the world is. For 0-1 loss classification, there are adherents to many choices: Log loss. If we confine the prediction to [0,1], we can treat it as a predicted probability that the label is 1, and measure loss according to log 1/p’(y|x) where p’(y|x) is the predicted probability of the observed label. A standard method for confi

5 0.16436715 12 hunch net-2005-02-03-Learning Theory, by assumption

Introduction: One way to organize learning theory is by assumption (in the assumption = axiom sense), from no assumptions to many assumptions. As you travel down this list, the statements become stronger, but the scope of applicability decreases. No assumptions: Online learning: There exists a meta prediction algorithm which competes well with the best element of any set of prediction algorithms. Universal Learning: Using a “bias” of 2^(-description length of Turing machine) in learning is equivalent to all other computable biases up to some constant. Reductions: The ability to predict well on classification problems is equivalent to the ability to predict well on many other learning problems. Independent and Identically Distributed (IID) Data: Performance Prediction: Based upon past performance, you can predict future performance. Uniform Convergence: Performance prediction works even after choosing classifiers based on the data from large sets of classifiers.

6 0.076545104 415 hunch net-2010-10-28-NY ML Symposium 2010

7 0.0689601 252 hunch net-2007-07-01-Watchword: Online Learning

8 0.068168841 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

9 0.068048328 43 hunch net-2005-03-18-Binomial Weighting

10 0.067998216 229 hunch net-2007-01-26-Parallel Machine Learning Problems

11 0.067844272 258 hunch net-2007-08-12-Exponentiated Gradient

12 0.067735985 345 hunch net-2009-03-08-Prediction Science

13 0.067664653 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

14 0.067543641 337 hunch net-2009-01-21-Nearly all natural problems require nonlinearity

15 0.067361757 351 hunch net-2009-05-02-Wielding a New Abstraction

16 0.067352325 276 hunch net-2007-12-10-Learning Track of International Planning Competition

17 0.067055881 352 hunch net-2009-05-06-Machine Learning to AI

18 0.066968083 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning

19 0.066957042 190 hunch net-2006-07-06-Branch Prediction Competition

20 0.066895813 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms