hunch_net hunch_net-2005 hunch_net-2005-130 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: There will be two machine learning summer schools in 2006. One is in Canberra, Australia, from February 6 to February 17 (Aussie summer). The webpage is fully ‘live’, so you should actively consider it now. The other is in Taipei, Taiwan, from July 24 to August 4. This one is still in the planning phase, but that should be settled soon. Attending an MLSS is probably the quickest and easiest way to bootstrap yourself into a reasonable initial understanding of the field of machine learning.
Top-scoring sentences from the introduction (sentNum: sentScore): 1: 0.639, 3: 0.574, 5: 0.573, 6: 1.134.
Top words by TF-IDF: february (0.356), summer (0.275), settled (0.257), taipei (0.257), taiwan (0.257), australia (0.225), bootstrap (0.225), phase (0.206), live (0.206), schools (0.199), easiest (0.193), mlss (0.193), webpage (0.187), august (0.182), july (0.17), attending (0.167), actively (0.157), planning (0.152), initial (0.15), fully (0.141), field (0.131), probably (0.12), still (0.106), understanding (0.091), consider (0.089), reasonable (0.084), machine (0.075), way (0.065), two (0.059), one (0.058), learning (0.031).
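Nothing in this dump documents how these TF-IDF weights were computed, so the exact formula is unknown. The sketch below is only a generic illustration of per-word TF-IDF scoring (hand-rolled Python over a hypothetical two-document corpus), not the maker-knowledge-mining pipeline itself.

```python
import math
from collections import Counter

def tfidf_scores(docs, query_idx, top_n=10):
    """Rank the words of docs[query_idx] by a generic TF-IDF weighting.

    Illustrative only: length-normalized term frequency times a
    smoothed log inverse document frequency. The pipeline that
    produced the weights above is undocumented and may differ.
    """
    tokenized = [d.lower().split() for d in docs]
    n_docs = len(tokenized)
    # Document frequency: number of documents containing each word.
    df = Counter(w for doc in tokenized for w in set(doc))
    doc = tokenized[query_idx]
    tf = Counter(doc)
    scores = {
        w: (count / len(doc)) * math.log(1 + n_docs / df[w])
        for w, count in tf.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

# Hypothetical corpus standing in for the blog archive.
corpus = [
    "there will be two machine learning summer schools in 2006",
    "machine learning conference paper deadlines arrive every february",
]
print(tfidf_scores(corpus, query_idx=0))
```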
simIndex simValue blogId blogTitle
same-blog 1 1.0 130 hunch net-2005-11-16-MLSS 2006
2 0.24755308 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season
Introduction: Machine learning always welcomes the new year with paper deadlines for summer conferences. This year, we have:
Conference | Paper Deadline | When/Where | Double blind? | Author Feedback? | Notes
ICML | February 1 | June 28-July 2, Bellevue, Washington, USA | Y | Y | Weak colocation with ACL
COLT | February 11 | July 9-July 11, Budapest, Hungary | N | N | Colocated with FOCM
KDD | February 11/18 | August 21-24, San Diego, California, USA | N | N |
UAI | March 18 | July 14-17, Barcelona, Spain | Y | N |
The larger conferences are on the west coast of the United States, while the smaller ones are in Europe.
3 0.23338267 226 hunch net-2007-01-04-2007 Summer Machine Learning Conferences
Introduction: It’s conference season once again.
Conference | Due? | When? | Where? | Double blind? | Author feedback? | Workshops?
AAAI | February 1/6 (and 27) | July 22-26 | Vancouver, British Columbia | Yes | Yes | Done
UAI | February 28/March 2 | July 19-22 | Vancouver, British Columbia | No | No | No
COLT | January 16 | June 13-15 | San Diego, California (with FCRC) | No | No | No
ICML | February 7/9 | June 20-24 | Corvallis, Oregon | Yes | Yes | February 16
KDD | February 23/28 | August 12-15 | San Jose, California | Yes | No? | February 28
The geowinner this year is the west coast of North America. Last year’s geowinner was the Northeastern US, and the year before it was mostly Europe. It’s notable how tightly the conferences cluster, even when they don’t colocate.
4 0.21293733 273 hunch net-2007-11-16-MLSS 2008
Introduction: … is in Kioloa, Australia, from March 3 to March 14. It’s a great chance to learn something about Machine Learning, and I’ve enjoyed several previous Machine Learning Summer Schools. The website has many more details, but registration is open now for the first 80 to sign up.
5 0.18103147 387 hunch net-2010-01-19-Deadline Season, 2010
Introduction: Many conference deadlines are coming soon.
Conference | Deadline | Double Blind / Author Feedback | Time/Place
ICML | January 18 (Workshops) / February 1 (Papers) / February 13 (Tutorials) | Y/Y | Haifa, Israel, June 21-25
KDD | February 1 (Workshops) / February 2&5 (Papers) / February 26 (Tutorials & Panels) / April 17 (Demos) | N/S | Washington DC, July 25-28
COLT | January 18 (Workshops) / February 19 (Papers) | N/S | Haifa, Israel, June 25-29
UAI | March 11 (Papers) | N?/Y | Catalina Island, California, July 8-11
ICML continues to experiment with the reviewing process, although perhaps less so than last year. The S (“sort-of”) for COLT is because author feedback occurs only after decisions are made. KDD is notable for being the most comprehensive in terms of {Tutorials, Workshops, Challenges, Panels, Papers (two tracks), Demos}. The S for KDD is because there is sometimes author feedback at the decision of the SPC. The (past) January 18 de…
6 0.16868258 261 hunch net-2007-08-28-Live ML Class
7 0.16196001 4 hunch net-2005-01-26-Summer Schools
8 0.16044345 11 hunch net-2005-02-02-Paper Deadlines
9 0.13352805 470 hunch net-2012-07-17-MUCMD and BayLearn
10 0.12359394 357 hunch net-2009-05-30-Many ways to Learn this summer
11 0.11745304 145 hunch net-2005-12-29-Deadline Season
12 0.10914357 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei
13 0.10709877 17 hunch net-2005-02-10-Conferences, Dates, Locations
14 0.10253866 331 hunch net-2008-12-12-Summer Conferences
15 0.097021356 75 hunch net-2005-05-28-Running A Machine Learning Summer School
16 0.092313334 467 hunch net-2012-06-15-Normal Deviate and the UCSC Machine Learning Summer School
17 0.082888871 66 hunch net-2005-05-03-Conference attendance is mandatory
18 0.082840167 283 hunch net-2008-01-07-2008 Summer Machine Learning Conference Schedule
19 0.077644512 276 hunch net-2007-12-10-Learning Track of International Planning Competition
20 0.077618532 447 hunch net-2011-10-10-ML Symposium and ICML details
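The simValue column is presumably a pairwise document-similarity score, but the metric is never stated in this dump. As a hedged sketch, cosine similarity over bag-of-words vectors is the textbook choice, shown below in plain Python; the real pipeline likely uses weighted vectors, so these numbers will not match the ones above.

```python
import math
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between two documents' raw word-count vectors."""
    va, vb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Two hypothetical snippets; a score near 1 means near-identical word use.
print(cosine_similarity(
    "two machine learning summer schools in 2006",
    "machine learning summer school registration is open now",
))
```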
topicId topicWeight
[(0, 0.08), (1, -0.112), (2, -0.073), (3, -0.154), (4, -0.058), (5, -0.253), (6, -0.065), (7, 0.027), (8, 0.009), (9, 0.039), (10, -0.027), (11, -0.091), (12, 0.113), (13, 0.016), (14, -0.076), (15, -0.004), (16, -0.056), (17, 0.125), (18, 0.075), (19, 0.039), (20, 0.015), (21, 0.053), (22, -0.085), (23, -0.223), (24, -0.002), (25, 0.006), (26, 0.067), (27, -0.048), (28, 0.089), (29, 0.01), (30, -0.062), (31, -0.058), (32, 0.067), (33, -0.061), (34, -0.006), (35, -0.031), (36, 0.014), (37, -0.1), (38, 0.048), (39, -0.031), (40, 0.001), (41, -0.028), (42, -0.109), (43, -0.005), (44, -0.038), (45, 0.022), (46, -0.023), (47, -0.031), (48, 0.009), (49, 0.012)]
simIndex simValue blogId blogTitle
same-blog 1 0.96592724 130 hunch net-2005-11-16-MLSS 2006
2 0.6781823 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season
3 0.65561801 4 hunch net-2005-01-26-Summer Schools
Introduction: There are several summer schools related to machine learning. We are running a two-week machine learning summer school in Chicago, USA, May 16-27. IPAM is running a more focused three-week summer school on Intelligent Extraction of Information from Graphs and High Dimensional Data in Los Angeles, USA, July 11-29. A broad one-week school on analysis of patterns will be held in Erice, Italy, Oct. 28-Nov 6.
4 0.64567614 273 hunch net-2007-11-16-MLSS 2008
5 0.5970276 357 hunch net-2009-05-30-Many ways to Learn this summer
Introduction: There are at least 3 summer schools related to machine learning this summer. The first is at the University of Chicago, June 1-11, organized by Misha Belkin, Partha Niyogi, and Steve Smale. Registration is closed for this one, meaning they met their capacity limit. The format is essentially an extended Tutorial/Workshop. I was particularly interested to see Valiant amongst the speakers. I’m also presenting Saturday June 6, on logarithmic time prediction. Praveen Srinivasan points out the second at Peking University in Beijing, China, July 20-27. This one differs substantially, as it is about vision, machine learning, and their intersection. The deadline for applications is June 10 or 15. This is also another example of the growth of research in China, with active support from NSF. The third one is at Cambridge, England, August 29-September 10. It’s in the MLSS series. Compared to the Chicago one, this one is more about the Bayesian side of ML, alth…
6 0.58320504 226 hunch net-2007-01-04-2007 Summer Machine Learning Conferences
7 0.55257565 145 hunch net-2005-12-29-Deadline Season
8 0.51201606 11 hunch net-2005-02-02-Paper Deadlines
9 0.49766117 75 hunch net-2005-05-28-Running A Machine Learning Summer School
10 0.48263264 261 hunch net-2007-08-28-Live ML Class
11 0.47579247 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops
12 0.45411724 69 hunch net-2005-05-11-Visa Casualties
13 0.42920929 387 hunch net-2010-01-19-Deadline Season, 2010
14 0.42160675 184 hunch net-2006-06-15-IJCAI is out of season
15 0.40220591 331 hunch net-2008-12-12-Summer Conferences
16 0.38650623 467 hunch net-2012-06-15-Normal Deviate and the UCSC Machine Learning Summer School
17 0.38353455 470 hunch net-2012-07-17-MUCMD and BayLearn
18 0.37468293 17 hunch net-2005-02-10-Conferences, Dates, Locations
19 0.37315017 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei
20 0.36136165 66 hunch net-2005-05-03-Conference attendance is mandatory
topicId topicWeight
[(27, 0.095), (35, 0.521), (55, 0.052), (94, 0.174)]
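The topic IDs above are unlabeled anywhere in this dump, so the weights can only be read relatively. This tiny snippet (plain Python, reusing the sparse vector just listed) ranks the listed topics by weight, showing that topic 35 dominates this post's profile.

```python
# Sparse topic-weight vector copied from the dump above.
weights = [(27, 0.095), (35, 0.521), (55, 0.052), (94, 0.174)]

total = sum(w for _, w in weights)
for topic_id, w in sorted(weights, key=lambda tw: -tw[1]):
    # Share is relative to the listed (non-zero) topics only.
    print(f"topic {topic_id}: {w:.3f} ({w / total:.0%} of listed mass)")
```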
simIndex simValue blogId blogTitle
same-blog 1 0.88447797 130 hunch net-2005-11-16-MLSS 2006
2 0.67207325 73 hunch net-2005-05-17-A Short Guide to PhD Graduate Study
Introduction: Graduate study is a mysterious and uncertain process. The easiest way to see this is by noting that a very old advisor/student mechanism is preferred. There is no known successful mechanism for “mass producing” PhDs as is done (in some sense) for undergraduate and masters study. Here are a few hints that might be useful to prospective or current students, based on my own experience. Masters or PhD: (a) You want a PhD if you want to do research. (b) You want a masters if you want to make money. People wanting (b) will be manifestly unhappy with (a) because it typically means years of low pay. People wanting (a) should try to avoid (b) because it prolongs an already long process. Attitude: Many students struggle for a while with the wrong attitude towards research. Most students come into graduate school with 16-19 years of schooling where the principal means of success is proving that you know something via assignments, tests, etc… Research does not work this way. Re…
3 0.60373449 392 hunch net-2010-03-26-A Variance only Deviation Bound
Introduction: At the PAC-Bayes workshop earlier this week, Olivier Catoni described a result that I hadn’t believed was possible: a deviation bound depending only on the variance of a random variable. For people not familiar with deviation bounds, this may be hard to appreciate. Deviation bounds are one of the core components of the foundations of machine learning theory, so developments here have the potential to alter our understanding of how to learn and what is learnable. My understanding is that the basic proof techniques started with Bernstein and have evolved into several variants specialized for various applications. All of the variants I knew had a dependence on the range, with some also having a dependence on the variance of an IID or martingale random variable. This one is the first I know of with a dependence on only the variance. The basic idea is to use a biased estimator of the mean which is not influenced much by outliers. Then, a deviation bound can be proved by…
4 0.3610611 346 hunch net-2009-03-18-Parallel ML primitives
Introduction: Previously, we discussed parallel machine learning a bit. As parallel ML is rather difficult, I’d like to describe my thinking at the moment, and ask for advice from the rest of the world. This is particularly relevant right now, as I’m attending a workshop tomorrow on parallel ML. Parallelizing slow algorithms seems uncompelling. Parallelizing many algorithms also seems uncompelling, because the effort required to parallelize is substantial. This leaves the question: which one fast algorithm is the best to parallelize? What is a substantially different second? One compellingly fast simple algorithm is online gradient descent on a linear representation. This is the core of Leon’s sgd code and Vowpal Wabbit. Antoine Bordes showed a variant was competitive in the large scale learning challenge. It’s also a decades-old primitive which has been reused in many algorithms, and continues to be reused. It also applies to online learning rather than just online optimiz…
5 0.35865933 115 hunch net-2005-09-26-Prediction Bounds as the Mathematics of Science
Introduction: “Science” has many meanings, but one common meaning is “the scientific method”, which is a principled method for investigating the world using the following steps: (1) form a hypothesis about the world; (2) use the hypothesis to make predictions; (3) run experiments to confirm or disprove the predictions. The ordering of these steps is very important to the scientific method. In particular, predictions must be made before experiments are run. Given that we all believe in the scientific method of investigation, it may be surprising to learn that cheating is very common. This happens for many reasons, some innocent and some not. Drug studies: Pharmaceutical companies make predictions about the effects of their drugs and then conduct blind clinical studies to determine their effect. Unfortunately, they have also been caught using some of the more advanced techniques for cheating here: including “reprobleming”, “data set selection”, and probably “overfitting by review”…
6 0.35852432 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops
7 0.35573035 42 hunch net-2005-03-17-Going all the Way, Sometimes
8 0.35558915 120 hunch net-2005-10-10-Predictive Search is Coming
9 0.35549051 35 hunch net-2005-03-04-The Big O and Constants in Learning
10 0.34713131 276 hunch net-2007-12-10-Learning Track of International Planning Competition
11 0.34132862 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making
12 0.31812117 229 hunch net-2007-01-26-Parallel Machine Learning Problems
13 0.31627172 136 hunch net-2005-12-07-Is the Google way the way for machine learning?
14 0.30213928 286 hunch net-2008-01-25-Turing’s Club for Machine Learning
15 0.29306632 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy
16 0.29298663 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning
17 0.29062414 441 hunch net-2011-08-15-Vowpal Wabbit 6.0
18 0.29061216 253 hunch net-2007-07-06-Idempotent-capable Predictors
19 0.28855595 43 hunch net-2005-03-18-Binomial Weighting
20 0.28842795 306 hunch net-2008-07-02-Proprietary Data in Academic Research?