hunch_net 2012 knowledge graph


blogs list:

1 hunch net-2012-12-29-Simons Institute Big Data Program

Introduction: Michael Jordan sends the below: The new Simons Institute for the Theory of Computing will begin organizing semester-long programs starting in 2013. One of our first programs, set for Fall 2013, will be on the “Theoretical Foundations of Big Data Analysis”. The organizers of this program are Michael Jordan (chair), Stephen Boyd, Peter Buehlmann, Ravi Kannan, Michael Mahoney, and Muthu Muthukrishnan. See http://simons.berkeley.edu/program_bigdata2013.html for more information on the program. The Simons Institute has created a number of “Research Fellowships” for young researchers (within at most six years of the award of their PhD) who wish to participate in Institute programs, including the Big Data program. Individuals who already hold postdoctoral positions or who are junior faculty are welcome to apply, as are finishing PhDs. Please note that the application deadline is January 15, 2013. Further details are available at http://simons.berkeley.edu/fellows.h

2 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

Introduction: The New York ML symposium was last Friday. There were 303 registrations, up a bit from last year. I particularly enjoyed talks by Bill Freeman on vision and ML, Jon Lenchner on strategy in Jeopardy, and Tara N. Sainath and Brian Kingsbury on deep learning for speech recognition. If anyone has suggestions or thoughts for next year, please speak up. I also attended Strata + Hadoop World for the first time. This is primarily a trade conference rather than an academic conference, but I found it pretty interesting as a first time attendee. This is ground zero for the Big Data buzzword, and I see now why. It’s about data, and the word “big” is so ambiguous that everyone can lay claim to it. There were essentially zero academic talks. Instead, the focus was on war stories, product announcements, and education. The general level of education is much lower—explaining Machine Learning to the SQL educated is the primary operating point. Nevertheless that’s happening, a

3 hunch net-2012-10-18-7th Annual Machine Learning Symposium

Introduction: A reminder that the New York Academy of Sciences will be hosting the 7th Annual Machine Learning Symposium tomorrow from 9:30am. The main program will feature invited talks from Peter Bartlett, William Freeman, and Vladimir Vapnik, along with numerous spotlight talks and a poster session. Following the main program, hackNY and Microsoft Research are sponsoring a networking hour with talks from machine learning practitioners at NYC startups (specifically bit.ly, Buzzfeed, Chartbeat, Sense Networks, and Visual Revenue). This should be of great interest to everyone considering working in machine learning.

4 hunch net-2012-09-29-Vowpal Wabbit, version 7.0

Introduction: A new version of VW is out. The primary changes are: Learning Reductions: I’ve wanted to get learning reductions working and we’ve finally done it. Not everything is implemented yet, but VW now supports directly: Multiclass Classification (--oaa or --ect), Cost Sensitive Multiclass Classification (--csoaa or --wap), Contextual Bandit Classification (--cb), and Sequential Structured Prediction (--searn or --dagger). In addition, it is now easy to build your own custom learning reductions for various plausible uses: feature diddling, custom structured prediction problems, or alternate learning reductions. This effort is far from done, but it is now in a generally useful state. Note that all learning reductions inherit the ability to do cluster parallel learning. Library interface: VW now has a basic library interface. The library provides most of the functionality of VW, with the limitation that it is monolithic and nonreentrant. These will be improved over
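
As a rough illustration of invoking these reductions, here is a minimal sketch that shells out to the vw binary from Python. The data file names, model names, and class/action counts are hypothetical; it assumes a vw 7.0 binary on the PATH and training data already in VW's input format.

    import subprocess

    # Hypothetical data and model file names; each call trains with one of the
    # reductions named above and saves the resulting model with -f.
    subprocess.check_call(["vw", "--oaa", "10", "train.dat", "-f", "multiclass.model"])  # one-against-all, 10 classes
    subprocess.check_call(["vw", "--csoaa", "10", "costs.dat", "-f", "cost.model"])      # cost-sensitive one-against-all
    subprocess.check_call(["vw", "--cb", "4", "bandit.dat", "-f", "bandit.model"])       # contextual bandit, 4 actions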

5 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

Introduction: The New York Machine Learning Symposium is October 19, with a 2-page abstract deadline of September 13 via email with subject “Machine Learning Poster Submission” sent to physicalscience@nyas.org. Everyone is welcome to submit. Last year’s attendance was 246 and I expect more this year. The primary experiment for ICML 2013 is multiple paper submission deadlines with rolling review cycles. The key dates are October 1, December 15, and February 15. This is an attempt to shift ICML further towards a journal-style review process and reduce peak load. The “not for proceedings” experiment from this year’s ICML is not continuing. Edit: Fixed second ICML deadline.

6 hunch net-2012-08-24-Patterns for research in machine learning

Introduction: There are a handful of basic code patterns that I wish I was more aware of when I started research in machine learning. Each on its own may seem pointless, but collectively they go a long way towards making the typical research workflow more efficient. Here they are: Separate code from data. Separate input data, working data and output data. Save everything to disk frequently. Separate options from parameters. Do not use global variables. Record the options used to generate each run of the algorithm. Make it easy to sweep options. Make it easy to execute only portions of the code. Use checkpointing. Write demos and tests. Click here for discussion and examples for each item. Also see Charles Sutton’s and HackerNews’ thoughts on the same topic. My guess is that these patterns will not only be useful for machine learning, but also any other computational work that involves either a) processing large amounts of data, or b) algorithms that take a signif
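
A minimal sketch of two of these patterns working together (recording the options used for each run, and keeping each run's output separate); the directory layout and option names are made up for illustration:

    import json
    import os
    import time

    def run_experiment(options):
        # Give each run its own output directory and record the exact options
        # that generated it, so any result can be traced and reproduced.
        run_dir = os.path.join("output", time.strftime("%Y%m%d-%H%M%S"))
        os.makedirs(run_dir)
        with open(os.path.join(run_dir, "options.json"), "w") as f:
            json.dump(options, f, indent=2)
        # ... the algorithm itself runs here, writing working data under run_dir ...
        return run_dir

    # Sweeping options is now just a loop over dicts like this hypothetical one.
    run_experiment({"learning_rate": 0.1, "passes": 10, "data": "input/train.dat"})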

7 hunch net-2012-07-17-MUCMD and BayLearn

Introduction: The workshop on the Meaningful Use of Complex Medical Data is happening again, August 9-12 in LA, near UAI on Catalina Island August 15-17. I enjoyed my visit last year, and expect this year to be interesting also. The first Bay Area Machine Learning Symposium is August 30 at Google . Abstracts are due July 30.

8 hunch net-2012-07-09-Videolectures

Introduction: Yaser points out some nicely videotaped machine learning lectures at Caltech. Yaser taught me machine learning, and I always found the lectures clear and interesting, so I expect many people can benefit from watching. Relative to Andrew Ng’s ML class there are somewhat different areas of emphasis but the topic is the same, so picking and choosing the union may be helpful.

9 hunch net-2012-06-29-ICML survey and comments

Introduction: Just about nothing could keep me from attending ICML, except for Dora, who arrived on Monday. Consequently, I have only secondhand reports that the conference is going well. For those who are remote (like me) or after the conference (like everyone), Mark Reid has set up the ICML discussion site where you can comment on any paper or subscribe to papers. Authors are automatically subscribed to their own papers, so it should be possible to have a discussion significantly after the fact, as people desire. We also conducted a survey before the conference and have the survey results now. This can be compared with the ICML 2010 survey results. Looking at the comparable questions, we can sometimes order the answers to have scores ranging from 0 to 3 or 0 to 4, with 3 or 4 being best and 0 worst, then compute the average difference between 2012 and 2010. Glancing through them, I see: Most people found the papers they reviewed a good fit for their expertise (-.037 w.r.t 20
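
A toy sketch of the comparison described above, with hypothetical per-question mean scores:

    def score_diffs(scores_new, scores_old):
        # Per-question difference in mean score between two survey years,
        # where answers were mapped to 0..3 or 0..4 with higher = better.
        return {q: scores_new[q] - scores_old[q]
                for q in set(scores_new) & set(scores_old)}

    # Hypothetical mean scores for two comparable questions; a negative
    # difference means 2012 scored lower than 2010 on that question.
    print(score_diffs({"reviewer fit": 2.100, "overall": 2.40},
                      {"reviewer fit": 2.137, "overall": 2.30}))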

10 hunch net-2012-06-15-Normal Deviate and the UCSC Machine Learning Summer School

Introduction: Larry Wasserman has started the Normal Deviate blog which I added to the blogroll on the right. Manfred Warmuth points out the UCSC machine learning summer school running July 9-20 which may be of particular interest to those in silicon valley.

11 hunch net-2012-06-05-ICML acceptance statistics

Introduction: People are naturally interested in slicing the ICML acceptance statistics in various ways. Here’s a rundown for the top categories.

18/66 = 0.27 in (0.18, 0.36) Reinforcement Learning
10/52 = 0.19 in (0.17, 0.37) Supervised Learning
9/51 = 0.18 not in (0.18, 0.37) Clustering
12/46 = 0.26 in (0.17, 0.37) Kernel Methods
11/40 = 0.28 in (0.15, 0.4) Optimization Algorithms
8/33 = 0.24 in (0.15, 0.39) Learning Theory
14/33 = 0.42 not in (0.15, 0.39) Graphical Models
10/32 = 0.31 in (0.15, 0.41) Applications (+5 invited)
8/29 = 0.28 in (0.14, 0.41) Probabilistic Models
13/29 = 0.45 not in (0.14, 0.41) NN & Deep Learning
8/26 = 0.31 in (0.12, 0.42) Transfer and Multi-Task Learning
13/25 = 0.52 not in (0.12, 0.44) Online Learning
5/25 = 0.20 in (0.12, 0.44) Active Learning
6/22 = 0.27 in (0.14, 0.41) Semi-Superv
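
One plausible reading of the intervals above is a normal-approximation range around the overall acceptance rate for a category of the given size, flagging categories whose rate falls outside it. A sketch under that assumption (the overall rate of 0.27 and the z value are guesses, not stated in the post):

    import math

    def category_interval(p_overall, n, z=1.64):
        # Interval for the acceptance rate expected of a category with n
        # submissions if it behaved like the overall pool (normal approximation).
        half = z * math.sqrt(p_overall * (1.0 - p_overall) / n)
        return (p_overall - half, p_overall + half)

    # Reinforcement Learning: 18 of 66 submissions accepted.
    lo, hi = category_interval(0.27, 66)
    print(round(18.0 / 66, 2), (round(lo, 2), round(hi, 2)))  # 0.27 (0.18, 0.36), as quoted above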

12 hunch net-2012-05-12-ICML accepted papers and early registration

Introduction: The accepted papers are up in full detail. We are still struggling with the precise program itself, but that’s coming along. Also note the May 13 deadline for early registration and room booking.

13 hunch net-2012-05-03-Microsoft Research, New York City

Introduction: Yahoo! laid off people. Unlike every previous time there have been layoffs, this is serious for Yahoo! Research. We had advanced warning from Prabhakar through the simple act of leaving. Yahoo! Research was a world class organization that Prabhakar recruited much of personally, so it is deeply implausible that he would spontaneously decide to leave. My first thought when I saw the news was “Uh-oh, Rob said that he knew it was serious when the head of AT&T Research left.” In this case it was even more significant, because Prabhakar recruited me on the premise that Y!R was an experiment in how research should be done: via a combination of high quality people and high engagement with the company. Prabhakar’s departure is a clear end to that experiment. The result is ambiguous from a business perspective. Y!R clearly was not capable of saving the company from its illnesses. I’m not privy to the internal accounting of impact and this is the kind of subject where there c

14 hunch net-2012-05-02-ICML: Behind the Scenes

Introduction: This is a rather long post, detailing the ICML 2012 review process. The goal is to make the process more transparent, help authors understand how we came to a decision, and discuss the strengths and weaknesses of this process for future conference organizers.

Microsoft’s Conference Management Toolkit (CMT): We chose to use CMT over other conference management software mainly because of its rich toolkit. The interface is sub-optimal (to say the least!) but it has extensive capabilities (to handle bids, author response, resubmissions, etc.), good import/export mechanisms (to process the data elsewhere), and excellent technical support (to answer late night emails, add new functionality). Overall, it was the right choice, although we hope a designer will look at that interface sometime soon!

Toronto Matching System (TMS): TMS is now being used by many major conferences in our field (including NIPS and UAI). It is an automated system (developed by Laurent Charlin and Rich Ze
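
TMS automates reviewer-paper matching; as a rough illustration of the general flavor of such systems (not the actual TMS algorithm), here is a tf-idf affinity sketch over made-up text:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import linear_kernel

    # Hypothetical corpora: one document per reviewer (e.g. their paper
    # abstracts concatenated) and one document per submission.
    reviewers = ["online learning regret bounds bandit algorithms",
                 "graphical models variational inference"]
    papers = ["contextual bandit exploration with policies",
              "loopy belief propagation in graphical models"]

    vec = TfidfVectorizer()
    X = vec.fit_transform(reviewers + papers)
    # affinity[i, j] scores how well reviewer i matches paper j.
    affinity = linear_kernel(X[:len(reviewers)], X[len(reviewers):])
    print(affinity)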

15 hunch net-2012-04-20-Both new: STOC workshops and NEML

Introduction: May 16 in Cambridge is the New England Machine Learning Day, a first regional workshop/symposium on machine learning. To present a poster, submit an abstract by May 5. May 19 in New York, STOC is coming to town and rather surprisingly having workshops, which should be quite a bit of fun. I’ll be speaking at Algorithms for Distributed and Streaming Data.

16 hunch net-2012-04-09-ICML author feedback is open

Introduction: As of last night, late. When the reviewing deadline passed Wednesday night, 15% of reviews were still missing, much higher than I expected. Between late reviews coming in, ACs working overtime through the weekend, and people willing to help in the pinch, another ~390 reviews came in, reducing the missing mass to 0.2%. Nailing that last bit, and a similar quantity of papers with uniformly low-confidence reviews, is what remains to be done in terms of basic reviews. We are trying to make all of those happen this week so authors have some chance to respond. I was surprised by the quantity of late reviews, and I think that’s an area where ICML needs to improve in future years. Good reviews are not done in a rush—they are done by setting aside time (like an afternoon) and carefully reading the paper while thinking about implications. Many reviewers do this well, but a significant minority aren’t good at scheduling their personal time. In this situation there are several ways to fail:

17 hunch net-2012-03-24-David Waltz

Introduction: David Waltz has died. He lived a full life. I knew him personally as a founder of the Center for Computational Learning Systems and the New York Machine Learning Symposium, both of which have sheltered and promoted the advancement of machine learning. I expect much of the New York area machine learning community will miss him, as well as many others around the world.

18 hunch net-2012-03-13-The Submodularity workshop and Lucca Professorship

Introduction: Nina points out the Submodularity Workshop March 19-20 next week at Georgia Tech. Many people want to make Submodularity the new Convexity in machine learning, and it certainly seems worth exploring. Sara Olson also points out a tenured faculty position at IMT Lucca with a deadline of May 15th. Lucca happens to be the ancestral home of 1/4 of my heritage.

19 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

Introduction: Sasha is the open problems chair for both COLT and ICML. Open problems will be presented in a joint session in the evening of the COLT/ICML overlap day. COLT has a history of open sessions, but this is new for ICML. If you have a difficult, theoretically definable problem in machine learning, consider submitting it for review, due March 16. You’ll benefit in three ways: The effort of writing down a precise formulation of what you want often helps you understand the nature of the problem. Your problem will be officially published and citable. You might have it solved by some very intelligent bored people. The general idea could easily be applied to any problem which can be crisply stated with an easily verifiable solution, and we may consider expanding this in later years, but for this year all problems need to be of a theoretical variety. Joelle and I (and Mahdi, and Laurent) finished an initial assignment of Program Committee and Area Chairs to pap

20 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium

Introduction: For graduate students, the Yahoo! Key Scientific Challenges program, including machine learning, is on again, due March 9. The application is easy and the $5K award is high quality “no strings attached” funding. Consider submitting. Those in Washington DC, Philadelphia, and New York may consider attending the Franklin Institute Symposium April 25, which has several speakers and an award for V. Attendance is free with an RSVP.

21 hunch net-2012-02-24-ICML+50%

22 hunch net-2012-02-20-Berkeley Streaming Data Workshop

23 hunch net-2012-01-30-ICML Posters and Scope

24 hunch net-2012-01-28-Why COLT?

25 hunch net-2012-01-04-Why ICML? and the summer conferences