hunch_net hunch_net-2010 hunch_net-2010-404 knowledge-graph by maker-knowledge-mining

404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds


meta info for this blog

Source: html

Introduction: Alekh, John, Ofer, and I are organizing a workshop at NIPS this year on learning in parallel and distributed environments. The general interest level in parallel learning seems to be growing rapidly, so I expect quite a bit of attendance. Please join us if you are parallel-interested. And, if you are working in the area of parallel learning, please consider submitting an abstract due Oct. 17 for presentation at the workshop.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Alekh, John, Ofer, and I are organizing a workshop at NIPS this year on learning in parallel and distributed environments. [sent-1, score-1.278]

2 The general interest level in parallel learning seems to be growing rapidly, so I expect quite a bit of attendance. [sent-2, score-1.394]

3 And, if you are working in the area of parallel learning, please consider submitting an abstract due Oct. [sent-4, score-1.622]
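The dump doesn’t show how these sentence scores were produced. A minimal sketch of tfidf-based extractive summarization, assuming a sentence’s score is the sum of its tokens’ tfidf weights (the tokenizer, sentence splitter, and scoring rule are all assumptions, not the pipeline’s actual code):

```python
# Sketch: tfidf extractive summarization. Assumed scoring rule: a sentence's
# score is the sum of its tokens' tf * idf weights over the blog archive.
import math
import re
from collections import Counter

def tfidf_summary(documents, target_doc, top_k=3):
    tokenize = lambda text: re.findall(r"[a-z']+", text.lower())

    # Document frequencies over the whole archive, for idf.
    df = Counter()
    for doc in documents:
        df.update(set(tokenize(doc)))
    n_docs = len(documents)

    tf = Counter(tokenize(target_doc))              # term counts in this post
    sentences = re.split(r"(?<=[.!?])\s+", target_doc)
    scored = [(sum(tf[w] * math.log(n_docs / (1 + df[w])) for w in tokenize(s)), i, s)
              for i, s in enumerate(sentences)]
    return sorted(scored, reverse=True)[:top_k]     # (score, position, sentence)
```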


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('parallel', 0.524), ('please', 0.295), ('ofer', 0.269), ('alekh', 0.242), ('workshop', 0.226), ('join', 0.224), ('submitting', 0.211), ('organizing', 0.201), ('rapidly', 0.196), ('distributed', 0.188), ('abstract', 0.172), ('growing', 0.172), ('presentation', 0.159), ('john', 0.153), ('level', 0.122), ('area', 0.119), ('nips', 0.118), ('expect', 0.111), ('interest', 0.109), ('working', 0.104), ('consider', 0.101), ('due', 0.096), ('us', 0.095), ('bit', 0.087), ('year', 0.087), ('quite', 0.08), ('general', 0.078), ('seems', 0.059), ('learning', 0.052)]
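These per-word weights feed a cosine-similarity index over the whole archive, producing the simValue rankings below. A minimal sketch of that pipeline, assuming gensim (the library choice, function name, and pre-tokenized input are assumptions):

```python
# Sketch: tfidf vectors plus a cosine-similarity index over blog posts (gensim).
from gensim import corpora, models, similarities

def tfidf_similar(texts, query_idx, top_k=20):
    # texts: one token list per blog post.
    dictionary = corpora.Dictionary(texts)
    bow = [dictionary.doc2bow(t) for t in texts]
    tfidf = models.TfidfModel(bow)                        # learn idf weights
    index = similarities.MatrixSimilarity(
        tfidf[bow], num_features=len(dictionary))         # dense cosine index
    sims = index[tfidf[bow[query_idx]]]                   # similarity to every post
    return sorted(enumerate(sims), key=lambda p: -p[1])[:top_k]
```

Under this scheme the query post ranks itself first with simValue 1.0, matching the same-blog row in the list below.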

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

Introduction: Alekh, John, Ofer, and I are organizing a workshop at NIPS this year on learning in parallel and distributed environments. The general interest level in parallel learning seems to be growing rapidly, so I expect quite a bit of attendance. Please join us if you are parallel-interested. And, if you are working in the area of parallel learning, please consider submitting an abstract due Oct. 17 for presentation at the workshop.

2 0.20383562 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS

Introduction: Manik and I are organizing the extreme classification workshop at NIPS this year. We have a number of good speakers lined up, but I would further encourage anyone working in the area to submit an abstract by October 9. I believe this is an idea whose time has now come. The NIPS website doesn’t have other workshops listed yet, but I expect several others to be of significant interest.

3 0.19241792 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

Introduction: I just made version 6.1 of Vowpal Wabbit. Relative to 6.0, there are few new features, but many refinements. The cluster parallel learning code better supports multiple simultaneous runs, and other forms of parallelism have been mostly removed. This incidentally significantly simplifies the learning core. The online learning algorithms are more general, with support for l1 (via a truncated gradient variant) and l2 regularization, and a generalized form of variable metric learning. There is a solid persistent server mode which can train online, as well as serve answers to many simultaneous queries, either in text or binary. This should be a very good release if you are just getting started, as we’ve made it compile more automatically out of the box, have several new examples and updated documentation. As per tradition, we’re planning to do a tutorial at NIPS during the break at the parallel learning workshop at 2pm Spanish time Friday. I’ll cover the

4 0.18634909 346 hunch net-2009-03-18-Parallel ML primitives

Introduction: Previously, we discussed parallel machine learning a bit. As parallel ML is rather difficult, I’d like to describe my thinking at the moment, and ask for advice from the rest of the world. This is particularly relevant right now, as I’m attending a workshop tomorrow on parallel ML. Parallelizing slow algorithms seems uncompelling. Parallelizing many algorithms also seems uncompelling, because the effort required to parallelize is substantial. This leaves the question: Which one fast algorithm is the best to parallelize? What is a substantially different second? One compellingly fast simple algorithm is online gradient descent on a linear representation. This is the core of Leon’s sgd code and Vowpal Wabbit. Antoine Bordes showed a variant was competitive in the large scale learning challenge. It’s also a decades-old primitive which has been reused in many algorithms, and continues to be reused. It also applies to online learning rather than just online optimiz

5 0.16862887 234 hunch net-2007-02-22-Create Your Own ICML Workshop

Introduction: As usual ICML 2007 will be hosting a workshop program to be held this year on June 24th. The success of the program depends on having researchers like you propose interesting workshop topics and then organize the workshops. I’d like to encourage all of you to consider sending a workshop proposal. The proposal deadline has been extended to March 5. See the workshop web-site for details. Organizing a workshop is a unique way to gather an international group of researchers together to focus for an entire day on a topic of your choosing. I’ve always found that the cost of organizing a workshop is not so large, and very low compared to the benefits. The topic and format of a workshop are limited only by your imagination (and the attractiveness to potential participants) and need not follow the usual model of a mini-conference on a particular ML sub-area. Hope to see some interesting proposals rolling in.

6 0.14485142 229 hunch net-2007-01-26-Parallel Machine Learning Problems

7 0.12754296 265 hunch net-2007-10-14-NIPS workshop: Learning Problem Design

8 0.12458862 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

9 0.12453533 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

10 0.11376581 300 hunch net-2008-04-30-Concerns about the Large Scale Learning Challenge

11 0.11235192 425 hunch net-2011-02-25-Yahoo! Machine Learning grant due March 11

12 0.10683773 366 hunch net-2009-08-03-Carbon in Computer Science Research

13 0.10361937 490 hunch net-2013-11-09-Graduates and Postdocs

14 0.10070233 46 hunch net-2005-03-24-The Role of Workshops

15 0.098986752 316 hunch net-2008-09-04-Fall ML Conferences

16 0.097563975 113 hunch net-2005-09-19-NIPS Workshops

17 0.090850554 455 hunch net-2012-02-20-Berkeley Streaming Data Workshop

18 0.090610161 462 hunch net-2012-04-20-Both new: STOC workshops and NEML

19 0.090033881 285 hunch net-2008-01-23-Why Workshop?

20 0.086751729 141 hunch net-2005-12-17-Workshops as Franchise Conferences


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.126), (1, -0.09), (2, -0.146), (3, -0.103), (4, 0.075), (5, 0.179), (6, -0.024), (7, -0.02), (8, -0.105), (9, -0.017), (10, -0.033), (11, -0.091), (12, 0.09), (13, 0.031), (14, 0.025), (15, -0.144), (16, -0.038), (17, -0.004), (18, -0.142), (19, -0.124), (20, -0.149), (21, -0.031), (22, -0.07), (23, 0.056), (24, -0.016), (25, 0.089), (26, 0.043), (27, -0.046), (28, 0.041), (29, 0.087), (30, 0.013), (31, -0.005), (32, -0.088), (33, 0.046), (34, 0.099), (35, -0.025), (36, 0.048), (37, 0.077), (38, -0.064), (39, -0.001), (40, -0.015), (41, -0.056), (42, 0.014), (43, 0.142), (44, -0.108), (45, 0.024), (46, 0.079), (47, -0.091), (48, 0.052), (49, -0.017)]
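The topic ids 0-49 above suggest a 50-dimensional latent space; LSI folds the tfidf vectors into that space, and similarity is again cosine, now between topic vectors rather than word vectors. A minimal sketch, assuming gensim’s LsiModel stacked on the tfidf corpus (num_topics=50 is inferred from the printed ids and is otherwise an assumption, as is the rest):

```python
# Sketch: LSI (truncated SVD) over tfidf vectors, cosine similarity on topic vectors.
from gensim import corpora, models, similarities

def lsi_similar(texts, query_idx, num_topics=50, top_k=20):
    dictionary = corpora.Dictionary(texts)
    bow = [dictionary.doc2bow(t) for t in texts]
    tfidf = models.TfidfModel(bow)
    lsi = models.LsiModel(tfidf[bow], id2word=dictionary,
                          num_topics=num_topics)          # truncated SVD
    index = similarities.MatrixSimilarity(
        lsi[tfidf[bow]], num_features=num_topics)
    sims = index[lsi[tfidf[bow[query_idx]]]]
    return sorted(enumerate(sims), key=lambda p: -p[1])[:top_k]
```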

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97432876 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

Introduction: Alekh, John, Ofer, and I are organizing a workshop at NIPS this year on learning in parallel and distributed environments. The general interest level in parallel learning seems to be growing rapidly, so I expect quite a bit of attendance. Please join us if you are parallel-interested. And, if you are working in the area of parallel learning, please consider submitting an abstract due Oct. 17 for presentation at the workshop.

2 0.59726286 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

Introduction: I’m releasing version 4.0 (tarball) of Vowpal Wabbit. The biggest change (by far) in this release is experimental support for cluster parallelism, with notable help from Daniel Hsu. I also took advantage of the major version number to introduce some incompatible changes, including switching to murmurhash 2, and other alterations to cachefiles. You’ll need to delete and regenerate them. In addition, the precise specification for a “tag” (i.e. string that can be used to identify an example) changed—you can’t have a space between the tag and the ‘|’ at the beginning of the feature namespace. And, of course, we made it faster. For the future, I put up my todo list outlining the major future improvements I want to see in the code. I’m planning to discuss the current mechanism and results of the cluster parallel implementation at the large scale machine learning workshop at NIPS later this week. Several people have asked me to do a tutorial/walkthrough of VW, wh

3 0.5869422 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial

Introduction: I just made version 6.1 of Vowpal Wabbit. Relative to 6.0, there are few new features, but many refinements. The cluster parallel learning code better supports multiple simultaneous runs, and other forms of parallelism have been mostly removed. This incidentally significantly simplifies the learning core. The online learning algorithms are more general, with support for l1 (via a truncated gradient variant) and l2 regularization, and a generalized form of variable metric learning. There is a solid persistent server mode which can train online, as well as serve answers to many simultaneous queries, either in text or binary. This should be a very good release if you are just getting started, as we’ve made it compile more automatically out of the box, have several new examples and updated documentation. As per tradition, we’re planning to do a tutorial at NIPS during the break at the parallel learning workshop at 2pm Spanish time Friday. I’ll cover the

4 0.57314718 455 hunch net-2012-02-20-Berkeley Streaming Data Workshop

Introduction: The From Data to Knowledge workshop May 7-11 at Berkeley should be of interest to the many people encountering streaming data in different disciplines. It’s run by a group of astronomers who encounter streaming data all the time. I met Josh Bloom recently and he is broadly interested in a workshop covering all aspects of Machine Learning on streaming data. The hope here is that techniques developed in one area turn out useful in another, which seems quite plausible. Particularly if you are in the bay area, consider checking it out.

5 0.55477834 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS

Introduction: Manik and I are organizing the extreme classification workshop at NIPS this year. We have a number of good speakers lined up, but I would further encourage anyone working in the area to submit an abstract by October 9. I believe this is an idea whose time has now come. The NIPS website doesn’t have other workshops listed yet, but I expect several others to be of significant interest.

6 0.5474363 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

7 0.54264718 234 hunch net-2007-02-22-Create Your Own ICML Workshop

8 0.54196012 124 hunch net-2005-10-19-Workshop: Atomic Learning

9 0.50620067 198 hunch net-2006-07-25-Upcoming conference

10 0.49863383 346 hunch net-2009-03-18-Parallel ML primitives

11 0.49836376 229 hunch net-2007-01-26-Parallel Machine Learning Problems

12 0.4677895 54 hunch net-2005-04-08-Fast SVMs

13 0.45703429 321 hunch net-2008-10-19-NIPS 2008 workshop on Kernel Learning

14 0.44157773 113 hunch net-2005-09-19-NIPS Workshops

15 0.44037506 114 hunch net-2005-09-20-Workshop Proposal: Atomic Learning

16 0.43763867 53 hunch net-2005-04-06-Structured Regret Minimization

17 0.43744403 443 hunch net-2011-09-03-Fall Machine Learning Events

18 0.40118915 192 hunch net-2006-07-08-Some recent papers

19 0.39727315 319 hunch net-2008-10-01-NIPS 2008 workshop on ‘Learning over Empirical Hypothesis Spaces’

20 0.39391342 136 hunch net-2005-12-07-Is the Google way the way for machine learning?


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.146), (38, 0.039), (55, 0.148), (74, 0.381), (95, 0.117)]
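LDA represents the post as a sparse mixture over topics; only topics with non-negligible weight are printed above (ids up to 95, so the model has at least 96 topics). A minimal sketch, again assuming gensim, with the topic count and training passes as assumptions:

```python
# Sketch: LDA topic mixtures per post, compared by cosine similarity (gensim).
from gensim import corpora, models, similarities

def lda_similar(texts, query_idx, num_topics=100, top_k=20):
    dictionary = corpora.Dictionary(texts)
    bow = [dictionary.doc2bow(t) for t in texts]
    lda = models.LdaModel(bow, id2word=dictionary,
                          num_topics=num_topics, passes=10)
    index = similarities.MatrixSimilarity(
        lda[bow], num_features=num_topics)                # cosine over mixtures
    sims = index[lda[bow[query_idx]]]
    return sorted(enumerate(sims), key=lambda p: -p[1])[:top_k]
```

Note that in the list below the same-blog entry lands at rank 2 (0.91939217) rather than rank 1; LDA’s sparse, stochastic topic assignments make self-similarity less sharp than in the tfidf index.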

similar blogs list:

simIndex simValue blogId blogTitle

1 0.95542157 278 hunch net-2007-12-17-New Machine Learning mailing list

Introduction: IMLS (which is the nonprofit running ICML) has set up a new mailing list for Machine Learning News. The list address is ML-news@googlegroups.com, and signup requires a google account (which you can create). Only members can send messages.

same-blog 2 0.91939217 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds

Introduction: Alekh, John, Ofer, and I are organizing a workshop at NIPS this year on learning in parallel and distributed environments. The general interest level in parallel learning seems to be growing rapidly, so I expect quite a bit of attendance. Please join us if you are parallel-interested. And, if you are working in the area of parallel learning, please consider submitting an abstract due Oct. 17 for presentation at the workshop.

3 0.75165272 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season

Introduction: Machine learning always welcomes the new year with paper deadlines for summer conferences. This year, we have:

Conference | Paper Deadline | When/Where | Double blind? | Author Feedback? | Notes
ICML | February 1 | June 28-July 2, Bellevue, Washington, USA | Y | Y | Weak colocation with ACL
COLT | February 11 | July 9-July 11, Budapest, Hungary | N | N | colocated with FOCM
KDD | February 11/18 | August 21-24, San Diego, California, USA | N | N |
UAI | March 18 | July 14-17, Barcelona, Spain | Y | N |

The larger conferences are on the west coast in the United States, while the smaller ones are in Europe.

4 0.66273594 217 hunch net-2006-11-06-Data Linkage Problems

Introduction: Data linkage is a problem which seems to come up in various applied machine learning problems. I have heard it mentioned in various data mining contexts, but it seems relatively less studied for systemic reasons. A very simple version of the data linkage problem is a cross hospital patient record merge. Suppose a patient (John Doe) is admitted to a hospital (General Health), treated, and released. Later, John Doe is admitted to a second hospital (Health General), treated, and released. Given a large number of records of this sort, it becomes very tempting to try and predict the outcomes of treatments. This is reasonably straightforward as a machine learning problem if there is a shared unique identifier for John Doe used by General Health and Health General along with time stamps. We can merge the records and create examples of the form “Given symptoms and treatment, did the patient come back to a hospital within the next year?” These examples could be fed into a learning algo

5 0.53880131 370 hunch net-2009-09-18-Necessary and Sufficient Research

Introduction: Researchers are typically confronted with big problems that they have no idea how to solve. In trying to come up with a solution, a natural approach is to decompose the big problem into a set of subproblems whose solution yields a solution to the larger problem. This approach can go wrong in several ways. Decomposition failure. The solution to the decomposition does not in fact yield a solution to the overall problem. Artificial hardness. The subproblems created are sufficient if solved to solve the overall problem, but they are harder than necessary. As you can see, computational complexity forms a relatively new (in research-history) razor by which to judge an approach sufficient but not necessary. In my experience, the artificial hardness problem is very common. Many researchers abdicate the responsibility of choosing a problem to work on to other people. This process starts very naturally as a graduate student, when an incoming student might have relatively l

6 0.51873446 67 hunch net-2005-05-06-Don’t mix the solution into the problem

7 0.51001406 443 hunch net-2011-09-03-Fall Machine Learning Events

8 0.50491637 10 hunch net-2005-02-02-Kolmogorov Complexity and Googling

9 0.49966696 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

10 0.48885846 462 hunch net-2012-04-20-Both new: STOC workshops and NEML

11 0.48225063 466 hunch net-2012-06-05-ICML acceptance statistics

12 0.48209244 464 hunch net-2012-05-03-Microsoft Research, New York City

13 0.47954768 36 hunch net-2005-03-05-Funding Research

14 0.47593874 344 hunch net-2009-02-22-Effective Research Funding

15 0.47283781 234 hunch net-2007-02-22-Create Your Own ICML Workshop

16 0.47087684 89 hunch net-2005-07-04-The Health of COLT

17 0.47004098 389 hunch net-2010-02-26-Yahoo! ML events

18 0.46755958 452 hunch net-2012-01-04-Why ICML? and the summer conferences

19 0.465994 373 hunch net-2009-10-03-Static vs. Dynamic multiclass prediction

20 0.46448275 453 hunch net-2012-01-28-Why COLT?