hunch_net-2005-81 knowledge-graph by maker-knowledge-mining

81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops


meta info for this blog

Source: html

Introduction: Chicago ’05 ended a couple of weeks ago. This was the sixth Machine Learning Summer School, and the second one that used a wiki. (The first was Berder ’04, thanks to Gunnar Raetsch.) Wikis are relatively easy to set up, greatly aid social interaction, and should be used a lot more at summer schools and workshops. They can even be used as the meeting’s webpage, as a permanent record of its participants’ collaborations — see for example the wiki/website for last year’s NVO Summer School. A basic wiki is a collection of editable webpages, maintained by software called a wiki engine. The engine used at both Berder and Chicago was TikiWiki — it is well documented and gets you something running fast. It uses PHP and MySQL, but doesn’t require you to know either. TikiWiki has far more features than most wikis, as it is really a full Content Management System. (My thanks to Sebastian Stark for pointing this out.) Here are the features we found most useful: Bulletin boa…


Summary: the most important sentences generated by the tfidf model (a scoring sketch follows the list)

sentIndex sentText [sentNum, sentScore]

1 This was the sixth Machine Learning Summer School, and the second one that used a wiki. [sent-2, score-0.33]

2 Wikis are relatively easy to set up, greatly aid social interaction, and should be used a lot more at summer schools and workshops. [sent-4, score-0.435]

3 A basic wiki is a collection of editable webpages, maintained by software called a wiki engine. [sent-6, score-0.948]

4 The engine used at both Berder and Chicago was TikiWiki — it is well documented and gets you something running fast. [sent-7, score-0.207]

5 The most-used one was the one for social events, which allowed participants to find company for doing stuff without requiring organizer assistance. [sent-12, score-0.434]

6 [Example] Other useful forums to set up are “Lost and Found”, and discussion lists for lectures — although the latter only work if the lecturer is willing to actively answer questions arising on the forum. [sent-14, score-0.527]

7 You can set forums up so that all posts to them are immediately emailed to someone. [sent-15, score-0.236]

8 For example, we set up pages for each lecture that we were able to edit easily later as more information (e. [sent-17, score-0.37]

9 Lecturers who wanted to modify their pages could do so without requiring organizer help or permission. [sent-20, score-0.476]

10 (Not that most of them actually took advantage of this in practice… but this will happen in time, as the wiki meme infects academia. [sent-21, score-0.269]

11 Having editable pages means that people can sign up themselves. [sent-24, score-0.49]

12 We set most of this up before the summer school, with directions of how to get there from the airport, what to bring, etc. [sent-27, score-0.252]

13 You can set up the overall layout of the webpage by specifying the locations and contents of the menus on the left and right of a central ‘front page’. [sent-30, score-0.37]

14 This is done via the use of ‘modules’, and makes it possible for your wiki pages to completely replace the webpages — if you are willing to make some aesthetic sacrifices. [sent-31, score-0.688]

15 Different levels of users : The utopian wiki model of having ‘all pages editable by everyone’ is … well, utopian. [sent-32, score-0.872]

16 You can set up different groups of users with different permissions. [sent-33, score-0.359]

17 one for lectures, another for practical sessions, and another for social events — and users can overlay them on each other. [sent-39, score-0.338]

18 [Example] A couple of other TikiWiki features that we didn’t get working at Chicago, but would have been nice to have, are these: Image Galleries. [sent-40, score-0.214]

19 These are easy to set up, and have the option for participants to see, or not to see, the results of surveys — useful when asking people to rate lectures. [sent-44, score-0.446]

20 It also has a couple of bugs (and features that are bad enough to be called bugs), such as permission issues and the inability to print calendars neatly. [sent-46, score-0.575]
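
The sentence scores above read like the output of a TF-IDF extractive summarizer: weight each term, then rank sentences by the weight of the terms they contain. The Python sketch below is one plausible way to produce such scores; the example sentences, the scikit-learn vectorizer settings, and the length normalization are illustrative assumptions, not the actual maker-knowledge-mining pipeline.

```python
# A minimal sketch, assuming sentence score = total TF-IDF weight of the
# sentence's terms, normalized by the number of distinct terms.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "This was the sixth Machine Learning Summer School, and the second one that used a wiki.",
    "Wikis are relatively easy to set up, greatly aid social interaction, and should be used a lot more at summer schools and workshops.",
    "A basic wiki is a collection of editable webpages, maintained by software called a wiki engine.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)                # one row of term weights per sentence

# Total TF-IDF mass per sentence, divided by its number of distinct terms so
# that long sentences are not automatically favored.
scores = X.sum(axis=1).A1 / (X.getnnz(axis=1) + 1e-9)

for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank} {sentences[idx]} [sent-{idx}, score-{scores[idx]:.3f}]")
```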


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('tikiwiki', 0.378), ('wiki', 0.269), ('editable', 0.252), ('pages', 0.238), ('berder', 0.189), ('calendars', 0.189), ('wikis', 0.189), ('chicago', 0.151), ('set', 0.132), ('participants', 0.13), ('faqs', 0.126), ('menus', 0.126), ('social', 0.122), ('summer', 0.12), ('features', 0.118), ('users', 0.113), ('surveys', 0.112), ('forums', 0.104), ('bugs', 0.104), ('documentation', 0.104), ('gunnar', 0.104), ('webpages', 0.104), ('events', 0.103), ('school', 0.101), ('organizer', 0.098), ('couple', 0.096), ('engine', 0.09), ('lectures', 0.086), ('requiring', 0.084), ('thanks', 0.081), ('webpage', 0.081), ('willing', 0.077), ('example', 0.075), ('useful', 0.072), ('didn', 0.071), ('called', 0.068), ('used', 0.061), ('changes', 0.06), ('different', 0.057), ('layout', 0.056), ('airport', 0.056), ('arising', 0.056), ('boards', 0.056), ('bulletin', 0.056), ('complementary', 0.056), ('contents', 0.056), ('documented', 0.056), ('front', 0.056), ('modify', 0.056), ('modules', 0.056)]
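
The (wordName, wordTfidf) pairs above are consistent with reading off the highest-weight terms of this post's TF-IDF vector after fitting over the whole blog corpus. A hedged sketch follows; the three-document corpus is a toy placeholder, not real hunch.net text.

```python
# A minimal sketch of per-post top terms by TF-IDF weight (toy corpus).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = {
    "hunch_net-2005-81": "wiki tikiwiki editable pages berder chicago summer school forums calendars",
    "hunch_net-2005-75": "chicago machine learning summer school speakers participants lectures",
    "hunch_net-2008-297": "online collaboration wiki conference websites comments papers",
}

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus.values())
terms = np.array(vectorizer.get_feature_names_out())

row = X[list(corpus).index("hunch_net-2005-81")].toarray().ravel()
top = [i for i in row.argsort()[::-1] if row[i] > 0][:10]
print([(terms[i], round(float(row[i]), 3)) for i in top])   # top (wordName, wordTfidf) pairs
```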

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999976 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

2 0.16446602 75 hunch net-2005-05-28-Running A Machine Learning Summer School

Introduction: We just finished the Chicago 2005 Machine Learning Summer School . The school was 2 weeks long with about 130 (or 140 counting the speakers) participants. For perspective, this is perhaps the largest graduate level machine learning class I am aware of anywhere and anytime (previous MLSS s have been close). Overall, it seemed to go well, although the students are the real authority on this. For those who missed it, DVDs will be available from our Slovenian friends. Email Mrs Spela Sitar of the Jozsef Stefan Institute for details. The following are some notes for future planning and those interested. Good Decisions Acquiring the larger-than-necessary “Assembly Hall” at International House . Our attendance came in well above our expectations, so this was a critical early decision that made a huge difference. The invited speakers were key. They made a huge difference in the quality of the content. Delegating early and often was important. One key difficulty here

3 0.13856503 297 hunch net-2008-04-22-Taking the next step

Introduction: At the last ICML , Tom Dietterich asked me to look into systems for commenting on papers. I’ve been slow getting to this, but it’s relevant now. The essential observation is that we now have many tools for online collaboration, but they are not yet much used in academic research. If we can find the right way to use them, then perhaps great things might happen, with extra kudos to the first conference that manages to really create an online community. Various conferences have been poking at this. For example, UAI has setup a wiki , COLT has started using Joomla , with some dynamic content, and AAAI has been setting up a “ student blog “. Similarly, Dinoj Surendran setup a twiki for the Chicago Machine Learning Summer School , which was quite useful for coordinating events and other things. I believe the most important thing is a willingness to experiment. A good place to start seems to be enhancing existing conference websites. For example, the ICML 2007 papers pag

4 0.13768262 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It has been a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard to match up for other similar events (see our WIKI for more information). With this lineup, it is difficult for us as organizers to screw it up too bad. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoyed this two-week long party of machine learning. In the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

5 0.12461934 212 hunch net-2006-10-04-Health of Conferences Wiki

Introduction: Aaron Hertzmann points out the health of conferences wiki , which has a great deal of information about how many different conferences function.

6 0.11882015 4 hunch net-2005-01-26-Summer Schools

7 0.084137931 146 hunch net-2006-01-06-MLTV

8 0.08325962 449 hunch net-2011-11-26-Giving Thanks

9 0.075638786 293 hunch net-2008-03-23-Interactive Machine Learning

10 0.07542827 357 hunch net-2009-05-30-Many ways to Learn this summer

11 0.07533887 143 hunch net-2005-12-27-Automated Labeling

12 0.074304901 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

13 0.072702438 130 hunch net-2005-11-16-MLSS 2006

14 0.072143883 273 hunch net-2007-11-16-MLSS 2008

15 0.071340874 371 hunch net-2009-09-21-Netflix finishes (and starts)

16 0.070334964 437 hunch net-2011-07-10-ICML 2011 and the future

17 0.069961138 218 hunch net-2006-11-20-Context and the calculation misperception

18 0.069952533 454 hunch net-2012-01-30-ICML Posters and Scope

19 0.069176242 134 hunch net-2005-12-01-The Webscience Future

20 0.065991804 225 hunch net-2007-01-02-Retrospective
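
The simValue column above behaves like a cosine similarity between TF-IDF document vectors: the post is nearly identical to itself (the ~1.0 “same-blog” entry), and overlapping posts such as the other summer-school reports rank next. A hedged sketch under that assumption, with placeholder titles and text:

```python
# A minimal sketch, assuming simValue = cosine similarity of TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    "81 Wikis for Summer Schools and Workshops": "wiki tikiwiki summer school editable pages forums",
    "75 Running A Machine Learning Summer School": "chicago machine learning summer school speakers",
    "297 Taking the next step": "online collaboration wiki conference websites comments",
}
titles = list(docs)

X = TfidfVectorizer().fit_transform(docs.values())
sims = cosine_similarity(X[0], X).ravel()              # this post against every post
for rank, j in enumerate(sims.argsort()[::-1], start=1):
    tag = "same-blog " if j == 0 else ""
    print(f"{tag}{rank} {sims[j]:.8f} {titles[j]}")
```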


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.166), (1, -0.032), (2, -0.074), (3, 0.013), (4, -0.019), (5, -0.018), (6, -0.028), (7, -0.02), (8, -0.01), (9, 0.003), (10, -0.04), (11, 0.01), (12, 0.008), (13, 0.013), (14, 0.01), (15, -0.037), (16, -0.093), (17, 0.138), (18, 0.088), (19, 0.188), (20, 0.038), (21, 0.007), (22, 0.036), (23, -0.138), (24, 0.015), (25, 0.04), (26, 0.053), (27, 0.015), (28, -0.006), (29, 0.062), (30, 0.001), (31, -0.025), (32, 0.067), (33, -0.028), (34, -0.04), (35, 0.07), (36, -0.01), (37, -0.067), (38, 0.019), (39, -0.016), (40, -0.024), (41, -0.016), (42, -0.083), (43, -0.049), (44, -0.042), (45, 0.032), (46, 0.053), (47, 0.028), (48, -0.012), (49, 0.032)]
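
The 50 (topicId, topicWeight) pairs above look like a latent semantic indexing embedding; the standard construction is a truncated SVD of the TF-IDF matrix, with post-to-post similarity then computed in the reduced space. A hedged sketch under those assumptions (toy corpus, only 2 components instead of 50):

```python
# A minimal LSI sketch: truncated SVD over TF-IDF vectors, then cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "wiki tikiwiki summer school editable pages forums calendars",
    "chicago machine learning summer school speakers participants",
    "online collaboration wiki conference websites comments",
]

X = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

print([(t, round(float(w), 3)) for t, w in enumerate(lsi[0])])   # (topicId, topicWeight) for this post
print(cosine_similarity(lsi[0:1], lsi).ravel())                  # LSI similarity to every post
```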

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9608041 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

2 0.77634633 75 hunch net-2005-05-28-Running A Machine Learning Summer School

Introduction: We just finished the Chicago 2005 Machine Learning Summer School . The school was 2 weeks long with about 130 (or 140 counting the speakers) participants. For perspective, this is perhaps the largest graduate level machine learning class I am aware of anywhere and anytime (previous MLSS s have been close). Overall, it seemed to go well, although the students are the real authority on this. For those who missed it, DVDs will be available from our Slovenian friends. Email Mrs Spela Sitar of the Jozsef Stefan Institute for details. The following are some notes for future planning and those interested. Good Decisions Acquiring the larger-than-necessary “Assembly Hall” at International House . Our attendance came in well above our expectations, so this was a critical early decision that made a huge difference. The invited speakers were key. They made a huge difference in the quality of the content. Delegating early and often was important. One key difficulty here

3 0.6937812 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It has been a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard to match up for other similar events (see our WIKI for more information). With this lineup, it is difficult for us as organizers to screw it up too bad. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by previous MLSS series, it was relatively easy for us to attract registrations and simply enjoyed this two-week long party of machine learning. In the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

4 0.62843186 4 hunch net-2005-01-26-Summer Schools

Introduction: There are several summer schools related to machine learning. We are running a two week machine learning summer school in Chicago, USA May 16-27. IPAM is running a more focused three week summer school on Intelligent Extraction of Information from Graphs and High Dimensional Data in Los Angeles, USA July 11-29. A broad one-week school on analysis of patterns will be held in Erice, Italy, Oct. 28-Nov 6.

5 0.61925954 69 hunch net-2005-05-11-Visa Casualties

Introduction: For the Chicago 2005 machine learning summer school we are organizing, at least 5 international students can not come due to visa issues. There seem to be two aspects to visa issues: Inefficiency . The system rejected the student simply by being incapable of even starting to evaluate their visa in less than 1 month of time. Politics . Border controls became much tighter after the September 11 attack. Losing a big chunk of downtown of the largest city in a country will do that. What I (and the students) learned is that (1) is a much larger problem than (2). Only 1 prospective student seems to have achieved an explicit visa rejection. Fixing problem (1) should be a no-brainer, because the lag time almost surely indicates overload, and overload on border controls should worry even people concerned with (2). The obvious fixes to overload are “spend more money” and “make the system more efficient”. With respect to (2), (which is a more minor issue by the numbers) it i

6 0.55572665 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

7 0.5500403 130 hunch net-2005-11-16-MLSS 2006

8 0.53641731 357 hunch net-2009-05-30-Many ways to Learn this summer

9 0.52763742 297 hunch net-2008-04-22-Taking the next step

10 0.51211578 261 hunch net-2007-08-28-Live ML Class

11 0.48467836 63 hunch net-2005-04-27-DARPA project: LAGR

12 0.47038221 70 hunch net-2005-05-12-Math on the Web

13 0.46038601 122 hunch net-2005-10-13-Site tweak

14 0.45637304 449 hunch net-2011-11-26-Giving Thanks

15 0.44547135 173 hunch net-2006-04-17-Rexa is live

16 0.43537325 231 hunch net-2007-02-10-Best Practices for Collaboration

17 0.43157414 322 hunch net-2008-10-20-New York’s ML Day

18 0.42912659 146 hunch net-2006-01-06-MLTV

19 0.42754447 354 hunch net-2009-05-17-Server Update

20 0.42518514 479 hunch net-2013-01-31-Remote large scale learning class participation


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(9, 0.018), (16, 0.014), (27, 0.093), (37, 0.015), (38, 0.028), (53, 0.033), (55, 0.076), (84, 0.022), (92, 0.021), (94, 0.521), (95, 0.051)]
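
The lda weights above are a per-post topic distribution: only topics with non-negligible weight are listed, they sum to roughly 1, and topic 94 dominates this post. A common way to produce such a distribution is LDA over bag-of-words counts; the sketch below assumes that, with a toy corpus and a much smaller topic count than the roughly 100 topics the listing suggests.

```python
# A minimal LDA sketch: document-topic distribution from bag-of-words counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "wiki tikiwiki summer school editable pages forums calendars",
    "chicago machine learning summer school speakers participants",
    "online collaboration wiki conference websites comments",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(counts)                      # each row sums to 1

# (topicId, topicWeight) pairs, keeping only non-negligible topics as in the listing
print([(t, round(float(w), 3)) for t, w in enumerate(theta[0]) if w > 0.01])
```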

similar blogs list:

simIndex simValue blogId blogTitle

1 0.96640527 42 hunch net-2005-03-17-Going all the Way, Sometimes

Introduction: At many points in research, you face a choice: should I keep on improving some old piece of technology or should I do something new? For example: Should I refine bounds to make them tighter? Should I take some learning theory and turn it into a learning algorithm? Should I implement the learning algorithm? Should I test the learning algorithm widely? Should I release the algorithm as source code? Should I go see what problems people actually need to solve? The universal temptation of people attracted to research is doing something new. That is sometimes the right decision, but is also often not. I’d like to discuss some reasons why not. Expertise Once expertise are developed on some subject, you are the right person to refine them. What is the real problem? Continually improving a piece of technology is a mechanism forcing you to confront this question. In many cases, this confrontation is uncomfortable because you discover that your method has fundamen

same-blog 2 0.96268338 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

3 0.94203281 115 hunch net-2005-09-26-Prediction Bounds as the Mathematics of Science

Introduction: “Science” has many meanings, but one common meaning is “the scientific method ” which is a principled method for investigating the world using the following steps: Form a hypothesis about the world. Use the hypothesis to make predictions. Run experiments to confirm or disprove the predictions. The ordering of these steps is very important to the scientific method. In particular, predictions must be made before experiments are run. Given that we all believe in the scientific method of investigation, it may be surprising to learn that cheating is very common. This happens for many reasons, some innocent and some not. Drug studies. Pharmaceutical companies make predictions about the effects of their drugs and then conduct blind clinical studies to determine their effect. Unfortunately, they have also been caught using some of the more advanced techniques for cheating here : including “reprobleming”, “data set selection”, and probably “overfitting by review”

4 0.93060637 346 hunch net-2009-03-18-Parallel ML primitives

Introduction: Previously, we discussed parallel machine learning a bit. As parallel ML is rather difficult, I’d like to describe my thinking at the moment, and ask for advice from the rest of the world. This is particularly relevant right now, as I’m attending a workshop tomorrow on parallel ML. Parallelizing slow algorithms seems uncompelling. Parallelizing many algorithms also seems uncompelling, because the effort required to parallelize is substantial. This leaves the question: Which one fast algorithm is the best to parallelize? What is a substantially different second? One compellingly fast simple algorithm is online gradient descent on a linear representation. This is the core of Leon’s sgd code and Vowpal Wabbit . Antoine Bordes showed a variant was competitive in the large scale learning challenge . It’s also a decades old primitive which has been reused in many algorithms, and continues to be reused. It also applies to online learning rather than just online optimiz

5 0.93034148 35 hunch net-2005-03-04-The Big O and Constants in Learning

Introduction: The notation g(n) = O(f(n)) means that in the limit as n approaches infinity there exists a constant C such that the g(n) is less than Cf(n) . In learning theory, there are many statements about learning algorithms of the form “under assumptions x , y , and z , the classifier learned has an error rate of at most O(f(m)) “. There is one very good reason to use O(): it helps you understand the big picture and neglect the minor details which are not important in the big picture. However, there are some important reasons not to do this as well. Unspeedup In algorithm analysis, the use of O() for time complexity is pervasive and well-justified. Determining the exact value of C is inherently computer architecture dependent. (The “C” for x86 processors might differ from the “C” on PowerPC processors.) Since many learning theorists come from a CS theory background, the O() notation is applied to generalization error. The O() abstraction breaks here—you can not genera

6 0.87810665 120 hunch net-2005-10-10-Predictive Search is Coming

7 0.83871603 276 hunch net-2007-12-10-Learning Track of International Planning Competition

8 0.80419123 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

9 0.73007286 136 hunch net-2005-12-07-Is the Google way the way for machine learning?

10 0.71226519 229 hunch net-2007-01-26-Parallel Machine Learning Problems

11 0.70998085 441 hunch net-2011-08-15-Vowpal Wabbit 6.0

12 0.66441482 73 hunch net-2005-05-17-A Short Guide to PhD Graduate Study

13 0.65742695 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

14 0.65051335 13 hunch net-2005-02-04-JMLG

15 0.64368325 178 hunch net-2006-05-08-Big machine learning

16 0.63948643 471 hunch net-2012-08-24-Patterns for research in machine learning

17 0.63916427 162 hunch net-2006-03-09-Use of Notation

18 0.63806945 146 hunch net-2006-01-06-MLTV

19 0.63800347 200 hunch net-2006-08-03-AOL’s data drop

20 0.63203645 408 hunch net-2010-08-24-Alex Smola starts a blog