knowledge-graph by maker-knowledge-mining

70 hunch net-2005-05-12-Math on the Web


meta info for this blog

Source: html

Introduction: Andrej Bauer has set up a Mathematics and Computation Blog. As a first step he has tried to address the persistent and annoying problem of math on the web. As a basic tool for precisely stating and transferring understanding of technical subjects, mathematics is very necessary. Despite this necessity, every mechanism for expressing mathematics on the web seems unnaturally clumsy. Here are some of the methods and their drawbacks:
MathML. This was supposed to be the answer, but it has two severe drawbacks: “Internet Explorer” doesn’t read it and the language is an example of push-XML-to-the-limit which no one would ever consider writing in. (In contrast, html is easy to write in.) It’s also very annoying that math fonts must be installed independent of the browser, even for Mozilla-based browsers.
Create inline images. This has several big drawbacks: font size is fixed for all viewers, you can’t cut & paste inside the images, and you can’t hyperlink from (say) symbol to definition.
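To make the verbosity complaint concrete, here is a minimal sketch (the fraction example is illustrative, not from the post) comparing the source needed for the same fraction in LaTeX and in presentation MathML:

```python
# Minimal illustration of the markup-verbosity gap the post complains
# about: the same fraction written in LaTeX and in presentation MathML.
latex_src = r"\frac{a}{b}"

mathml_src = """<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <mi>a</mi>
    <mi>b</mi>
  </mfrac>
</math>"""

print(f"LaTeX:  {len(latex_src)} characters")
print(f"MathML: {len(mathml_src)} characters")
```

Roughly an order of magnitude more markup for a two-symbol fraction, which is the sense in which MathML is “push-XML-to-the-limit”.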


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 As a first step he has tried to address the persistent and annoying problem of math on the web. [sent-2, score-0.608]

2 As a basic tool for precisely stating and transferring understanding of technical subjects, mathematics is very necessary. [sent-3, score-0.309]

3 Despite this necessity, every mechanism for expressing mathematics on the web seems unnaturally clumsy. [sent-4, score-0.314]

4 Here are some of the methods and their drawbacks: MathML This was supposed to be the answer, but it has two severe drawbacks: “Internet Explorer” doesn’t read it and the language is an example of push-XML-to-the-limit which no one would ever consider writing in. [sent-5, score-0.443]

5 It’s also very annoying that math fonts must be installed independent of the browser, even for Mozilla-based browsers. [sent-7, score-0.63]

6 This has several big drawbacks: font size is fixed for all viewers, you can’t cut & paste inside the images, and you can’t hyperlink from (say) symbol to definition. [sent-9, score-0.494]

7 MathWorld is a good example of using this approach. [sent-10, score-0.069]

8 The drawback here is that the available language is very limited (no square roots, integrals, sums, etc…). [sent-13, score-0.204]

9 Researchers are used to writing math in latex and compiling it into postscript or pdf. [sent-16, score-0.75]

10 It is possible to simply communicate in that language. [sent-17, score-0.07]

11 Unfortunately, the language can make simple things like fractions appear (syntactically) much more complicated. [sent-18, score-0.222]

12 More importantly, latex is not nearly as universally known as the mathematics laid out in math books. [sent-19, score-0.977]

13 An obvious trick is to translate this human-editable syntax into something. [sent-21, score-0.235]

14 There are two difficulties here: What do you translate to? [sent-22, score-0.151]

15 For example, in latex, it’s hard to make a hyperlink from a variable in one formula to an anchor in the variable definition of another formula and have that translated correctly into (say) MathML. [sent-25, score-1.189]

16 Approach (4) is what Andrej’s blog is using, with a JavaScript translator that changes output depending on the destination browser. [sent-26, score-0.406]

17 Ideally, the ‘smart translator’ would use whichever of {MathML, image, html extensions, human-edit format} was best and supported by the destination browser, but that is not yet the case. [sent-27, score-0.628]
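A plausible mechanics for the sentScore column above (the generator’s actual pipeline is undocumented, so this is an assumption): fit tf-idf over the post’s sentences and score each sentence by the summed tf-idf weight of its terms. A minimal scikit-learn sketch:

```python
# Hedged sketch: score sentences by summed tf-idf weight, as the
# sentScore column plausibly does. The real pipeline behind this page
# is unknown; this is an illustrative assumption.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "As a first step he has tried to address the persistent and "
    "annoying problem of math on the web.",
    "It is possible to simply communicate in that language.",
    "Researchers are used to writing math in latex and compiling it "
    "into postscript or pdf.",
]

vec = TfidfVectorizer()
X = vec.fit_transform(sentences)   # rows = sentences, columns = terms
scores = X.sum(axis=1).A1          # summed tf-idf weight per sentence

for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sent[:60]}")
```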


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('math', 0.305), ('latex', 0.271), ('html', 0.251), ('mathematics', 0.239), ('destination', 0.203), ('mathml', 0.203), ('translator', 0.203), ('drawbacks', 0.192), ('hyperlink', 0.181), ('andrej', 0.167), ('browser', 0.158), ('translate', 0.151), ('annoying', 0.151), ('formula', 0.145), ('language', 0.132), ('variable', 0.112), ('writing', 0.099), ('bauer', 0.09), ('explorer', 0.09), ('fractions', 0.09), ('installed', 0.09), ('layed', 0.09), ('whichever', 0.09), ('mozilla', 0.084), ('font', 0.084), ('persistent', 0.084), ('sums', 0.084), ('supported', 0.084), ('symbol', 0.084), ('syntax', 0.084), ('translated', 0.084), ('say', 0.081), ('extensions', 0.079), ('ideally', 0.079), ('expressing', 0.075), ('cut', 0.075), ('compile', 0.075), ('supposed', 0.075), ('raw', 0.072), ('universally', 0.072), ('square', 0.072), ('communicate', 0.07), ('correctly', 0.07), ('inside', 0.07), ('stating', 0.07), ('example', 0.069), ('images', 0.068), ('tried', 0.068), ('ever', 0.068), ('necessity', 0.068)]
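A hedged sketch of how a (wordName, wordTfidf) list like the one above could be extracted from a fitted tf-idf model; the two-post corpus here is made up for illustration:

```python
# Hedged sketch: recover a (word, weight) list like the one above from
# a fitted tf-idf model. The corpus and fitting details are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "math on the web mathml latex html browser translator formula",
    "machine learning theory bounds and experiments",
]

vec = TfidfVectorizer()
X = vec.fit_transform(posts)

row = X[0].toarray().ravel()     # tf-idf vector for the first post
# get_feature_names_out() needs scikit-learn >= 1.0
# (older versions use get_feature_names()).
terms = vec.get_feature_names_out()
top = np.argsort(row)[::-1][:5]  # indices of the top-5 words
print([(terms[i], round(float(row[i]), 3)) for i in top])
```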

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 70 hunch net-2005-05-12-Math on the Web


2 0.17422426 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

Introduction: Reviewers and students are sometimes greatly concerned by the distinction between: An open set and a closed set. A Supremum and a Maximum. An event which happens with probability 1 and an event that always happens. I don’t appreciate this distinction in machine learning & learning theory. All machine learning takes place (by definition) on a machine where every parameter has finite precision. Consequently, every set is closed, a maximal element always exists, and probability 1 events always happen. The fundamental issue here is that substantial parts of mathematics don’t appear well-matched to computation in the physical world, because the mathematics has concerns which are unphysical. This mismatched mathematics makes irrelevant distinctions. We can ask “what mathematics is appropriate to computation?” Andrej has convinced me that a pretty good answer to this question is constructive mathematics. So, here’s a basic challenge: Can anyone name a situati

3 0.15627594 305 hunch net-2008-06-30-ICML has a comment system

Introduction: Mark Reid has stepped up and created a comment system for ICML papers which Greger Linden has tightly integrated. My understanding is that Mark spent quite a bit of time on the details, and there are some cool features like working latex math mode. This is an excellent chance for the ICML community to experiment with making ICML year-round, so I hope it works out. Please do consider experimenting with it.

4 0.12403981 202 hunch net-2006-08-10-Precision is not accuracy

Introduction: In my experience, there are two different groups of people who believe the same thing: the mathematics encountered in typical machine learning conference papers is often of questionable value. The two groups who agree on this are applied machine learning people who have given up on math, and mature theoreticians who understand the limits of theory. Partly, this is just a statement about where we are with respect to machine learning. In particular, we have no mechanism capable of generating a prescription for how to solve all learning problems. In the absence of such certainty, people try to come up with formalisms that partially describe and motivate how and why they do things. This is natural and healthy—we might hope that it will eventually lead to just such a mechanism. But, part of this is simply an emphasis on complexity over clarity. A very natural and simple theoretical statement is often obscured by complexifications. Common sources of complexification include:

5 0.1062701 122 hunch net-2005-10-13-Site tweak

Introduction: Several people have had difficulty with comments which seem to have an allowed language significantly poorer than posts. The set of allowed html tags has been increased and the markdown filter has been put in place to try to make commenting easier. I’ll put some examples into the comments of this post.

6 0.092084922 55 hunch net-2005-04-10-Is the Goal Understanding or Prediction?

7 0.080274642 170 hunch net-2006-04-06-Bounds greater than 1

8 0.078256987 210 hunch net-2006-09-28-Programming Languages for Machine Learning Implementations

9 0.074526265 134 hunch net-2005-12-01-The Webscience Future

10 0.069368854 90 hunch net-2005-07-07-The Limits of Learning Theory

11 0.068445772 20 hunch net-2005-02-15-ESPgame and image labeling

12 0.068011984 151 hunch net-2006-01-25-1 year

13 0.067859329 225 hunch net-2007-01-02-Retrospective

14 0.067843065 288 hunch net-2008-02-10-Complexity Illness

15 0.061055604 208 hunch net-2006-09-18-What is missing for online collaborative research?

16 0.060913835 166 hunch net-2006-03-24-NLPers

17 0.059435539 49 hunch net-2005-03-30-What can Type Theory teach us about Machine Learning?

18 0.058889966 450 hunch net-2011-12-02-Hadoop AllReduce and Terascale Learning

19 0.058615286 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

20 0.057713829 358 hunch net-2009-06-01-Multitask Poisoning
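The simValue numbers are consistent with cosine similarity between tf-idf vectors; note the same-blog score of 1.0000001, i.e. 1 up to floating-point error. A hedged sketch of that ranking, with made-up miniature documents:

```python
# Hedged sketch: rank similar posts by cosine similarity of their
# tf-idf vectors, producing (simValue, title) pairs like the list
# above. The mini-corpus is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

titles = [
    "Math on the Web",
    "Inappropriate Mathematics for Machine Learning",
    "ICML has a comment system",
]
texts = [
    "math web mathml latex browser translator formula",
    "mathematics machine learning constructive computation",
    "icml comment system latex math mode",
]

X = TfidfVectorizer().fit_transform(texts)
sims = cosine_similarity(X[0], X).ravel()  # similarity to the first post

for sim, title in sorted(zip(sims, titles), reverse=True):
    print(f"{sim:.4f}  {title}")
```

The first line printed is the post compared with itself, which is where a score of (numerically) 1 comes from.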


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.13), (1, 0.008), (2, -0.029), (3, 0.074), (4, -0.042), (5, -0.015), (6, 0.023), (7, -0.05), (8, 0.022), (9, -0.006), (10, -0.067), (11, -0.011), (12, -0.006), (13, 0.034), (14, 0.047), (15, -0.013), (16, -0.08), (17, -0.019), (18, 0.0), (19, 0.094), (20, 0.021), (21, 0.051), (22, -0.016), (23, 0.022), (24, 0.012), (25, 0.008), (26, -0.02), (27, -0.024), (28, -0.01), (29, -0.004), (30, -0.06), (31, 0.119), (32, 0.097), (33, 0.103), (34, -0.09), (35, -0.054), (36, -0.027), (37, 0.071), (38, 0.038), (39, -0.035), (40, -0.037), (41, -0.023), (42, -0.086), (43, -0.014), (44, -0.109), (45, 0.029), (46, 0.057), (47, 0.015), (48, -0.046), (49, -0.014)]
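The (topicId, topicWeight) pairs above read as a document’s projection onto LSI topics. A minimal sketch with gensim, assuming gensim or a similar library was used; the toy corpus and topic count are illustrative only:

```python
# Hedged sketch: per-document LSI topic weights, as in the
# (topicId, topicWeight) list above. Corpus and topic count are toy
# assumptions, not the generator's actual configuration.
from gensim import corpora, models

docs = [
    ["math", "web", "mathml", "latex", "browser", "translator"],
    ["machine", "learning", "theory", "bounds", "experiments"],
    ["icml", "papers", "reviews", "comments", "latex"],
]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)

# [(topicId, topicWeight), ...] for the first document.
print(lsi[corpus[0]])
```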

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96603686 70 hunch net-2005-05-12-Math on the Web


2 0.67645043 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning


3 0.66299886 55 hunch net-2005-04-10-Is the Goal Understanding or Prediction?

Introduction: Steve Smale and I have a debate about goals of learning theory. Steve likes theorems with a dependence on unobservable quantities. For example, if D is a distribution over a space $X \times [0,1]$, you can state a theorem about the error rate dependent on the variance, $E_{(x,y)\sim D}\,(y - E_{y'\sim D|x}[y'])^2$. I dislike this, because I want to use the theorems to produce code solving learning problems. Since I don’t know (and can’t measure) the variance, a theorem depending on the variance does not help me—I would not know what variance to plug into the learning algorithm. Recast more broadly, this is a debate between “declarative” and “operative” mathematics. A strong example of “declarative” mathematics is “a new kind of science”. Roughly speaking, the goal of this kind of approach seems to be finding a way to explain the observations we make. Examples include “some things are unpredictable”, “a phase transition exists”, etc… “Operative” mathematics helps you make predictions a

4 0.52798194 162 hunch net-2006-03-09-Use of Notation

Introduction: For most people, a mathematical notation is like a language: you learn it and stick with it. For people doing mathematical research, however, this is not enough: they must design new notations for new problems. The design of good notation is both hard and worthwhile since a bad initial notation can retard a line of research greatly. Before we had mathematical notation, equations were all written out in language. Since words have multiple meanings and variable precedences, long equations written out in language can be extraordinarily difficult and sometimes fundamentally ambiguous. A good representative example of this is the legalese in the tax code. Since we want greater precision and clarity, we adopt mathematical notation. One fundamental thing to understand about mathematical notation is that humans, as logic verifiers, are barely capable. This is the fundamental reason why one notation can be much better than another. This observation is easier to miss than you might

5 0.47821257 294 hunch net-2008-04-12-Blog compromised

Introduction: Iain noticed that hunch.net had zero width divs hiding spammy URLs. Some investigation reveals that the wordpress version being used (2.0.3) had security flaws. I’ve upgraded to the latest, rotated passwords, and removed the spammy URLs. I don’t believe any content was lost. You can check your own and other sites for a similar problem by grepping for “width:0” or “width: 0” in the delivered html source.

6 0.47267407 202 hunch net-2006-08-10-Precision is not accuracy

7 0.46963206 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

8 0.45968378 90 hunch net-2005-07-07-The Limits of Learning Theory

9 0.44927788 297 hunch net-2008-04-22-Taking the next step

10 0.43698162 218 hunch net-2006-11-20-Context and the calculation misperception

11 0.43597192 182 hunch net-2006-06-05-Server Shift, Site Tweaks, Suggestions?

12 0.43357992 62 hunch net-2005-04-26-To calibrate or not?

13 0.43278199 262 hunch net-2007-09-16-Optimizing Machine Learning Programs

14 0.43223765 392 hunch net-2010-03-26-A Variance only Deviation Bound

15 0.4273929 48 hunch net-2005-03-29-Academic Mechanism Design

16 0.42729339 241 hunch net-2007-04-28-The Coming Patent Apocalypse

17 0.42211333 413 hunch net-2010-10-08-An easy proof of the Chernoff-Hoeffding bound

18 0.42148164 210 hunch net-2006-09-28-Programming Languages for Machine Learning Implementations

19 0.41760215 122 hunch net-2005-10-13-Site tweak

20 0.40385672 257 hunch net-2007-07-28-Asking questions


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(10, 0.047), (27, 0.164), (38, 0.082), (53, 0.026), (55, 0.095), (85, 0.394), (94, 0.049), (95, 0.047)]
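The LDA weight vector above is sparse (only a handful of topics are listed), which matches how LDA implementations typically report only topics above a probability threshold. A hedged gensim sketch; the corpus and settings are assumptions:

```python
# Hedged sketch: per-document LDA topic weights, as in the sparse
# (topicId, topicWeight) list above. Toy corpus; all settings assumed.
from gensim import corpora, models

docs = [
    ["math", "web", "mathml", "latex", "browser", "translator"],
    ["machine", "learning", "theory", "bounds", "experiments"],
    ["icml", "papers", "reviews", "comments", "latex"],
]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=4,
                      random_state=0)

# Only topics above a probability threshold are returned, which is
# why the per-document list is sparse.
print(lda.get_document_topics(corpus[0]))
```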

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.88340086 70 hunch net-2005-05-12-Math on the Web


2 0.80746871 481 hunch net-2013-04-15-NEML II

Introduction: Adam Kalai points out the New England Machine Learning Day May 1 at MSR New England. There is a poster session with abstracts due April 19. I understand last year’s NEML went well and it’s great to meet your neighbors at regional workshops like this.

3 0.69495422 382 hunch net-2009-12-09-Future Publication Models @ NIPS

Introduction: Yesterday, there was a discussion about future publication models at NIPS. Yann and Zoubin have specific detailed proposals which I’ll add links to when I get them (Yann’s proposal and Zoubin’s proposal). What struck me about the discussion is that there are many simultaneous concerns as well as many simultaneous proposals, which makes it difficult to keep all the distinctions straight in a verbal conversation. It also seemed like people were serious enough about this that we may see some real movement. Certainly, my personal experience motivates that as I’ve posted many times about the substantial flaws in our review process, including some very poor personal experiences. Concerns include the following:
(Several) Reviewers are overloaded, boosting the noise in decision making.
(Yann) A new system should run with as little built-in delay and friction to the process of research as possible.
(Hanna Wallach (updated)) Double-blind review is particularly impor

4 0.59987473 466 hunch net-2012-06-05-ICML acceptance statistics

Introduction: People are naturally interested in slicing the ICML acceptance statistics in various ways. Here’s a rundown for the top categories.
18/66 = 0.27 in (0.18, 0.36) Reinforcement Learning
10/52 = 0.19 in (0.17, 0.37) Supervised Learning
9/51 = 0.18 not in (0.18, 0.37) Clustering
12/46 = 0.26 in (0.17, 0.37) Kernel Methods
11/40 = 0.28 in (0.15, 0.4) Optimization Algorithms
8/33 = 0.24 in (0.15, 0.39) Learning Theory
14/33 = 0.42 not in (0.15, 0.39) Graphical Models
10/32 = 0.31 in (0.15, 0.41) Applications (+5 invited)
8/29 = 0.28 in (0.14, 0.41) Probabilistic Models
13/29 = 0.45 not in (0.14, 0.41) NN & Deep Learning
8/26 = 0.31 in (0.12, 0.42) Transfer and Multi-Task Learning
13/25 = 0.52 not in (0.12, 0.44) Online Learning
5/25 = 0.20 in (0.12, 0.44) Active Learning
6/22 = 0.27 in (0.14, 0.41) Semi-Superv

5 0.55280757 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

Introduction: At NIPS, Andrew Ng asked me what should be in a large scale learning class. After some discussion with him and Nando and mulling it over a bit, these are the topics that I think should be covered. There are many different kinds of scaling.
Scaling in examples: This is the most basic kind of scaling.
Online Gradient Descent: This is an old algorithm—I’m not sure if anyone can be credited with it in particular. Perhaps the Perceptron is a good precursor, but substantial improvements come from the notion of a loss function, of which squared loss, logistic loss, Hinge Loss, and Quantile Loss are all worth covering. It’s important to cover the semantics of these loss functions as well. Vowpal Wabbit is a reasonably fast codebase implementing these.
Second Order Gradient Descent methods: For some problems, methods taking into account second derivative information can be more effective. I’ve seen preconditioned conjugate gradient work well, for which Jonath

6 0.49660063 462 hunch net-2012-04-20-Both new: STOC workshops and NEML

7 0.47109812 454 hunch net-2012-01-30-ICML Posters and Scope

8 0.47103223 343 hunch net-2009-02-18-Decision by Vetocracy

9 0.46400902 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

10 0.46295786 437 hunch net-2011-07-10-ICML 2011 and the future

11 0.46171373 51 hunch net-2005-04-01-The Producer-Consumer Model of Research

12 0.46101743 464 hunch net-2012-05-03-Microsoft Research, New York City

13 0.46097305 406 hunch net-2010-08-22-KDD 2010

14 0.46055382 36 hunch net-2005-03-05-Funding Research

15 0.45938376 194 hunch net-2006-07-11-New Models

16 0.45812812 95 hunch net-2005-07-14-What Learning Theory might do

17 0.45743659 325 hunch net-2008-11-10-ICML Reviewing Criteria

18 0.4567377 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

19 0.4555845 225 hunch net-2007-01-02-Retrospective

20 0.4548983 360 hunch net-2009-06-15-In Active Learning, the question changes