brendan_oconnor_ai-2008-125 knowledge-graph by maker-knowledge-mining

125 brendan oconnor ai-2008-11-21-Netflix Prize


meta info for this blog post

Source: html

Introduction: Here’s a fascinating NYT article on the Netflix Prize for a better movie recommendation system.  Tons of great stuff there; here’s a few highlights … First, a good unsupervised learning story: There’s a sort of unsettling, alien quality to their computers’ results. When the teams examine the ways that singular value decomposition is slotting movies into categories, sometimes it makes sense to them — as when the computer highlights what appears to be some essence of nerdiness in a bunch of sci-fi movies. But many categorizations are now so obscure that they cannot see the reasoning behind them. Possibly the algorithms are finding connections so deep and subconscious that customers themselves wouldn’t even recognize them. At one point, Chabbert showed me a list of movies that his algorithm had discovered share some ineffable similarity; it includes a historical movie, “Joan of Arc,” a wrestling video, “W.W.E.: SummerSlam 2004,” the comedy “It Had to Be You” and a version of Charles Dickens’s “Bleak House.”
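To make the SVD story above concrete, here is a minimal sketch of how a factorization “slots movies into categories”: factor a user-by-movie ratings matrix and read off which movies load together on each latent dimension. Everything here is invented toy data (the movie titles are just borrowed from the article); real Prize matrices were on the order of 480,000 users by 17,770 movies, with most entries missing.

```python
# Toy sketch: users x movies ratings matrix, factored with SVD.
# All ratings are made up; titles are borrowed from the article.
import numpy as np

movies = ["Joan of Arc", "W.W.E.: SummerSlam 2004", "It Had to Be You",
          "Bleak House", "Serenity", "Match Point"]
# rows = users, columns = movies; 0 stands in for "unrated" here,
# though a real recommender treats missing entries explicitly
R = np.array([[5, 4, 5, 4, 0, 1],
              [4, 5, 4, 5, 1, 0],
              [0, 1, 0, 1, 5, 4],
              [1, 0, 1, 0, 4, 5]], dtype=float)

U, s, Vt = np.linalg.svd(R, full_matrices=False)

# Each row of Vt is a latent dimension over movies; the largest
# loadings show which movies it groups together -- sometimes a
# readable genre, sometimes an "ineffable" cluster like Chabbert's.
for k in range(2):
    top = np.argsort(-np.abs(Vt[k]))[:3]
    print(f"dimension {k}:", [movies[i] for i in top])
```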


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Here’s a fascinating NYT article on the Netflix Prize for a better movie recommendation system. [sent-1, score-0.446]

2 Tons of great stuff there; here’s a few highlights … First, a good unsupervised learning story: There’s a sort of unsettling, alien quality to their computers’ results. [sent-2, score-0.216]

3 When the teams examine the ways that singular value decomposition is slotting movies into categories, sometimes it makes sense to them — as when the computer highlights what appears to be some essence of nerdiness in a bunch of sci-fi movies. [sent-3, score-1.108]

4 Possibly the algorithms are finding connections so deep and subconscious that customers themselves wouldn’t even recognize them. [sent-5, score-0.321]

5–6 At one point, Chabbert showed me a list of movies that his algorithm had discovered share some ineffable similarity; it includes a historical movie, “Joan of Arc,” a wrestling video, “W.W.E.: SummerSlam 2004,” the comedy “It Had to Be You” and a version of Charles Dickens’s “Bleak House.” [sent-6, score-0.443] [sent-9, score-0.146]

7 For the life of me, I can’t figure out what possible connection they have, but Chabbert assures me that this singular value decomposition scored 4 percent higher than Cinematch — so it must be doing something right. [sent-10, score-0.492]

8 As Volinsky surmised, “They’re able to tease out all of these things that we would never, ever think of ourselves. [sent-11, score-0.172]

9 Well, I’m pretty suspicious of drawing conclusions from that single example — it could have been a genuine grouping error while different, better groupings elsewhere were responsible for that 4 percent gain. [sent-13, score-0.352]

10 That’s why I’m a fan of systematically evaluating unsupervised algorithms; for example, as in political bias and SVD. [sent-14, score-0.215]

11 Another bit: suspicions that demographics might be less useful than individual movie preferences: Interestingly, the Netflix Prize competitors do not know anything about the demographics of the customers whose taste they’re trying to predict. [sent-15, score-0.888]

12 The teams sometimes argue on the discussion board about whether their predictions would be better if they knew that customer No. … [sent-16, score-0.578]

13 Yet most of the leading teams say that personal information is not very useful, because it’s too crude. [sent-18, score-0.219]

14 There’s little reason to think the other 40-year-old men on my block enjoy the same movies as I do. [sent-20, score-0.381]

15 When I tell Netflix that I think Woody Allen’s black comedy “Match Point” deserves three stars but the Joss Whedon sci-fi film “Serenity” is a five-star masterpiece, this reveals quite a lot about my taste. [sent-22, score-0.336]

16 Indeed, Reed Hastings told me that even though Netflix has a good deal of demographic information about its users, the company does not currently use it much to generate movie recommendations; merely knowing who people are, paradoxically, isn’t very predictive of their movie tastes. [sent-23, score-0.777]

17 Though I would like to see the results for throwing in demographics as features versus not (a sketch of such a comparison follows this list). [sent-24, score-0.327]

18 It’s a little annoying that so many of the claims in the article aren’t backed up by empirical evidence — which you’d think would be the norm for such a data-driven topic! [sent-25, score-0.23]

19 Finally, an interesting question: Hastings is even considering hiring cinephiles to watch all 100,000 movies in the Netflix library and write up, by hand, pages of adjectives describing each movie, a cloud of tags that would offer a subjective view of what makes films similar or dissimilar. [sent-26, score-0.61]

20 It might imbue Cinematch with more unpredictable, humanlike intelligence. [sent-27, score-0.146]
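Sentences 10 and 17 above both ask for an experiment the article never reports: throw demographics in as features, compare against taste-based features, and measure the difference. Below is a hedged sketch of that comparison on synthetic data; the data is deliberately constructed so that past ratings carry the signal, so it illustrates the evaluation methodology rather than proving the article’s claim, and every feature name is hypothetical.

```python
# Hypothetical ablation: predict held-out ratings with and without
# demographic features, comparing test RMSE. All data is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 2000
# made-up demographic features
age = rng.integers(18, 70, n)
gender = rng.integers(0, 2, n)
region = rng.integers(0, 10, n)
# past ratings of 5 reference movies: the "taste" signal
past_ratings = rng.integers(1, 6, (n, 5)).astype(float)
# synthetic target driven by taste, not demographics (by construction)
y = past_ratings @ np.array([0.3, 0.2, 0.2, 0.2, 0.1]) + rng.normal(0, 0.3, n)

X_demo = np.column_stack([age, gender, region]).astype(float)
feature_sets = [
    ("demographics only", X_demo),
    ("past ratings only", past_ratings),
    ("both", np.column_stack([X_demo, past_ratings])),
]
for name, X in feature_sets:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rmse = mean_squared_error(y_te, Ridge().fit(X_tr, y_tr).predict(X_te)) ** 0.5
    print(f"{name:18s} RMSE = {rmse:.3f}")
```

On real data the interesting outcome is the gap between the three rows: if “both” beat “past ratings only” by a meaningful margin, demographics would be adding something after all.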


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('movie', 0.323), ('netflix', 0.289), ('movies', 0.254), ('cinematch', 0.219), ('demographics', 0.219), ('teams', 0.219), ('chabbert', 0.146), ('comedy', 0.146), ('decomposition', 0.146), ('humanlike', 0.146), ('singular', 0.146), ('customers', 0.127), ('highlights', 0.127), ('hastings', 0.127), ('prize', 0.116), ('would', 0.108), ('percent', 0.108), ('value', 0.092), ('unsupervised', 0.089), ('algorithms', 0.068), ('even', 0.068), ('sometimes', 0.065), ('think', 0.064), ('black', 0.063), ('west', 0.063), ('block', 0.063), ('board', 0.063), ('net', 0.063), ('adjectives', 0.063), ('genuine', 0.063), ('evaluating', 0.063), ('arc', 0.063), ('wrestling', 0.063), ('charles', 0.063), ('categories', 0.063), ('customer', 0.063), ('demographic', 0.063), ('discovered', 0.063), ('grouping', 0.063), ('obscure', 0.063), ('recommendation', 0.063), ('share', 0.063), ('stars', 0.063), ('systematically', 0.063), ('better', 0.06), ('makes', 0.059), ('offer', 0.058), ('norm', 0.058), ('conclusions', 0.058), ('recognize', 0.058)]
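A note on where this page’s lists come from: the pipeline’s code isn’t shown, but a “similar blogs computed by tfidf” list is conventionally built by turning each post into a tf-idf vector and ranking posts by cosine similarity. The sketch below makes that assumption explicit; the post IDs and snippets are made up.

```python
# Assumed reconstruction of the tf-idf similarity list; the actual
# maker-knowledge-mining code is not available. Snippets are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    "125-netflix-prize": "netflix prize movie recommendation svd cinematch",
    "196-movie-personas": "movie plot summaries character personas dataset",
    "86-data-driven-charity": "charity effectiveness evaluation donors nyt",
}
ids = list(posts)
X = TfidfVectorizer().fit_transform(posts[i] for i in ids)
sim = cosine_similarity(X)            # simValue-style scores in [0, 1]

query = ids.index("125-netflix-prize")
for j in sim[query].argsort()[::-1]:  # rank all posts, self first (~1.0)
    print(f"{sim[query, j]:.3f}  {ids[j]}")
```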

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999982 125 brendan oconnor ai-2008-11-21-Netflix Prize


2 0.14682123 196 brendan oconnor ai-2013-05-08-Movie summary corpus and learning character personas

Introduction: Here is one of our exciting just-finished ACL papers. David and I designed an algorithm that learns different types of character personas — “Protagonist”, “Love Interest”, etc — that are used in movies. To do this we collected a brand new dataset: 42,306 plot summaries of movies from Wikipedia, along with metadata like box office revenue and genre. We ran these through parsing and coreference analysis to also create a dataset of movie characters, linked with Freebase records of the actors who portray them. Did you see that NYT article on quantitative analysis of film scripts? This dataset could answer all sorts of things they assert in that article — for example, do movies with bowling scenes really make less money? We have released the data here. Our focus, though, is on narrative analysis. We investigate character personas: familiar character types that are repeated over and over in stories, like “Hero” or “Villain”; maybe grand mythical archetypes like “Trickster”

3 0.095152691 86 brendan oconnor ai-2007-12-20-Data-driven charity

Introduction: Some ex-hedge fund analysts recently started a non-profit devoted to evaluating the effectiveness of hundreds of charities, and apparently have been making waves (NYT). A few interesting reports have been posted on their website, givewell.net — they make recommendations for the charities where donors’ money is used most efficiently for saving lives or helping the disadvantaged. (Does anyone else have interesting data on charity effectiveness? I’ve heard that evaluations are the big thing in the philanthropy world now, and certainly the Gates Foundation talks a lot about it.) Obviously this sort of evaluation is tricky, but it has to be the right approach. The NYT article makes them sound like they’re a bit arrogant, which is too bad; on the other hand, anyone who makes claims to have better empirical information than the established wisdom will always end up in that dynamic. (OK, so I love young smart people who come up with better results than a conservative, close-minded

4 0.085476302 129 brendan oconnor ai-2008-12-03-Statistics vs. Machine Learning, fight!

Introduction: 10/1/09 update — well, it’s been nearly a year, and I should say not everything in this rant is totally true, and I certainly believe much less of it now. Current take: Statistics, not machine learning, is the real deal, but unfortunately suffers from bad marketing. On the other hand, to the extent that bad marketing includes misguided undergraduate curriculums, there’s plenty of room to improve for everyone. So it’s pretty clear by now that statistics and machine learning aren’t very different fields. I was recently pointed to a very amusing comparison by the excellent statistician — and machine learning expert — Robert Tibshirani. Reproduced here: Glossary (machine learning = statistics): network, graphs = model; weights = parameters; learning = fitting; generalization = test set performance; supervised learning = regression/classification; unsupervised learning = density estimation, clustering; large grant = $1,000,000

5 0.084330224 198 brendan oconnor ai-2013-08-20-Some analysis of tweet shares and “predicting” election outcomes

Introduction: Everyone recently seems to be talking about this newish paper by Digrazia, McKelvey, Bollen, and Rojas (pdf here) that examines the correlation of Congressional candidate name mentions on Twitter against whether the candidate won the race. One of the coauthors also wrote a Washington Post Op-Ed about it. I read the paper and I think it’s reasonable, but their op-ed overstates their results. It claims: “In the 2010 data, our Twitter data predicted the winner in 404 out of 435 competitive races” But this analysis is nowhere in their paper. Fabio Rojas has now posted errata/rebuttals about the op-ed and described this analysis they did here. There are several major issues off the bat: They didn’t ever predict 404/435 races; they only analyzed 406 races they call “competitive,” getting 92.5% (in-sample) accuracy, then extrapolated to all races to get the 435 number. They’re reporting in-sample predictions, which is really misleading to a non-scientific audience

6 0.078844294 133 brendan oconnor ai-2009-01-23-SF conference for data mining mercenaries

7 0.069209553 150 brendan oconnor ai-2009-08-08-Haghighi and Klein (2009): Simple Coreference Resolution with Rich Syntactic and Semantic Features

8 0.068316169 6 brendan oconnor ai-2005-06-25-idea: Morals are heuristics for socially optimal behavior

9 0.067006141 131 brendan oconnor ai-2008-12-27-Facebook sentiment mining predicts presidential polls

10 0.065682538 184 brendan oconnor ai-2012-07-04-The $60,000 cat: deep belief networks make less sense for language than vision

11 0.06401448 128 brendan oconnor ai-2008-11-28-Calculating running variance in Python and C++

12 0.063992351 83 brendan oconnor ai-2007-11-15-Actually that 2008 elections voter fMRI study is batshit insane (and sleazy too)

13 0.062727362 140 brendan oconnor ai-2009-05-18-Announcing TweetMotif for summarizing twitter topics

14 0.05891322 191 brendan oconnor ai-2013-02-23-Wasserman on Stats vs ML, and previous comparisons

15 0.056293972 32 brendan oconnor ai-2006-03-26-new kind of science, for real

16 0.05581595 200 brendan oconnor ai-2013-09-13-Response on our movie personas paper

17 0.054214984 65 brendan oconnor ai-2007-06-17-"Time will tell, epistemology won’t"

18 0.054113902 154 brendan oconnor ai-2009-09-10-Don’t MAWK AWK – the fastest and most elegant big data munging language!

19 0.053631652 53 brendan oconnor ai-2007-03-15-Feminists, anarchists, computational complexity, bounded rationality, nethack, and other things to do

20 0.052413903 135 brendan oconnor ai-2009-02-23-Comparison of data analysis packages: R, Matlab, SciPy, Excel, SAS, SPSS, Stata


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, -0.237), (1, -0.01), (2, 0.016), (3, -0.027), (4, 0.007), (5, -0.049), (6, -0.052), (7, -0.028), (8, 0.012), (9, -0.026), (10, 0.005), (11, 0.029), (12, 0.008), (13, -0.004), (14, 0.066), (15, -0.069), (16, 0.042), (17, -0.027), (18, -0.089), (19, 0.07), (20, -0.071), (21, 0.006), (22, 0.079), (23, 0.176), (24, 0.03), (25, -0.112), (26, 0.051), (27, -0.019), (28, 0.062), (29, 0.048), (30, 0.0), (31, -0.014), (32, 0.032), (33, 0.056), (34, -0.026), (35, 0.045), (36, 0.062), (37, -0.007), (38, 0.154), (39, -0.041), (40, -0.006), (41, -0.004), (42, -0.009), (43, -0.039), (44, -0.01), (45, -0.004), (46, 0.089), (47, 0.162), (48, -0.014), (49, 0.127)]
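The “lsi model” section presumably works the same way one step deeper: tf-idf followed by a truncated SVD, so that each post gets a dense vector of topic weights like the 50 numbers listed above, and similarity is computed in that reduced space. A sketch under that assumption (the corpus and the choice of 2 components are placeholders):

```python
# Assumed LSI pipeline: tf-idf, then truncated SVD to topic weights.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = ["netflix prize movie recommendation svd",
        "charity effectiveness evaluation nyt",
        "statistics machine learning glossary"]
X = TfidfVectorizer().fit_transform(docs)
# the page lists 50 topicWeights per post; a 3-document toy corpus
# only supports a couple of components
lsi = TruncatedSVD(n_components=2, random_state=0)
Z = lsi.fit_transform(X)        # rows = posts, cols = topic weights
print(cosine_similarity(Z)[0])  # post 0 vs. every post
```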

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97288561 125 brendan oconnor ai-2008-11-21-Netflix Prize


2 0.59343833 86 brendan oconnor ai-2007-12-20-Data-driven charity


3 0.58261108 196 brendan oconnor ai-2013-05-08-Movie summary corpus and learning character personas


4 0.49321055 184 brendan oconnor ai-2012-07-04-The $60,000 cat: deep belief networks make less sense for language than vision

Introduction: There was an interesting ICML paper this year about very large-scale training of deep belief networks (a.k.a. neural networks) for unsupervised concept extraction from images. They ( Quoc V. Le and colleagues at Google/Stanford) have a cute example of learning very high-level features that are evoked by images of cats (from YouTube still-image training data); one is shown below. For those of us who work on machine learning and text, the question always comes up, why not DBN’s for language? Many shallow latent-space text models have been quite successful (LSI, LDA, HMM, LPCFG…); there is hope that some sort of “deeper” concepts could be learned. I think this is one of the most interesting areas for unsupervised language modeling right now. But note it’s a bad idea to directly analogize results from image analysis to language analysis. The problems have radically different levels of conceptual abstraction baked-in. Consider the problem of detecting the concept of a cat; i.e.

5 0.49193966 170 brendan oconnor ai-2011-05-21-iPhone autocorrection error analysis

Introduction: re @andrewparker: My iPhone auto-corrected “Harvard” to “Garbage”. Well played Apple engineers. I was wondering how this would happen, and then noticed that each character pair has 0 to 2 distance on the QWERTY keyboard. Perhaps their model is eager to allow QWERTY-local character substitutions. >>> zip('harvard', 'garbage') [('h', 'g'), ('a', 'a'), ('r', 'r'), ('v', 'b'), ('a', 'a'), ('r', 'g'), ('d', 'e')] And then most any language model thinks p("garbage") > p("harvard"), at the very least in a unigram model with a broad domain corpus. So if it’s a noisy channel-style model, they’re underpenalizing the edit distance relative to the LM prior. (Reference: Norvig’s noisy channel spelling correction article.) On the other hand, given how insane iPhone autocorrections are, and from the number of times I’ve seen it delete a quite reasonable word I wrote, I’d bet “harvard” isn’t even in their LM. (Where the LM is more like just a dictionary; call it quantizing

6 0.44131437 68 brendan oconnor ai-2007-07-08-Game outcome graphs — prisoner’s dilemma with FUN ARROWS!!!

7 0.43188366 130 brendan oconnor ai-2008-12-18-Information cost and genocide

8 0.43140385 6 brendan oconnor ai-2005-06-25-idea: Morals are heuristics for socially optimal behavior

9 0.42365295 73 brendan oconnor ai-2007-08-05-Are ideas interesting, or are they true?

10 0.42106119 84 brendan oconnor ai-2007-11-26-How did Freud become a respected humanist?!

11 0.417853 65 brendan oconnor ai-2007-06-17-"Time will tell, epistemology won’t"

12 0.40972492 62 brendan oconnor ai-2007-05-29-"Stanford Impostor"

13 0.40723389 87 brendan oconnor ai-2007-12-26-What is experimental philosophy?

14 0.40512928 3 brendan oconnor ai-2004-12-02-go science

15 0.40043184 61 brendan oconnor ai-2007-05-24-Rock Paper Scissors psychology

16 0.39983475 198 brendan oconnor ai-2013-08-20-Some analysis of tweet shares and “predicting” election outcomes

17 0.39435798 131 brendan oconnor ai-2008-12-27-Facebook sentiment mining predicts presidential polls

18 0.37212294 129 brendan oconnor ai-2008-12-03-Statistics vs. Machine Learning, fight!

19 0.36990151 77 brendan oconnor ai-2007-09-15-Dollar auction

20 0.36114493 133 brendan oconnor ai-2009-01-23-SF conference for data mining mercenaries


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(8, 0.435), (16, 0.017), (24, 0.021), (28, 0.013), (43, 0.027), (44, 0.103), (48, 0.038), (50, 0.029), (55, 0.02), (57, 0.01), (59, 0.023), (64, 0.013), (70, 0.039), (74, 0.142)]
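Likewise for the “lda model”: each post presumably gets a sparse topic distribution, the (topicId, topicWeight) pairs above, and posts are compared by how much their topic mixtures overlap. A sketch under that assumption, again with a made-up corpus:

```python
# Assumed LDA pipeline: raw counts, then a topic mixture per post.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = ["netflix prize movie recommendation svd cinematch",
        "movie plot summaries character personas",
        "statistics machine learning glossary"]
X = CountVectorizer().fit_transform(docs)  # LDA expects counts, not tf-idf
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(X)       # each row: topicId -> topicWeight
print(cosine_similarity(theta)[0]) # post 0 vs. every post
```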

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.87955946 125 brendan oconnor ai-2008-11-21-Netflix Prize


2 0.36847672 129 brendan oconnor ai-2008-12-03-Statistics vs. Machine Learning, fight!


3 0.35106069 188 brendan oconnor ai-2012-10-02-Powerset’s natural language search system

Introduction: There’s a lot to say about Powerset , the short-lived natural language search company (2005-2008) where I worked after college. AI overhype, flying too close to the sun, the psychology of tech journalism and venture capitalism, etc. A year or two ago I wrote the following bit about Powerset’s technology in response to a question on Quora . I’m posting a revised version here. Question: What was Powerset’s core innovation in search? As far as I can tell, they licensed an NLP engine. They did not have a question answering system or any system for information extraction. How was Powerset’s search engine different than Google’s? My answer: Powerset built a system vaguely like a question-answering system on top of Xerox PARC’s NLP engine. The output is better described as query-focused summarization rather than question answering; primarily, it matched semantic fragments of the user query against indexed semantic relations, with lots of keyword/ngram-matching fallback for when

4 0.34738123 203 brendan oconnor ai-2014-02-19-What the ACL-2014 review scores mean

Introduction: I’ve had several people ask me what the numbers in ACL reviews mean — and I can’t find anywhere online where they’re described. (Can anyone point this out if it is somewhere?) So here’s the review form, below. They all go from 1 to 5, with 5 the best. I think the review emails to authors only include a subset of the below — for example, “Overall Recommendation” is not included? The CFP said that they have different types of review forms for different types of papers. I think this one is for a standard full paper. I guess what people really want to know is what scores tend to correspond to acceptances. I really have no idea and I get the impression this can change year to year. I have no involvement with the ACL conference besides being one of many, many reviewers. APPROPRIATENESS (1-5) Does the paper fit in ACL 2014? (Please answer this question in light of the desire to broaden the scope of the research areas represented at ACL.) 5: Certainly. 4: Probably

5 0.34641427 123 brendan oconnor ai-2008-11-12-Disease tracking with web queries and social messaging (Google, Twitter, Facebook…)

Introduction: This is a good idea: in a search engine’s query logs, look for outbreaks of queries like [[flu symptoms]] in a given region. I’ve heard (from Roddy) that this trick also works well on Facebook statuses (e.g. “Feeling crappy this morning, think I just got the flu”). Google Uses Web Searches to Track Flu’s Spread – NYTimes.com Google Flu Trends – google.org For an example with a publicly available data feed, these queries work decently well on Twitter search: [[ flu -shot -google ]] (high recall) [[ "muscle aches" flu -shot ]] (high precision) The “muscle aches” query is too sparse and the general query is too noisy, but you could imagine some more tricks to clean it up, then train a classifier, etc. With a bit more work it looks like geolocation information can be had out of the Twitter search API.

6 0.34507328 150 brendan oconnor ai-2009-08-08-Haghighi and Klein (2009): Simple Coreference Resolution with Rich Syntactic and Semantic Features

7 0.34464803 53 brendan oconnor ai-2007-03-15-Feminists, anarchists, computational complexity, bounded rationality, nethack, and other things to do

8 0.3368122 138 brendan oconnor ai-2009-04-17-1 billion web page dataset from CMU

9 0.3329646 184 brendan oconnor ai-2012-07-04-The $60,000 cat: deep belief networks make less sense for language than vision

10 0.33287781 2 brendan oconnor ai-2004-11-24-addiction & 2 problems of economics

11 0.32977417 86 brendan oconnor ai-2007-12-20-Data-driven charity

12 0.32964677 63 brendan oconnor ai-2007-06-10-Freak-Freakonomics (Ariel Rubinstein is the shit!)

13 0.32939658 26 brendan oconnor ai-2005-09-02-cognitive modelling is rational choice++

14 0.32705396 200 brendan oconnor ai-2013-09-13-Response on our movie personas paper

15 0.32186493 105 brendan oconnor ai-2008-06-05-Clinton-Obama support visualization

16 0.32011396 6 brendan oconnor ai-2005-06-25-idea: Morals are heuristics for socially optimal behavior

17 0.31923822 77 brendan oconnor ai-2007-09-15-Dollar auction

18 0.31805933 198 brendan oconnor ai-2013-08-20-Some analysis of tweet shares and “predicting” election outcomes

19 0.31661937 179 brendan oconnor ai-2012-02-02-Histograms — matplotlib vs. R

20 0.31476951 140 brendan oconnor ai-2009-05-18-Announcing TweetMotif for summarizing twitter topics