brendan_oconnor_ai-2010 knowledge-graph by maker-knowledge-mining

brendan_oconnor_ai 2010 knowledge graph


Similar blogs computed by the TF-IDF model


Similar blogs computed by the LSI model


Similar blogs computed by the LDA model
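The similarity lists above are empty in this snapshot. As a rough illustration only, here is a minimal sketch of how per-model similarity rankings like these are typically computed with gensim; the toy post texts, topic counts, and variable names below are made up for the example, and the repository's actual pipeline is not shown here.

# Minimal sketch: ranking similar posts under TF-IDF, LSI, and LDA (gensim).
# `posts` stands in for already-tokenized blog posts; everything here is illustrative.
from gensim import corpora, models, similarities

posts = [
    "greenspan emotion economic forecasting daily show".split(),
    "graphical models ising mrf hammersley clifford logic proof".split(),
    "dependency parsing stanford conll malt mstparser formalism".split(),
]

dictionary = corpora.Dictionary(posts)
bow = [dictionary.doc2bow(p) for p in posts]

tfidf = models.TfidfModel(bow)                                    # term re-weighting
lsi = models.LsiModel(tfidf[bow], id2word=dictionary, num_topics=2)
lda = models.LdaModel(bow, id2word=dictionary, num_topics=2)

# Cosine similarity of every post against every other, once per model.
for name, transformed in [("tfidf", tfidf[bow]),
                          ("lsi", lsi[tfidf[bow]]),
                          ("lda", lda[bow])]:
    index = similarities.MatrixSimilarity(transformed)
    for i, sims in enumerate(index):
        ranked = sorted(enumerate(sims), key=lambda s: -s[1])
        print(name, i, ranked[:3])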


Blogs list:

1 brendan oconnor ai-2010-11-09-Greenspan on the Daily Show

Introduction: I love this Daily Show clip with Alan Greenspan, on emotion and economic forecasting, from 2007. GREENSPAN: I’ve been dealing with these big mathematical models of forecasting the economy, and I’m looking at what’s going on in the last few weeks. … If I could figure out a way to determine whether or not people are more fearful or changing to more euphoric … I don’t need any of this other stuff. I could forecast the economy better than any way I know. The trouble is that we can’t figure that out. I’ve been in the forecasting business for 50 years. … I’m no better than I ever was, and nobody else is. Forecasting 50 years ago was as good or as bad as it is today. And the reason is that human nature hasn’t changed. We can’t improve ourselves. STEWART: You just bummed the [bleep] out of me. I’ve seen it in two separate talks now, from Peter Dodds (co-author of “Measuring the Happiness of Large-Scale Written Expression”) and Eric Gilbert (co-author of “Widespre …
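The talks referenced here measure exactly the kind of fear-versus-euphoria signal Greenspan wants, by scoring text against a word-valence lexicon. A minimal sketch of that idea follows; the valence numbers are made up for illustration (the cited happiness-measurement work uses the ANEW lexicon, rated on a 1-9 scale, over far larger corpora).

# Minimal sketch of lexicon-based mood scoring, in the spirit of the
# happiness-measurement papers mentioned above. The valence scores here are
# invented for illustration; real lexicons (e.g. ANEW) rate words from 1 to 9.
valence = {"fear": 2.8, "crash": 2.5, "worried": 3.0, "happy": 8.2, "rally": 6.5}

def mood_score(text):
    """Average valence over the scored words in `text`; None if nothing matches."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [valence[w] for w in words if w in valence]
    return sum(hits) / len(hits) if hits else None

print(mood_score("Markets rally and traders are happy"))   # leans euphoric
print(mood_score("Worried about a crash, lots of fear"))   # leans fearful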

2 brendan oconnor ai-2010-08-09-An ML-AI approach to P != NP

Introduction: Like everyone, I’ve just been starting to look at the new, tentative proof that P != NP from Vinay Deolalikar. After reading the intro, what’s most striking is that probabilistic graphical models and mathematical logic are at the core of the proof. This feels like a machine learning and artificial intelligence-centric approach to me — very different from what you usually see in mainstream CS theory. (Maybe I should feel good that in my undergrad I basically stopped studying normal math and spent all my time with this weird stuff instead!) He devotes several chapters to an introduction to graphical models — Ising models, conditional independence, MRFs, Hammersley-Clifford, and all that other stuff you see in Koller and Friedman or something — and then logic and model theory! I’m impressed.
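For reference, the Hammersley-Clifford theorem name-dropped above is the standard result connecting the Markov properties of an undirected graphical model to a factorized distribution: roughly, any strictly positive distribution that is Markov with respect to an undirected graph G factorizes over the cliques of G,

p(x) = \frac{1}{Z} \prod_{C \in \mathcal{C}(G)} \psi_C(x_C),
\qquad
Z = \sum_{x} \prod_{C \in \mathcal{C}(G)} \psi_C(x_C)

where \mathcal{C}(G) is the set of cliques of G and each \psi_C is a strictly positive potential function; the Ising models mentioned above are the special case of binary variables with pairwise potentials.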

3 brendan oconnor ai-2010-04-22-Updates: CMU, Facebook

Introduction: It’s been a good year. Last fall I started a master’s program in the Language Technologies department at CMU SCS, taking some great classes, hanging out with a cool lab, and writing two new papers (for ICWSM, involving Twitter: polls and tweetmotif; also did some coref work, financial text regression stuff, and looked at social lexicography). I also applied to CS and stats PhD programs at several universities. Next year I’ll be starting the PhD program in the Machine Learning Department here at CMU. I’m excited! Just the other day I was looking at videos on my old hard drive and found a presentation by Tom Mitchell on “the Discipline of Machine Learning” that I downloaded back in 2007 or so. (Can’t find it online right now, but this is similar.) That might be where I heard of the department first. Maybe some day I will be smarter than the guy who wrote this rant (though I am much more pro-stats and anti-ML these days…). Also, I was recently named a fina …

4 brendan oconnor ai-2010-04-14-quick note: cer et al 2010

Introduction: Quick note, reading this paper from their tweet. Update: this reaction might be totally wrong; in particular, the CoNLL dependencies for at least some languages were done completely by hand. Malt and MSTParser were designed for the Yamada and Matsumoto dependency formalism (the one used for the CoNLL dependency parsing shared task, from the penn2malt tool). Their feature sets and probably many other design decisions were created to support that. If you compare their outputs side-by-side, you will see that the Stanford Dependencies are a substantially different formalism; for example, compound verbs are handled very differently (the paper talks about a copula example). I think the following conclusion is premature: Notwithstanding the very large amount of research that has gone into dependency parsing algorithms in the last five years, our central conclusion is that the quality of the Charniak, Charniak-Johnson reranking, and Berkeley parsers is so high that in th …
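To make the formalism gap concrete, here is a toy, hand-written comparison (not parser output) of how the two schemes typically attach a copula; the heads and labels follow my reading of penn2malt-style CoNLL trees and basic Stanford Dependencies, so treat the details as illustrative.

# Toy illustration: dependency analyses of "This is an example" under the two
# formalisms discussed above. Hand-written, not parser output.
# Each entry is (dependent, head, label); "ROOT" marks the sentence head.

conll_style = [            # penn2malt / Yamada-Matsumoto style: the verb heads the clause
    ("This", "is", "SBJ"),
    ("is", "ROOT", "ROOT"),
    ("an", "example", "NMOD"),
    ("example", "is", "PRD"),
]

stanford_basic = [         # basic Stanford Dependencies: the predicate nominal is the head
    ("This", "example", "nsubj"),
    ("is", "example", "cop"),
    ("an", "example", "det"),
    ("example", "ROOT", "root"),
]

# Head attachments mostly disagree, so attachment scores measured against one
# formalism will penalize a parser built for the other.
agree = sum(a[1] == b[1] for a, b in zip(conll_style, stanford_basic))
print(f"{agree} of {len(conll_style)} heads agree")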

5 brendan oconnor ai-2010-03-31-How Facebook privacy failed me

Introduction: At some point, I put extra email addresses on Facebook because I thought it was necessary for something, but didn’t want to show them, so in the privacy settings I marked their visibility as “Only Me.” It turns out that right now, Facebook is blatantly ignoring that privacy setting, and instead showing them to the world. Here are my settings: Here is a fragment of my profile, viewed from a friend’s account half an hour ago: I would complain to their customer service, but I can’t find a link from their Help Center page. Obviously, this particular issue is a very minor concern for me. But it hardly instills faith in the system — especially considering that privacy bugs are ones that the affected user, almost by definition, can’t see. I’ve also had other weird issues where changes to privacy settings don’t seem to stick when I save and then later go back to the page. It’s annoying and hard to verify these things — which is why important “social utilities” like Facebo …