andrew gelman stats, 2010-08-17, post 214: Probability-processing hardware
Source: html
Introduction: Lyric Semiconductor posted:

For over 60 years, computers have been based on digital computing principles. Data is represented as bits (0s and 1s). Boolean logic gates perform operations on these bits. A processor steps through many of these operations serially in order to perform a function. However, today’s most interesting problems are not at all suited to this approach. Here at Lyric Semiconductor, we are redesigning information processing circuits from the ground up to natively process probabilities: from the gate circuits to the processor architecture to the programming language. As a result, many applications that today require a thousand conventional processors will soon run in just one Lyric processor, providing 1,000x efficiencies in cost, power, and size.

Om Malik has some more information, also relating to the team and the business.

The fundamental idea is that computing architectures work deterministically, even though the world is fundamentally stochastic. In a lot of statistical processing, especially in Bayesian statistics, we take the stochastic world and force it into determinism: we simulate the stochastic world by computationally generating deterministic pseudo-random numbers, and we simulate stochastic matching by deterministic likelihood computations. What Lyric could do is bypass this highly inefficient intermediate deterministic step.
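To make the "deterministic intermediate step" concrete, here is a minimal sketch (not Lyric's method, just standard practice) of how a stochastic quantity is computed on conventional hardware: a seeded pseudo-random generator makes the whole "random" simulation a fully deterministic computation. The seed value and the coin-flip example are illustrative assumptions.

```python
import random

# A fixed seed makes the entire "random" sequence deterministic:
# the stochastic world is emulated by deterministic computation.
rng = random.Random(42)  # illustrative seed; any fixed value works

def estimate_heads_prob(p=0.5, flips=10, threshold=7, trials=100_000):
    """Monte Carlo estimate of P(at least `threshold` heads in `flips` fair flips)."""
    hits = 0
    for _ in range(trials):
        heads = sum(rng.random() < p for _ in range(flips))
        if heads >= threshold:
            hits += 1
    return hits / trials

est = estimate_heads_prob()
# The exact answer is sum_{k=7}^{10} C(10,k) / 2^10 ≈ 0.1719
```

Re-running with the same seed reproduces the estimate exactly, which is precisely the determinism Lyric proposes to bypass by computing with probabilities natively.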
They’re also working on a programming language, PSBL (Probability Synthesis for Bayesian Logic), but there are no details. Here is their patent for Analog Logic Automata, indicating applications for images (filtering, recognition, etc.). Hayes points to a US patent indicating that one of the circuits optimizes the sum-product belief propagation algorithm. This type of algorithm is popular in machine learning for various recognition and denoising problems.
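For readers unfamiliar with sum-product belief propagation, here is a minimal sketch of the message-passing recursion on a toy chain of three binary variables. The node and edge potentials are made-up illustrative values, not anything from the patent; on a tree-structured graph like this chain, the computed marginals are exact.

```python
import numpy as np

# Sum-product on a chain x1 - x2 - x3 of binary variables.
# Joint ∝ phi1(x1) phi2(x2) phi3(x3) psi(x1,x2) psi(x2,x3).
node = [np.array([0.6, 0.4]),   # phi1
        np.array([0.5, 0.5]),   # phi2
        np.array([0.3, 0.7])]   # phi3
edge = np.array([[0.8, 0.2],    # psi, shared by both edges (illustrative)
                 [0.2, 0.8]])

# Forward messages (left to right): fwd[i] arrives at node i+1.
fwd = [None, None]
fwd[0] = edge.T @ node[0]                 # sum over x1
fwd[1] = edge.T @ (node[1] * fwd[0])      # sum over x2

# Backward messages (right to left): bwd[i] arrives at node i.
bwd = [None, None]
bwd[1] = edge @ node[2]                   # sum over x3
bwd[0] = edge @ (node[1] * bwd[1])        # sum over x2

def normalize(v):
    return v / v.sum()

# Marginal at each node = node potential times all incoming messages.
marg = [normalize(node[0] * bwd[0]),
        normalize(node[1] * fwd[0] * bwd[1]),
        normalize(node[2] * fwd[1])]
```

Each message is a small sum of products, which is what makes the algorithm a natural fit for dedicated probability-processing circuits: the hardware can evaluate these local updates in parallel rather than stepping through them serially.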