nips nips2008 nips2008-105 knowledge-graph by maker-knowledge-mining

105 nips-2008-Improving on Expectation Propagation


Source: pdf

Author: Manfred Opper, Ulrich Paquet, Ole Winther

Abstract: A series of corrections is developed for the fixed points of Expectation Propagation (EP), which is one of the most popular methods for approximate probabilistic inference. These corrections can lead to improvements of the inference approximation or serve as a sanity check, indicating when EP yields unreliable results.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract A series of corrections is developed for the fixed points of Expectation Propagation (EP), which is one of the most popular methods for approximate probabilistic inference. [sent-6, score-0.605]

2 These corrections can lead to improvements of the inference approximation or serve as a sanity check, indicating when EP yields unreliable results. [sent-7, score-0.736]

3 1 Introduction The expectation propagation (EP) message passing algorithm is often considered the method of choice for approximate Bayesian inference when both good accuracy and computational efficiency are required [5]. [sent-8, score-0.247]

4 However, while such empirical studies hold great value, they cannot guarantee the same performance on other data sets or when completely different types of Bayesian models are considered. [sent-10, score-0.022]

5 In this paper methods are developed to assess the quality of the EP approximation. [sent-11, score-0.044]

6 We compute explicit expressions for the remainder terms of the approximation. [sent-12, score-0.067]

7 This leads to various corrections for partition functions and posterior distributions. [sent-13, score-0.612]

8 Under the hypothesis that the EP approximation works well, we identify quantities which can be assumed to be small and can be used in a series expansion of the corrections with increasing complexity. [sent-14, score-0.769]

9 The computation of low order corrections in this expansion is often feasible, typically requires only moderate computational effort, and can lead to an improvement of the EP approximation or to an indication that the approximation cannot be trusted. [sent-15, score-0.84]

10 2 Expectation Propagation in a Nutshell Since it is the goal of this paper to compute corrections to the EP approximation, we will not discuss details of EP algorithms but rather characterise the fixed points which are reached when such algorithms converge. [sent-16, score-0.526]

11 EP is applied to probabilistic models with an unobserved latent variable x having an intractable distribution p(x). [sent-17, score-0.098]

12 In applications p(x) is usually the Bayesian posterior distribution conditioned on a set of observations. [sent-18, score-0.054]

13 Since the dependency on the latter variables is not important for the subsequent theory, we will skip them in our notation. [sent-19, score-0.071]

14 It is assumed that p(x) factorizes into a product of terms f_n such that p(x) = \frac{1}{Z} \prod_n f_n(x), (1) where the normalising partition function Z = \int dx \prod_n f_n(x) is also intractable. [sent-20, score-1.29]

15 We then assume an approximation to p(x) in the form q(x) = \prod_n g_n(x), (2) where the terms g_n(x) belong to a tractable, e.g. exponential, family of distributions. [sent-21, score-0.535]
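As a concrete instance of the factorisation (1) and the site approximation (2), one standard setting (an illustration; the specific model is not spelled out in the extracted sentences) is Gaussian process classification, where a Gaussian prior factor is combined with non-Gaussian likelihood factors and every site term g_n is chosen Gaussian:

```latex
% Hypothetical instance of (1)-(2): GP classification with probit likelihoods.
% The prior factor f_0 is already Gaussian, the likelihood sites f_n are not,
% and choosing every g_n Gaussian makes q(x) a tractable Gaussian.
p(\mathbf{x}) \;=\; \frac{1}{Z}\,
  \underbrace{\mathcal{N}(\mathbf{x};\,\mathbf{0},\,\mathbf{K})}_{f_0(\mathbf{x})}
  \prod_{n=1}^{N} \underbrace{\Phi\!\left(y_n x_n\right)}_{f_n(\mathbf{x})},
\qquad
q(\mathbf{x}) \;=\; \prod_{n=0}^{N} g_n(\mathbf{x}),
\quad g_n \ \text{Gaussian}.
```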

16 To compute the optimal parameters of the g_n term approximation, a set of auxiliary tilted distributions is defined via q_n(x) = \frac{1}{Z_n} \frac{q(x) f_n(x)}{g_n(x)}. (3) [sent-24, score-0.775]

17 Here a single approximating term g_n is replaced by an original term f_n. [sent-25, score-0.825]

18 Assuming that this replacement leaves q_n still tractable, the parameters in g_n are determined by the condition that q(x) and all q_n(x) should be made as similar as possible. [sent-26, score-1.039]

19 This is usually achieved by requiring that these distributions share a set of generalised moments (which usually coincide with the sufficient statistics of the exponential family). [sent-27, score-0.267]
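Spelled out (a standard form of this moment-matching condition, added here because the extracted sentence only alludes to it): when the g_n lie in an exponential family with sufficient statistics \phi(x), the fixed point requires the tilted and approximating distributions to agree on those statistics.

```latex
% Expectation-consistency / moment-matching conditions at an EP fixed point.
% The sufficient statistics depend on the chosen family; for a Gaussian q
% they are the first and second moments.
\langle \phi(\mathbf{x}) \rangle_{q_n} \;=\; \langle \phi(\mathbf{x}) \rangle_{q}
\quad \text{for all } n,
\qquad \text{e.g. } \phi(\mathbf{x}) \;=\; \bigl(\mathbf{x},\ \mathbf{x}\mathbf{x}^{\top}\bigr).
```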

20 Note that we will not assume that this expectation consistency [8] for the moments is derived by minimising a Kullback–Leibler divergence, as was done in the original derivations of EP [5]. [sent-28, score-0.195]

21 Such an assumption would limit the applicability of the approximate inference and exclude, e.g. [sent-29, score-0.114]

22 the approximation of models with binary, Ising variables by a Gaussian model as in one of the applications in the last section. [sent-31, score-0.091]

23 The corresponding approximation to the normalising partition function in (1) was given in [8] and [7] and reads in our present notation¹ Z_{EP} = \prod_n Z_n. (4) [sent-32, score-0.338]

24 3 Corrections to EP An expression for the remainder terms which are neglected by the EP approximation can be obtained by solving for f_n in (3), and taking the product to get \prod_n f_n(x) = \prod_n \frac{Z_n\, q_n(x)\, g_n(x)}{q(x)} = Z_{EP}\, q(x) \prod_n \frac{q_n(x)}{q(x)}. [sent-33, score-1.562]

25 Hence Z = \int dx \prod_n f_n(x) = Z_{EP} R, (5) with R = \int dx\, q(x) \prod_n \frac{q_n(x)}{q(x)} and p(x) = \frac{1}{R}\, q(x) \prod_n \frac{q_n(x)}{q(x)}. (6) [sent-34, score-1.094]
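The identity Z = Z_{EP} R in (5)-(6) holds for any normalised q(x) = \prod_n g_n(x) with well-defined tilted distributions, not only at an EP fixed point, so it can be sanity-checked numerically by brute force. Below is a minimal sketch, assuming a one-dimensional toy model with a Gaussian prior and two sigmoidal factors and crude Gaussian site terms; the factor and site choices are illustrative and are not the authors' construction.

```python
# Minimal numerical check of Z = Z_EP * R from (5)-(6) on a 1-D toy model,
# using grid integration.  All model and site choices below are hypothetical.
import numpy as np

# Grid over the latent variable x.
x = np.linspace(-12.0, 12.0, 20001)
dx = x[1] - x[0]

def integrate(fx):
    """Trapezoidal integral of fx over the grid."""
    return float(np.sum(0.5 * (fx[1:] + fx[:-1]) * dx))

# Intractable target p(x) = (1/Z) * prod_n f_n(x): a Gaussian prior times
# two sigmoidal "likelihood" factors.
f = [
    np.exp(-0.5 * x**2 / 2.0**2),            # f_1: unnormalised Gaussian prior
    1.0 / (1.0 + np.exp(-(2.0 * x + 1.0))),  # f_2: sigmoid factor
    1.0 / (1.0 + np.exp(-(1.5 - x))),        # f_3: sigmoid factor
]
Z_true = integrate(np.prod(f, axis=0))

# Crude unnormalised Gaussian site terms, obtained by moment-matching each
# factor against a broad reference density.  Any positive choice makes the
# identity hold; EP would tune these so that R ends up close to 1.
ref = np.exp(-0.5 * x**2 / 4.0**2)
g_tilde = []
for fn in f:
    w = fn * ref
    m = integrate(w * x) / integrate(w)
    v = integrate(w * (x - m) ** 2) / integrate(w)
    g_tilde.append((integrate(fn * ref) / integrate(ref))
                   * np.exp(-0.5 * (x - m) ** 2 / v))

# Rescale the sites so that q(x) = prod_n g_n(x) is normalised, as in (2).
C = integrate(np.prod(g_tilde, axis=0))
g = [gt / C ** (1.0 / len(g_tilde)) for gt in g_tilde]
q = np.prod(g, axis=0)

# Tilted distributions (3), their normalisers Z_n, and Z_EP = prod_n Z_n (4).
Z_n = [integrate(q * fn / gn) for fn, gn in zip(f, g)]
q_n = [q * fn / (Zn * gn) for fn, gn, Zn in zip(f, g, Z_n)]
Z_EP = float(np.prod(Z_n))

# Correction factor R from (6): R = int dx q(x) prod_n q_n(x)/q(x).
R = integrate(q * np.prod([qn / q for qn in q_n], axis=0))

print(f"Z (exact)  = {Z_true:.6f}")
print(f"Z_EP       = {Z_EP:.6f}")
print(f"Z_EP * R   = {Z_EP * R:.6f}  (matches Z up to grid error)")
print(f"R          = {R:.6f}  (approaches 1 as each q_n approaches q)")
```

With EP-tuned sites the printed R moves towards 1, which is precisely the regime in which the low-order corrections of the following expansion are expected to be useful.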

26 This shows that corrections to EP are small when all distributions q_n are indeed close to q, justifying the optimality criterion of EP. [sent-35, score-0.95]

27 Exact probabilistic inference with the corrections described here again leads to intractable computations. [sent-37, score-0.603]

28 However, we can derive exact perturbation expansions involving a series of corrections with increasing computational complexity. [sent-38, score-0.756]

29 Assuming that EP already yields a good approximation, the computation of a small number of these terms may be sufficient to obtain the most dominant corrections. [sent-39, score-0.089]

30 On the other hand, when the leading corrections come out large or do not sufficiently decrease with order, this may indicate that the EP approximation is inaccurate. [sent-40, score-0.612]

31 Two such perturbation expansions are presented in this section. [sent-41, score-0.158]

32 1 The definition of partition functions Z_n is slightly different from previous works. [sent-42, score-0.09]

33 3.1 Expansion I: Clusters The most basic expansion is based on the variables \varepsilon_n(x) = \frac{q_n(x)}{q(x)} - 1, which we can assume to be typically small when the EP approximation is good. [sent-44, score-0.194]

34 Expanding the products in (6) we obtain the correction to the partition function R = \int dx\, q(x) \prod_n \bigl(1 + \varepsilon_n(x)\bigr) (7) = 1 + \sum_{n_1 < n_2} \langle \varepsilon_{n_1}(x)\, \varepsilon_{n_2}(x) \rangle_q + \sum_{n_1 < n_2 < n_3} \langle \varepsilon_{n_1}(x)\, \varepsilon_{n_2}(x)\, \varepsilon_{n_3}(x) \rangle_q + \ldots [sent-45, score-0.235]
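One step that (7) leaves implicit (a standard observation, filled in here): the first-order terms of the expansion vanish because each q_n and q are normalised, so R - 1 starts with pair correlations of the \varepsilon_n under q.

```latex
% Why the linear terms drop out of (7): every epsilon_n has zero mean under q.
\langle \varepsilon_n(x) \rangle_q
  \;=\; \int \! dx\, q(x)\left(\frac{q_n(x)}{q(x)} - 1\right)
  \;=\; \int \! dx\, q_n(x) \;-\; \int \! dx\, q(x) \;=\; 0 .
```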


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ep', 0.504), ('corrections', 0.5), ('qn', 0.364), ('fn', 0.273), ('gn', 0.257), ('zep', 0.17), ('zn', 0.128), ('normalising', 0.114), ('ulrich', 0.114), ('expansions', 0.103), ('dx', 0.093), ('approximation', 0.091), ('partition', 0.09), ('propagation', 0.082), ('expansion', 0.08), ('moments', 0.066), ('expectation', 0.057), ('perturbation', 0.055), ('sanity', 0.05), ('skip', 0.05), ('denmark', 0.045), ('manfred', 0.045), ('opper', 0.045), ('kullback', 0.045), ('justifying', 0.045), ('leibler', 0.045), ('qq', 0.045), ('intractable', 0.044), ('reads', 0.043), ('neglected', 0.043), ('factorizes', 0.043), ('minimising', 0.043), ('remainder', 0.041), ('maybe', 0.04), ('tractable', 0.04), ('ising', 0.038), ('generalised', 0.038), ('series', 0.035), ('coincide', 0.034), ('exclude', 0.034), ('tu', 0.034), ('inference', 0.033), ('informatics', 0.032), ('usually', 0.032), ('remarkably', 0.031), ('replacement', 0.031), ('correction', 0.031), ('efforts', 0.03), ('expanding', 0.029), ('moderate', 0.029), ('derivations', 0.029), ('unobserved', 0.028), ('family', 0.028), ('dominant', 0.028), ('berlin', 0.028), ('gp', 0.027), ('passing', 0.027), ('message', 0.026), ('probabilistic', 0.026), ('reached', 0.026), ('expressions', 0.026), ('indication', 0.026), ('applicability', 0.025), ('bayesian', 0.025), ('extensive', 0.025), ('harder', 0.025), ('exponential', 0.024), ('increasing', 0.023), ('laboratory', 0.023), ('typically', 0.023), ('leaves', 0.023), ('mcmc', 0.023), ('auxiliary', 0.023), ('developed', 0.022), ('assuming', 0.022), ('posterior', 0.022), ('great', 0.022), ('modelling', 0.022), ('approximate', 0.022), ('serve', 0.022), ('assess', 0.022), ('exact', 0.021), ('check', 0.021), ('yields', 0.021), ('come', 0.021), ('dependency', 0.021), ('distributions', 0.021), ('products', 0.021), ('belong', 0.021), ('product', 0.02), ('requiring', 0.02), ('optimality', 0.02), ('feasible', 0.02), ('assumed', 0.02), ('quantities', 0.02), ('improvements', 0.019), ('involving', 0.019), ('term', 0.019), ('divergence', 0.019), ('improving', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 105 nips-2008-Improving on Expectation Propagation

Author: Manfred Opper, Ulrich Paquet, Ole Winther

Abstract: A series of corrections is developed for the fixed points of Expectation Propagation (EP), which is one of the most popular methods for approximate probabilistic inference. These corrections can lead to improvements of the inference approximation or serve as a sanity check, indicating when EP yields unreliable results.

2 0.10231685 216 nips-2008-Sparse probabilistic projections

Author: Cédric Archambeau, Francis R. Bach

Abstract: We present a generative model for performing sparse probabilistic projections, which includes sparse principal component analysis and sparse canonical correlation analysis as special cases. Sparsity is enforced by means of automatic relevance determination or by imposing appropriate prior distributions, such as generalised hyperbolic distributions. We derive a variational Expectation-Maximisation algorithm for the estimation of the hyperparameters and show that our novel probabilistic approach compares favourably to existing techniques. We illustrate how the proposed method can be applied in the context of cryptanalysis as a preprocessing tool for the construction of template attacks. 1

3 0.079444595 12 nips-2008-Accelerating Bayesian Inference over Nonlinear Differential Equations with Gaussian Processes

Author: Ben Calderhead, Mark Girolami, Neil D. Lawrence

Abstract: Identification and comparison of nonlinear dynamical system models using noisy and sparse experimental data is a vital task in many fields, however current methods are computationally expensive and prone to error due in part to the nonlinear nature of the likelihood surfaces induced. We present an accelerated sampling procedure which enables Bayesian inference of parameters in nonlinear ordinary and delay differential equations via the novel use of Gaussian processes (GP). Our method involves GP regression over time-series data, and the resulting derivative and time delay estimates make parameter inference possible without solving the dynamical system explicitly, resulting in dramatic savings of computational time. We demonstrate the speed and statistical accuracy of our approach using examples of both ordinary and delay differential equations, and provide a comprehensive comparison with current state of the art methods. 1

4 0.070874311 186 nips-2008-Probabilistic detection of short events, with application to critical care monitoring

Author: Norm Aleks, Stuart Russell, Michael G. Madden, Diane Morabito, Kristan Staudenmayer, Mitchell Cohen, Geoffrey T. Manley

Abstract: We describe an application of probabilistic modeling and inference technology to the problem of analyzing sensor data in the setting of an intensive care unit (ICU). In particular, we consider the arterial-line blood pressure sensor, which is subject to frequent data artifacts that cause false alarms in the ICU and make the raw data almost useless for automated decision making. The problem is complicated by the fact that the sensor data are averaged over fixed intervals whereas the events causing data artifacts may occur at any time and often have durations significantly shorter than the data collection interval. We show that careful modeling of the sensor, combined with a general technique for detecting sub-interval events and estimating their duration, enables detection of artifacts and accurate estimation of the underlying blood pressure values. Our model’s performance identifying artifacts is superior to two other classifiers’ and about as good as a physician’s. 1

5 0.069981053 233 nips-2008-The Gaussian Process Density Sampler

Author: Iain Murray, David MacKay, Ryan P. Adams

Abstract: We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We can also infer the hyperparameters of the Gaussian process. We compare this density modeling technique to several existing techniques on a toy problem and a skull-reconstruction task. 1

6 0.064279221 34 nips-2008-Bayesian Network Score Approximation using a Metagraph Kernel

7 0.054758083 176 nips-2008-Partially Observed Maximum Entropy Discrimination Markov Networks

8 0.049449503 89 nips-2008-Gates

9 0.04677945 71 nips-2008-Efficient Sampling for Gaussian Process Inference using Control Variables

10 0.041755054 119 nips-2008-Learning a discriminative hidden part model for human action recognition

11 0.039492182 218 nips-2008-Spectral Clustering with Perturbed Data

12 0.038967725 91 nips-2008-Generative and Discriminative Learning with Unknown Labeling Bias

13 0.038389377 156 nips-2008-Nonparametric sparse hierarchical models describe V1 fMRI responses to natural images

14 0.037596282 213 nips-2008-Sparse Convolved Gaussian Processes for Multi-output Regression

15 0.036051553 184 nips-2008-Predictive Indexing for Fast Search

16 0.035894666 73 nips-2008-Estimating Robust Query Models with Convex Optimization

17 0.035805684 129 nips-2008-MAS: a multiplicative approximation scheme for probabilistic inference

18 0.034741212 2 nips-2008-A Convex Upper Bound on the Log-Partition Function for Binary Distributions

19 0.032236107 116 nips-2008-Learning Hybrid Models for Image Annotation with Partially Labeled Data

20 0.032235228 50 nips-2008-Continuously-adaptive discretization for message-passing algorithms


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.084), (1, -0.008), (2, 0.016), (3, 0.022), (4, 0.039), (5, -0.033), (6, 0.015), (7, 0.079), (8, 0.013), (9, 0.0), (10, -0.008), (11, 0.021), (12, 0.109), (13, -0.009), (14, 0.012), (15, -0.021), (16, -0.023), (17, 0.053), (18, 0.097), (19, -0.022), (20, 0.041), (21, -0.059), (22, -0.008), (23, -0.03), (24, -0.009), (25, -0.025), (26, 0.091), (27, -0.02), (28, -0.023), (29, -0.012), (30, -0.05), (31, -0.089), (32, -0.009), (33, 0.037), (34, 0.008), (35, 0.015), (36, -0.012), (37, 0.061), (38, 0.022), (39, 0.077), (40, 0.024), (41, 0.007), (42, -0.052), (43, -0.006), (44, 0.16), (45, 0.044), (46, 0.101), (47, 0.219), (48, -0.089), (49, 0.139)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96001595 105 nips-2008-Improving on Expectation Propagation

Author: Manfred Opper, Ulrich Paquet, Ole Winther

Abstract: A series of corrections is developed for the fixed points of Expectation Propagation (EP), which is one of the most popular methods for approximate probabilistic inference. These corrections can lead to improvements of the inference approximation or serve as a sanity check, indicating when EP yields unreliable results.

2 0.59226424 186 nips-2008-Probabilistic detection of short events, with application to critical care monitoring

Author: Norm Aleks, Stuart Russell, Michael G. Madden, Diane Morabito, Kristan Staudenmayer, Mitchell Cohen, Geoffrey T. Manley

Abstract: We describe an application of probabilistic modeling and inference technology to the problem of analyzing sensor data in the setting of an intensive care unit (ICU). In particular, we consider the arterial-line blood pressure sensor, which is subject to frequent data artifacts that cause false alarms in the ICU and make the raw data almost useless for automated decision making. The problem is complicated by the fact that the sensor data are averaged over fixed intervals whereas the events causing data artifacts may occur at any time and often have durations significantly shorter than the data collection interval. We show that careful modeling of the sensor, combined with a general technique for detecting sub-interval events and estimating their duration, enables detection of artifacts and accurate estimation of the underlying blood pressure values. Our model’s performance identifying artifacts is superior to two other classifiers’ and about as good as a physician’s. 1

3 0.52750993 216 nips-2008-Sparse probabilistic projections

Author: Cédric Archambeau, Francis R. Bach

Abstract: We present a generative model for performing sparse probabilistic projections, which includes sparse principal component analysis and sparse canonical correlation analysis as special cases. Sparsity is enforced by means of automatic relevance determination or by imposing appropriate prior distributions, such as generalised hyperbolic distributions. We derive a variational Expectation-Maximisation algorithm for the estimation of the hyperparameters and show that our novel probabilistic approach compares favourably to existing techniques. We illustrate how the proposed method can be applied in the context of cryptanalysis as a preprocessing tool for the construction of template attacks. 1

4 0.51779431 233 nips-2008-The Gaussian Process Density Sampler

Author: Iain Murray, David MacKay, Ryan P. Adams

Abstract: We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We can also infer the hyperparameters of the Gaussian process. We compare this density modeling technique to several existing techniques on a toy problem and a skull-reconstruction task. 1

5 0.46466038 129 nips-2008-MAS: a multiplicative approximation scheme for probabilistic inference

Author: Ydo Wexler, Christopher Meek

Abstract: We propose a multiplicative approximation scheme (MAS) for inference problems in graphical models, which can be applied to various inference algorithms. The method uses ε-decompositions which decompose functions used throughout the inference procedure into functions over smaller sets of variables with a known error ε. MAS translates these local approximations into bounds on the accuracy of the results. We show how to optimize ε-decompositions and provide a fast closed-form solution for an L2 approximation. Applying MAS to the Variable Elimination inference algorithm, we introduce an algorithm we call DynaDecomp which is extremely fast in practice and provides guaranteed error bounds on the result. The superior accuracy and efficiency of DynaDecomp is demonstrated. 1

6 0.42391008 176 nips-2008-Partially Observed Maximum Entropy Discrimination Markov Networks

7 0.42234099 31 nips-2008-Bayesian Exponential Family PCA

8 0.38090679 82 nips-2008-Fast Computation of Posterior Mode in Multi-Level Hierarchical Models

9 0.3693237 12 nips-2008-Accelerating Bayesian Inference over Nonlinear Differential Equations with Gaussian Processes

10 0.36346829 221 nips-2008-Stochastic Relational Models for Large-scale Dyadic Data using MCMC

11 0.35471886 213 nips-2008-Sparse Convolved Gaussian Processes for Multi-output Regression

12 0.34422439 249 nips-2008-Variational Mixture of Gaussian Process Experts

13 0.33267319 32 nips-2008-Bayesian Kernel Shaping for Learning Control

14 0.32566768 71 nips-2008-Efficient Sampling for Gaussian Process Inference using Control Variables

15 0.31647086 50 nips-2008-Continuously-adaptive discretization for message-passing algorithms

16 0.30470514 138 nips-2008-Modeling human function learning with Gaussian processes

17 0.30448854 69 nips-2008-Efficient Exact Inference in Planar Ising Models

18 0.28848463 83 nips-2008-Fast High-dimensional Kernel Summations Using the Monte Carlo Multipole Method

19 0.2809163 13 nips-2008-Adapting to a Market Shock: Optimal Sequential Market-Making

20 0.27817798 98 nips-2008-Hierarchical Semi-Markov Conditional Random Fields for Recursive Sequential Data


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(6, 0.046), (7, 0.055), (12, 0.029), (28, 0.152), (57, 0.063), (63, 0.028), (64, 0.436), (77, 0.049), (78, 0.011), (83, 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.71955907 105 nips-2008-Improving on Expectation Propagation

Author: Manfred Opper, Ulrich Paquet, Ole Winther

Abstract: A series of corrections is developed for the fixed points of Expectation Propagation (EP), which is one of the most popular methods for approximate probabilistic inference. These corrections can lead to improvements of the inference approximation or serve as a sanity check, indicating when EP yields unreliable results.

2 0.63764983 108 nips-2008-Integrating Locally Learned Causal Structures with Overlapping Variables

Author: David Danks, Clark Glymour, Robert E. Tillman

Abstract: In many domains, data are distributed among datasets that share only some variables; other recorded variables may occur in only one dataset. While there are asymptotically correct, informative algorithms for discovering causal relationships from a single dataset, even with missing values and hidden variables, there have been no such reliable procedures for distributed data with overlapping variables. We present a novel, asymptotically correct procedure that discovers a minimal equivalence class of causal DAG structures using local independence information from distributed data of this form and evaluate its performance using synthetic and real-world data against causal discovery algorithms for single datasets and applying Structural EM, a heuristic DAG structure learning procedure for data with missing values, to the concatenated data.

3 0.59961575 123 nips-2008-Linear Classification and Selective Sampling Under Low Noise Conditions

Author: Giovanni Cavallanti, Nicolò Cesa-bianchi, Claudio Gentile

Abstract: We provide a new analysis of an efficient margin-based algorithm for selective sampling in classification problems. Using the so-called Tsybakov low noise condition to parametrize the instance distribution, we show bounds on the convergence rate to the Bayes risk of both the fully supervised and the selective sampling versions of the basic algorithm. Our analysis reveals that, excluding logarithmic factors, the average risk of the selective sampler converges to the Bayes risk at rate N^{−(1+α)(2+α)/(2(3+α))} where N denotes the number of queried labels, and α > 0 is the exponent in the low noise condition. For all α > √3 − 1 ≈ 0.73 this convergence rate is asymptotically faster than the rate N^{−(1+α)/(2+α)} achieved by the fully supervised version of the same classifier, which queries all labels, and for α → ∞ the two rates exhibit an exponential gap. Experiments on textual data reveal that simple variants of the proposed selective sampler perform much better than popular and similarly efficient competitors. 1

4 0.58881652 151 nips-2008-Non-parametric Regression Between Manifolds

Author: Florian Steinke, Matthias Hein

Abstract: This paper discusses non-parametric regression between Riemannian manifolds. This learning problem arises frequently in many application areas ranging from signal processing, computer vision, over robotics to computer graphics. We present a new algorithmic scheme for the solution of this general learning problem based on regularized empirical risk minimization. The regularization functional takes into account the geometry of input and output manifold, and we show that it implements a prior which is particularly natural. Moreover, we demonstrate that our algorithm performs well in a difficult surface registration problem. 1

5 0.40496713 86 nips-2008-Finding Latent Causes in Causal Networks: an Efficient Approach Based on Markov Blankets

Author: Jean-Philippe Pellet, André Elisseeff

Abstract: Causal structure-discovery techniques usually assume that all causes of more than one variable are observed. This is the so-called causal sufficiency assumption. In practice, it is untestable, and often violated. In this paper, we present an efficient causal structure-learning algorithm, suited for causally insufficient data. Similar to algorithms such as IC* and FCI, the proposed approach drops the causal sufficiency assumption and learns a structure that indicates (potential) latent causes for pairs of observed variables. Assuming a constant local density of the data-generating graph, our algorithm makes a quadratic number of conditional-independence tests w.r.t. the number of variables. We show with experiments that our algorithm is comparable to the state-of-the-art FCI algorithm in accuracy, while being several orders of magnitude faster on large problems. We conclude that MBCS* makes a new range of causally insufficient problems computationally tractable. Keywords: Graphical Models, Structure Learning, Causal Inference. 1 Introduction: Task Definition & Related Work The statistical definition of causality pioneered by Pearl (2000) and Spirtes et al. (2001) has shed new light on how to detect causation. Central in this approach is the automated detection of cause-effect relationships using observational (i.e., non-experimental) data. This can be a necessary task, as in many situations, performing randomized controlled experiments to unveil causation can be impossible, unethical, or too costly. When the analysis deals with variables that cannot be manipulated, being able to learn from data collected by observing the running system is the only possibility. It turns out that learning the full causal structure of a set of variables is, in its most general form, impossible. If we suppose that the

6 0.39241999 118 nips-2008-Learning Transformational Invariants from Natural Movies

7 0.39187101 138 nips-2008-Modeling human function learning with Gaussian processes

8 0.39048997 200 nips-2008-Robust Kernel Principal Component Analysis

9 0.38902313 216 nips-2008-Sparse probabilistic projections

10 0.38902038 31 nips-2008-Bayesian Exponential Family PCA

11 0.38891923 4 nips-2008-A Scalable Hierarchical Distributed Language Model

12 0.38790137 184 nips-2008-Predictive Indexing for Fast Search

13 0.38773793 231 nips-2008-Temporal Dynamics of Cognitive Control

14 0.3874073 49 nips-2008-Clusters and Coarse Partitions in LP Relaxations

15 0.38706627 129 nips-2008-MAS: a multiplicative approximation scheme for probabilistic inference

16 0.3869822 50 nips-2008-Continuously-adaptive discretization for message-passing algorithms

17 0.38663438 62 nips-2008-Differentiable Sparse Coding

18 0.38592368 197 nips-2008-Relative Performance Guarantees for Approximate Inference in Latent Dirichlet Allocation

19 0.38586733 135 nips-2008-Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of \boldmath$\ell 1$-regularized MLE

20 0.38455546 21 nips-2008-An Homotopy Algorithm for the Lasso with Online Observations