
26 nips-2007-An online Hebbian learning rule that performs Independent Component Analysis


Source: pdf

Author: Claudia Clopath, André Longtin, Wulfram Gerstner

Abstract: Independent component analysis (ICA) is a powerful method to decouple signals. Most of the algorithms performing ICA do not consider the temporal correlations of the signal, but only higher moments of its amplitude distribution. Moreover, they require some preprocessing of the data (whitening) so as to remove second order correlations. In this paper, we are interested in understanding the neural mechanism responsible for solving ICA. We present an online learning rule that exploits delayed correlations in the input. This rule performs ICA by detecting joint variations in the firing rates of pre- and postsynaptic neurons, similar to a local rate-based Hebbian learning rule. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 An online Hebbian learning rule that performs Independent Component Analysis. Claudia Clopath, School of Computer Science and Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne. [sent-1, score-0.618]

2 Wulfram Gerstner, School of Computer Science and Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne. [sent-4, score-0.298]

3 Abstract: Independent component analysis (ICA) is a powerful method to decouple signals. [sent-6, score-0.187]

4 Most of the algorithms performing ICA do not consider the temporal correlations of the signal, but only higher moments of its amplitude distribution. [sent-7, score-0.591]

5 Moreover, they require some preprocessing of the data (whitening) so as to remove second order correlations. [sent-8, score-0.106]

6 In this paper, we are interested in understanding the neural mechanism responsible for solving ICA. [sent-9, score-0.183]

7 We present an online learning rule that exploits delayed correlations in the input. [sent-10, score-0.644]

8 This rule performs ICA by detecting joint variations in the firing rates of pre- and postsynaptic neurons, similar to a local rate-based Hebbian learning rule. [sent-11, score-0.614]

9 1 Introduction: The so-called cocktail party problem refers to a situation where several sound sources are simultaneously active. [sent-12, score-0.637]

10 The goal is to recover the initial sound sources from the measurement of the mixed signals. [sent-15, score-0.408]

11 A standard method of solving the cocktail party problem is independent component analysis (ICA), which can be performed by a class of powerful algorithms. [sent-16, score-0.481]

12 However, classical algorithms based on higher moments of the signal distribution [1] do not consider temporal correlations, i.e. [sent-17, score-0.393]

13 data points corresponding to different time slices could be shuffled without a change in the results. [sent-19, score-0.062]

14 But time order is important since most natural signal sources have intrinsic temporal correlations that could potentially be exploited. [sent-20, score-0.665]

15 Therefore, some algorithms have been developed to take those temporal correlations into account, e.g. [sent-21, score-0.141]

16 algorithms based on delayed correlations [2, 3, 4, 5] (potentially combined with higher-order statistics [6]), on innovation processes [7], or on complexity pursuit [8]. [sent-23, score-0.601]
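To make the idea behind the delayed-correlation references [2, 3, 4, 5] concrete, here is a minimal batch sketch in the spirit of AMUSE-style methods: whiten the mixtures, then diagonalize a symmetrized time-lagged covariance. Function names, the lag tau, and the toy sources are our own illustrative choices, not code from any of the cited papers.

```python
import numpy as np

def amuse_ica(x, tau=1):
    """AMUSE-style ICA: whiten, then diagonalize a time-lagged covariance.

    x   : (n_channels, n_samples) array of mixed signals
    tau : time lag; separation works when the sources have distinct
          autocorrelations at this lag.
    """
    x = x - x.mean(axis=1, keepdims=True)
    # Whitening removes the instantaneous (lag-0) second-order correlations.
    d, e = np.linalg.eigh(np.cov(x))
    v = e @ np.diag(d ** -0.5) @ e.T            # whitening matrix
    z = v @ x                                   # whitened signals
    # Symmetrized lagged covariance of the whitened signals.
    c_tau = z[:, tau:] @ z[:, :-tau].T / (z.shape[1] - tau)
    c_tau = (c_tau + c_tau.T) / 2
    # Its eigenvectors give the unmixing directions (up to permutation/sign).
    _, u = np.linalg.eigh(c_tau)
    w = u.T @ v
    return w @ x, w

# Toy demo: two sources with different temporal structure, mixed linearly.
rng = np.random.default_rng(0)
t = np.arange(10000)
s = np.vstack([np.sin(0.05 * t), np.sign(np.sin(0.017 * t))])  # sources s
c = rng.normal(size=(2, 2))                     # mixing matrix C
y, w = amuse_ica(c @ s, tau=5)                  # y ~ Ps up to scaling
```

Note that this procedure is batch and relies on a global eigendecomposition, which is exactly the kind of step that is hard to interpret biologically.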

17 However, those methods are rather algorithmic and most of them are difficult to interpret biologically, e.g. [sent-24, score-0.058]

18 they are not online, are not local, or require preprocessing of the data. [sent-26, score-0.195]

19 Biological learning algorithms are usually implemented as an online Hebbian learning rule that triggers changes of synaptic efficacy based on the correlations between pre- and postsynaptic neurons. [sent-27, score-0.874]

20 A Hebbian learning rule, like Oja’s learning rule [9], combined with a linear neuron model, has been shown to perform principal component analysis (PCA). [sent-28, score-0.469]
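As a point of reference, here is a minimal sketch of Oja's rule [9] with a linear neuron y = w·x; on zero-mean data the weight vector converges to the first principal component. The learning rate and epoch count are illustrative choices of ours.

```python
import numpy as np

def oja_pca(x, eta=0.01, n_epochs=20, seed=0):
    """Oja's rule with a linear neuron: dw = eta * y * (x - y * w), y = w.x.

    x : (n_samples, n_dim) zero-mean data, presented one sample at a time.
    The weight vector converges to the leading principal component, ||w|| -> 1.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=x.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for xt in x:
            y = w @ xt                     # postsynaptic rate of the linear neuron
            w += eta * y * (xt - y * w)    # Hebbian term y*x minus Oja's decay y^2*w
    return w
```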

21 Simply using a nonlinear neuron combined with Oja’s learning rule allows one to compute higher moments of the distributions, which yields ICA if the signals have been preprocessed (whitened) at an earlier stage [1]. [sent-29, score-0.774]
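A sketch of that classical route to ICA: after whitening, a one-unit Hebbian rule with a nonlinearity extracts an independent direction by climbing a higher-moment objective. The cubic nonlinearity used here (suited to super-Gaussian, positive-kurtosis sources; sub-Gaussian sources would need the opposite sign) and all parameter values are our assumptions, not specifics from [1].

```python
import numpy as np

def nonlinear_hebbian_ica(z, eta=0.001, n_epochs=50, seed=0):
    """One-unit nonlinear Hebbian ICA on pre-whitened data z (n_samples, n_dim).

    The cubic nonlinearity makes the update sensitive to fourth moments
    (kurtosis); explicit renormalization keeps ||w|| = 1.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=z.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for zt in z:
            y = w @ zt                # postsynaptic rate before the nonlinearity
            w += eta * zt * y ** 3    # nonlinear Hebbian term x * g(y), g(y) = y^3
            w /= np.linalg.norm(w)    # normalization in place of Oja's decay term
    return w
```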

22 Figure 1: The sources s are mixed with a matrix C, x = Cs; the x are the presynaptic signals. [sent-30, score-0.383]

23 Using a linear neuron y = Wx, we want to find the matrix W that allows the postsynaptic signals y to recover the sources, y = Ps, where P is a permutation matrix with different multiplicative constants. [sent-31, score-0.663]

24 In this paper, we are interested in exploiting the correlations of the signals at different time delays. [sent-32, score-0.258]

25 We will show that a linear neuron model combined with a Hebbian learning rule based on the joint firing rates of pre- and postsynaptic neurons at different time delays performs ICA by exploiting the temporal correlations of the presynaptic inputs. [sent-35, score-1.459]
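The paper derives its rule later; purely to illustrate the ingredient named here, the sketch below is our own toy construction: an Oja-like online update in which the Hebbian term pairs the postsynaptic rate y(t) with symmetrized delayed presynaptic rates x(t ± τ). On whitened inputs its fixed points are, in expectation, eigenvectors of the lagged covariance, the same quantity the batch methods above diagonalize. It should not be read as the authors' learning rule.

```python
import numpy as np

def lagged_hebbian(z, tau=2, eta=0.005, n_epochs=30, seed=0):
    """Toy online rule pairing the post rate y(t) with delayed pre rates x(t +/- tau).

    z : (n_samples, n_dim) whitened presynaptic signals.
    The Oja-style decay term bounds ||w||; in expectation the fixed points are
    eigenvectors of the symmetrized lagged covariance, so w settles on a source
    direction when the sources have distinct autocorrelations at lag tau.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=z.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for t in range(tau, len(z) - tau):
            y = w @ z[t]                             # postsynaptic rate
            pre = 0.5 * (z[t - tau] + z[t + tau])    # symmetrized delayed pre rates
            w += eta * (y * pre - y ** 2 * w)        # delayed Hebbian term + decay
    return w
```

The update is online and local (it uses only the pre- and postsynaptic rates and the current weight), which is the biological desideratum the paper emphasizes.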


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('hebbian', 0.348), ('lausanne', 0.332), ('ica', 0.321), ('correlations', 0.239), ('postsynaptic', 0.232), ('rule', 0.177), ('cocktail', 0.166), ('epfl', 0.166), ('federale', 0.166), ('ottawa', 0.166), ('sources', 0.163), ('oja', 0.144), ('party', 0.144), ('moments', 0.134), ('delays', 0.132), ('polytechnique', 0.132), ('whitening', 0.132), ('neuron', 0.13), ('presynaptic', 0.123), ('signals', 0.122), ('temporal', 0.119), ('ecole', 0.106), ('online', 0.095), ('delayed', 0.092), ('sound', 0.08), ('preprocessing', 0.078), ('combined', 0.074), ('mind', 0.073), ('mixed', 0.073), ('decouple', 0.072), ('schuster', 0.072), ('wulfram', 0.072), ('ring', 0.066), ('talking', 0.066), ('innovation', 0.066), ('brain', 0.065), ('school', 0.064), ('neurons', 0.063), ('triggers', 0.062), ('slices', 0.062), ('gerstner', 0.062), ('louis', 0.062), ('pursuit', 0.062), ('shuf', 0.062), ('exploiting', 0.061), ('signal', 0.061), ('component', 0.06), ('cacy', 0.058), ('recover', 0.057), ('persons', 0.055), ('si', 0.055), ('powerful', 0.055), ('cs', 0.051), ('interested', 0.05), ('biologically', 0.049), ('performs', 0.048), ('synaptic', 0.047), ('responsible', 0.047), ('potentially', 0.046), ('amplitude', 0.045), ('preprocessed', 0.045), ('exploits', 0.041), ('institute', 0.039), ('permutation', 0.039), ('detecting', 0.038), ('intrinsic', 0.037), ('rates', 0.036), ('variations', 0.036), ('measurement', 0.035), ('multiplicative', 0.035), ('refers', 0.034), ('statistically', 0.033), ('stage', 0.033), ('higher', 0.032), ('mixing', 0.032), ('interpret', 0.031), ('mechanism', 0.03), ('pca', 0.03), ('derivation', 0.029), ('understanding', 0.028), ('remove', 0.028), ('source', 0.028), ('solving', 0.028), ('independent', 0.028), ('principal', 0.028), ('algorithmic', 0.027), ('earlier', 0.027), ('situation', 0.027), ('dynamics', 0.026), ('joint', 0.025), ('biological', 0.025), ('active', 0.025), ('classical', 0.025), ('correlation', 0.025), ('mathematical', 0.024), ('matrix', 0.024), ('center', 0.023), ('simultaneously', 0.023), ('local', 0.022), ('algorithms', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 26 nips-2007-An online Hebbian learning rule that performs Independent Component Analysis

Author: Claudia Clopath, André Longtin, Wulfram Gerstner

Abstract: Independent component analysis (ICA) is a powerful method to decouple signals. Most of the algorithms performing ICA do not consider the temporal correlations of the signal, but only higher moments of its amplitude distribution. Moreover, they require some preprocessing of the data (whitening) so as to remove second order correlations. In this paper, we are interested in understanding the neural mechanism responsible for solving ICA. We present an online learning rule that exploits delayed correlations in the input. This rule performs ICA by detecting joint variations in the firing rates of pre- and postsynaptic neurons, similar to a local rate-based Hebbian learning rule. 1

2 0.17405351 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

Author: Lars Buesing, Wolfgang Maass

Abstract: We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed [1]. Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. A variation of this learning rule (with sign changes) provides a theoretically founded method for performing Principal Component Analysis (PCA) with spiking neurons. By applying this rule to an ensemble of neurons, different principal components of the input can be extracted. In addition, it is possible to preferentially extract those principal components from incoming signals X that are related or are not related to some additional target signal YT . In a biological interpretation, this target signal YT (also called relevance variable) could represent proprioceptive feedback, input from other sensory modalities, or top-down signals. 1

3 0.11951649 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

Author: Dejan Pecevski, Wolfgang Maass, Robert A. Legenstein

Abstract: Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how local learning rules at single synapses support behaviorally relevant adaptive changes in complex networks of spiking neurons. However the potential and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allow us to predict under which conditions reward-modulated STDP will be able to achieve a desired learning effect. In particular, we can produce in this way a theoretical explanation and a computer model for a fundamental experimental finding on biofeedback in monkeys (reported in [1]).

4 0.10656495 14 nips-2007-A configurable analog VLSI neural network with spiking neurons and self-regulating plastic synapses

Author: Massimiliano Giulioni, Mario Pannunzi, Davide Badoni, Vittorio Dante, Paolo D. Giudice

Abstract: We summarize the implementation of an analog VLSI chip hosting a network of 32 integrate-and-fire (IF) neurons with spike-frequency adaptation and 2,048 Hebbian plastic bistable spike-driven stochastic synapses endowed with a self-regulating mechanism which stops unnecessary synaptic changes. The synaptic matrix can be flexibly configured and provides both recurrent and AER-based connectivity with external, AER-compliant devices. We demonstrate the ability of the network to efficiently classify overlapping patterns, thanks to the self-regulating mechanism.

5 0.098826841 117 nips-2007-Learning to classify complex patterns using a VLSI network of spiking neurons

Author: Srinjoy Mitra, Giacomo Indiveri, Stefano Fusi

Abstract: We propose a compact, low-power VLSI network of spiking neurons which can learn to classify complex patterns of mean firing rates on-line and in real-time. The network of integrate-and-fire neurons is connected by bistable synapses that can change their weight using a local spike-based plasticity mechanism. Learning is supervised by a teacher which provides an extra input to the output neurons during training. The synaptic weights are updated only if the current generated by the plastic synapses does not match the output desired by the teacher (as in the perceptron learning rule). We present experimental results that demonstrate how this VLSI network is able to robustly classify uncorrelated linearly separable spatial patterns of mean firing rates.

6 0.067351609 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

7 0.062131111 164 nips-2007-Receptive Fields without Spike-Triggering

8 0.059195574 157 nips-2007-Privacy-Preserving Belief Propagation and Sampling

9 0.05377081 138 nips-2007-Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

10 0.052131958 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

11 0.051242087 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

12 0.048726123 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

13 0.04774056 111 nips-2007-Learning Horizontal Connections in a Sparse Coding Model of Natural Images

14 0.047122277 173 nips-2007-Second Order Bilinear Discriminant Analysis for single trial EEG analysis

15 0.042600706 154 nips-2007-Predicting Brain States from fMRI Data: Incremental Functional Principal Component Regression

16 0.042474046 7 nips-2007-A Kernel Statistical Test of Independence

17 0.040595513 182 nips-2007-Sparse deep belief net model for visual area V2

18 0.040436339 25 nips-2007-An in-silico Neural Model of Dynamic Routing through Neuronal Coherence

19 0.039129734 130 nips-2007-Modeling Natural Sounds with Modulation Cascade Processes

20 0.036994476 106 nips-2007-Invariant Common Spatial Patterns: Alleviating Nonstationarities in Brain-Computer Interfacing


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.106), (1, 0.07), (2, 0.168), (3, 0.033), (4, -0.001), (5, -0.026), (6, 0.076), (7, 0.007), (8, 0.003), (9, 0.042), (10, -0.012), (11, 0.004), (12, 0.005), (13, -0.051), (14, 0.014), (15, -0.061), (16, -0.046), (17, -0.098), (18, -0.085), (19, -0.074), (20, 0.01), (21, 0.032), (22, 0.057), (23, -0.097), (24, -0.049), (25, 0.1), (26, 0.008), (27, 0.003), (28, -0.042), (29, -0.008), (30, 0.009), (31, 0.072), (32, -0.032), (33, -0.067), (34, -0.008), (35, -0.031), (36, 0.012), (37, -0.015), (38, -0.059), (39, 0.151), (40, 0.016), (41, -0.009), (42, -0.062), (43, 0.065), (44, -0.031), (45, 0.062), (46, -0.03), (47, 0.021), (48, 0.092), (49, 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9645164 26 nips-2007-An online Hebbian learning rule that performs Independent Component Analysis

Author: Claudia Clopath, André Longtin, Wulfram Gerstner

Abstract: Independent component analysis (ICA) is a powerful method to decouple signals. Most of the algorithms performing ICA do not consider the temporal correlations of the signal, but only higher moments of its amplitude distribution. Moreover, they require some preprocessing of the data (whitening) so as to remove second order correlations. In this paper, we are interested in understanding the neural mechanism responsible for solving ICA. We present an online learning rule that exploits delayed correlations in the input. This rule performs ICA by detecting joint variations in the firing rates of pre- and postsynaptic neurons, similar to a local rate-based Hebbian learning rule. 1

2 0.6574499 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

Author: Lars Buesing, Wolfgang Maass

Abstract: We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed [1]. Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. A variation of this learning rule (with sign changes) provides a theoretically founded method for performing Principal Component Analysis (PCA) with spiking neurons. By applying this rule to an ensemble of neurons, different principal components of the input can be extracted. In addition, it is possible to preferentially extract those principal components from incoming signals X that are related or are not related to some additional target signal YT . In a biological interpretation, this target signal YT (also called relevance variable) could represent proprioceptive feedback, input from other sensory modalities, or top-down signals. 1

3 0.63247311 14 nips-2007-A configurable analog VLSI neural network with spiking neurons and self-regulating plastic synapses

Author: Massimiliano Giulioni, Mario Pannunzi, Davide Badoni, Vittorio Dante, Paolo D. Giudice

Abstract: We summarize the implementation of an analog VLSI chip hosting a network of 32 integrate-and-fire (IF) neurons with spike-frequency adaptation and 2,048 Hebbian plastic bistable spike-driven stochastic synapses endowed with a self-regulating mechanism which stops unnecessary synaptic changes. The synaptic matrix can be flexibly configured and provides both recurrent and AER-based connectivity with external, AER-compliant devices. We demonstrate the ability of the network to efficiently classify overlapping patterns, thanks to the self-regulating mechanism.

4 0.60778803 117 nips-2007-Learning to classify complex patterns using a VLSI network of spiking neurons

Author: Srinjoy Mitra, Giacomo Indiveri, Stefano Fusi

Abstract: We propose a compact, low-power VLSI network of spiking neurons which can learn to classify complex patterns of mean firing rates on-line and in real-time. The network of integrate-and-fire neurons is connected by bistable synapses that can change their weight using a local spike-based plasticity mechanism. Learning is supervised by a teacher which provides an extra input to the output neurons during training. The synaptic weights are updated only if the current generated by the plastic synapses does not match the output desired by the teacher (as in the perceptron learning rule). We present experimental results that demonstrate how this VLSI network is able to robustly classify uncorrelated linearly separable spatial patterns of mean firing rates.

5 0.58242923 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

Author: Dejan Pecevski, Wolfgang Maass, Robert A. Legenstein

Abstract: Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how local learning rules at single synapses support behaviorally relevant adaptive changes in complex networks of spiking neurons. However the potential and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allow us to predict under which conditions reward-modulated STDP will be able to achieve a desired learning effect. In particular, we can produce in this way a theoretical explanation and a computer model for a fundamental experimental finding on biofeedback in monkeys (reported in [1]).

6 0.50227183 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

7 0.36128393 25 nips-2007-An in-silico Neural Model of Dynamic Routing through Neuronal Coherence

8 0.34997022 164 nips-2007-Receptive Fields without Spike-Triggering

9 0.34700307 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

10 0.33570397 130 nips-2007-Modeling Natural Sounds with Modulation Cascade Processes

11 0.33463848 96 nips-2007-Heterogeneous Component Analysis

12 0.32907525 111 nips-2007-Learning Horizontal Connections in a Sparse Coding Model of Natural Images

13 0.32906032 167 nips-2007-Regulator Discovery from Gene Expression Time Series of Malaria Parasites: a Hierachical Approach

14 0.29183802 173 nips-2007-Second Order Bilinear Discriminant Analysis for single trial EEG analysis

15 0.29008347 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

16 0.28869912 138 nips-2007-Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

17 0.28482887 9 nips-2007-A Probabilistic Approach to Language Change

18 0.28278363 144 nips-2007-On Ranking in Survival Analysis: Bounds on the Concordance Index

19 0.27651227 150 nips-2007-Optimal models of sound localization by barn owls

20 0.26898533 37 nips-2007-Blind channel identification for speech dereverberation using l1-norm sparse learning


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.033), (13, 0.022), (14, 0.47), (16, 0.059), (19, 0.034), (21, 0.062), (34, 0.022), (35, 0.01), (47, 0.042), (49, 0.019), (83, 0.093), (90, 0.037)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.7558645 26 nips-2007-An online Hebbian learning rule that performs Independent Component Analysis

Author: Claudia Clopath, André Longtin, Wulfram Gerstner

Abstract: Independent component analysis (ICA) is a powerful method to decouple signals. Most of the algorithms performing ICA do not consider the temporal correlations of the signal, but only higher moments of its amplitude distribution. Moreover, they require some preprocessing of the data (whitening) so as to remove second order correlations. In this paper, we are interested in understanding the neural mechanism responsible for solving ICA. We present an online learning rule that exploits delayed correlations in the input. This rule performs ICA by detecting joint variations in the firing rates of pre- and postsynaptic neurons, similar to a local rate-based Hebbian learning rule. 1

2 0.45163617 18 nips-2007-A probabilistic model for generating realistic lip movements from speech

Author: Gwenn Englebienne, Tim Cootes, Magnus Rattray

Abstract: The present work aims to model the correspondence between facial motion and speech. The face and sound are modelled separately, with phonemes being the link between both. We propose a sequential model and evaluate its suitability for the generation of the facial animation from a sequence of phonemes, which we obtain from speech. We evaluate the results both by computing the error between generated sequences and real video, as well as with a rigorous double-blind test with human subjects. Experiments show that our model compares favourably to other existing methods and that the sequences generated are comparable to real video sequences. 1

3 0.29652825 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

Author: Dejan Pecevski, Wolfgang Maass, Robert A. Legenstein

Abstract: Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how local learning rules at single synapses support behaviorally relevant adaptive changes in complex networks of spiking neurons. However the potential and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allow us to predict under which conditions reward-modulated STDP will be able to achieve a desired learning effect. In particular, we can produce in this way a theoretical explanation and a computer model for a fundamental experimental finding on biofeedback in monkeys (reported in [1]).

4 0.2892679 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

Author: Maneesh Sahani, Byron M. Yu, John P. Cunningham, Krishna V. Shenoy

Abstract: Neural spike trains present challenges to analytical efforts due to their noisy, spiking nature. Many studies of neuroscientific and neural prosthetic importance rely on a smoothed, denoised estimate of the spike train’s underlying firing rate. Current techniques to find time-varying firing rates require ad hoc choices of parameters, offer no confidence intervals on their estimates, and can obscure potentially important single trial variability. We present a new method, based on a Gaussian Process prior, for inferring probabilistically optimal estimates of firing rate functions underlying single or multiple neural spike trains. We test the performance of the method on simulated data and experimentally gathered neural spike trains, and we demonstrate improvements over conventional estimators. 1

5 0.28252783 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

Author: Tatyana Sharpee

Abstract: This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of a family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, Rényi divergence of order 2, is equivalent to least-square fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors in relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with Rényi divergence of order 1, also known as Kullback-Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. We numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (small number of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least-square fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least-square fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [3], one of the examples where information-theoretic measures are no more data limited than those derived from least squares. 1

6 0.2800236 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

7 0.27903441 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

8 0.2763502 195 nips-2007-The Generalized FITC Approximation

9 0.27546373 138 nips-2007-Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

10 0.27501655 79 nips-2007-Efficient multiple hyperparameter learning for log-linear models

11 0.27360305 170 nips-2007-Robust Regression with Twinned Gaussian Processes

12 0.27261454 164 nips-2007-Receptive Fields without Spike-Triggering

13 0.27235258 212 nips-2007-Using Deep Belief Nets to Learn Covariance Kernels for Gaussian Processes

14 0.2711215 209 nips-2007-Ultrafast Monte Carlo for Statistical Summations

15 0.27090907 24 nips-2007-An Analysis of Inference with the Universum

16 0.27007285 200 nips-2007-The Tradeoffs of Large Scale Learning

17 0.26994246 186 nips-2007-Statistical Analysis of Semi-Supervised Regression

18 0.26993084 206 nips-2007-Topmoumoute Online Natural Gradient Algorithm

19 0.26987326 174 nips-2007-Selecting Observations against Adversarial Objectives

20 0.26907238 158 nips-2007-Probabilistic Matrix Factorization