nips nips2009 nips2009-164 knowledge-graph by maker-knowledge-mining

164 nips-2009-No evidence for active sparsification in the visual cortex


Source: pdf

Author: Pietro Berkes, Ben White, Jozsef Fiser

Abstract: The proposal that cortical activity in the visual cortex is optimized for sparse neural activity is one of the most established ideas in computational neuroscience. However, direct experimental evidence for optimal sparse coding remains inconclusive, mostly due to the lack of reference values on which to judge the measured sparseness. Here we analyze neural responses to natural movies in the primary visual cortex of ferrets at different stages of development and of rats while awake and under different levels of anesthesia. In contrast with predictions from a sparse coding model, our data show that population and lifetime sparseness decrease with visual experience, and increase from the awake to the anesthetized state. These results suggest that the representation in the primary visual cortex is not actively optimized to maximize sparseness.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 No evidence for active sparsification in the visual cortex. Pietro Berkes, Benjamin L. [sent-1, score-0.39]

2 White, and József Fiser, Volen Center for Complex Systems, Brandeis University, Waltham, MA 02454. Abstract: The proposal that cortical activity in the visual cortex is optimized for sparse neural activity is one of the most established ideas in computational neuroscience. [sent-2, score-0.784]

3 However, direct experimental evidence for optimal sparse coding remains inconclusive, mostly due to the lack of reference values on which to judge the measured sparseness. [sent-3, score-0.329]

4 Here we analyze neural responses to natural movies in the primary visual cortex of ferrets at different stages of development and of rats while awake and under different levels of anesthesia. [sent-4, score-1.044]

5 In contrast with predictions from a sparse coding model, our data show that population and lifetime sparseness decrease with visual experience, and increase from the awake to the anesthetized state. [sent-5, score-1.786]

6 These results suggest that the representation in the primary visual cortex is not actively optimized to maximize sparseness. [sent-6, score-0.429]

7 Computational models that optimize the sparseness of the responses of hidden units to natural images have been shown to reproduce the basic features of the receptive fields (RFs) of simple cells in V1 [3, 4, 5]. [sent-11, score-0.823]

8 Moreover, manipulation of the statistics of the environment of developing animals leads to changes in the RF structure that can be predicted by sparse coding models [6]. [sent-12, score-0.421]

9 Electrophysiological studies performed in primary visual cortex agree in reporting high sparseness values for neural activity [7, 8, 9, 10, 11, 12]. [sent-14, score-1.198]

10 However, it is contested whether the high degree of sparseness is due to a neural representation which is optimally sparse, or is an epiphenomenon due to neural selectivity [10, 12]. [sent-15, score-0.77]

11 This controversy is mostly due to a lack of reference measurements with which to judge the sparseness of the neural representation in relative, rather than absolute, terms. [sent-16, score-0.63]

12 Another problem is that most of these studies have been performed on anesthetized animals [7, 9, 10, 11, 12], even though the effect of anesthesia might bias sparseness measurements (cf. [sent-17, score-1.007]

13 We compare these data with theoretical predictions: 1) sparseness should increase with visual experience, and thus with age, as the visual system adapts to the statistics of the visual environment; 2) sparseness should be maximal in the “working regime” of the animal, i.e., in the awake state. [sent-21, score-1.75]

14 In both cases, the neural data show a trend opposite to the one expected in a sparse coding system, suggesting that the visual system is not actively optimizing the sparseness of its representation. [sent-24, score-1.204]

15 The paper is organized as follows: We first introduce and discuss the lifetime and population sparseness measures we will be using throughout the paper. [sent-25, score-0.979]

16 Next, we present the classical, linear sparse coding model of natural images, and derive an equivalent, stochastic neural network, whose output firing rates correspond to Monte Carlo samples from the posterior distribution of visual elements given an image. [sent-26, score-0.638]
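To make this concrete, here is a minimal sketch of the unnormalized log posterior of the classical linear sparse coding model described above (Gaussian likelihood around y = Gx, Student-t prior as in Eq. 5 below); the function name and argument layout are our own illustration, not the authors' code.

```python
import numpy as np

def log_posterior(x, y, G, sigma_y, alpha, lam):
    """Unnormalized log p(x | y) for a linear sparse coding model.

    The image patch y is modeled as y ~ N(Gx, sigma_y^2 I), with the
    hidden causes x mapped through unit-norm basis vectors (the columns
    of G) and each x_k drawn from the unit-variance Student-t of Eq. 5.
    """
    residual = y - G @ x
    log_lik = -0.5 * residual @ residual / sigma_y**2
    log_prior = np.sum(-(alpha + 1) / 2 * np.log1p((x / lam)**2 / alpha))
    return log_lik + log_prior
```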

17 In the rest of the paper, we make use of this neural architecture in order to predict changes in sparseness over development and under anesthesia, and compare these predictions with electrophysiological recordings. [sent-27, score-0.773]

18 2 Lifetime and population sparseness The diverse benefits of sparseness mentioned in the introduction rely on different aspects of the neural code, which are captured to a different extent by two sparseness measures, referred to as lifetime and population sparseness. [sent-28, score-2.267]

19 Lifetime sparseness measures the distribution of the response of an individual cell to a set of stimuli, and is thus related to the cell’s selectivity. [sent-29, score-0.61]

20 These requirements of efficient coding are based upon the instantaneous population activity in response to stimuli, and thus need to take into consideration the population sparseness of the neural response. [sent-32, score-1.252]

21 Average lifetime and population sparseness are identical if the units are statistically independent, in which case the distribution is called ergodic [10, 14]. [sent-33, score-1.015]

22 Here we will use three measures of sparseness, two quantifying population sparseness, and one lifetime sparseness. [sent-36, score-0.412]

23 Moreover, in neural recordings we discard bins with no neural activity, as population TR is undefined in this case. [sent-43, score-0.336]
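For concreteness, here is a minimal sketch of the Treves-Rolls (TR) measure referred to above, applied either across stimuli for one cell (lifetime sparseness) or across cells for one time bin (population sparseness). The (1 - a)/(1 - 1/N) normalization is a common convention; the excerpt does not spell out the exact variant used, so treat the details as assumptions.

```python
import numpy as np

def treves_rolls(r):
    """Treves-Rolls sparseness of a non-negative response vector r.

    a = (mean r)^2 / mean(r^2), rescaled as (1 - a) / (1 - 1/N) so that
    0 corresponds to a dense (uniform) code and 1 to a maximally sparse
    one. Undefined for all-zero vectors, hence the discarded empty bins.
    """
    r = np.asarray(r, dtype=float)
    if not r.any():
        return np.nan
    a = r.mean()**2 / np.mean(r**2)
    return (1.0 - a) / (1.0 - 1.0 / r.size)

# Toy binned activity: rows are cells, columns are 10 ms time bins.
rates = np.random.default_rng(1).poisson(0.5, size=(16, 1000))
lifetime = [treves_rolls(cell) for cell in rates]                 # per cell
population = [treves_rolls(col) for col in rates.T if col.any()]  # per bin
```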

24 This seems adequate for our purposes, as the arguments for sparseness involve metabolic costs and coding considerations, such as redundancy reduction, that are sensitive to overall firing rates. [sent-46, score-0.775]

25 Previous studies have shown that alternative measures of population and lifetime sparseness are highly correlated, therefore our choice does not affect the final results [15, 10]. [sent-47, score-1.007]

26 Here we set the sparse prior distribution to a Student-t distribution with α degrees of freedom, $p(x_k) = \frac{1}{Z}\left(1 + \frac{1}{\alpha}\left(\frac{x_k}{\lambda}\right)^2\right)^{-\frac{\alpha+1}{2}}$ (5), with λ chosen such that the distribution has unit variance. [sent-58, score-0.277]

27 This is a common prior for sparse coding models [3], and its analytical form allows the development of efficient inference and learning algorithms [16, 17]. [sent-59, score-0.371]
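A quick sanity check of Eq. 5: for α > 2, a Student-t with α degrees of freedom has variance α/(α - 2), so the unit-variance scaling is λ = sqrt((α - 2)/α). The sketch below verifies this with scipy; it is our illustration, not the authors' code.

```python
import numpy as np
from scipy.stats import t as student_t

alpha = 2.5                           # degrees of freedom of the prior
lam = np.sqrt((alpha - 2) / alpha)    # unit-variance scale for Eq. 5

prior = student_t(df=alpha, scale=lam)
samples = prior.rvs(size=200_000, random_state=0)
print(prior.var())     # exactly 1.0
print(samples.var())   # roughly 1.0; noisy, since the tails are heavy
```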

28 Figure 2: Neural implementation of Gibbs sampling in a sparse coding model. [sent-77, score-0.329]

29 4 Sampling, sparse coding neural network. In order to gain some intuition about the neural operations that may underlie inference in this model, we derive an equivalent neural network architecture. [sent-81, score-0.604]

30 It has been suggested that neural activity is best interpreted as samples from the posterior probability of an internal, probabilistic model of the sensory input. [sent-82, score-0.302]

31 This assumption is consistent with many experimental observations, including high trial-by-trial variability and spontaneous activity in awake animals [18, 19, 20]. [sent-83, score-0.446]

32 Expanding the exponent, eliminating the terms that do not depend on $x_k$, and noting that $R_{kk} = -1$, since the generative weights have unit norm, we get $p(x_k \mid x_{i \neq k}, y) \propto \exp\left[\frac{1}{\sigma_y^2}\left(\sum_i G_{ik} y_i\right)x_k + \frac{1}{\sigma_y^2}\left(\sum_{j \neq k} R_{jk} x_j\right)x_k - \frac{1}{2\sigma_y^2}x_k^2 + f(x_k)\right]$. [sent-86, score-0.38]
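The conditional above maps directly onto a simple, if inefficient, Gibbs sampler: evaluate the log conditional of each unit on a grid and resample it. The grid trick below stands in for the exact conditional samplers of [16, 17], and all names are our own; the alpha_couple argument anticipates the weakened-coupling manipulation of Eq. 10.

```python
import numpy as np

def gibbs_sweep(x, y, G, sigma_y, alpha, lam, alpha_couple=1.0,
                grid=None, rng=None):
    """One Gibbs sweep over all hidden units x_k (Eq. 9; setting
    alpha_couple < 1 weakens the recurrent term as in Eq. 10).

    G: (n_pixels, n_units) generative weights with unit-norm columns,
    so that R = -G^T G has R_kk = -1 as noted above.
    """
    rng = rng or np.random.default_rng()
    grid = grid if grid is not None else np.linspace(-6.0, 6.0, 1201)
    R = -G.T @ G                        # recurrent weights
    ff = G.T @ y                        # feed-forward drive, sum_i G_ik y_i
    log_prior = -(alpha + 1) / 2 * np.log1p((grid / lam)**2 / alpha)
    for k in range(x.size):
        rec = R[:, k] @ x - R[k, k] * x[k]   # sum over j != k of R_jk x_j
        logp = (ff[k] * grid + alpha_couple * rec * grid
                - 0.5 * grid**2) / sigma_y**2 + log_prior
        p = np.exp(logp - logp.max())
        x[k] = rng.choice(grid, p=p / p.sum())
    return x
```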

33 5 Active sparsification over learning. A simple, intuitive prediction for a system that optimizes for sparseness is that the sparseness of its representation should increase over learning. [sent-94, score-1.199]

34 Since a sparse coding system, including our model, might not directly maximize our measures of sparseness, TR and AS, we verify this intuition by analyzing the model’s representation of images at various stages of learning. [sent-95, score-0.429]

35 (A) The lines indicate the average sparseness over units and samples. [sent-100, score-0.646]

36 Since the three measures have very different values, we report the change in sparseness in percent of the first iteration. [sent-102, score-0.61]

37 Colored text: absolute values of sparseness at the end of learning. [sent-103, score-0.567]

38 (B) The lines indicate the average sparseness for different animals. [sent-104, score-0.567]

39 As anticipated, both population and lifetime sparseness increase monotonically. [sent-111, score-0.965]

40 Having confirmed our intuition with the sparse coding model, we turn to data from electrophysiological recordings. [sent-112, score-0.401]

41 We analyzed multi-unit recordings from arrays of 16 electrodes implanted in the primary visual cortex of 15 ferrets at various stages of development, from eye opening at postnatal day 29 or 30 (P29-30) to adulthood at P151 (see Suppl Mat for experimental details). [sent-113, score-0.62]

42 Over this maturation period, the visual system of ferrets adapts to the statistics of the environment [22, 23]. [sent-114, score-0.382]

43 For each animal, neural activity was recorded and collected in 10 ms bins for 15 sessions of 100 seconds each (for a total of 25 minutes), during which the animal was shown scenes from a movie. [sent-115, score-0.323]

44 We find that all three measures of sparseness decrease significantly with age. [sent-116, score-0.664]
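The footnote with the exact test is not part of this excerpt; the word statistics at the bottom of the page ('spearman') suggest a rank correlation against age, which would look something like the sketch below. The numbers are placeholders, not the recorded data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-animal values standing in for the recorded ferrets.
age_in_days = np.array([29, 30, 44, 45, 83, 90, 151])
lifetime_tr = np.array([0.62, 0.60, 0.55, 0.57, 0.49, 0.50, 0.44])

rho, pval = spearmanr(age_in_days, lifetime_tr)
print(f"Spearman rho = {rho:.2f}, P = {pval:.3g}")  # negative rho: less sparse with age
```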

45 Thus, during a period when the cortex actively adapts to the visual environment, the representation in primary visual cortex becomes less sparse, suggesting that the optimization of sparseness is not a primary objective for learning in the visual system. [sent-117, score-1.593]

46 The decrease in population sparseness seems to be due to an increase in the dependencies between neurons: Fig. [sent-118, score-0.814]

47 3C shows the Kullback-Leibler divergence between the joint distribution P of neural activity in 2 ms bins and the same distribution factorized to eliminate neural dependencies, i.e., $\tilde{P}(x) = \prod_{i=1}^{N} P(x_i)$. [sent-119, score-0.328]
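A direct way to estimate this quantity from binned spike data is sketched below: count the empirical pattern frequencies, factorize the marginals, and take the KL divergence. The plain empirical estimator (with no regularization against undersampled patterns) is our simplification.

```python
import numpy as np
from collections import Counter

def kl_joint_vs_factorized(spikes):
    """D_KL(P || P~) between the empirical joint distribution of binary
    population patterns and its factorized version P~(x) = prod_i P(x_i).

    spikes: (n_bins, n_units) binary array of activity in 2 ms bins.
    """
    n_bins, _ = spikes.shape
    counts = Counter(map(tuple, spikes.astype(int)))
    p_on = spikes.mean(axis=0)                  # marginals P(x_i = 1)
    kl = 0.0
    for pattern, c in counts.items():
        p_joint = c / n_bins
        x = np.asarray(pattern)
        p_fact = np.prod(np.where(x == 1, p_on, 1.0 - p_on))
        kl += p_joint * np.log2(p_joint / p_fact)
    return kl  # bits per bin
```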

48 6 Active sparsification and anesthesia. The sparse coding neural network architecture of Fig. [sent-128, score-0.75]

49 2 makes explicit that an optimal sparse coding representation requires a process of active sparsification: In general, because of input noise and the overcompleteness of the representation, there are multiple possible combinations of visual elements that could account for a given image. [sent-129, score-0.621]

50 A) Percent change in sparseness as the recurrent connections are weakened for various values of α. [sent-159, score-0.758]

51 Colored text: absolute values of sparseness at the end of learning. [sent-161, score-0.567]

52 B) Average sparseness measures for V1 responses at various levels of anesthesia. [sent-162, score-0.737]

53 9, it is clear that the recurrent connections are necessary in order to keep the activity of the neurons on the solution line, while the stochastic activation function makes sparse neural responses more likely. [sent-167, score-0.712]

54 In a system that optimizes sparseness, disrupting the active sparsification process will lead to lower lifetime and population sparseness. [sent-169, score-0.523]

55 For example, if we reduce the strength of the recurrent connections in the neural network architecture (Eq. [sent-170, score-0.29]

56 9) by a factor α, $p(x_k \mid x_{i \neq k}, y) \propto \exp\left[\frac{1}{\sigma_y^2}\left(\sum_i G_{ik} y_i\right)x_k + \frac{\alpha}{\sigma_y^2}\left(\sum_{j \neq k} R_{jk} x_j\right)x_k - \frac{1}{2\sigma_y^2}x_k^2 + f(x_k)\right]$ (10), the neurons become more decoupled, and try to separately account for the input, as illustrated in Fig. [sent-171, score-0.267]

57 The decoupling will result in a reduction of population sparseness, as multiple neurons become active to explain the same input. [sent-173, score-0.327]

58 Also, lifetime sparseness will decrease, as the lack of competition between units means that individual units will be active more often. [sent-174, score-1.072]

59 We analyzed the parameters of the sparse coding model at the end of learning, and substituted the Gibbs sampling posterior distribution of Eq. [sent-177, score-0.37]

60 As predicted, decreasing α leads to a decrease in all sparseness measures. [sent-180, score-0.621]
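With the gibbs_sweep sketch from above, this manipulation amounts to re-running the sampler on the learned weights with alpha_couple < 1 and recomputing the sparseness measures; for example (random stand-ins for the learned weights and the image):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(64, 128))
G /= np.linalg.norm(G, axis=0)         # unit-norm generative weights
y = rng.normal(size=64)                # stand-in for a whitened patch

for a in (1.0, 0.5, 0.0):              # intact -> fully decoupled network
    x = np.zeros(128)
    for _ in range(200):               # burn-in plus samples, one chain
        gibbs_sweep(x, y, G, sigma_y=0.4, alpha=2.5,
                    lam=np.sqrt(0.5 / 2.5), alpha_couple=a, rng=rng)
    # ...collect samples of x here and evaluate TR/AS as sketched earlier
```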

61 We argue that a similar disruption of the active sparsification process can be obtained in electrophysiological experiments by comparing neural responses at different levels of isoflurane anesthesia. [sent-181, score-0.344]

62 In general, the evoked, feed-forward responses of V1 neurons under anesthesia are thought to remain (Figure 6: Neuronal response to a 3.8 Hz flashing stimulus) [sent-182, score-0.482]

63 largely intact: Despite a decrease in average firing rate, the selectivity of neurons to orientation, frequency, and direction of motion has been shown to be very similar in awake and anesthetized animals [24, 25, 26]. [sent-187, score-0.569]

64 On the other hand, anesthesia disrupts contextual effects like figure-ground modulation [26] and pattern motion [27], which are known to be mediated by top-down and recurrent connections. [sent-188, score-0.388]

65 Other studies have shown that, at low concentrations, isoflurane anesthesia leaves the visual input to the cortex mostly intact, while the intracortical recurrent and top-down signals are disrupted [28, 29]. [sent-189, score-0.724]

66 Thus, if the representation in the visual cortex is optimally sparse, disrupting the active sparsification by anesthesia should decrease sparseness. [sent-190, score-0.766]

67 We analyzed multi-unit neural activity from bundles of 16 electrodes implanted in primary visual cortex of 3 adult Long-Evans rats (5-11 units per recording session, for a total of 39 units). [sent-191, score-0.796]

68 Recordings were made in the awake state and under four levels of anesthesia, from very light to deep (corresponding to concentrations of isoflurane between 0. [sent-192, score-0.317]

69 In order to confirm that the anesthetic does not prevent visual information from reaching the cortex, we presented the animals with a full-field periodic stimulus (flashing) at 3. [sent-195, score-0.269]

70 8 Hz, and defined the amplitude of the noise, due to spontaneous activity and neural variability, as the average amplitude between 1 and 3. [sent-200, score-0.329]

71 The amplitude of the evoked signal decreases with increasing isoflurane concentration, due to a decrease in overall firing rate; however, the background noise is also suppressed with anesthesia, so that overall the signal-to-noise ratio does not decrease significantly with anesthesia (Fig. [sent-202, score-0.47]
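The signal and noise amplitudes described above can be read off the amplitude spectrum of the binned response; a minimal sketch follows. The exact noise band is cut off in this excerpt ("between 1 and 3."), so the (1.0, 3.5) Hz default is an assumption.

```python
import numpy as np

def evoked_snr(rate, dt=0.01, f_stim=3.8, noise_band=(1.0, 3.5)):
    """Signal-to-noise ratio of the response to a periodic flashing stimulus.

    rate: 1-D array of binned firing rates. Signal is the spectral
    amplitude at the stimulus frequency; noise is the mean amplitude
    over a stimulus-free band, per the definition quoted above.
    """
    amp = np.abs(np.fft.rfft(rate - rate.mean()))
    freqs = np.fft.rfftfreq(rate.size, d=dt)
    signal = amp[np.argmin(np.abs(freqs - f_stim))]
    in_band = (freqs >= noise_band[0]) & (freqs <= noise_band[1])
    return signal / amp[in_band].mean()
```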

72 All three sparseness measures increase significantly with increasing concentration of isoflurane² (Fig. [sent-207, score-0.639]

73 Contrary to what is expected in a sparse-coding system, the data suggest that the contribution of lateral and top-down connections in the awake state leads to a less sparse code. [sent-209, score-0.375]

74 7 Discussion We examined multi-electrode recordings from primary visual cortex of ferrets over development, and of rats at different levels of anesthesia. [sent-210, score-0.638]

75 We found that, contrary to predictions based on theoretical considerations regarding optimal sparse coding systems, sparseness decreases with visual experience, and increases with increasing concentration of anesthetic. [sent-211, score-1.07]

76 These data suggest that the [footnote 2: Lifetime sparseness, TR: ANOVA with different anesthesia groups, P < 0.01; [sent-212, score-0.286]

77 multiple comparison tests with Tukey-Kramer correction show that the mean of the awake group differs from the mean of all other groups with P < 0.01; [sent-213, score-0.262]

78 multiple comparison shows that the mean of the awake group differs from that of the light, medium, and deep anesthesia groups, P < 0.01; [sent-215, score-0.54]

79 multiple comparison shows that the mean of the awake group differs from that of the light, medium, and deep anesthesia groups, P < 0.01] [sent-217, score-0.54]
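The tests named in the footnote map onto standard tools; here is a sketch with hypothetical per-session sparseness values (all group labels and numbers below are placeholders, not the recorded data):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical lifetime-TR values per recording session and condition.
values = {
    "awake":      [0.41, 0.44, 0.39, 0.42],
    "very_light": [0.55, 0.52, 0.57, 0.54],
    "light":      [0.58, 0.61, 0.56, 0.60],
    "medium":     [0.63, 0.60, 0.65, 0.62],
    "deep":       [0.66, 0.68, 0.64, 0.67],
}

F, p = f_oneway(*values.values())            # one-way ANOVA across groups
print(f"ANOVA: F = {F:.1f}, P = {p:.3g}")

endog = np.concatenate(list(values.values()))
labels = np.repeat(list(values.keys()), [len(v) for v in values.values()])
print(pairwise_tukeyhsd(endog, labels, alpha=0.01))  # Tukey-Kramer pairwise tests
```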

80 high sparseness levels that have been reported in previous accounts of sparseness in the visual cortex [7, 8, 9, 10, 11, 12], and which are otherwise consistent with our measurements (Fig. [sent-219, score-1.484]

81 3B, 5), are most likely a side effect of the high selectivity of neurons, or an overestimation due to the effect of anesthesia (Fig. [sent-220, score-0.363]

82 5; with the exception of [8], where sparseness was measured on awake animals), but do not indicate an active optimization of sparse responses (cf. [sent-221, score-1.056]

83 Our measurements of sparseness from neural data are based on multi-unit recording. [sent-223, score-0.63]

84 By collecting spikes from multiple cells, we are in fact reporting a lower bound of the true sparseness values. [sent-224, score-0.595]

85 While a precise measurement of the absolute value of these quantities would require single-unit measurement, our conclusions are based on relative comparisons of sparseness under different conditions, and are thus not affected. [sent-225, score-0.567]

86 Our theoretical predictions were verified with a common sparse coding model [3]. [sent-226, score-0.329]

87 Despite these specific choices, we expect the model results to be general to the entire class of sparse coding models. [sent-228, score-0.329]

88 Alternatively, one could assume a deterministic neural architecture, with a network dynamic that would drive the activity of the units to values that maximize the image probability [3, 30, 31]. [sent-230, score-0.331]

89 Although our analysis found no evidence for active sparsification in the primary visual cortex, ideas derived from and closely related to the sparse coding principle are likely to remain important for our understanding of visual processing. [sent-233, score-0.845]

90 Efficient coding remains a most plausible functional account of coding in more peripheral parts of the sensory pathway, and particularly in the retina, from where raw visual input has to be sent through the bottleneck formed by the optic nerve without significant loss of information [32, 33]. [sent-234, score-0.642]

91 Independent component filters of natural images compared with simple cells in primary visual cortex. [sent-267, score-0.318]

92 Responses of neurons in primary and inferior temporal visual cortices to natural scenes. [sent-291, score-0.402]

93 Sparse coding and decorrelation in primary visual cortex during natural vision. [sent-298, score-0.633]

94 Selectivity and sparseness in the responses of striate complex cells. [sent-315, score-0.7]

95 Heterogeneity in the responses of adjacent neurons to natural stimuli in cat striate cortex. [sent-323, score-0.275]

96 The sparseness of neuronal responses in ferret primary visual cortex. [sent-331, score-0.955]

97 Development of orientation selectivity in ferret visual cortex and effects of deprivation. [sent-399, score-0.455]

98 The contribution of sensory experience to the maturation of orientation selectivity in ferret visual cortex. [sent-407, score-0.44]

99 Figure-ground activity in primary visual cortex is suppressed by anesthesia. [sent-432, score-0.54]

100 Sparse coding via thresholding and local competition in neural circuits. [sent-470, score-0.301]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('sparseness', 0.567), ('anesthesia', 0.286), ('lifetime', 0.235), ('coding', 0.208), ('awake', 0.201), ('visual', 0.174), ('xk', 0.156), ('activity', 0.146), ('sparsi', 0.14), ('cortex', 0.134), ('population', 0.134), ('ring', 0.124), ('sparse', 0.121), ('neurons', 0.111), ('iso', 0.107), ('recurrent', 0.102), ('anova', 0.094), ('urane', 0.089), ('primary', 0.086), ('responses', 0.085), ('active', 0.082), ('units', 0.079), ('rats', 0.078), ('ferrets', 0.078), ('selectivity', 0.077), ('electrophysiological', 0.072), ('age', 0.065), ('neural', 0.063), ('animals', 0.063), ('anesthetized', 0.063), ('spearman', 0.057), ('tr', 0.054), ('decrease', 0.054), ('berkes', 0.054), ('gik', 0.054), ('rjk', 0.054), ('connections', 0.053), ('sensory', 0.052), ('striate', 0.048), ('treves', 0.047), ('concentrations', 0.047), ('recordings', 0.046), ('neuron', 0.045), ('network', 0.043), ('ferret', 0.043), ('measures', 0.043), ('levels', 0.042), ('generative', 0.042), ('development', 0.042), ('amplitude', 0.042), ('bars', 0.042), ('posterior', 0.041), ('hz', 0.037), ('spontaneous', 0.036), ('system', 0.036), ('adulthood', 0.036), ('alert', 0.036), ('disrupting', 0.036), ('fiser', 0.036), ('implanted', 0.036), ('maturation', 0.036), ('overcompleteness', 0.036), ('rolls', 0.036), ('vinje', 0.036), ('weakened', 0.036), ('groups', 0.035), ('actively', 0.035), ('receptive', 0.034), ('evoked', 0.034), ('neuroscience', 0.033), ('stimulus', 0.032), ('natural', 0.031), ('activation', 0.031), ('experience', 0.031), ('competition', 0.03), ('stages', 0.03), ('dependencies', 0.03), ('bins', 0.03), ('animal', 0.03), ('architecture', 0.029), ('patches', 0.029), ('overcomplete', 0.029), ('adapts', 0.029), ('vem', 0.029), ('mat', 0.029), ('intact', 0.029), ('hateren', 0.029), ('suppl', 0.029), ('environment', 0.029), ('increase', 0.029), ('spikes', 0.028), ('studies', 0.028), ('scenes', 0.028), ('neurophysiology', 0.028), ('orientation', 0.027), ('images', 0.027), ('deep', 0.027), ('weights', 0.026), ('group', 0.026), ('ms', 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999997 164 nips-2009-No evidence for active sparsification in the visual cortex


2 0.15986733 163 nips-2009-Neurometric function analysis of population codes

Author: Philipp Berens, Sebastian Gerwinn, Alexander Ecker, Matthias Bethge

Abstract: The relative merits of different population coding schemes have mostly been analyzed in the framework of stimulus reconstruction using Fisher Information. Here, we consider the case of stimulus discrimination in a two alternative forced choice paradigm and compute neurometric functions in terms of the minimal discrimination error and the Jensen-Shannon information to study neural population codes. We first explore the relationship between minimum discrimination error, JensenShannon Information and Fisher Information and show that the discrimination framework is more informative about the coding accuracy than Fisher Information as it defines an error for any pair of possible stimuli. In particular, it includes Fisher Information as a special case. Second, we use the framework to study population codes of angular variables. Specifically, we assess the impact of different noise correlations structures on coding accuracy in long versus short decoding time windows. That is, for long time window we use the common Gaussian noise approximation. To address the case of short time windows we analyze the Ising model with identical noise correlation structure. In this way, we provide a new rigorous framework for assessing the functional consequences of noise correlation structures for the representational accuracy of neural population codes that is in particular applicable to short-time population coding. 1

3 0.15962508 169 nips-2009-Nonlinear Learning using Local Coordinate Coding

Author: Kai Yu, Tong Zhang, Yihong Gong

Abstract: This paper introduces a new method for semi-supervised learning on high dimensional nonlinear manifolds, which includes a phase of unsupervised basis learning and a phase of supervised function learning. The learned bases provide a set of anchor points to form a local coordinate system, such that each data point x on the manifold can be locally approximated by a linear combination of its nearby anchor points, and the linear weights become its local coordinate coding. We show that a high dimensional nonlinear function can be approximated by a global linear function with respect to this coding scheme, and the approximation quality is ensured by the locality of such coding. The method turns a difficult nonlinear learning problem into a simple global linear learning problem, which overcomes some drawbacks of traditional local learning methods. 1

4 0.15645255 162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling

Author: Lei Shi, Thomas L. Griffiths

Abstract: The goal of perception is to infer the hidden states in the hierarchical process by which sensory data are generated. Human behavior is consistent with the optimal statistical solution to this problem in many tasks, including cue combination and orientation detection. Understanding the neural mechanisms underlying this behavior is of particular importance, since probabilistic computations are notoriously challenging. Here we propose a simple mechanism for Bayesian inference which involves averaging over a few feature detection neurons which fire at a rate determined by their similarity to a sensory stimulus. This mechanism is based on a Monte Carlo method known as importance sampling, commonly used in computer science and statistics. Moreover, a simple extension to recursive importance sampling can be used to perform hierarchical Bayesian inference. We identify a scheme for implementing importance sampling with spiking neurons, and show that this scheme can account for human behavior in cue combination and the oblique effect. 1

5 0.13411361 99 nips-2009-Functional network reorganization in motor cortex can be explained by reward-modulated Hebbian learning

Author: Steven Chase, Andrew Schwartz, Wolfgang Maass, Robert A. Legenstein

Abstract: The control of neuroprosthetic devices from the activity of motor cortex neurons benefits from learning effects where the function of these neurons is adapted to the control task. It was recently shown that tuning properties of neurons in monkey motor cortex are adapted selectively in order to compensate for an erroneous interpretation of their activity. In particular, it was shown that the tuning curves of those neurons whose preferred directions had been misinterpreted changed more than those of other neurons. In this article, we show that the experimentally observed self-tuning properties of the system can be explained on the basis of a simple learning rule. This learning rule utilizes neuronal noise for exploration and performs Hebbian weight updates that are modulated by a global reward signal. In contrast to most previously proposed reward-modulated Hebbian learning rules, this rule does not require extraneous knowledge about what is noise and what is signal. The learning rule is able to optimize the performance of the model system within biologically realistic periods of time and under high noise levels. When the neuronal noise is fitted to experimental data, the model produces learning effects similar to those found in monkey experiments.

6 0.12322239 19 nips-2009-A joint maximum-entropy model for binary neural population patterns and continuous signals

7 0.12168708 52 nips-2009-Code-specific policy gradient rules for spiking neurons

8 0.11723342 224 nips-2009-Sparse and Locally Constant Gaussian Graphical Models

9 0.11391543 200 nips-2009-Reconstruction of Sparse Circuits Using Multi-neuronal Excitation (RESCUME)

10 0.10158526 38 nips-2009-Augmenting Feature-driven fMRI Analyses: Semi-supervised learning and resting state activity

11 0.098589137 13 nips-2009-A Neural Implementation of the Kalman Filter

12 0.094958022 183 nips-2009-Optimal context separation of spiking haptic signals by second-order somatosensory neurons

13 0.091532014 151 nips-2009-Measuring Invariances in Deep Networks

14 0.089023657 43 nips-2009-Bayesian estimation of orientation preference maps

15 0.087921381 219 nips-2009-Slow, Decorrelated Features for Pretraining Complex Cell-like Networks

16 0.086247228 231 nips-2009-Statistical Models of Linear and Nonlinear Contextual Interactions in Early Visual Processing

17 0.084628806 83 nips-2009-Estimating image bases for visual image reconstruction from human brain activity

18 0.082864568 88 nips-2009-Extending Phase Mechanism to Differential Motion Opponency for Motion Pop-out

19 0.079625838 104 nips-2009-Group Sparse Coding

20 0.073813662 6 nips-2009-A Biologically Plausible Model for Rapid Natural Scene Identification


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.2), (1, -0.204), (2, 0.166), (3, 0.152), (4, 0.044), (5, 0.023), (6, -0.022), (7, 0.0), (8, 0.051), (9, 0.046), (10, -0.011), (11, -0.007), (12, -0.045), (13, 0.053), (14, 0.008), (15, -0.028), (16, 0.002), (17, 0.025), (18, -0.045), (19, -0.004), (20, -0.078), (21, 0.025), (22, 0.07), (23, -0.026), (24, 0.075), (25, -0.009), (26, 0.017), (27, 0.025), (28, -0.086), (29, 0.023), (30, 0.166), (31, -0.124), (32, -0.068), (33, 0.108), (34, -0.084), (35, -0.027), (36, -0.028), (37, -0.084), (38, -0.168), (39, 0.035), (40, 0.024), (41, 0.03), (42, -0.079), (43, -0.084), (44, -0.005), (45, -0.087), (46, 0.071), (47, -0.114), (48, -0.015), (49, 0.015)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96744049 164 nips-2009-No evidence for active sparsification in the visual cortex


2 0.80748564 163 nips-2009-Neurometric function analysis of population codes


3 0.6600669 169 nips-2009-Nonlinear Learning using Local Coordinate Coding


4 0.62445056 162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling


5 0.58744687 19 nips-2009-A joint maximum-entropy model for binary neural population patterns and continuous signals

Author: Sebastian Gerwinn, Philipp Berens, Matthias Bethge

Abstract: Second-order maximum-entropy models have recently gained much interest for describing the statistics of binary spike trains. Here, we extend this approach to take continuous stimuli into account as well. By constraining the joint secondorder statistics, we obtain a joint Gaussian-Boltzmann distribution of continuous stimuli and binary neural firing patterns, for which we also compute marginal and conditional distributions. This model has the same computational complexity as pure binary models and fitting it to data is a convex problem. We show that the model can be seen as an extension to the classical spike-triggered average/covariance analysis and can be used as a non-linear method for extracting features which a neural population is sensitive to. Further, by calculating the posterior distribution of stimuli given an observed neural response, the model can be used to decode stimuli and yields a natural spike-train metric. Therefore, extending the framework of maximum-entropy models to continuous variables allows us to gain novel insights into the relationship between the firing patterns of neural ensembles and the stimuli they are processing. 1

6 0.54787266 183 nips-2009-Optimal context separation of spiking haptic signals by second-order somatosensory neurons

7 0.52766538 231 nips-2009-Statistical Models of Linear and Nonlinear Contextual Interactions in Early Visual Processing

8 0.51914173 99 nips-2009-Functional network reorganization in motor cortex can be explained by reward-modulated Hebbian learning

9 0.46894073 13 nips-2009-A Neural Implementation of the Kalman Filter

10 0.44609848 188 nips-2009-Perceptual Multistability as Markov Chain Monte Carlo Inference

11 0.43546447 6 nips-2009-A Biologically Plausible Model for Rapid Natural Scene Identification

12 0.43338215 52 nips-2009-Code-specific policy gradient rules for spiking neurons

13 0.4327184 138 nips-2009-Learning with Compressible Priors

14 0.41505715 224 nips-2009-Sparse and Locally Constant Gaussian Graphical Models

15 0.40674165 43 nips-2009-Bayesian estimation of orientation preference maps

16 0.3996928 200 nips-2009-Reconstruction of Sparse Circuits Using Multi-neuronal Excitation (RESCUME)

17 0.39787313 62 nips-2009-Correlation Coefficients are Insufficient for Analyzing Spike Count Dependencies

18 0.39043516 104 nips-2009-Group Sparse Coding

19 0.38123864 38 nips-2009-Augmenting Feature-driven fMRI Analyses: Semi-supervised learning and resting state activity

20 0.38033614 219 nips-2009-Slow, Decorrelated Features for Pretraining Complex Cell-like Networks


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.022), (24, 0.036), (25, 0.066), (35, 0.051), (36, 0.054), (39, 0.046), (58, 0.112), (61, 0.019), (62, 0.317), (71, 0.04), (81, 0.038), (86, 0.091), (91, 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.82469124 164 nips-2009-No evidence for active sparsification in the visual cortex


2 0.77299058 13 nips-2009-A Neural Implementation of the Kalman Filter

Author: Robert Wilson, Leif Finkel

Abstract: Recent experimental evidence suggests that the brain is capable of approximating Bayesian inference in the face of noisy input stimuli. Despite this progress, the neural underpinnings of this computation are still poorly understood. In this paper we focus on the Bayesian filtering of stochastic time series and introduce a novel neural network, derived from a line attractor architecture, whose dynamics map directly onto those of the Kalman filter in the limit of small prediction error. When the prediction error is large we show that the network responds robustly to changepoints in a way that is qualitatively compatible with the optimal Bayesian model. The model suggests ways in which probability distributions are encoded in the brain and makes a number of testable experimental predictions. 1

3 0.63466918 41 nips-2009-Bayesian Source Localization with the Multivariate Laplace Prior

Author: Marcel V. Gerven, Botond Cseke, Robert Oostenveld, Tom Heskes

Abstract: We introduce a novel multivariate Laplace (MVL) distribution as a sparsity promoting prior for Bayesian source localization that allows the specification of constraints between and within sources. We represent the MVL distribution as a scale mixture that induces a coupling between source variances instead of their means. Approximation of the posterior marginals using expectation propagation is shown to be very efficient due to properties of the scale mixture representation. The computational bottleneck amounts to computing the diagonal elements of a sparse matrix inverse. Our approach is illustrated using a mismatch negativity paradigm for which MEG data and a structural MRI have been acquired. We show that spatial coupling leads to sources which are active over larger cortical areas as compared with an uncoupled prior. 1

4 0.54221588 99 nips-2009-Functional network reorganization in motor cortex can be explained by reward-modulated Hebbian learning


5 0.53530914 19 nips-2009-A joint maximum-entropy model for binary neural population patterns and continuous signals


6 0.52527905 162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling

7 0.52355605 62 nips-2009-Correlation Coefficients are Insufficient for Analyzing Spike Count Dependencies

8 0.51330835 163 nips-2009-Neurometric function analysis of population codes

9 0.50442153 210 nips-2009-STDP enables spiking neurons to detect hidden causes of their inputs

10 0.49703658 9 nips-2009-A Game-Theoretic Approach to Hypergraph Clustering

11 0.49504253 155 nips-2009-Modelling Relational Data using Bayesian Clustered Tensor Factorization

12 0.49255237 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

13 0.49213082 104 nips-2009-Group Sparse Coding

14 0.49146229 254 nips-2009-Variational Gaussian-process factor analysis for modeling spatio-temporal data

15 0.48460871 145 nips-2009-Manifold Embeddings for Model-Based Reinforcement Learning under Partial Observability

16 0.48457408 70 nips-2009-Discriminative Network Models of Schizophrenia

17 0.4839958 3 nips-2009-AUC optimization and the two-sample problem

18 0.48350972 148 nips-2009-Matrix Completion from Power-Law Distributed Samples

19 0.48321539 142 nips-2009-Locality-sensitive binary codes from shift-invariant kernels

20 0.4830955 137 nips-2009-Learning transport operators for image manifolds