nips nips2000 nips2000-147 knowledge-graph by maker-knowledge-mining

147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization


Source: pdf

Author: Ranit Aharonov-Barki, Isaac Meilijson, Eytan Ruppin

Abstract: We introduce a novel algorithm, termed PPA (Performance Prediction Algorithm), that quantitatively measures the contributions of elements of a neural system to the tasks it performs. The algorithm identifies the neurons or areas which participate in a cognitive or behavioral task, given data about performance decrease in a small set of lesions. It also allows the accurate prediction of performances due to multi-element lesions. The effectiveness of the new algorithm is demonstrated in two models of recurrent neural networks with complex interactions among the elements. The algorithm is scalable and applicable to the analysis of large neural networks. Given the recent advances in reversible inactivation techniques, it has the potential to significantly contribute to the understanding of the organization of biological nervous systems, and to shed light on the long-lasting debate about local versus distributed computation in the brain.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: We introduce a novel algorithm, termed PPA (Performance Prediction Algorithm), that quantitatively measures the contributions of elements of a neural system to the tasks it performs. [sent-12, score-0.489]

2 The algorithm identifies the neurons or areas which participate in a cognitive or behavioral task, given data about performance decrease in a small set of lesions. [sent-13, score-0.622]

3 The effectiveness of the new algorithm is demonstrated in two models of recurrent neural networks with complex interactions among the elements. [sent-15, score-0.316]

4 Given the recent advances in reversible inactivation techniques, it has the potential to significantly contribute to the understanding of the organization of biological nervous systems, and to shed light on the long-lasting debate about local versus distributed computation in the brain. [sent-17, score-0.475]

5 Each task recruits some elements of the system (be it single neurons or cortical areas), and often the same element participates in several tasks. [sent-19, score-0.472]

6 This poses a difficult challenge when one attempts to identify the roles of the network elements, and to assess their contributions to the different tasks. [sent-20, score-0.278]

7 These classical methods suffer from two fundamental flaws: First, they do not take into account the probable case that there are complex interactions among elements in the system. [sent-22, score-0.219]

8 For example, if two neurons have a high degree of redundancy, lesioning either one alone will not reveal its influence. [sent-25, score-0.532]

9 Moreover, the very nature of the contribution of a neural element is quite elusive and ill-defined. [sent-27, score-0.321]

10 In this paper we propose both a rigorous, operative definition for the neuron's contribution and a novel algorithm to measure it. [sent-28, score-0.424]

11 Identifying the contributions of elements of a system to varying tasks is often used as a basis for claims concerning the degree of the distribution of computation in that system. [sent-29, score-0.383]

12 The distributed representation approach hypothesizes that computation emerges from the interaction between many simple elements, and is supported by evidence that many elements are important in a given task [2, 3, 4]. [sent-32, score-0.331]

13 The local representation hypothesis suggests that activity in single neurons represents specific concepts (the grandmother cell notion) or performs specific computations (see [5]). [sent-33, score-0.42]

14 This question of distributed versus localized computation in nervous systems is fundamental and has attracted ample attention. [sent-34, score-0.248]

15 The ability of the new algorithm suggested here to quantify the contribution of elements to tasks allows us to deduce both the distribution of the different tasks in the network and the degree of specialization of each neuron. [sent-36, score-0.786]

16 We applied the Performance Prediction Algorithm (PPA) to two models of recurrent neural networks: The first model is a network hand-crafted to exhibit redundancy, feedback and modulatory effects. [sent-37, score-0.366]

17 The second consists of evolved neurocontrollers for behaving autonomous agents [6]. [sent-38, score-0.431]

18 The fact that these are recurrent networks, and not simple feed-forward ones, suggests that the algorithm can be used in many classes of neural systems which pose a difficult challenge for existing analysis tools. [sent-40, score-0.311]

19 It can thus make a major contribution to studying the organization of tasks in biological nervous systems as well as to the long-debated issue of local versus distributed computation in the brain. [sent-42, score-0.77]

20 1 The Contribution Matrix: Assume a network (either natural or artificial) of N neurons performing a set of P different functional tasks. [sent-44, score-0.352]

21 For any given task, we would like to find the contribution vector c = (c1, ..., cN), [sent-45, score-0.288]

22 where ci is the contribution of neuron i to the task in question. [sent-48, score-0.602]

23 We suggest a rigorous and operative definition for this contribution vector, and propose an algorithm for its computation. [sent-49, score-0.42]

24 Suppose a set of neurons in the network is lesioned and the network then performs the specified task. [sent-50, score-0.588]

25 The result of this experiment is described by the pair <m, Pm>, where m is an incidence vector of length N, such that m(i) = 0 if neuron i was lesioned and 1 if it was intact. [sent-51, score-0.38]

26 Pm is the performance of the network divided by that of the baseline case, a fully intact network. [sent-52, score-0.187]

27 [Equation (1): c and its adjoint function f are defined as the minimizers, over the set of lesion configurations {m}, of the mean square error between the predicted performance f(m · c) and the measured performance Pm.] Footnote 1: It is assumed that as more important elements are lesioned (m · c decreases), performance decreases. [sent-55, score-0.249]

28 This c will be taken as the contribution vector for the task tested, and the corresponding f will be called its adjoint performance prediction function. [sent-57, score-0.602]

29 Given a configuration m of lesioned and intact neurons, the predicted performance of the network is the sum of the contribution values of the intact neurons (m · c), passed through the adjoint function f. [sent-58, score-1.094]

30 The contribution vector c accompanied by f is optimal in the sense that this predicted value minimizes the Mean Square Error relative to the real performance, over all possible lesion configurations. [sent-60, score-0.48]

31 The computation of the contribution vectors is done separately for each task, using some small subset of all the 2^N possible lesioning configurations. [sent-61, score-0.509]
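As a concrete illustration of the definitions above, here is a minimal Python sketch of the predicted performance f(m · c) and the mean square error it is chosen to minimize; the function names and the use of NumPy are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def predicted_performance(f, c, m):
    """Predicted performance of a lesioned network: the adjoint function f
    applied to the summed contributions of the intact neurons, f(m . c)."""
    return f(np.dot(m, c))

def prediction_mse(f, c, lesion_masks, measured_perf):
    """Mean square error between predicted and measured performance over a
    set of lesion experiments <m, Pm>, the quantity the PPA minimizes."""
    preds = np.array([predicted_performance(f, c, m) for m in lesion_masks])
    return float(np.mean((preds - np.asarray(measured_perf)) ** 2))
```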

32 The Performance Prediction Algorithm (PPA): • Step 1: Choose an initial normalized contribution vector c for the task. [sent-63, score-0.288]

33 The output of the algorithm is thus a contribution value for every neuron, accompanied by a function, such that given any configuration of lesioned neurons, one can predict with high confidence the performance of the damaged network. [sent-75, score-0.64]

34 Thus, the algorithm achieves two important goals: a) It identifies automatically the neurons or areas which participate in a cognitive or behavioral task. [sent-76, score-0.543]

35 b) The function f predicts the result of multiple lesions, allowing for nonlinear combinations of the effects of single lesions (see footnote 2). [sent-77, score-0.234]
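The summary spells out only the initialization step of the PPA, so the following alternating-fit sketch is a hypothetical reconstruction: it repeatedly re-estimates f (here a fitted polynomial standing in for the perceptron-based approximation mentioned in footnote 2) and takes gradient steps on c against the mean square error. The learning rate, polynomial degree and normalization choice are all assumptions.

```python
import numpy as np

def fit_ppa(lesion_masks, measured_perf, n_iter=200, lr=0.05, deg=3):
    """Hypothetical alternating-minimization sketch of the PPA; only the
    initialization (Step 1) is taken from the text, the rest is assumed."""
    masks = np.asarray(lesion_masks, dtype=float)
    perf = np.asarray(measured_perf, dtype=float)
    n_neurons = masks.shape[1]
    c = np.full(n_neurons, 1.0 / n_neurons)   # Step 1: normalized initial c
    coeffs = np.array([1.0, 0.0])             # f starts out as the identity
    for _ in range(n_iter):
        x = masks @ c
        # Refit f on the current projections (stand-in for the simple
        # perceptron-based function approximation of footnote 2).
        coeffs = np.polyfit(x, perf, deg)
        pred = np.polyval(coeffs, x)
        dfdx = np.polyval(np.polyder(coeffs), x)
        # Gradient of the mean square error with respect to c, f held fixed.
        grad = 2.0 * masks.T @ ((pred - perf) * dfdx) / len(perf)
        c -= lr * grad
        c /= np.abs(c).sum()                  # keep c normalized
    return c, coeffs
```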

36 The application of the PPA to all tasks defines a contribution matrix C, whose kth column (k = 1, ..., P) [sent-78, score-0.371]

37 is the contribution vector computed using the above algorithm for task k. [sent-79, score-0.412]

38 2 Localization and Specialization: Introducing the contribution matrix allows us to approach issues relating to the distribution of computation in a network in a quantitative manner. [sent-83, score-0.485]

39 Here we suggest quantitative measures for localization of function and specialization of neurons. [sent-84, score-0.361]

40 If a task is completely distributed in the network, the contributions of all neurons to that task should be identical (full equipotentiality [2]). [sent-85, score-0.568]

41 Thus, we define the localization Lk of task k as a deviation from equipotentiality. [sent-86, score-0.209]

42 Formally, Lk is the standard deviation of column k of the contribution matrix divided by the maximal possible standard deviation. [sent-87, score-0.288]

43 Equation (2): Lk = std(C*k) / sqrt((N - 1) / N^2). Footnote 2: The computation of f, involving a simple perceptron-based function approximation, implies the immediate applicability of the PPA for large networks, given well-behaved performance prediction functions. [sent-88, score-0.23]

44 The performance of the network is taken to be the activity of neuron 10. [sent-93, score-0.546]

45 If both neurons 2 and 3 are switched on, they activate a modulating effect on neuron 8, which switches its activation function from the inactive case to the active case. [sent-97, score-0.527]

46 Note that Lk is in the range [0, 1], where Lk = 0 indicates full distribution and Lk = 1 indicates localization of the task to one neuron alone. [sent-98, score-0.449]

47 The degree of localization of function in the whole network, L, is the simple average of Lk over all tasks. [sent-99, score-0.184]

48 Similarly, if neuron i is highly specialized for a certain task, Ci* (row i of the contribution matrix) will deviate strongly from a uniform distribution, and thus we define Si, the specialization of neuron i, as in equation (3). 3 Results: We tested the proposed index on two types of recurrent networks. [sent-100, score-0.728]
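A short sketch of how these indices could be computed from a contribution matrix C with one row per neuron and one column per task; the column-wise Lk follows equation (2), while the row-wise Si is only an assumed analogue, since equation (3) is not reproduced in this summary.

```python
import numpy as np

def localization(C):
    """Lk per task (column of C), equation (2): the std of the column divided
    by the maximal possible std sqrt((N - 1) / N^2), so Lk lies in [0, 1]."""
    N = C.shape[0]
    return C.std(axis=0) / (np.sqrt(N - 1) / N)

def specialization(C):
    """Assumed analogue of equation (3): deviation of each neuron's row from
    a uniform distribution of contributions across the P tasks."""
    P = C.shape[1]
    return C.std(axis=1) / (np.sqrt(P - 1) / P)

# The network-wide degree of localization L is the simple average of the Lk:
# L = localization(C).mean()
```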

49 We chose to study recurrent networks because they pose an especially difficult challenge, as the output units also participate in the computation, and in general complex interactions among elements may arise (see footnote 3). [sent-101, score-0.487]

50 We begin with a hand-crafted example containing redundancy, feedback and modulation, and continue with networks that emerge from an evolutionary process. [sent-102, score-0.267]

51 The evolved networks are not hand-crafted but rather their structure emerges as an outcome of the selection pressure to successfully perform the tasks defined. [sent-103, score-0.395]

52 3.1 Hand-Crafted Example: Figure 1 depicts a neural network we designed to include potential pitfalls for analysis procedures aimed at identifying important neurons of the system (see details in the caption). [sent-106, score-0.43]

53 Figure 2(a) shows the contribution values computed by three methods applied to this network. [sent-107, score-0.288]

54 The first estimation was computed as the correlation between the activity of each neuron and the performance of the network. Footnote 3: In order to single out the role of output units in the computation, lesioning was performed by decoupling their activity from the rest of the network and not by knocking them out completely. [sent-108, score-0.617]

55 Figure 2: Results of the PPA: a) Contribution values obtained using three methods: the correlation of activity to performance, single neuron lesions, and the PPA. [sent-123, score-0.45]

56 b) Predicted versus actual performance using c and its adjoint performance prediction function f obtained by the PPA. [sent-124, score-0.414]

57 The second estimation was computed as the decrease in performance due to lesioning of single neurons. [sent-128, score-0.292]

58 Note that, as expected, the activity correlation method assigns a high contribution value to neuron 9, even though it actually has no significance in determining the performance. [sent-130, score-0.765]

59 Single lesions fail to detect the significance of neurons involved in redundant interactions (neurons 4 - 6). [sent-131, score-0.588]

60 The PPA successfully identifies the underlying importance of all neurons in the network, even the subtle significance of the feedback from neuron 10. [sent-132, score-0.771]

61 We used a small training set (64 out of the 2^10 possible configurations) containing lesions of either small (up to 20% chance for each neuron to be lesioned) or large (more than 90% chance of lesioning) degree. [sent-133, score-0.657]

62 As opposed to the two other methods, the PPA not only identifies and quantifies the significance of elements in the network, but also allows for the prediction of performances from multi-element lesions, even if they were absent from the training set. [sent-135, score-0.46]

63 The predicted performance following a given configuration of lesioned neurons is given by f(m · c). [sent-136, score-0.59]

64 Figure 2(b) depicts the predicted versus actual performances on a test set containing 230 configurations of varying degrees (0 - 100% chance of lesioning). [sent-139, score-0.494]
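One possible way to generate such lesion configurations and to score predicted versus actual performance is sketched below; the helper names and the random-mask sampling scheme are assumptions made for illustration.

```python
import numpy as np

def sample_lesion_masks(n_neurons, n_configs, lesion_prob, rng=None):
    """Incidence vectors m: 0 marks a lesioned neuron, 1 an intact one;
    lesion_prob is the per-neuron chance of being lesioned."""
    rng = np.random.default_rng() if rng is None else rng
    return (rng.random((n_configs, n_neurons)) >= lesion_prob).astype(float)

def prediction_correlation(f, c, masks, measured_perf):
    """Correlation between predicted f(m . c) and measured performance."""
    preds = np.array([f(np.dot(m, c)) for m in masks])
    return float(np.corrcoef(preds, measured_perf)[0, 1])

# e.g. a training set mixing small (20%) and large (90%) lesion degrees:
# masks = np.vstack([sample_lesion_masks(10, 32, 0.2),
#                    sample_lesion_masks(10, 32, 0.9)])
```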

65 In principle, the other methods offer no straightforward way to predict the performance, as is evident from the non-linear form of the performance prediction error (see insert of figure 2(b)). [sent-143, score-0.312]

66 The shape of the performance prediction function depends on the organization of the network, and can vary widely between different models (results not shown here). [sent-144, score-0.271]

67 3.2 Evolved Neurocontrollers: Using evolutionary simulations we developed autonomous agents controlled by fully recurrent artificial neural networks. [sent-146, score-0.272]

68 High performance levels were attained by agents performing simple life-like tasks of foraging and navigation. [sent-147, score-0.207]

69 Using various analysis tools we found a common structure of a command neuron switching the dynamics of the network. Footnote 4: Neuron 10 was omitted in this method of analysis since it is by definition in full correlation with the performance. [sent-148, score-0.536]

70 Although the command neuron mechanism was a robust phenomenon, the evolved networks did differ in the roles performed by other neurons. [sent-150, score-0.918]

71 When only limited sensory information was available, the command neuron relied on feedback from the motor units. [sent-151, score-0.617]

72 In other cases no such feedback was needed, but other neurons performed some auxiliary computation on the sensory input. [sent-152, score-0.397]

73 We applied the PPA to the evolved neurocontrollers in order to test its capabilities in a system of which we have previously obtained a qualitative understanding, yet which is still relatively complex. [sent-153, score-0.528]

74 Figure 3 depicts the contribution values of the neurons of three successful evolved neurocontrollers obtained using the PPA. [sent-154, score-0.975]

75 Figure 3(a) corresponds to a neurocontroller of an agent equipped with a position sensor (see [6] for details), which does not require any feedback from the motor units. [sent-155, score-0.186]

76 As can be seen, these motor units indeed receive contribution values near zero. [sent-156, score-0.374]

77 Figures 3(b) and 3(c) correspond to neurocontrollers that strongly relied on motor feedback for their memory mechanism to function properly. [sent-157, score-0.403]

78 In all three cases the command neuron receives high values, as expected. [sent-159, score-0.394]

79 The performance prediction capabilities are extremely high, giving correlations of 0. [sent-160, score-0.245]

80 9967 for the three neurocontrollers, on a test set containing 100 lesion configurations of mixed degrees (0 - 100% chance of lesioning). [sent-163, score-0.302]

81 We also obtained the degree of localization of each network, as explained in section 2. [sent-164, score-0.184]

82 These values are in good agreement with the qualitative descriptions of the networks we have obtained using classical neuroscience tools [6]. [sent-170, score-0.187]

83 Figure 3: Contribution values of neurons in three evolved neurocontrollers (x-axis: neuron number): Neurons 1-4 are motor neurons. [sent-178, score-0.548]

84 CN is the command neuron that emerged spontaneously in all evolutionary runs. [sent-179, score-0.538]

85 4 Discussion: We have introduced a novel algorithm termed PPA (Performance Prediction Algorithm) to measure the contribution of neurons to the tasks that a neural network performs. [sent-180, score-0.881]

86 These contributions allowed us to quantitatively define an index of the degree of localization of function in the network, as well as of the task-specialization of the neurons. [sent-181, score-0.318]

87 The algorithm uses data from performance measures of the network when different sets of neurons are lesioned. [sent-182, score-0.536]

88 It is predicted that larger training sets containing different degrees of damage will be needed to achieve good results for systems with higher redundancy and complex interactions. [sent-187, score-0.323]

89 We are currently studying the nature of the training set needed to achieve satisfactory results, as this in itself may reveal information about the types of interactions between elements in the system. [sent-188, score-0.332]

90 We have applied the algorithm to two types of artificial recurrent neural networks, and demonstrated that its results agree with our qualitative a priori notions and with qualitative classical analysis methods. [sent-189, score-0.428]

91 We have shown that estimation of the importance of system elements using simple activity measures and single lesions may be misleading. [sent-190, score-0.361]

92 Moreover, it serves as a powerful tool for predicting damage caused by multiple lesions, a feat that is difficult even when one can accurately estimate the contributions of single elements. [sent-192, score-0.203]

93 The shape of the performance prediction function itself may also reveal important features of the organization of the network. [sent-193, score-0.318]

94 The prediction capabilities of the algorithm can be used for regularization of recurrent networks. [sent-196, score-0.351]

95 Recurrent networks (e.g., Elman-like networks) pose a difficult problem in this respect, as it is hard to determine which elements should be pruned. [sent-200, score-0.188]

96 As the PPA can be applied at the level of single synapses as well as single neurons, it suggests a natural algorithm for effective regularization: pruning the elements in order of their contribution values. [sent-201, score-0.55]
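A hedged sketch of that pruning idea: lesion elements in increasing order of contribution while a user-supplied performance check stays above a tolerance; the stopping criterion and helper names are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def prune_by_contribution(c, evaluate_performance, min_performance=0.95):
    """Greedy pruning sketch: lesion elements (neurons or synapses) in order
    of increasing contribution, keeping the measured performance above a
    chosen tolerance; evaluate_performance(mask) is a user-supplied check."""
    mask = np.ones(len(c))                    # 1 = intact, 0 = pruned
    for i in np.argsort(c):                   # least-contributing first
        trial = mask.copy()
        trial[i] = 0.0
        if evaluate_performance(trial) >= min_performance:
            mask = trial
    return mask
```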

97 These methods (reversible inactivation techniques) alleviate many of the problematic aspects of the classical lesion technique (ablation), enabling the acquisition of reliable data from multiple lesions of different configurations (for a review see [10]). [sent-205, score-0.395]

98 The promising results achieved with artificial networks and the potential scalability of the PPA lead us to believe that it will prove extremely useful in obtaining insights into the organization of natural nervous systems. [sent-208, score-0.265]

99 Neuronal activity during different behaviors in Aplysia: A distributed organization? [sent-216, score-0.194]

100 Emergence of memory-driven command neurons in evolved artificial agents. [sent-246, score-0.65]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ppa', 0.386), ('contribution', 0.288), ('neurons', 0.256), ('neuron', 0.24), ('evolved', 0.206), ('lesions', 0.201), ('lesioning', 0.18), ('neurocontrollers', 0.18), ('command', 0.154), ('lesioned', 0.14), ('localization', 0.135), ('activity', 0.131), ('specialization', 0.111), ('prediction', 0.11), ('elements', 0.109), ('contributions', 0.101), ('recurrent', 0.1), ('feedback', 0.1), ('network', 0.096), ('lk', 0.094), ('nervous', 0.087), ('motor', 0.086), ('qualitative', 0.086), ('tasks', 0.083), ('identifies', 0.082), ('organization', 0.082), ('lesion', 0.08), ('performance', 0.079), ('configurations', 0.075), ('task', 0.074), ('predicted', 0.072), ('interactions', 0.071), ('participate', 0.066), ('chance', 0.066), ('distributed', 0.063), ('redundancy', 0.063), ('pm', 0.063), ('networks', 0.062), ('intact', 0.06), ('reversible', 0.06), ('performances', 0.06), ('evolutionary', 0.06), ('significance', 0.06), ('quantitative', 0.06), ('versus', 0.057), ('capabilities', 0.056), ('cn', 0.056), ('measures', 0.055), ('adjoint', 0.051), ('beker', 0.051), ('inactivation', 0.051), ('operative', 0.051), ('ranit', 0.051), ('ruppin', 0.051), ('scalable', 0.051), ('algorithm', 0.05), ('challenge', 0.049), ('degree', 0.049), ('reveal', 0.047), ('behavioral', 0.047), ('pose', 0.047), ('correlation', 0.046), ('depicts', 0.045), ('agents', 0.045), ('containing', 0.045), ('emerges', 0.044), ('insert', 0.044), ('chicago', 0.044), ('spontaneously', 0.044), ('configuration', 0.043), ('areas', 0.042), ('computation', 0.041), ('termed', 0.04), ('emerged', 0.04), ('accompanied', 0.04), ('classical', 0.039), ('perceptron', 0.039), ('training', 0.039), ('actual', 0.038), ('pruning', 0.037), ('deviate', 0.037), ('damage', 0.037), ('modulatory', 0.037), ('relied', 0.037), ('degrees', 0.036), ('regularization', 0.035), ('novel', 0.035), ('studying', 0.035), ('biological', 0.034), ('artificial', 0.034), ('quantitatively', 0.033), ('single', 0.033), ('neural', 0.033), ('importance', 0.033), ('difficult', 0.032), ('rigorous', 0.031), ('baseline', 0.031), ('assessing', 0.031), ('activation', 0.031), ('needed', 0.031)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999994 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization

Author: Ranit Aharonov-Barki, Isaac Meilijson, Eytan Ruppin

Abstract: We introduce a novel algorithm, termed PPA (Performance Prediction Algorithm), that quantitatively measures the contributions of elements of a neural system to the tasks it performs. The algorithm identifies the neurons or areas which participate in a cognitive or behavioral task, given data about performance decrease in a small set of lesions. It also allows the accurate prediction of performances due to multi-element lesions. The effectiveness of the new algorithm is demonstrated in two models of recurrent neural networks with complex interactions among the elements. The algorithm is scalable and applicable to the analysis of large neural networks. Given the recent advances in reversible inactivation techniques, it has the potential to significantly contribute to the understanding of the organization of biological nervous systems, and to shed light on the long-lasting debate about local versus distributed computation in the brain.

2 0.12953074 81 nips-2000-Learning Winner-take-all Competition Between Groups of Neurons in Lateral Inhibitory Networks

Author: Xiaohui Xie, Richard H. R. Hahnloser, H. Sebastian Seung

Abstract: It has long been known that lateral inhibition in neural networks can lead to a winner-take-all competition, so that only a single neuron is active at a steady state. Here we show how to organize lateral inhibition so that groups of neurons compete to be active. Given a collection of potentially overlapping groups, the inhibitory connectivity is set by a formula that can be interpreted as arising from a simple learning rule. Our analysis demonstrates that such inhibition generally results in winner-take-all competition between the given groups, with the exception of some degenerate cases. In a broader context, the network serves as a particular illustration of the general distinction between permitted and forbidden sets, which was introduced recently. From this viewpoint, the computational function of our network is to store and retrieve memories as permitted sets of coactive neurons. In traditional winner-take-all networks, lateral inhibition is used to enforce a localized, or

3 0.11259821 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron

Author: Shih-Chii Liu, Bradley A. Minch

Abstract: In this work, we explore homeostasis in a silicon integrate-and-fire neuron. The neuron adapts its firing rate over long time periods on the order of seconds or minutes so that it returns to its spontaneous firing rate after a lasting perturbation. Homeostasis is implemented via two schemes. One scheme looks at the presynaptic activity and adapts the synaptic weight depending on the presynaptic spiking rate. The second scheme adapts the synaptic

4 0.10028227 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account

Author: Gal Chechik, Naftali Tishby

Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental prediction as to the dependency of the learning rule on neuronal biophysical parameters 1

5 0.099316709 100 nips-2000-Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks

Author: Richard H. R. Hahnloser, H. Sebastian Seung

Abstract: Ascribing computational principles to neural feedback circuits is an important problem in theoretical neuroscience. We study symmetric threshold-linear networks and derive stability results that go beyond the insights that can be gained from Lyapunov theory or energy functions. By applying linear analysis to subnetworks composed of coactive neurons, we determine the stability of potential steady states. We find that stability depends on two types of eigenmodes. One type determines global stability and the other type determines whether or not multistability is possible. We can prove the equivalence of our stability criteria with criteria taken from quadratic programming. Also, we show that there are permitted sets of neurons that can be coactive at a steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we can provide a formulation of longterm memory that is more general than the traditional perspective of fixed point attractor networks. A Lyapunov-function can be used to prove that a given set of differential equations is convergent. For example, if a neural network possesses a Lyapunov-function, then for almost any initial condition, the outputs of the neurons converge to a stable steady state. In the past, this stability-property was used to construct attractor networks that associatively recall memorized patterns. Lyapunov theory applies mainly to symmetric networks in which neurons have monotonic activation functions [1, 2]. Here we show that the restriction of activation functions to threshold-linear ones is not a mere limitation, but can yield new insights into the computational behavior of recurrent networks (for completeness, see also [3]). We present three main theorems about the neural responses to constant inputs. The first theorem provides necessary and sufficient conditions on the synaptic weight matrix for the existence of a globally asymptotically stable set of fixed points. These conditions can be expressed in terms of copositivity, a concept from quadratic programming and linear complementarity theory. Alternatively, they can be expressed in terms of certain eigenvalues and eigenvectors of submatrices of the synaptic weight matrix, making a connection to linear systems theory. The theorem guarantees that the network will produce a steady state response to any constant input. We regard this response as the computational output of the network, and its characterization is the topic of the second and third theorems. In the second theorem, we introduce the idea of permitted and forbidden sets. Under certain conditions on the synaptic weight matrix, we show that there exist sets of neurons that are

6 0.096946537 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics

7 0.095745109 24 nips-2000-An Information Maximization Approach to Overcomplete and Recurrent Representations

8 0.094812512 146 nips-2000-What Can a Single Neuron Compute?

9 0.08139047 102 nips-2000-Position Variance, Recurrence and Perceptual Learning

10 0.075809754 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images

11 0.066602416 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli

12 0.062643766 13 nips-2000-A Tighter Bound for Graphical Models

13 0.061489757 41 nips-2000-Discovering Hidden Variables: A Structure-Based Approach

14 0.061457548 8 nips-2000-A New Model of Spatial Representation in Multimodal Brain Areas

15 0.059310038 145 nips-2000-Weak Learners and Improved Rates of Convergence in Boosting

16 0.057078704 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics

17 0.056775488 89 nips-2000-Natural Sound Statistics and Divisive Normalization in the Auditory System

18 0.055341087 142 nips-2000-Using Free Energies to Represent Q-values in a Multiagent Reinforcement Learning Task

19 0.055284142 56 nips-2000-Foundations for a Circuit Complexity Theory of Sensory Processing

20 0.052983679 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.206), (1, -0.121), (2, -0.144), (3, -0.045), (4, 0.038), (5, -0.006), (6, 0.044), (7, -0.127), (8, -0.057), (9, 0.01), (10, 0.022), (11, -0.006), (12, 0.015), (13, -0.059), (14, 0.037), (15, 0.005), (16, -0.079), (17, 0.013), (18, 0.058), (19, 0.061), (20, 0.069), (21, -0.115), (22, 0.063), (23, -0.025), (24, 0.058), (25, 0.057), (26, 0.016), (27, 0.045), (28, -0.035), (29, -0.063), (30, -0.034), (31, 0.097), (32, -0.022), (33, 0.012), (34, 0.059), (35, -0.076), (36, -0.035), (37, 0.149), (38, -0.056), (39, 0.033), (40, -0.026), (41, -0.068), (42, 0.144), (43, -0.108), (44, 0.078), (45, -0.295), (46, -0.005), (47, -0.034), (48, 0.017), (49, -0.01)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95230049 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization

Author: Ranit Aharonov-Barki, Isaac Meilijson, Eytan Ruppin

Abstract: We introduce a novel algorithm, termed PPA (Performance Prediction Algorithm), that quantitatively measures the contributions of elements of a neural system to the tasks it performs. The algorithm identifies the neurons or areas which participate in a cognitive or behavioral task, given data about performance decrease in a small set of lesions. It also allows the accurate prediction of performances due to multi-element lesions. The effectiveness of the new algorithm is demonstrated in two models of recurrent neural networks with complex interactions among the elements. The algorithm is scalable and applicable to the analysis of large neural networks. Given the recent advances in reversible inactivation techniques, it has the potential to significantly contribute to the understanding of the organization of biological nervous systems, and to shed light on the long-lasting debate about local versus distributed computation in the brain.

2 0.58202803 24 nips-2000-An Information Maximization Approach to Overcomplete and Recurrent Representations

Author: Oren Shriki, Haim Sompolinsky, Daniel D. Lee

Abstract: The principle of maximizing mutual information is applied to learning overcomplete and recurrent representations. The underlying model consists of a network of input units driving a larger number of output units with recurrent interactions. In the limit of zero noise, the network is deterministic and the mutual information can be related to the entropy of the output units. Maximizing this entropy with respect to both the feedforward connections as well as the recurrent interactions results in simple learning rules for both sets of parameters. The conventional independent components (ICA) learning algorithm can be recovered as a special case where there is an equal number of output units and no recurrent connections. The application of these new learning rules is illustrated on a simple two-dimensional input example.

3 0.52169931 102 nips-2000-Position Variance, Recurrence and Perceptual Learning

Author: Zhaoping Li, Peter Dayan

Abstract: Stimulus arrays are inevitably presented at different positions on the retina in visual tasks, even those that nominally require fixation. In particular, this applies to many perceptual learning tasks. We show that perceptual inference or discrimination in the face of positional variance has a structurally different quality from inference about fixed position stimuli, involving a particular, quadratic, non-linearity rather than a purely linear discrimination. We show the advantage taking this non-linearity into account has for discrimination, and suggest it as a role for recurrent connections in area VI, by demonstrating the superior discrimination performance of a recurrent network. We propose that learning the feedforward and recurrent neural connections for these tasks corresponds to the fast and slow components of learning observed in perceptual learning tasks.

4 0.48585862 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account

Author: Gal Chechik, Naftali Tishby

Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental prediction as to the dependency of the learning rule on neuronal biophysical parameters 1

5 0.48237371 132 nips-2000-The Interplay of Symbolic and Subsymbolic Processes in Anagram Problem Solving

Author: David B. Grimes, Michael Mozer

Abstract: Although connectionist models have provided insights into the nature of perception and motor control, connectionist accounts of higher cognition seldom go beyond an implementation of traditional symbol-processing theories. We describe a connectionist constraint satisfaction model of how people solve anagram problems. The model exploits statistics of English orthography, but also addresses the interplay of sub symbolic and symbolic computation by a mechanism that extracts approximate symbolic representations (partial orderings of letters) from sub symbolic structures and injects the extracted representation back into the model to assist in the solution of the anagram. We show the computational benefit of this extraction-injection process and discuss its relationship to conscious mental processes and working memory. We also account for experimental data concerning the difficulty of anagram solution based on the orthographic structure of the anagram string and the target word. Historically, the mind has been viewed from two opposing computational perspectives. The symbolic perspective views the mind as a symbolic information processing engine. According to this perspective, cognition operates on representations that encode logical relationships among discrete symbolic elements, such as stacks and structured trees, and cognition involves basic operations such as means-ends analysis and best-first search. In contrast, the subsymbolic perspective views the mind as performing statistical inference, and involves basic operations such as constraint-satisfaction search. The data structures on which these operations take place are numerical vectors. In some domains of cognition, significant progress has been made through analysis from one computational perspective or the other. The thesis of our work is that many of these domains might be understood more completely by focusing on the interplay of subsymbolic and symbolic information processing. Consider the higher-cognitive domain of problem solving. At an abstract level of description, problem solving tasks can readily be formalized in terms of symbolic representations and operations. However, the neurobiological hardware that underlies human cognition appears to be subsymbolic-representations are noisy and graded, and the brain operates and adapts in a continuous fashion that is difficult to characterize in discrete symbolic terms. At some level-between the computational level of the task description and the implementation level of human neurobiology-the symbolic and subsymbolic accounts must come into contact with one another. We focus on this point of contact by proposing mechanisms by which symbolic representations can modulate sub symbolic processing, and mechanisms by which subsymbolic representations are made symbolic. We conjecture that these mechanisms can not only provide an account for the interplay of symbolic and sub symbolic processes in cognition, but that they form a sensible computational strategy that outperforms purely subsymbolic computation, and hence, symbolic reasoning makes sense from an evolutionary perspective. In this paper, we apply our approach to a high-level cognitive task, anagram problem solving. An anagram is a nonsense string of letters whose letters can be rearranged to form a word. For example, the solution to the anagram puzzle RYTEHO is THEORY. 
Anagram solving is a interesting task because it taps higher cognitive abilities and issues of awareness, it has a tractable state space, and interesting psychological data is available to model. 1 A Sub symbolic Computational Model We start by presenting a purely subsymbolic model of anagram processing. By subsymbolic, we mean that the model utilizes only English orthographic statistics and does not have access to an English lexicon. We will argue that this model proves insufficient to explain human performance on anagram problem solving. However, it is a key component of a hybrid symbolic-subsymbolic model we propose, and is thus described in detail. 1.1 Problem Representation A computational model of anagram processing must represent letter orderings. For example, the model must be capable of representing a solution such as

6 0.46004921 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron

7 0.44564682 81 nips-2000-Learning Winner-take-all Competition Between Groups of Neurons in Lateral Inhibitory Networks

8 0.42426306 100 nips-2000-Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks

9 0.37076434 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics

10 0.32403773 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images

11 0.31458616 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics

12 0.30035976 146 nips-2000-What Can a Single Neuron Compute?

13 0.29592377 41 nips-2000-Discovering Hidden Variables: A Structure-Based Approach

14 0.29457226 11 nips-2000-A Silicon Primitive for Competitive Learning

15 0.29322585 10 nips-2000-A Productive, Systematic Framework for the Representation of Visual Structure

16 0.28800383 32 nips-2000-Color Opponency Constitutes a Sparse Representation for the Chromatic Structure of Natural Scenes

17 0.28195664 73 nips-2000-Kernel-Based Reinforcement Learning in Average-Cost Problems: An Application to Optimal Portfolio Choice

18 0.27768263 120 nips-2000-Sparse Greedy Gaussian Process Regression

19 0.26936328 138 nips-2000-The Use of Classifiers in Sequential Inference

20 0.26340625 29 nips-2000-Bayes Networks on Ice: Robotic Search for Antarctic Meteorites


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.038), (16, 0.313), (17, 0.112), (32, 0.029), (33, 0.038), (36, 0.012), (55, 0.054), (62, 0.034), (65, 0.021), (67, 0.065), (75, 0.015), (76, 0.052), (79, 0.011), (81, 0.042), (90, 0.034), (97, 0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.82470119 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization

Author: Ranit Aharonov-Barki, Isaac Meilijson, Eytan Ruppin

Abstract: We introduce a novel algorithm, termed PPA (Performance Prediction Algorithm), that quantitatively measures the contributions of elements of a neural system to the tasks it performs. The algorithm identifies the neurons or areas which participate in a cognitive or behavioral task, given data about performance decrease in a small set of lesions. It also allows the accurate prediction of performances due to multi-element lesions. The effectiveness of the new algorithm is demonstrated in two models of recurrent neural networks with complex interactions among the elements. The algorithm is scalable and applicable to the analysis of large neural networks. Given the recent advances in reversible inactivation techniques, it has the potential to significantly contribute to the understanding of the organization of biological nervous systems, and to shed light on the long-lasting debate about local versus distributed computation in the brain.

2 0.76243448 64 nips-2000-High-temperature Expansions for Learning Models of Nonnegative Data

Author: Oliver B. Downs

Abstract: Recent work has exploited boundedness of data in the unsupervised learning of new types of generative model. For nonnegative data it was recently shown that the maximum-entropy generative model is a Nonnegative Boltzmann Distribution not a Gaussian distribution, when the model is constrained to match the first and second order statistics of the data. Learning for practical sized problems is made difficult by the need to compute expectations under the model distribution. The computational cost of Markov chain Monte Carlo methods and low fidelity of naive mean field techniques has led to increasing interest in advanced mean field theories and variational methods. Here I present a secondorder mean-field approximation for the Nonnegative Boltzmann Machine model, obtained using a

3 0.45474645 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics

Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador

Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their

4 0.4498719 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning

Author: Zoubin Ghahramani, Matthew J. Beal

Abstract: Variational approximations are becoming a widespread tool for Bayesian learning of graphical models. We provide some theoretical results for the variational updates in a very general family of conjugate-exponential graphical models. We show how the belief propagation and the junction tree algorithms can be used in the inference step of variational Bayesian learning. Applying these results to the Bayesian analysis of linear-Gaussian state-space models we obtain a learning procedure that exploits the Kalman smoothing propagation, while integrating over all model parameters. We demonstrate how this can be used to infer the hidden state dimensionality of the state-space model in a variety of synthetic problems and one real high-dimensional data set. 1

5 0.44978473 122 nips-2000-Sparse Representation for Gaussian Process Models

Author: Lehel Csató, Manfred Opper

Abstract: We develop an approach for a sparse representation for Gaussian Process (GP) models in order to overcome the limitations of GPs caused by large data sets. The method is based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the model. Experimental results on toy examples and large real-world data sets indicate the efficiency of the approach.

6 0.44593138 146 nips-2000-What Can a Single Neuron Compute?

7 0.44338641 74 nips-2000-Kernel Expansions with Unlabeled Examples

8 0.44162369 79 nips-2000-Learning Segmentation by Random Walks

9 0.44154438 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks

10 0.44083104 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition

11 0.4398956 102 nips-2000-Position Variance, Recurrence and Perceptual Learning

12 0.43650252 37 nips-2000-Convergence of Large Margin Separable Linear Classification

13 0.43323576 71 nips-2000-Interactive Parts Model: An Application to Recognition of On-line Cursive Script

14 0.43322027 49 nips-2000-Explaining Away in Weight Space

15 0.43218195 4 nips-2000-A Linear Programming Approach to Novelty Detection

16 0.43214542 60 nips-2000-Gaussianization

17 0.43171877 10 nips-2000-A Productive, Systematic Framework for the Representation of Visual Structure

18 0.43156564 130 nips-2000-Text Classification using String Kernels

19 0.43002257 7 nips-2000-A New Approximate Maximal Margin Classification Algorithm

20 0.42961469 111 nips-2000-Regularized Winnow Methods