nips nips2000 nips2000-100 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Richard H. R. Hahnloser, H. Sebastian Seung
Abstract: Ascribing computational principles to neural feedback circuits is an important problem in theoretical neuroscience. We study symmetric threshold-linear networks and derive stability results that go beyond the insights that can be gained from Lyapunov theory or energy functions. By applying linear analysis to subnetworks composed of coactive neurons, we determine the stability of potential steady states. We find that stability depends on two types of eigenmodes. One type determines global stability and the other type determines whether or not multistability is possible. We can prove the equivalence of our stability criteria with criteria taken from quadratic programming. Also, we show that there are permitted sets of neurons that can be coactive at a steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we can provide a formulation of longterm memory that is more general than the traditional perspective of fixed point attractor networks. A Lyapunov-function can be used to prove that a given set of differential equations is convergent. For example, if a neural network possesses a Lyapunov-function, then for almost any initial condition, the outputs of the neurons converge to a stable steady state. In the past, this stability-property was used to construct attractor networks that associatively recall memorized patterns. Lyapunov theory applies mainly to symmetric networks in which neurons have monotonic activation functions [1, 2]. Here we show that the restriction of activation functions to threshold-linear ones is not a mere limitation, but can yield new insights into the computational behavior of recurrent networks (for completeness, see also [3]). We present three main theorems about the neural responses to constant inputs. The first theorem provides necessary and sufficient conditions on the synaptic weight matrix for the existence of a globally asymptotically stable set of fixed points. These conditions can be expressed in terms of copositivity, a concept from quadratic programming and linear complementarity theory. Alternatively, they can be expressed in terms of certain eigenvalues and eigenvectors of submatrices of the synaptic weight matrix, making a connection to linear systems theory. The theorem guarantees that the network will produce a steady state response to any constant input. We regard this response as the computational output of the network, and its characterization is the topic of the second and third theorems. In the second theorem, we introduce the idea of permitted and forbidden sets. Under certain conditions on the synaptic weight matrix, we show that there exist sets of neurons that are
Reference: text
sentIndex sentText sentNum sentScore
1 We study symmetric threshold-linear networks and derive stability results that go beyond the insights that can be gained from Lyapunov theory or energy functions. [sent-11, score-0.217]
2 By applying linear analysis to subnetworks composed of coactive neurons, we determine the stability of potential steady states. [sent-12, score-0.494]
3 One type determines global stability and the other type determines whether or not multistability is possible. [sent-14, score-0.218]
4 Also, we show that there are permitted sets of neurons that can be coactive at a steady state and forbidden sets that cannot. [sent-16, score-1.874]
5 Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. [sent-17, score-2.047]
6 By viewing permitted sets as memories stored in the synaptic connections, we can provide a formulation of longterm memory that is more general than the traditional perspective of fixed point attractor networks. [sent-18, score-1.067]
7 For example, if a neural network possesses a Lyapunov-function, then for almost any initial condition, the outputs of the neurons converge to a stable steady state. [sent-20, score-0.793]
8 Lyapunov theory applies mainly to symmetric networks in which neurons have monotonic activation functions [1, 2]. [sent-22, score-0.33]
9 The first theorem provides necessary and sufficient conditions on the synaptic weight matrix for the existence of a globally asymptotically stable set of fixed points. [sent-25, score-0.716]
10 Alternatively, they can be expressed in terms of certain eigenvalues and eigenvectors of submatrices of the synaptic weight matrix, making a connection to linear systems theory. [sent-27, score-0.296]
11 The theorem guarantees that the network will produce a steady state response to any constant input. [sent-28, score-0.624]
12 In the second theorem, we introduce the idea of permitted and forbidden sets. [sent-30, score-0.951]
13 Under certain conditions on the synaptic weight matrix, we show that there exist sets of neurons that are "forbidden" by the recurrent synaptic connections from being coactivated at a stable steady state, no matter what input is applied. [sent-31, score-1.256]
14 Other sets are "permitted," in the sense that they can be coactivated for some input. [sent-32, score-0.202]
15 The same conditions on the synaptic weight matrix also lead to conditional multistability, meaning that there exists an input for which there is more than one stable steady state. [sent-33, score-0.798]
16 In other words, forbidden sets and conditional multistability are inseparable concepts. [sent-34, score-0.54]
17 The existence of permitted and forbidden sets suggests a new way of thinking about memory in neural networks. [sent-35, score-1.158]
18 When an input is applied, the network must select a set of active neurons, and this selection is constrained to be one of the permitted sets. [sent-36, score-0.806]
19 Therefore the permitted sets can be regarded as memories stored in the synaptic connections. [sent-37, score-0.956]
20 Our third theorem states that there are constraints on the groups of permitted and forbidden sets that can be stored by a network. [sent-38, score-1.286]
21 No matter which learning algorithm is used to store memories, active neurons cannot arbitrarily be divided into permitted and forbidden sets, because subsets of permitted sets have to be permitted and supersets of forbidden sets have to be forbidden. [sent-39, score-3.178]
22 The network dynamics are dx_j/dt + x_j = [b_j + sum_i W_ji x_i]^+ (1), where [u]^+ = max{u, 0} is a rectification nonlinearity and the synaptic weight matrix is symmetric, W_ij = W_ji. [sent-43, score-0.226]
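As an illustration, the dynamics (1) can be integrated with a simple Euler scheme. The sketch below assumes NumPy; the function name, step size, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate(W, b, x0, dt=0.01, steps=20000):
    """Euler integration of dx/dt + x = [b + W x]^+  (equation 1)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(b + W @ x, 0.0))
    return x  # approximate steady-state output for the constant input b
```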
23 The existence of outputs and their relationship to the input are determined by the synaptic weight matrix W. [sent-50, score-0.248]
24 The nonnegative orthant {v : v ≥ 0} is the set of all nonnegative vectors. [sent-52, score-0.437]
25 It can be shown that any trajectory starting in the nonnegative orthant remains in the nonnegative orthant. [sent-53, score-0.477]
26 Therefore, for simplicity we will consider initial conditions that are confined to the nonnegative orthant x ≥ 0. [sent-54, score-0.332]
27 2 Global asymptotic stability Definition 1 A steady state x̄ [sent-55, score-0.493]
28 is stable if for all initial conditions sufficiently close to x̄, the state trajectory remains close to x̄. [sent-57, score-0.212]
29 A steady state is asymptotically stable if for all initial conditions sufficiently close to x̄, the state trajectory converges to x̄. [sent-62, score-0.716]
30 A set of steady states is globally asymptotically stable if from almost all initial conditions, state trajectories converge to one of the steady states. [sent-67, score-1.121]
31 Definition 2 A principal submatrix A of a square matrix B is a square matrix that is constructed by deleting a certain set of rows and the corresponding columns of B. [sent-69, score-0.309]
32 The following theorem establishes necessary and sufficient conditions on W for global asymptotic stability. [sent-70, score-0.27]
33 All nonnegative eigenvectors of all principal submatrices of I - W have positive eigenvalues. [sent-72, score-0.398]
34 For all b, the network has a nonempty set of steady states that are globally asymptotically stable. [sent-77, score-0.62]
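As a rough numerical illustration of the eigenvector condition above, one could enumerate every principal submatrix of I - W and look for nonnegative eigenvectors with non-positive eigenvalues. This is only a sketch under stated assumptions: it is exponential in network size, uses an illustrative tolerance, and ignores the subtlety of degenerate eigenvalues.

```python
import numpy as np
from itertools import combinations

def satisfies_theorem1(W, tol=1e-12):
    """Check: every nonnegative eigenvector of every principal
    submatrix of I - W has a positive eigenvalue (W symmetric)."""
    n = len(W)
    M = np.eye(n) - np.asarray(W, dtype=float)
    for r in range(1, n + 1):
        for s in combinations(range(n), r):
            A = M[np.ix_(s, s)]
            vals, vecs = np.linalg.eigh(A)
            for lam, v in zip(vals, vecs.T):
                v = v if v.sum() >= 0 else -v          # normalize the sign of the eigenvector
                if np.all(v >= -tol) and lam <= tol:   # nonnegative eigenvector, non-positive eigenvalue
                    return False
    return True
```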
35 Let v* be the minimizer of v^T(I - W)v over nonnegative v on the unit sphere. [sent-79, score-0.223]
36 It follows from Lagrange multiplier methods that the nonzero elements of v* comprise a nonnegative eigenvector of the corresponding principal submatrix of W with eigenvalue greater than or equal to unity. [sent-81, score-0.445]
37 The Lyapunov function L is also nonincreasing under the network dynamics in the nonnegative orthant, and constant only at steady states. [sent-84, score-0.654]
38 By the Lyapunov stability theorem, the stable steady states are globally asymptotically stable. [sent-85, score-0.764]
39 In the language of optimization theory, the network dynamics converges to a local minimum of L subject to the nonnegativity constraint x ~ O. [sent-86, score-0.178]
40 Then there exists a nonnegative eigenvector of a principal submatrix of W with eigenvalue greater than or equal to unity. [sent-89, score-0.474]
41 • The meaning of these stability conditions is best appreciated by comparing with the analogous conditions for the purely linear network obtained by dropping the rectification from (1). [sent-91, score-0.363]
42 Here only nonnegative eigenvectors are able to grow without bound, due to the rectification, so that only their eigenvalues must be less than unity. [sent-93, score-0.309]
43 All principal submatrices of W must be considered, because different sets of feedback connections are active, depending on the set of neurons that are above threshold. [sent-94, score-0.468]
44 In a linear network, I - W would have to be positive definite to ensure asymptotic stability, but because of the rectification, here this condition is replaced by the weaker condition of copositivity. [sent-95, score-0.18]
45 The conditions of Theorem 1 for global asymptotic stability depend only on W, but not on b. [sent-96, score-0.24]
46 Lemma 1 For any nonnegative vector v, there exists an input b such that v is a steady state of equation 1 with input b. [sent-99, score-0.622]
47 • This Lemma states that any nonnegative vector can be realized as a fixed point. [sent-105, score-0.274]
48 Indeed, the principal submatrix of I - W corresponding to a single active neuron is a single diagonal element, which according to (1) must be positive. [sent-107, score-0.278]
49 Hence it is always possible to activate only a single neuron at an asymptotically stable fixed point. [sent-108, score-0.328]
50 However, as will become clear from the following Theorem, not all nonnegative vectors can be realized as an asymptotically stable fixed point. [sent-109, score-0.486]
51 3 Forbidden and permitted sets The following characterizations of stable steady states are based on the interlacing Theorem [4]. [sent-110, score-1.327]
52 This Theorem says that if A is an (n-1) by (n-1) principal submatrix of an n by n symmetric matrix B, then the eigenvalues of A fall in between the eigenvalues of B. [sent-111, score-0.486]
53 Definition 3 A set of neurons is permitted if the neurons can be coactivated at an asymptotically stable steady state for some input b. [sent-113, score-1.779]
54 On the other hand, a set of neurons is forbidden if it cannot be coactivated at an asymptotically stable steady state, no matter what the input b. [sent-114, score-0.972]
55 Alternatively, we might have defined a permitted set as a set for which the corresponding square sub-matrix of I - W has only positive eigenvalues. [sent-115, score-0.698]
56 And, similarly, a forbidden set could be defined as a set for which there is at least one non-positive eigenvalue. [sent-116, score-0.322]
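Under this eigenvalue characterization, classifying a given set of neurons reduces to an eigenvalue test on a principal submatrix of I - W. A minimal sketch (the function name and interface are assumptions, and NumPy is assumed):

```python
import numpy as np

def is_permitted(W, subset):
    """A set is permitted iff the principal submatrix of I - W on
    `subset` has only positive eigenvalues (W symmetric)."""
    idx = list(subset)
    A = (np.eye(len(W)) - np.asarray(W, dtype=float))[np.ix_(idx, idx)]
    return bool(np.all(np.linalg.eigvalsh(A) > 0))
```

A forbidden set is then simply one for which this test fails, i.e. the submatrix has at least one non-positive eigenvalue.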
57 It follows from Theorem 1 that if the matrix I - W is copositive, then the eigenvectors corresponding to non-positive eigenvalues of the submatrices of forbidden sets have to have both positive and non-positive components. [sent-117, score-0.679]
58 That is, there exists an input b such that there is more than one stable steady state. [sent-124, score-0.547]
59 I - W is not positive definite and so there can be no asymptotically stable steady state in which all neurons are active, e. [sent-126, score-0.922]
60 Denote the forbidden set with k active neurons by F. [sent-130, score-0.588]
61 By choosing b_i > 0 for neurons i belonging to F and b_j ≪ 0 for neurons j not belonging to F, the quadratic Lyapunov function L defined in Theorem 1 forms a saddle in the nonnegative quadrant defined by F. [sent-132, score-0.651]
62 But because neurons can be initialized to lower values of L on either side of the hyperplane and because L is non-increasing along trajectories, there is no way trajectories can cross the hyperplane. [sent-134, score-0.237]
63 • If I - W is positive definite, then a symmetric threshold-linear network has a unique steady state. [sent-141, score-0.551]
64 The next Theorem is an expansion of this result, stating an equivalent condition using the concept of permitted sets. [sent-143, score-0.677]
65 For all b there is a unique steady state, and it is stable. [sent-149, score-0.351]
66 Suppose (1) is false; then the set of all neurons must be forbidden, so not all sets are permitted. [sent-156, score-0.403]
67 • The following Theorem characterizes the forbidden and the permitted sets. [sent-159, score-0.951]
68 Theorem 4 Any subset of a permitted set is permitted. [sent-160, score-0.629]
69 Proof: According to the interlacing Theorem, if the smallest eigenvalue of a symmetric matrix is positive, then so are the smallest eigenvalues of all its principal submatrices. [sent-162, score-0.469]
70 And, if the smallest eigenvalue of a principal submatrix is negative, then so is the smallest eigenvalue of the original matrix . [sent-163, score-0.423]
71 • 4 An example - the ring network A symmetric threshold-linear network with local excitation and longer-range inhibition has been studied in the past as a model for how simple cells in primary visual cortex obtain their orientation tuning to visual stimulation [6, 7]. [sent-164, score-0.445]
72 We have argued that the fixed tuning width of the neurons in the network arises because active sets consisting of more than a fixed number of contiguous neurons are forbidden. [sent-166, score-0.855]
73 Here we give a more detailed account of this fact and provide a surprising result about the existence of some spurious permitted sets. [sent-167, score-0.708]
74 Let the synaptic matrix of a 10 neuron ring-network be translationally invariant. [sent-168, score-0.205]
75 The connection between neurons i and j is given by W_ij = -β + α_0 δ_ij + α_1 (δ_{i,j+1} + δ_{i+1,j}) + α_2 (δ_{i,j+2} + δ_{i+2,j}), where δ is the Kronecker delta, β quantifies global inhibition, α_0 self-excitation, α_1 first-neighbor lateral excitation and α_2 second-neighbor lateral excitation. [sent-169, score-0.341]
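For concreteness, such a translation-invariant weight matrix could be built as below, with the neighbor indices taken modulo the ring size. The default parameter values here are placeholders for illustration only, not the values used in [3].

```python
import numpy as np

def ring_weights(n=10, beta=0.5, a0=0.3, a1=0.2, a2=0.1):
    """W_ij = -beta + a0*δ_ij + a1*(δ_{i,j+1}+δ_{i+1,j}) + a2*(δ_{i,j+2}+δ_{i+2,j}),
    with indices modulo n (ring topology). Parameter values are illustrative."""
    W = np.full((n, n), -beta)          # global inhibition
    for i in range(n):
        W[i, i] += a0                   # self-excitation
        for d, a in ((1, a1), (2, a2)): # first- and second-neighbor excitation
            W[i, (i + d) % n] += a
            W[i, (i - d) % n] += a
    return W
```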
76 In Figure 1 we have numerically computed the permitted sets of this network, with the parameters taken from [3], e. [sent-170, score-0.766]
77 The permitted sets were determined by diagonalising the 2^10 square sub-matrices of I - W and by classifying the eigenvalues corresponding to nonnegative eigenvectors. [sent-175, score-1.059]
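That enumeration can be sketched in a few lines: test every subset with the permitted-set criterion (a positive-definite principal submatrix of I - W, as in the earlier sketch) and keep the sets with no permitted strict superset. This is exhaustive over 2^n subsets, so it is only feasible for small networks; the function name is an assumption.

```python
import numpy as np
from itertools import combinations

def parent_permitted_sets(W):
    """Return permitted sets that have no permitted strict superset."""
    n = len(W)
    M = np.eye(n) - np.asarray(W, dtype=float)
    permitted = [set(s)
                 for r in range(1, n + 1)
                 for s in combinations(range(n), r)
                 if np.all(np.linalg.eigvalsh(M[np.ix_(s, s)]) > 0)]
    return [s for s in permitted if not any(s < t for t in permitted)]
```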
78 Figure 1 shows the resulting parent permitted sets (those that have no permitted supersets). [sent-176, score-1.446]
79 Consistent with the finding that such ring-networks can explain contrast-invariant tuning of V1 cells and multiplicative response modulation of parietal cells, we found that there are no permitted sets that consist of more than 5 contiguous active neurons. [sent-177, score-0.945]
80 However, as can be seen, there are many non-contiguous permitted sets that could in principle be activated by exciting neurons in white and strongly inhibiting neurons in black. [sent-178, score-1.18]
81 Figure 1: Left: Output of a ring network of 10 neurons to uniform input (random initial condition). [sent-192, score-0.441]
82 Right: The 9 parent permitted sets (x-axis: neuron number, y-axis: set number). [sent-193, score-0.873]
83 White means that a neuron belongs to a set and black means that it does not. [sent-194, score-0.207]
84 Parent permitted sets that are left-right or translation symmetric versions of the ones shown have been excluded. [sent-195, score-0.89]
85 The first parent permitted set (first row from the bottom) corresponds to the output on the left. [sent-196, score-0.68]
86 5 Discussion We have shown that pattern memorization in threshold linear networks can be viewed in terms of permitted sets of neurons, e. [sent-197, score-0.793]
87 sets of neurons that can be coactive at a steady state. [sent-199, score-0.743]
88 According to this definition, the memories are stored by the synaptic weights, independently of the inputs. [sent-200, score-0.19]
89 A typical input will not allow for the retrieval of arbitrary stored permitted sets. [sent-203, score-0.737]
90 This comes from the fact that multistability is not just dependent on the existence of forbidden sets, but also on the input (Theorem 2). [sent-204, score-0.479]
91 For example, in the ring network, positive input will always retrieve permitted sets consisting of a group of contiguous neurons, but not any of the spurious permitted sets (Figure 1). [sent-205, score-1.65]
92 Generally, multistability in the ring network is only possible when more than a single neuron is excited. [sent-206, score-0.304]
93 Notice that threshold-linear networks can behave as traditional attractor networks when the inputs are represented as initial conditions of the dynamics. [sent-207, score-0.182]
94 For example, by fixing b = 1 and initializing a copositive network with some input, the permitted sets unequivocally determine the stable fixed points. [sent-208, score-1.063]
95 Thus, in this case, the notion of permitted sets is no different from fixed point attractors. [sent-209, score-0.8]
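A hedged usage sketch of this attractor-style mode, reusing the simulate and ring_weights helpers sketched earlier (both are illustrative names, not code from the paper): the stimulus enters only through the initial condition, while b is held constant at 1.

```python
import numpy as np

# assumes simulate() and ring_weights() from the earlier sketches,
# and that the chosen W makes I - W copositive
W = ring_weights(n=10)
b = np.ones(10)                        # constant, uniform input b = 1
x0 = np.random.rand(10)                # the "input" is encoded in the initial state
x_final = simulate(W, b, x0)
print(np.nonzero(x_final > 1e-6)[0])   # active set: one of the permitted sets
```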
96 However, the hierarchical grouping of permitted sets (Theorem 4) becomes irrelevant, since there can be only one attractive fixed point per hierarchical group defined by a parent permitted set. [sent-210, score-1.504]
97 The fact that no permitted set can have a forbidden subset represents a constraint on the possible computations of symmetric networks. [sent-211, score-1.054]
98 In a similar way, grouping problems that do not obey the natural hierarchy inherent in symmetric networks might necessitate the introduction of hidden neurons to realize the right geometry. [sent-215, score-0.304]
99 For the interested reader, see also [8] for a simple procedure of how to store a given family of possibly overlapping patterns as permitted sets. [sent-216, score-0.651]
100 Learning winner-take-all competition between groups of neurons in lateral inhibitory networks. [sent-278, score-0.235]
wordName wordTfidf (topN-words)
[('permitted', 0.629), ('steady', 0.351), ('forbidden', 0.322), ('neurons', 0.207), ('nonnegative', 0.189), ('sets', 0.137), ('stable', 0.128), ('theorem', 0.125), ('asymptotically', 0.11), ('synaptic', 0.097), ('submatrix', 0.095), ('stability', 0.095), ('ring', 0.088), ('eigenvalues', 0.083), ('lyapunov', 0.081), ('multistability', 0.081), ('network', 0.079), ('symmetric', 0.073), ('eigenvalue', 0.071), ('principal', 0.068), ('coactivated', 0.065), ('active', 0.059), ('orthant', 0.059), ('copositive', 0.056), ('interlacing', 0.056), ('submatrices', 0.056), ('supersets', 0.056), ('neuron', 0.056), ('conditions', 0.056), ('rectification', 0.054), ('globally', 0.054), ('matrix', 0.052), ('parent', 0.051), ('coactive', 0.048), ('hahnloser', 0.048), ('positive', 0.048), ('stored', 0.047), ('asymptotic', 0.047), ('false', 0.046), ('memories', 0.046), ('attractor', 0.044), ('state', 0.043), ('spurious', 0.042), ('global', 0.042), ('trajectory', 0.04), ('input', 0.039), ('contiguous', 0.038), ('copositivity', 0.038), ('rti', 0.038), ('existence', 0.037), ('eigenvectors', 0.037), ('excitation', 0.036), ('tuning', 0.035), ('definite', 0.035), ('dynamics', 0.035), ('minimum', 0.034), ('richard', 0.034), ('sebastian', 0.034), ('inhibition', 0.034), ('fixed', 0.034), ('smallest', 0.033), ('memory', 0.033), ('says', 0.032), ('proof', 0.032), ('trajectories', 0.03), ('constraint', 0.03), ('exists', 0.029), ('matter', 0.029), ('cli', 0.029), ('vi', 0.029), ('lateral', 0.028), ('initial', 0.028), ('recurrent', 0.027), ('lemma', 0.027), ('douglas', 0.027), ('networks', 0.027), ('states', 0.026), ('response', 0.026), ('mahowald', 0.025), ('realized', 0.025), ('argued', 0.025), ('condition', 0.025), ('bi', 0.024), ('definition', 0.024), ('saddle', 0.024), ('grouping', 0.024), ('weight', 0.023), ('sketch', 0.023), ('meaning', 0.023), ('concept', 0.023), ('activation', 0.023), ('retrieval', 0.022), ('seung', 0.022), ('eigenvector', 0.022), ('store', 0.022), ('insights', 0.022), ('cells', 0.021), ('usa', 0.021), ('criteria', 0.021), ('square', 0.021)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000005 100 nips-2000-Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks
Author: Richard H. R. Hahnloser, H. Sebastian Seung
Abstract: Ascribing computational principles to neural feedback circuits is an important problem in theoretical neuroscience. We study symmetric threshold-linear networks and derive stability results that go beyond the insights that can be gained from Lyapunov theory or energy functions. By applying linear analysis to subnetworks composed of coactive neurons, we determine the stability of potential steady states. We find that stability depends on two types of eigenmodes. One type determines global stability and the other type determines whether or not multistability is possible. We can prove the equivalence of our stability criteria with criteria taken from quadratic programming. Also, we show that there are permitted sets of neurons that can be coactive at a steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we can provide a formulation of longterm memory that is more general than the traditional perspective of fixed point attractor networks. A Lyapunov-function can be used to prove that a given set of differential equations is convergent. For example, if a neural network possesses a Lyapunov-function, then for almost any initial condition, the outputs of the neurons converge to a stable steady state. In the past, this stability-property was used to construct attractor networks that associatively recall memorized patterns. Lyapunov theory applies mainly to symmetric networks in which neurons have monotonic activation functions [1, 2]. Here we show that the restriction of activation functions to threshold-linear ones is not a mere limitation, but can yield new insights into the computational behavior of recurrent networks (for completeness, see also [3]). We present three main theorems about the neural responses to constant inputs. The first theorem provides necessary and sufficient conditions on the synaptic weight matrix for the existence of a globally asymptotically stable set of fixed points. These conditions can be expressed in terms of copositivity, a concept from quadratic programming and linear complementarity theory. Alternatively, they can be expressed in terms of certain eigenvalues and eigenvectors of submatrices of the synaptic weight matrix, making a connection to linear systems theory. The theorem guarantees that the network will produce a steady state response to any constant input. We regard this response as the computational output of the network, and its characterization is the topic of the second and third theorems. In the second theorem, we introduce the idea of permitted and forbidden sets. Under certain conditions on the synaptic weight matrix, we show that there exist sets of neurons that are
2 0.70127285 81 nips-2000-Learning Winner-take-all Competition Between Groups of Neurons in Lateral Inhibitory Networks
Author: Xiaohui Xie, Richard H. R. Hahnloser, H. Sebastian Seung
Abstract: It has long been known that lateral inhibition in neural networks can lead to a winner-take-all competition, so that only a single neuron is active at a steady state. Here we show how to organize lateral inhibition so that groups of neurons compete to be active. Given a collection of potentially overlapping groups, the inhibitory connectivity is set by a formula that can be interpreted as arising from a simple learning rule. Our analysis demonstrates that such inhibition generally results in winner-take-all competition between the given groups, with the exception of some degenerate cases. In a broader context, the network serves as a particular illustration of the general distinction between permitted and forbidden sets, which was introduced recently. From this viewpoint, the computational function of our network is to store and retrieve memories as permitted sets of coactive neurons. In traditional winner-take-all networks, lateral inhibition is used to enforce a localized, or
3 0.099316709 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization
Author: Ranit Aharonov-Barki, Isaac Meilijson, Eytan Ruppin
Abstract: We introduce a novel algorithm, termed PPA (Performance Prediction Algorithm), that quantitatively measures the contributions of elements of a neural system to the tasks it performs. The algorithm identifies the neurons or areas which participate in a cognitive or behavioral task, given data about performance decrease in a small set of lesions. It also allows the accurate prediction of performances due to multi-element lesions. The effectiveness of the new algorithm is demonstrated in two models of recurrent neural networks with complex interactions among the elements. The algorithm is scalable and applicable to the analysis of large neural networks. Given the recent advances in reversible inactivation techniques, it has the potential to significantly contribute to the understanding of the organization of biological nervous systems, and to shed light on the long-lasting debate about local versus distributed computation in the brain.
4 0.097664945 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador
Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their
5 0.089118868 64 nips-2000-High-temperature Expansions for Learning Models of Nonnegative Data
Author: Oliver B. Downs
Abstract: Recent work has exploited boundedness of data in the unsupervised learning of new types of generative model. For nonnegative data it was recently shown that the maximum-entropy generative model is a Nonnegative Boltzmann Distribution not a Gaussian distribution, when the model is constrained to match the first and second order statistics of the data. Learning for practical sized problems is made difficult by the need to compute expectations under the model distribution. The computational cost of Markov chain Monte Carlo methods and low fidelity of naive mean field techniques has led to increasing interest in advanced mean field theories and variational methods. Here I present a secondorder mean-field approximation for the Nonnegative Boltzmann Machine model, obtained using a
6 0.086632349 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
7 0.08579202 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
8 0.074420393 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
9 0.061104223 125 nips-2000-Stability and Noise in Biochemical Switches
10 0.060815975 146 nips-2000-What Can a Single Neuron Compute?
11 0.059472091 21 nips-2000-Algorithmic Stability and Generalization Performance
12 0.05672038 18 nips-2000-Active Support Vector Machine Classification
13 0.056378465 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli
14 0.056080963 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics
15 0.053749699 110 nips-2000-Regularization with Dot-Product Kernels
16 0.053167842 17 nips-2000-Active Learning for Parameter Estimation in Bayesian Networks
17 0.053033646 56 nips-2000-Foundations for a Circuit Complexity Theory of Sensory Processing
18 0.049571969 112 nips-2000-Reinforcement Learning with Function Approximation Converges to a Region
19 0.048916876 69 nips-2000-Incorporating Second-Order Functional Knowledge for Better Option Pricing
20 0.04846248 145 nips-2000-Weak Learners and Improved Rates of Convergence in Boosting
topicId topicWeight
[(0, 0.183), (1, -0.115), (2, -0.281), (3, -0.102), (4, 0.135), (5, 0.044), (6, 0.032), (7, -0.607), (8, -0.49), (9, -0.053), (10, 0.143), (11, 0.007), (12, 0.065), (13, 0.071), (14, -0.064), (15, 0.024), (16, -0.024), (17, -0.041), (18, -0.072), (19, -0.04), (20, 0.014), (21, 0.008), (22, 0.021), (23, 0.028), (24, -0.026), (25, 0.001), (26, -0.006), (27, -0.007), (28, -0.014), (29, 0.026), (30, -0.021), (31, -0.035), (32, 0.003), (33, -0.019), (34, 0.009), (35, -0.013), (36, 0.064), (37, 0.004), (38, -0.004), (39, -0.045), (40, 0.016), (41, -0.016), (42, -0.003), (43, -0.005), (44, 0.038), (45, 0.048), (46, -0.01), (47, 0.047), (48, 0.009), (49, -0.028)]
simIndex simValue paperId paperTitle
1 0.98001522 81 nips-2000-Learning Winner-take-all Competition Between Groups of Neurons in Lateral Inhibitory Networks
Author: Xiaohui Xie, Richard H. R. Hahnloser, H. Sebastian Seung
Abstract: It has long been known that lateral inhibition in neural networks can lead to a winner-take-all competition, so that only a single neuron is active at a steady state. Here we show how to organize lateral inhibition so that groups of neurons compete to be active. Given a collection of potentially overlapping groups, the inhibitory connectivity is set by a formula that can be interpreted as arising from a simple learning rule. Our analysis demonstrates that such inhibition generally results in winner-take-all competition between the given groups, with the exception of some degenerate cases. In a broader context, the network serves as a particular illustration of the general distinction between permitted and forbidden sets, which was introduced recently. From this viewpoint, the computational function of our network is to store and retrieve memories as permitted sets of coactive neurons. In traditional winner-take-all networks, lateral inhibition is used to enforce a localized, or
same-paper 2 0.9761886 100 nips-2000-Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks
Author: Richard H. R. Hahnloser, H. Sebastian Seung
Abstract: Ascribing computational principles to neural feedback circuits is an important problem in theoretical neuroscience. We study symmetric threshold-linear networks and derive stability results that go beyond the insights that can be gained from Lyapunov theory or energy functions. By applying linear analysis to subnetworks composed of coactive neurons, we determine the stability of potential steady states. We find that stability depends on two types of eigenmodes. One type determines global stability and the other type determines whether or not multistability is possible. We can prove the equivalence of our stability criteria with criteria taken from quadratic programming. Also, we show that there are permitted sets of neurons that can be coactive at a steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we can provide a formulation of longterm memory that is more general than the traditional perspective of fixed point attractor networks. A Lyapunov-function can be used to prove that a given set of differential equations is convergent. For example, if a neural network possesses a Lyapunov-function, then for almost any initial condition, the outputs of the neurons converge to a stable steady state. In the past, this stability-property was used to construct attractor networks that associatively recall memorized patterns. Lyapunov theory applies mainly to symmetric networks in which neurons have monotonic activation functions [1, 2]. Here we show that the restriction of activation functions to threshold-linear ones is not a mere limitation, but can yield new insights into the computational behavior of recurrent networks (for completeness, see also [3]). We present three main theorems about the neural responses to constant inputs. The first theorem provides necessary and sufficient conditions on the synaptic weight matrix for the existence of a globally asymptotically stable set of fixed points. These conditions can be expressed in terms of copositivity, a concept from quadratic programming and linear complementarity theory. Alternatively, they can be expressed in terms of certain eigenvalues and eigenvectors of submatrices of the synaptic weight matrix, making a connection to linear systems theory. The theorem guarantees that the network will produce a steady state response to any constant input. We regard this response as the computational output of the network, and its characterization is the topic of the second and third theorems. In the second theorem, we introduce the idea of permitted and forbidden sets. Under certain conditions on the synaptic weight matrix, we show that there exist sets of neurons that are
3 0.3815873 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization
Author: Ranit Aharonov-Barki, Isaac Meilijson, Eytan Ruppin
Abstract: We introduce a novel algorithm, termed PPA (Performance Prediction Algorithm), that quantitatively measures the contributions of elements of a neural system to the tasks it performs. The algorithm identifies the neurons or areas which participate in a cognitive or behavioral task, given data about performance decrease in a small set of lesions. It also allows the accurate prediction of performances due to multi-element lesions. The effectiveness of the new algorithm is demonstrated in two models of recurrent neural networks with complex interactions among the elements. The algorithm is scalable and applicable to the analysis of large neural networks. Given the recent advances in reversible inactivation techniques, it has the potential to significantly contribute to the understanding of the organization of biological nervous systems, and to shed light on the long-lasting debate about local versus distributed computation in the brain.
4 0.20038106 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics
Author: Barbara Zenger, Christof Koch
Abstract: We describe an analogy between psychophysically measured effects in contrast masking, and the behavior of a simple integrate-andfire neuron that receives time-modulated inhibition. In the psychophysical experiments, we tested observers ability to discriminate contrasts of peripheral Gabor patches in the presence of collinear Gabor flankers. The data reveal a complex interaction pattern that we account for by assuming that flankers provide divisive inhibition to the target unit for low target contrasts, but provide subtractive inhibition to the target unit for higher target contrasts. A similar switch from divisive to subtractive inhibition is observed in an integrate-and-fire unit that receives inhibition modulated in time such that the cell spends part of the time in a high-inhibition state and part of the time in a low-inhibition state. The similarity between the effects suggests that one may cause the other. The biophysical model makes testable predictions for physiological single-cell recordings. 1 Psychophysics Visual images of Gabor patches are thought to excite a small and specific subset of neurons in the primary visual cortex and beyond. By measuring psychophysically in humans the contrast detection and discrimination thresholds of peripheral Gabor patches, one can estimate the sensitivity of this subset of neurons. Furthermore, spatial interactions between different neuronal populations can be probed by testing the effects of additional Gabor patches (masks) on performance. Such experiments have revealed a highly configuration-specific pattern of excitatory and inhibitory spatial interactions [1, 2]. 1.1 Methods Two vertical Gabor patches with a spatial frequency of 4cyc/deg were presented at 4 deg eccentricity left and right of fixation, and observers had to report which patch had the higher contrast (spatial 2AFC). In the
5 0.19268003 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador
Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their
6 0.19196351 125 nips-2000-Stability and Noise in Biochemical Switches
7 0.18880664 64 nips-2000-High-temperature Expansions for Learning Models of Nonnegative Data
8 0.17656055 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
9 0.17463766 56 nips-2000-Foundations for a Circuit Complexity Theory of Sensory Processing
10 0.16917995 34 nips-2000-Competition and Arbors in Ocular Dominance
11 0.16442998 18 nips-2000-Active Support Vector Machine Classification
12 0.16036725 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli
13 0.15594031 146 nips-2000-What Can a Single Neuron Compute?
14 0.15408769 20 nips-2000-Algebraic Information Geometry for Learning Machines with Singularities
15 0.15302408 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
16 0.14240539 112 nips-2000-Reinforcement Learning with Function Approximation Converges to a Region
17 0.14185135 22 nips-2000-Algorithms for Non-negative Matrix Factorization
18 0.1305088 17 nips-2000-Active Learning for Parameter Estimation in Bayesian Networks
19 0.12880442 110 nips-2000-Regularization with Dot-Product Kernels
20 0.12031117 21 nips-2000-Algorithmic Stability and Generalization Performance
topicId topicWeight
[(10, 0.024), (16, 0.013), (17, 0.118), (32, 0.018), (33, 0.046), (55, 0.027), (62, 0.051), (65, 0.012), (67, 0.078), (75, 0.011), (76, 0.038), (77, 0.318), (79, 0.022), (81, 0.022), (90, 0.075), (97, 0.017)]
simIndex simValue paperId paperTitle
same-paper 1 0.84723896 100 nips-2000-Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks
Author: Richard H. R. Hahnloser, H. Sebastian Seung
Abstract: Ascribing computational principles to neural feedback circuits is an important problem in theoretical neuroscience. We study symmetric threshold-linear networks and derive stability results that go beyond the insights that can be gained from Lyapunov theory or energy functions. By applying linear analysis to subnetworks composed of coactive neurons, we determine the stability of potential steady states. We find that stability depends on two types of eigenmodes. One type determines global stability and the other type determines whether or not multistability is possible. We can prove the equivalence of our stability criteria with criteria taken from quadratic programming. Also, we show that there are permitted sets of neurons that can be coactive at a steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we can provide a formulation of longterm memory that is more general than the traditional perspective of fixed point attractor networks. A Lyapunov-function can be used to prove that a given set of differential equations is convergent. For example, if a neural network possesses a Lyapunov-function, then for almost any initial condition, the outputs of the neurons converge to a stable steady state. In the past, this stability-property was used to construct attractor networks that associatively recall memorized patterns. Lyapunov theory applies mainly to symmetric networks in which neurons have monotonic activation functions [1, 2]. Here we show that the restriction of activation functions to threshold-linear ones is not a mere limitation, but can yield new insights into the computational behavior of recurrent networks (for completeness, see also [3]). We present three main theorems about the neural responses to constant inputs. The first theorem provides necessary and sufficient conditions on the synaptic weight matrix for the existence of a globally asymptotically stable set of fixed points. These conditions can be expressed in terms of copositivity, a concept from quadratic programming and linear complementarity theory. Alternatively, they can be expressed in terms of certain eigenvalues and eigenvectors of submatrices of the synaptic weight matrix, making a connection to linear systems theory. The theorem guarantees that the network will produce a steady state response to any constant input. We regard this response as the computational output of the network, and its characterization is the topic of the second and third theorems. In the second theorem, we introduce the idea of permitted and forbidden sets. Under certain conditions on the synaptic weight matrix, we show that there exist sets of neurons that are
2 0.83188045 105 nips-2000-Programmable Reinforcement Learning Agents
Author: David Andre, Stuart J. Russell
Abstract: We present an expressive agent design language for reinforcement learning that allows the user to constrain the policies considered by the learning process.The language includes standard features such as parameterized subroutines, temporary interrupts, aborts, and memory variables, but also allows for unspecified choices in the agent program. For learning that which isn't specified, we present provably convergent learning algorithms. We demonstrate by example that agent programs written in the language are concise as well as modular. This facilitates state abstraction and the transferability of learned skills.
3 0.70356846 37 nips-2000-Convergence of Large Margin Separable Linear Classification
Author: Tong Zhang
Abstract: Large margin linear classification methods have been successfully applied to many applications. For a linearly separable problem, it is known that under appropriate assumptions, the expected misclassification error of the computed
4 0.67519855 81 nips-2000-Learning Winner-take-all Competition Between Groups of Neurons in Lateral Inhibitory Networks
Author: Xiaohui Xie, Richard H. R. Hahnloser, H. Sebastian Seung
Abstract: It has long been known that lateral inhibition in neural networks can lead to a winner-take-all competition, so that only a single neuron is active at a steady state. Here we show how to organize lateral inhibition so that groups of neurons compete to be active. Given a collection of potentially overlapping groups, the inhibitory connectivity is set by a formula that can be interpreted as arising from a simple learning rule. Our analysis demonstrates that such inhibition generally results in winner-take-all competition between the given groups, with the exception of some degenerate cases. In a broader context, the network serves as a particular illustration of the general distinction between permitted and forbidden sets, which was introduced recently. From this viewpoint, the computational function of our network is to store and retrieve memories as permitted sets of coactive neurons. In traditional winner-take-all networks, lateral inhibition is used to enforce a localized, or
5 0.52300239 111 nips-2000-Regularized Winnow Methods
Author: Tong Zhang
Abstract: In theory, the Winnow multiplicative update has certain advantages over the Perceptron additive update when there are many irrelevant attributes. Recently, there has been much effort on enhancing the Perceptron algorithm by using regularization, leading to a class of linear classification methods called support vector machines. Similarly, it is also possible to apply the regularization idea to the Winnow algorithm, which gives methods we call regularized Winnows. We show that the resulting methods compare with the basic Winnows in a similar way that a support vector machine compares with the Perceptron. We investigate algorithmic issues and learning properties of the derived methods. Some experimental results will also be provided to illustrate different methods.
6 0.47672984 74 nips-2000-Kernel Expansions with Unlabeled Examples
7 0.46740478 7 nips-2000-A New Approximate Maximal Margin Classification Algorithm
8 0.46655196 79 nips-2000-Learning Segmentation by Random Walks
9 0.46486676 4 nips-2000-A Linear Programming Approach to Novelty Detection
10 0.46454173 21 nips-2000-Algorithmic Stability and Generalization Performance
11 0.46202001 52 nips-2000-Fast Training of Support Vector Classifiers
12 0.45813528 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition
13 0.45640254 133 nips-2000-The Kernel Gibbs Sampler
14 0.45551568 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
15 0.45303628 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
16 0.45154229 69 nips-2000-Incorporating Second-Order Functional Knowledge for Better Option Pricing
17 0.45148057 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks
18 0.45053831 22 nips-2000-Algorithms for Non-negative Matrix Factorization
19 0.45027679 122 nips-2000-Sparse Representation for Gaussian Process Models
20 0.44935599 60 nips-2000-Gaussianization