nips nips2010 nips2010-16 knowledge-graph by maker-knowledge-mining

16 nips-2010-A VLSI Implementation of the Adaptive Exponential Integrate-and-Fire Neuron Model


Source: pdf

Author: Sebastian Millner, Andreas Grübl, Karlheinz Meier, Johannes Schemmel, Marc-olivier Schwartz

Abstract: We describe an accelerated hardware neuron capable of emulating the adaptive exponential integrate-and-fire neuron model. Firing patterns of the membrane stimulated by a step current are analyzed in transistor level simulations and in silicon on a prototype chip. The neuron is destined to be the hardware neuron of a highly integrated wafer-scale system reaching out for new computational paradigms and opening new experimentation possibilities. As the neuron is dedicated as a universal device for neuroscientific experiments, the focus lies on parameterizability and reproduction of the analytical model. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract We describe an accelerated hardware neuron capable of emulating the adaptive exponential integrate-and-fire neuron model. [sent-3, score-0.882]

2 Firing patterns of the membrane stimulated by a step current are analyzed in transistor level simulations and in silicon on a prototype chip. [sent-4, score-0.544]

3 The neuron is destined to be the hardware neuron of a highly integrated wafer-scale system reaching out for new computational paradigms and opening new experimentation possibilities. [sent-5, score-0.829]

4 As the neuron is dedicated as a universal device for neuroscientific experiments, the focus lays on parameterizability and reproduction of the analytical model. [sent-6, score-0.349]

5 1 Introduction: Since the beginning of neuromorphic engineering [1, 2], designers have had great success in building VLSI neurons mimicking the behavior of biological neurons using analog circuits [3–8]. [sent-7, score-0.583]

6 It has been debated [4] whether it is best to emulate an established model or to create a new one using analog circuits. [sent-9, score-0.181]

7 Our approach is to gain access to the computational power of neural systems by creating a device able to emulate biologically relevant spiking neural networks whose behavior can be reproduced in a traditional simulation environment for modeling. [sent-12, score-0.469]

8 The use of a commonly known model enables modelers to do experiments on neuromorphic hardware and compare them to simulations. [sent-13, score-0.35]

9 The software framework PyNN [11, 12] even allows for directly switching between a simulator and the neuromorphic hardware device, allowing modelers to access the hardware on a high level without knowing all implementation details. [sent-15, score-0.593]

10 The hardware neuron presented here can emulate the adaptive exponential integrate-and-fire neuron model (AdEx) [13], developed within the FACETS-project [14]. [sent-16, score-0.953]

11 The AdEx model can produce complex firing patterns observed in biology [15], like spike-frequency-adaptation, bursting, regular spiking, irregular spiking and transient spiking by tuning a limited number of parameters [16]. [sent-17, score-0.421]

12 The dynamics of the membrane voltage V and the adaptation variable w are given by
−Cm dV/dt = gl (V − El ) − gl ∆T e^((V − VT )/∆T ) + ge (V − Ee ) + gi (V − Ei ) + w − I (1)
τw dw/dt = a(V − El ) − w. (2)
Cm , gl , ge and gi are the membrane capacitance, the leakage conductance and the conductances for excitatory and inhibitory synaptic inputs, where ge and gi depend on time and the inputs from other neurons. [sent-19, score-0.706]

13 El , Ei and Ee are the leakage reversal potential and the synaptic reversal potentials. [sent-20, score-0.191]

14 The time constant of the adaptation variable is τw and a is called the adaptation parameter. [sent-22, score-0.292]

15 If the membrane voltage crosses a certain threshold voltage Θ, the neuron is reset: V → Vreset ; w → w + b. [sent-24, score-0.936]

16 Due to the sharp rise, created by the exponential term in equation 1, the exact value of Θ is not critical for the determination of the moment of a spike [13]. [sent-26, score-0.213]
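The model summarized above (equations 1 and 2 plus the reset rule V → Vreset, w → w + b) can be sketched as a forward-Euler simulation. This is a minimal illustrative sketch, not the hardware or the authors' software; all parameter values below are assumptions chosen only to produce tonic spiking, and synaptic conductances are omitted in favor of a constant stimulus current I.

```python
import numpy as np

def simulate_adex(I, dt=1e-5, T=0.3):
    """Forward-Euler sketch of the AdEx model (illustrative parameters)."""
    C_m, g_l = 200e-12, 10e-9             # membrane capacitance [F], leak conductance [S]
    E_l, V_T, D_T = -70e-3, -50e-3, 2e-3  # leak reversal, threshold, slope factor [V]
    a, b, tau_w = 2e-9, 50e-12, 100e-3    # adaptation conductance [S], increment [A], tau_w [s]
    Theta, V_reset = -30e-3, -58e-3       # detection threshold and reset potential [V]

    n = int(T / dt)
    V, w = E_l, 0.0
    spikes, trace = [], np.empty(n)
    for i in range(n):
        # equation 1: leak, exponential spike-initiation term, adaptation, stimulus
        dV = (-g_l * (V - E_l) + g_l * D_T * np.exp((V - V_T) / D_T) - w + I) / C_m
        # equation 2: adaptation variable relaxes towards a*(V - E_l)
        dw = (a * (V - E_l) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V > Theta:                     # reset rule: V -> V_reset, w -> w + b
            V, w = V_reset, w + b
            spikes.append(i * dt)
        trace[i] = V
    return np.array(spikes), trace

spikes, trace = simulate_adex(I=500e-12)
```

Because the exponential term makes the rise near threshold very sharp, the spike times produced by such a sketch are largely insensitive to the exact choice of Θ, as the text notes.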

17 The neuron is integrated on a prototype chip called HICANN2 [17–19] (figure 2) which has been produced in 2009. [sent-34, score-0.393]

18 Each HICANN contains 512 dendrite membrane (DenMem) circuits (figure 3), each being connected to 224 dynamic input synapses. [sent-35, score-0.308]

19 Neurons are built of DenMems by shorting their membrane capacitances, gaining up to 14336 input synapses for a single neuron. [sent-36, score-0.259]

20 The HICANN is prepared for integration into the FACETS wafer-scale system [17–19], allowing 384 HICANNs to be interconnected on an uncut silicon wafer via a high-speed bus system, so networks of up to 196 608 neurons can be emulated on a single wafer. [sent-37, score-0.263]

21 Another VLSI neuron designed with a time scaling factor is presented in [7]. [sent-41, score-0.293]

22 This implementation is capable of reproducing many different firing patterns of cortical neurons, but has no direct correspondence to a neuron model from the modeling community. [sent-42, score-0.339]

23 (Footnote 2: High Input Count Analog Neural Network.) Figure 2: Photograph of the HICANN chip, showing the bus system, synapse array, neuron block and floating gates (10 mm). [sent-43, score-0.405]

24 2.1 Neuron implementation: The smallest part of a neuron is a DenMem, which implements the terms of the AdEx neuron described above. [sent-44, score-0.586]

25 Each term is constructed by a single circuit using operational amplifiers (OP) and operational transconductance amplifiers (OTA) and can be switched off separately, so less complex models like the leaky integrate-and-fire model implemented in [9] can be emulated. [sent-45, score-0.198]

26 A first, incompletely implemented version of the neuron was proposed in [17]. [sent-48, score-0.328]

27 Some simulation results of the actual neuron can be found in [19]. [sent-49, score-0.332]

28 Figure 3: Schematic diagram of the AdEx neuron circuit (block labels: input from neighbour neurons, leak, two synaptic inputs (SynIn), exponential (Exp), spiking/connection, membrane capacitance, reset (VReset), adaptation, STDP/network input, current input, membrane output). Figure 3 shows a block diagram of a DenMem. [sent-50, score-0.420]

29 During normal operation, the neuron gets rectangular shaped current pulses as input from the synapse array (figure 2) at one of the two synaptic input circuits. [sent-51, score-0.539]

30 Inside these circuits the current is integrated by a leaky integrator OP-circuit resulting in a voltage that is transformed to a current by an OTA. [sent-52, score-0.426]

31 Using this current as bias for another OTA, a sharply rising and exponentially decaying synaptic input conductance is created. [sent-53, score-0.241]

32 Each DenMem is equipped with two synaptic input circuits, each having its own connection to the synapse array. [sent-54, score-0.17]

33 The output of a synapse can be chosen between them, which allows for two independent synaptic channels which could be inhibitory or excitatory. [sent-55, score-0.17]

34 The leakage term of equation 1 can be implemented directly using an OTA, building a conductance between the leakage potential El and the membrane voltage V . [sent-56, score-0.744]

35 Now the time constant τw shall be created by a capacitance Cadapt and a conductance gadapt and we get:
−Cadapt dVadapt /dt = gadapt (Vadapt − V ). (6) [sent-58, score-0.289]

36 We need to transform b into a voltage using the conductance a and get
Ib tpulse / Cadapt = b/a (7)
where the fixed tpulse is the time a current Ib increases Vadapt on Cadapt at each detected spike of a neuron. [sent-59, score-0.6]

37 These resulting equations for adaptation can be directly implemented as a circuit. [sent-60, score-0.181]

38 To generate the correct gate-source voltage, a non-inverting amplifier multiplies the difference between the membrane voltage and a voltage Vt by an adjustable factor. [sent-62, score-0.724]

39 The gate-source voltage of M1 is
VGS,M1 = (R1 /R2 )(V − Vt ). (8)
Deployed in the equation for a MOSFET in sub-threshold mode, this results in a current depending exponentially on V following equation 1, where ∆T can be adjusted via the resistors R1 and R2 . [sent-64, score-0.304]

40 The factor in front of the exponential, gl ∆T , and the VT of the model can be changed by moving the circuit's Vt . [sent-65, score-0.276]

41 Figure 4: Simplified schematic of the exponential circuit. Our neuron detects a spike at a directly adjustable threshold voltage Θ; this is especially necessary as the circuit can implement not only the AdEx model but also less complex models. [sent-67, score-0.993]

42 In a model without a sharp spike, like the one created by the positive feedback of the exponential term, spike timing very much depends on the exact voltage Θ. [sent-68, score-0.442]

43 A detected spike triggers resetting of the membrane by a current pulse to a potential Vreset for an adjustable time. [sent-69, score-0.422]

44 2.2 Parameterization: In contrast to most other systems, we are using analog floating-gate memories similar to [20] as the storage device for the analog parameters of a neuron. [sent-72, score-0.314]

45 This way, matching issues can be counterbalanced, and different types of neurons can be implemented on a single chip enhancing the universality of the wafer-scale system. [sent-74, score-0.218]

46 Technical biasing parameters and parameters of the synaptic input circuits are excluded. [sent-76, score-0.258]

47 As these switches are parameterized globally, the ranges of a parameter within a neuron group (one quarter of a HICANN) need to be of the same order of magnitude. [sent-79, score-0.333]

48 (Footnote 3: metal-oxide-semiconductor field-effect transistor.) Table 1: Neuron parameters. PARAMETER (SHARING): gl (individual), a (individual), gadapt (individual), Ib (individual), tpulse (fixed), Vreset (global), Vexp (individual), treset (global), Cmem (global), Cadapt (fixed), ∆t (individual), Θ (individual). RANGE: 34 nS. [sent-80, score-0.271]

49 The chosen ranges allow leakage time constants τmem = Cmem /gl at an acceleration factor of 10^4 between 1 ms and 588 ms and an adaptation time constant τw between 10 ms and 5 s in terms of biological real time. [sent-104, score-0.410]
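The 588 ms endpoint can be checked directly: with the 2 pF membrane capacitance used in section 3 and the smallest leakage conductance of 34 nS from Table 1, scaling by the acceleration factor of 10^4 gives exactly that figure. A small sketch (the function name is ours):

```python
def bio_time_constant(C_mem, g, accel=1e4):
    """Hardware time constant tau = C/g, scaled to biological real time
    by the acceleration factor (10^4 in the text)."""
    return C_mem / g * accel

# C_mem = 2 pF and g_l = 34 nS (the smallest value in Table 1):
tau = bio_time_constant(2e-12, 34e-9)
print(round(tau * 1e3))  # prints 588 (milliseconds)
```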

50 As OTAs are used for modeling conductances, and linear operation for this type of device can only be achieved for small voltage differences, it is necessary to limit the operating range of the variables V and Vadapt to some hundreds of millivolts. [sent-107, score-0.229]

51 If this range is left, the OTAs no longer act as conductances but as constant current sources, so, for example, spike-triggered adaptation ceases. [sent-108, score-0.374]

52 A neuron can be composed of up to 64 DenMem circuits, hence several different adaptation variables, each with its own time constant, are allowed. [sent-109, score-0.566]

53 2.3 Parameter mapping: For a given set of parameters from the AdEx model, we want to reproduce the exact same behavior with our hardware neuron. [sent-111, score-0.243]

54 Therefore, a simple two-step procedure was developed to translate biological parameters from the AdEx model to hardware parameters. [sent-112, score-0.320]

55 The translation procedure is summarized in figure 5 (Biological AdEx parameters → Scaling → Scaled AdEx parameters → Translation → Hardware parameters). Figure 5: Biology to hardware parameter translation. The first step is to scale the biological AdEx parameters in terms of time and voltage. [sent-113, score-0.320]

56 Then, a voltage scaling factor is defined, by which the biological voltage parameters are multiplied. [sent-115, score-0.306]
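The first (scaling) step can be sketched as follows. The acceleration factor of 10^4 is from the text; the voltage scaling factor value, the dictionary keys and the function name are illustrative assumptions, and a real translation may also involve voltage offsets not shown here.

```python
ACCEL = 1e4      # hardware acceleration factor (from the text)
V_SCALE = 2.0    # illustrative voltage scaling factor (assumption)

def scale_adex(bio):
    """Step 1 of the translation: scale biological AdEx parameters
    in time and voltage.  Keys are illustrative."""
    scaled = dict(bio)
    # time constants shrink by the acceleration factor
    scaled['tau_w'] = bio['tau_w'] / ACCEL
    # voltage parameters are multiplied by the voltage scaling factor
    for k in ('E_l', 'V_T', 'Delta_T', 'V_reset'):
        scaled[k] = bio[k] * V_SCALE
    return scaled

scaled = scale_adex({'tau_w': 0.1, 'E_l': -0.07, 'V_T': -0.05,
                     'Delta_T': 0.002, 'V_reset': -0.058})
```

The second step, mapping these scaled values onto circuit parameters, relies on the transistor-level characterization described below and is not sketched here.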

57 The second step is to translate the parameters from the scaled AdEx model to hardware parameters. [sent-117, score-0.243]

58 For this purpose, each part of the DenMem circuit was characterized in transistor-level simulations using a circuit simulator. [sent-118, score-0.308]

59 This theoretical characterization was then used to establish mathematical relations between scaled AdEx parameters and hardware parameters. [sent-119, score-0.243]

60 4 Measurement capabilities For neuron measuring purposes, the membrane can be either stimulated by incoming events from the synapse array - as an additional feature a Poisson event source is implemented on the chip - or by a programmable current. [sent-121, score-0.76]

61 Four current sources are implemented on the chip, allowing adjacent neurons to be stimulated individually. [sent-123, score-0.255]

62 The membrane voltage and all stored parameters in the floating gates can directly be measured via one of the two analog outputs of the HICANN chip. [sent-125, score-0.524]

63 To characterize the chip, parameters like the membrane capacitance need to be measured indirectly, for example using the OTA emulating gl as a current source. [sent-127, score-0.362]

64 3 Results: Different firing patterns have been reproduced using our hardware neuron and the current stimulus, in circuit simulation and in silicon, by inducing a periodic step current onto the membrane. [sent-128, score-0.917]

65 The examined neuron consists of two DenMem circuits with their membrane capacitances switched to 2 pF each. [sent-129, score-0.673]

66 Figure 6 shows results of some reproduced patterns according to [23] or [16], each shown next to its phase plane trajectory of V and Vadapt . [sent-130, score-0.172]

67 gadapt and gl have been chosen equal in all simulations except tonic spiking to simplify the nullclines:
Vadapt = −(gl /a)(V − El ) + (gl /a) ∆T e^((V − VT )/∆T ) + El + I/a (9)
Vadapt = V (10)
As described in [16], the AdEx model allows different types of spike after-potentials (SAP). [sent-133, score-0.837]
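Reading equation 10 as the Vadapt-nullcline Vadapt = V and equation 9 as the V-nullcline, the equilibria of the phase plane are the crossings of the two curves. A numerical sketch with illustrative (not hardware) parameters, assuming that form of equation 9:

```python
import numpy as np

def v_nullcline(V, g_l, a, E_l, V_T, D_T, I):
    """V-nullcline of equation 9: Vadapt as a function of V."""
    return (-g_l / a * (V - E_l)
            + g_l / a * D_T * np.exp((V - V_T) / D_T)
            + E_l + I / a)

# The Vadapt-nullcline of equation 10 is Vadapt = V, so equilibria are
# sign changes of v_nullcline(V, ...) - V.  Illustrative parameters,
# with a = g_l as in the simulations described above and no stimulus:
V = np.linspace(-0.08, -0.05, 2000)
g = v_nullcline(V, g_l=10e-9, a=10e-9,
                E_l=-0.07, V_T=-0.05, D_T=2e-3, I=0.0) - V
crossings = np.where(np.sign(g[:-1]) != np.sign(g[1:]))[0]
```

With these values the curves cross once, at a stable resting potential just above El; increasing I shifts the V-nullcline upwards until the fixed point disappears and the neuron fires.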

68 Sharp SAPs are reached if the reset after a spike sets the trajectory to a point below the V-nullcline. [sent-134, score-0.209]

69 If the reset ends in a point above the V-nullcline, the membrane voltage will be pulled down below the reset voltage Vreset by the adaptation current. [sent-135, score-0.965]

70 Here, a has been set to zero, while gl has been doubled to keep the total conductance at a similar level. [sent-137, score-0.207]

71 Parameters between simulation and measurement are only roughly mapped, as the precise mapping algorithm is still in progress - on a real chip there is a variation of transistor parameters which still needs to be counterbalanced by parameter choice. [sent-138, score-0.222]

72 As a metric for adaptation, [24] and [16] use the accommodation index:
A = (1/(N − k − 1)) · Σ_{i=k}^{N} (ISIi − ISIi−1 )/(ISIi + ISIi−1 ) (11)
Here k determines the number of ISIs excluded from A to exclude transient behavior [15, 24] and can be chosen as one fifth for small numbers of ISIs [24]. [sent-140, score-0.223]

73 The metric calculates the average of the difference between two neighbouring ISIs weighted by their sum, so it should be zero for ideal tonic spiking. [sent-141, score-0.160]
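Equation 11 can be sketched directly. The function below is illustrative; it normalizes by the actual number of summed terms and defaults k to one fifth of the train, as suggested in [24]:

```python
def accommodation_index(isis, k=None):
    """Accommodation index A of equation 11: average of neighbouring
    ISI differences weighted by their sums, excluding the first k
    (transient) ISIs."""
    n = len(isis)
    if k is None:
        k = n // 5  # one fifth, as suggested in [24]
    terms = [(isis[i] - isis[i - 1]) / (isis[i] + isis[i - 1])
             for i in range(k + 1, n)]
    return sum(terms) / len(terms)

# Constant ISIs (ideal tonic spiking) give A = 0; steadily lengthening
# ISIs (spike-frequency adaptation) give A > 0.
```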

74 (a) Tonic spiking [panel plots over Time[µs], 0 to 30] [sent-175, score-0.167]

75 (d) Tonic burst [panel plots over Time[µs], 0 to 30]. Figure 6: Phase plane and transient plot from simulations and measurement results of the neuron stimulated by a step current of 600 nA. [sent-272, score-0.554]

76 As the parameters have been chosen to reproduce the patterns distinctly (adaptation is switched off for tonic spiking and strong for spike-frequency adaptation), they are somewhat extreme in comparison to the calculated ones in [24], which are 0. [sent-276, score-0.494]

77 Defining a burst by looking just at the spike frequency is ambiguous. [sent-281, score-0.215]

78 To generate bursting behavior, the reset has to be set to a value above the exponential threshold so that V is pulled upwards by the exponential directly after a spike. [sent-284, score-0.294]

79 As can be seen in figure 1, depending on the sharpness ∆T of the exponential term, the exact reset voltage Vr might be critical in bursting, when resetting above the exponential threshold and the nullcline is already steep at this point. [sent-285, score-0.495]

80 The AdEx model is capable of irregular spiking, in contrast to the Izhikevich neuron [25], which uses a quadratic term to simulate the rise at a spike. [sent-286, score-0.460]

81 The chaotic spiking capability of the AdEx model has been shown in [16]. [sent-287, score-0.167]

82 In hardware, we observe that it is common to reach regimes where the exact number of spikes in a burst is not constant, so the distance to the next spike or burst may differ in the next period. [sent-288, score-0.309]

83 Another effect is that if the equilibrium potential - the potential where the nullclines cross - is near Vt , noise may cause the membrane to cross Vt and hence generate a spike (compare the phase planes in figures 6c and 6d). [sent-289, score-0.416]

84 In phasic bursting, the nullclines still cross in a stable fixed point - the resting potential caused by adaptation, leakage and stimulus is below the firing threshold of the exponential. [sent-291, score-0.280]

85 Patterns reproduced in experiment and simulations but not shown here are phasic spiking and initial bursting. [sent-292, score-0.35]

86 4 Discussion: The main feature of our neuron is its capability of directly reproducing the AdEx model. [sent-293, score-0.293]

87 Nevertheless, it is low power in comparison to simulation on a supercomputer (an estimated 100 µW versus 370 mW on a Blue Gene/P [26] at an acceleration factor of 10^4 ; the computing time of the Izhikevich neuron model [23] is used as an estimate. [sent-295, score-0.385]

88 ) and does not consume much chip area in comparison to the synapse array and communication infrastructure on the HICANN (figure 2). [sent-296, score-0.212]

89 Due to the design approach - implementing an established model instead of developing a new model best fitting the hardware devices - we gain a neuron allowing neuroscientists to do experiments without being hardware specialists. [sent-302, score-0.779]

90 5 Outlook: The neuron topology - several DenMems interconnected to form a neuron - lends itself to enhancement towards a multi-compartment model. [sent-303, score-0.620]

91 The simulations and measurements in this work qualitatively reproduce patterns observed in biology and reproduced by the AdEx model in [16]. [sent-305, score-0.158]

92 Nested in the FACETS wafer-scale system, our neuron will complete the universality of the system by a versatile core for analog computation. [sent-308, score-0.403]

93 Encapsulation of the parameter mapping into low-level software and PyNN [12] integration of the system will allow computational neuroscientists to do experiments on the hardware and compare them to simulations, or to do large experiments currently not implementable in a simulation. [sent-309, score-0.243]

94 A current-mode conductance-based silicon neuron for address-event neuromorphic systems. [sent-342, score-0.54]

95 Compact silicon neuron circuit with spiking and bursting behaviour. [sent-349, score-0.827]

96 Implementing synaptic plasticity in a VLSI spiking u neural network model. [sent-357, score-0.264]

97 u u Establishing a novel modeling tool: A python-based interface for a neuromorphic hardware system. [sent-373, score-0.35]

98 Realizing biological spiking network models in a configurable wafer-scale hardware system. [sent-419, score-0.487]

99 A wafer-scale neuromorphic u u hardware system for large-scale neural modeling. [sent-428, score-0.35]

100 A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data. [sent-450, score-0.293]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('adex', 0.373), ('neuron', 0.293), ('vadapt', 0.266), ('hardware', 0.243), ('voltage', 0.229), ('membrane', 0.185), ('spiking', 0.167), ('adaptation', 0.146), ('silicon', 0.14), ('circuit', 0.127), ('denmem', 0.124), ('schemmel', 0.124), ('tonic', 0.124), ('circuits', 0.123), ('spike', 0.121), ('analog', 0.11), ('conductance', 0.107), ('hicann', 0.107), ('neuromorphic', 0.107), ('chip', 0.1), ('bursting', 0.1), ('gl', 0.1), ('synaptic', 0.097), ('burst', 0.094), ('leakage', 0.094), ('cadapt', 0.089), ('reset', 0.088), ('vt', 0.087), ('neurons', 0.083), ('ijcnn', 0.078), ('nullclines', 0.078), ('vreset', 0.078), ('biological', 0.077), ('synapse', 0.073), ('derle', 0.071), ('emulate', 0.071), ('gadapt', 0.071), ('iscas', 0.071), ('isii', 0.071), ('phasic', 0.071), ('ota', 0.062), ('vlsi', 0.062), ('reproduced', 0.058), ('device', 0.056), ('simulations', 0.054), ('meier', 0.054), ('exponential', 0.053), ('el', 0.053), ('otas', 0.053), ('pynn', 0.053), ('tpulse', 0.053), ('acceleration', 0.053), ('transistor', 0.047), ('conductances', 0.047), ('patterns', 0.046), ('gure', 0.043), ('ib', 0.043), ('adjustable', 0.043), ('henry', 0.043), ('markram', 0.043), ('br', 0.041), ('transient', 0.041), ('capacitance', 0.04), ('networks', 0.04), ('ranges', 0.04), ('sharp', 0.039), ('array', 0.039), ('simulation', 0.039), ('ampli', 0.038), ('biasing', 0.038), ('gate', 0.038), ('alain', 0.038), ('facets', 0.038), ('gaining', 0.038), ('ge', 0.038), ('current', 0.037), ('stimulus', 0.037), ('pf', 0.036), ('switched', 0.036), ('accommodation', 0.036), ('capacitances', 0.036), ('cmem', 0.036), ('counterbalanced', 0.036), ('denmems', 0.036), ('dvadapt', 0.036), ('fieres', 0.036), ('isis', 0.036), ('karlheinz', 0.036), ('livi', 0.036), ('mosfet', 0.036), ('neighbored', 0.036), ('nullcline', 0.036), ('reseting', 0.036), ('synin', 0.036), ('nov', 0.035), ('stimulated', 0.035), ('implemented', 0.035), ('enhanced', 0.034), ('ring', 0.033), ('phase', 0.032)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000013 16 nips-2010-A VLSI Implementation of the Adaptive Exponential Integrate-and-Fire Neuron Model

Author: Sebastian Millner, Andreas Grübl, Karlheinz Meier, Johannes Schemmel, Marc-olivier Schwartz

Abstract: We describe an accelerated hardware neuron capable of emulating the adaptive exponential integrate-and-fire neuron model. Firing patterns of the membrane stimulated by a step current are analyzed in transistor level simulations and in silicon on a prototype chip. The neuron is destined to be the hardware neuron of a highly integrated wafer-scale system reaching out for new computational paradigms and opening new experimentation possibilities. As the neuron is dedicated as a universal device for neuroscientific experiments, the focus lies on parameterizability and reproduction of the analytical model. 1

2 0.26248118 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

Author: Nicholas Fisher, Arunava Banerjee

Abstract: From a functional viewpoint, a spiking neuron is a device that transforms input spike trains on its various synapses into an output spike train on its axon. We demonstrate in this paper that the function mapping underlying the device can be tractably learned based on input and output spike train data alone. We begin by posing the problem in a classification based framework. We then derive a novel kernel for an SRM0 model that is based on PSP and AHP like functions. With the kernel we demonstrate how the learning problem can be posed as a Quadratic Program. Experimental results demonstrate the strength of our approach. 1

3 0.17429763 115 nips-2010-Identifying Dendritic Processing

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. The input to the circuit is an analog signal that belongs to the space of bandlimited functions. The output is a time sequence associated with the spike train. We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision. 1

4 0.16483968 8 nips-2010-A Log-Domain Implementation of the Diffusion Network in Very Large Scale Integration

Author: Yi-da Wu, Shi-jie Lin, Hsin Chen

Abstract: The Diffusion Network(DN) is a stochastic recurrent network which has been shown capable of modeling the distributions of continuous-valued, continuoustime paths. However, the dynamics of the DN are governed by stochastic differential equations, making the DN unfavourable for simulation in a digital computer. This paper presents the implementation of the DN in analogue Very Large Scale Integration, enabling the DN to be simulated in real time. Moreover, the logdomain representation is applied to the DN, allowing the supply voltage and thus the power consumption to be reduced without limiting the dynamic ranges for diffusion processes. A VLSI chip containing a DN with two stochastic units has been designed and fabricated. The design of component circuits will be described, so will the simulation of the full system be presented. The simulation results demonstrate that the DN in VLSI is able to regenerate various types of continuous paths in real-time. 1

5 0.13872112 252 nips-2010-SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

Author: Sylvain Chevallier, Hélène Paugam-moisy, Michele Sebag

Abstract: Many complex systems, ranging from neural cell assemblies to insect societies, involve and rely on some division of labor. How to enforce such a division in a decentralized and distributed way, is tackled in this paper, using a spiking neuron network architecture. Specifically, a spatio-temporal model called SpikeAnts is shown to enforce the emergence of synchronized activities in an ant colony. Each ant is modelled from two spiking neurons; the ant colony is a sparsely connected spiking neuron network. Each ant makes its decision (among foraging, sleeping and self-grooming) from the competition between its two neurons, after the signals received from its neighbor ants. Interestingly, three types of temporal patterns emerge in the ant colony: asynchronous, synchronous, and synchronous periodic foraging activities − similar to the actual behavior of some living ant colonies. A phase diagram of the emergent activity patterns with respect to two control parameters, respectively accounting for ant sociability and receptivity, is presented and discussed. 1

6 0.135794 96 nips-2010-Fractionally Predictive Spiking Neurons

7 0.12982634 244 nips-2010-Sodium entry efficiency during action potentials: A novel single-parameter family of Hodgkin-Huxley models

8 0.11683626 157 nips-2010-Learning to localise sounds with spiking neural networks

9 0.10668322 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

10 0.10110703 253 nips-2010-Spike timing-dependent plasticity as dynamic filter

11 0.086963408 161 nips-2010-Linear readout from a neural population with partial correlation data

12 0.084459327 268 nips-2010-The Neural Costs of Optimal Control

13 0.081029139 227 nips-2010-Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models

14 0.077088721 34 nips-2010-Attractor Dynamics with Synaptic Depression

15 0.067331649 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

16 0.063673995 18 nips-2010-A novel family of non-parametric cumulative based divergences for point processes

17 0.060854483 119 nips-2010-Implicit encoding of prior probabilities in optimal neural populations

18 0.059227854 68 nips-2010-Effects of Synaptic Weight Diffusion on Learning in Decision Making Networks

19 0.053138662 32 nips-2010-Approximate Inference by Compilation to Arithmetic Circuits

20 0.050621834 263 nips-2010-Switching state space model for simultaneously estimating state transitions and nonstationary firing rates


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.109), (1, 0.023), (2, -0.166), (3, 0.206), (4, 0.091), (5, 0.235), (6, -0.067), (7, 0.097), (8, 0.09), (9, -0.032), (10, 0.023), (11, 0.076), (12, 0.05), (13, 0.078), (14, 0.066), (15, 0.016), (16, 0.041), (17, -0.087), (18, -0.018), (19, -0.102), (20, 0.003), (21, 0.065), (22, -0.113), (23, -0.009), (24, -0.043), (25, -0.056), (26, 0.045), (27, -0.075), (28, 0.002), (29, -0.12), (30, 0.047), (31, 0.033), (32, 0.005), (33, -0.13), (34, 0.07), (35, 0.035), (36, 0.099), (37, 0.047), (38, 0.008), (39, 0.02), (40, -0.017), (41, -0.001), (42, 0.104), (43, -0.067), (44, -0.007), (45, 0.032), (46, -0.016), (47, 0.009), (48, 0.117), (49, 0.115)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97028315 16 nips-2010-A VLSI Implementation of the Adaptive Exponential Integrate-and-Fire Neuron Model

Author: Sebastian Millner, Andreas Grübl, Karlheinz Meier, Johannes Schemmel, Marc-olivier Schwartz

Abstract: We describe an accelerated hardware neuron being capable of emulating the adaptive exponential integrate-and-fire neuron model. Firing patterns of the membrane stimulated by a step current are analyzed in transistor level simulations and in silicon on a prototype chip. The neuron is destined to be the hardware neuron of a highly integrated wafer-scale system reaching out for new computational paradigms and opening new experimentation possibilities. As the neuron is dedicated as a universal device for neuroscientific experiments, the focus lays on parameterizability and reproduction of the analytical model. 1

2 0.85550416 115 nips-2010-Identifying Dendritic Processing

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. The input to the circuit is an analog signal that belongs to the space of bandlimited functions. The output is a time sequence associated with the spike train. We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision. 1

3 0.79789138 157 nips-2010-Learning to localise sounds with spiking neural networks

Author: Dan Goodman, Romain Brette

Abstract: To localise the source of a sound, we use location-specific properties of the signals received at the two ears caused by the asymmetric filtering of the original sound by our head and pinnae, the head-related transfer functions (HRTFs). These HRTFs change throughout an organism’s lifetime, during development for example, and so the required neural circuitry cannot be entirely hardwired. Since HRTFs are not directly accessible from perceptual experience, they can only be inferred from filtered sounds. We present a spiking neural network model of sound localisation based on extracting location-specific synchrony patterns, and a simple supervised algorithm to learn the mapping between synchrony patterns and locations from a set of example sounds, with no previous knowledge of HRTFs. After learning, our model was able to accurately localise new sounds in both azimuth and elevation, including the difficult task of distinguishing sounds coming from the front and back. Keywords: Auditory Perception & Modeling (Primary); Computational Neural Models, Neuroscience, Supervised Learning (Secondary) 1

4 0.76223534 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

Author: Nicholas Fisher, Arunava Banerjee

Abstract: From a functional viewpoint, a spiking neuron is a device that transforms input spike trains on its various synapses into an output spike train on its axon. We demonstrate in this paper that the function mapping underlying the device can be tractably learned based on input and output spike train data alone. We begin by posing the problem in a classification based framework. We then derive a novel kernel for an SRM0 model that is based on PSP and AHP like functions. With the kernel we demonstrate how the learning problem can be posed as a Quadratic Program. Experimental results demonstrate the strength of our approach. 1

5 0.73660392 252 nips-2010-SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

Author: Sylvain Chevallier, Hélène Paugam-Moisy, Michele Sebag

Abstract: Many complex systems, ranging from neural cell assemblies to insect societies, involve and rely on some division of labor. How to enforce such a division in a decentralized and distributed way, is tackled in this paper, using a spiking neuron network architecture. Specifically, a spatio-temporal model called SpikeAnts is shown to enforce the emergence of synchronized activities in an ant colony. Each ant is modelled from two spiking neurons; the ant colony is a sparsely connected spiking neuron network. Each ant makes its decision (among foraging, sleeping and self-grooming) from the competition between its two neurons, after the signals received from its neighbor ants. Interestingly, three types of temporal patterns emerge in the ant colony: asynchronous, synchronous, and synchronous periodic foraging activities − similar to the actual behavior of some living ant colonies. A phase diagram of the emergent activity patterns with respect to two control parameters, respectively accounting for ant sociability and receptivity, is presented and discussed. 1

6 0.69346392 253 nips-2010-Spike timing-dependent plasticity as dynamic filter

7 0.65494972 96 nips-2010-Fractionally Predictive Spiking Neurons

8 0.575773 244 nips-2010-Sodium entry efficiency during action potentials: A novel single-parameter family of Hodgkin-Huxley models

9 0.54342216 8 nips-2010-A Log-Domain Implementation of the Diffusion Network in Very Large Scale Integration

10 0.4661662 227 nips-2010-Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models

11 0.42516005 18 nips-2010-A novel family of non-parametric cumulative based divergences for point processes

12 0.39663231 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

13 0.39561582 34 nips-2010-Attractor Dynamics with Synaptic Depression

14 0.39221996 68 nips-2010-Effects of Synaptic Weight Diffusion on Learning in Decision Making Networks

15 0.32556814 65 nips-2010-Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform

16 0.29791877 207 nips-2010-Phoneme Recognition with Large Hierarchical Reservoirs

17 0.29581404 161 nips-2010-Linear readout from a neural population with partial correlation data

18 0.27415791 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

19 0.2433825 268 nips-2010-The Neural Costs of Optimal Control

20 0.22622156 167 nips-2010-Mixture of time-warped trajectory models for movement decoding


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.026), (22, 0.023), (27, 0.05), (30, 0.027), (45, 0.104), (50, 0.031), (52, 0.027), (60, 0.011), (63, 0.011), (77, 0.584), (90, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.91089207 34 nips-2010-Attractor Dynamics with Synaptic Depression

Author: K. Wong, He Wang, Si Wu, Chi Fung

Abstract: Neuronal connection weights exhibit short-term depression (STD). The present study investigates the impact of STD on the dynamics of a continuous attractor neural network (CANN) and its potential roles in neural information processing. We find that the network with STD can generate both static and traveling bumps, and STD enhances the performance of the network in tracking external inputs. In particular, we find that STD endows the network with slow-decaying plateau behaviors: a network initially stimulated to an active state decays to silence very slowly, on the time scale of STD rather than that of neural signaling. We argue that this provides a mechanism for neural systems to hold short-term memory easily and shut off persistent activities naturally.
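The slow-decaying behavior rests on synaptic resources that are consumed by spikes and recover on a slow time scale. A minimal sketch of that mechanism is the depression variable of the Tsodyks-Markram synapse model (a standard formulation, not the paper's CANN; all parameter values below are illustrative):

```python
def simulate_std(spike_times, tau_d=0.2, U=0.5, dt=0.001, t_end=1.0):
    """Simulate the depression variable x of a Tsodyks-Markram synapse.

    Between spikes, x recovers toward 1 with time constant tau_d;
    each presynaptic spike consumes a fraction U of the available
    resources. Returns the efficacy U*x transmitted by each spike.
    """
    x = 1.0
    spikes = sorted(spike_times)
    efficacies = []
    t, idx = 0.0, 0
    while t < t_end:
        if idx < len(spikes) and t >= spikes[idx]:
            efficacies.append(U * x)  # efficacy seen by this spike
            x -= U * x                # resources consumed
            idx += 1
        x += dt * (1.0 - x) / tau_d   # slow exponential recovery toward 1
        t += dt
    return efficacies
```

Because recovery (tau_d) is much slower than the inter-spike intervals, successive spikes in a rapid train transmit progressively weaker efficacies, which is the depressive effect the paper builds on.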

same-paper 2 0.89043725 16 nips-2010-A VLSI Implementation of the Adaptive Exponential Integrate-and-Fire Neuron Model

Author: Sebastian Millner, Andreas Grübl, Karlheinz Meier, Johannes Schemmel, Marc-olivier Schwartz

Abstract: We describe an accelerated hardware neuron capable of emulating the adaptive exponential integrate-and-fire neuron model. Firing patterns of the membrane stimulated by a step current are analyzed in transistor-level simulations and in silicon on a prototype chip. The neuron is destined to become the hardware neuron of a highly integrated wafer-scale system reaching out for new computational paradigms and opening new experimentation possibilities. As the neuron is intended as a universal device for neuroscientific experiments, the focus lies on parameterizability and on faithful reproduction of the analytical model. 1
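The analytical model the chip emulates is the two-variable adaptive exponential integrate-and-fire (AdEx) system. A forward-Euler sketch of its step-current response is below; the parameter values are common literature values for illustration only, not the chip's calibration:

```python
import math

def adex_step_response(I_step=1e-9, t_end=0.3, dt=1e-5):
    """Forward-Euler integration of the AdEx neuron under a step current:

        C dV/dt     = -g_L(V - E_L) + g_L*DT*exp((V - V_T)/DT) - w + I
        tau_w dw/dt = a(V - E_L) - w

    with reset V -> V_r and w -> w + b whenever V crosses a numerical
    spike cutoff. Returns the spike times (illustrative parameters).
    """
    C, g_L, E_L = 281e-12, 30e-9, -70.6e-3
    V_T, DT = -50.4e-3, 2e-3
    tau_w, a, b, V_r = 144e-3, 4e-9, 80.5e-12, -70.6e-3
    V_spike = -30e-3  # numerical cutoff for the exponential divergence

    V, w = E_L, 0.0
    spike_times = []
    t = 0.0
    while t < t_end:
        I = I_step if t > 0.05 else 0.0  # step current switched on at 50 ms
        dV = (-g_L * (V - E_L) + g_L * DT * math.exp((V - V_T) / DT)
              - w + I) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_spike:      # spike: reset membrane, increment adaptation
            spike_times.append(t)
            V = V_r
            w += b
        t += dt
    return spike_times
```

The adaptation current w grows with each spike, so the inter-spike intervals lengthen over the step: the spike-frequency adaptation pattern analyzed in the paper.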

3 0.80408782 230 nips-2010-Robust Clustering as Ensembles of Affinity Relations

Author: Hairong Liu, Longin J. Latecki, Shuicheng Yan

Abstract: In this paper, we regard clustering as ensembles of k-ary affinity relations and clusters correspond to subsets of objects with maximal average affinity relations. The average affinity relation of a cluster is relaxed and well approximated by a constrained homogeneous function. We present an efficient procedure to solve this optimization problem, and show that the underlying clusters can be robustly revealed by using priors systematically constructed from the data. Our method can automatically select some points to form clusters, leaving other points un-grouped; thus it is inherently robust to large numbers of outliers, which has seriously limited the applicability of classical methods. Our method also provides a unified solution to clustering from k-ary affinity relations with k ≥ 2, that is, it applies to both graph-based and hypergraph-based clustering problems. Both theoretical analysis and experimental results show the superiority of our method over classical solutions to the clustering problem, especially when there exists a large number of outliers.
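The objective in the pairwise (k = 2) case is the average affinity of a candidate subset, which drops when an outlier is admitted. A minimal sketch of that score (just the objective, not the paper's relaxation or optimization procedure; the function name is illustrative):

```python
def average_affinity(affinity, cluster):
    """Average pairwise affinity of a candidate cluster.

    `affinity` is a symmetric matrix (list of lists) with zero diagonal;
    `cluster` is an iterable of member indices. Clusters maximising this
    score correspond to tight, mutually similar subsets.
    """
    members = list(cluster)
    if len(members) < 2:
        return 0.0
    total = sum(affinity[i][j] for i in members for j in members if i != j)
    return total / (len(members) * (len(members) - 1))
```

On a toy matrix with a tight block {0, 1, 2} and an outlier 3, the score of the block alone exceeds the score of the block plus the outlier, which is why maximising average affinity can leave outliers un-grouped.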

4 0.78341901 15 nips-2010-A Theory of Multiclass Boosting

Author: Indraneel Mukherjee, Robert E. Schapire

Abstract: Boosting combines weak classifiers to form highly accurate predictors. Although the case of binary classification is well understood, in the multiclass setting, the “correct” requirements on the weak classifier, or the notion of the most efficient boosting algorithms are missing. In this paper, we create a broad and general framework, within which we make precise and identify the optimal requirements on the weak-classifier, as well as design the most effective, in a certain sense, boosting algorithms that assume such requirements. 1

5 0.70514917 142 nips-2010-Learning Bounds for Importance Weighting

Author: Corinna Cortes, Yishay Mansour, Mehryar Mohri

Abstract: This paper presents an analysis of importance weighting for learning from finite samples and gives a series of theoretical and algorithmic results. We point out simple cases where importance weighting can fail, which suggests the need for an analysis of the properties of this technique. We then give both upper and lower bounds for generalization with bounded importance weights and, more significantly, give learning guarantees for the more common case of unbounded importance weights under the weak assumption that the second moment is bounded, a condition related to the Rényi divergence of the training and test distributions. These results are based on a series of novel and general bounds we derive for unbounded loss functions, which are of independent interest. We use these bounds to guide the definition of an alternative reweighting algorithm and report the results of experiments demonstrating its benefits. Finally, we analyze the properties of normalized importance weights which are also commonly used.
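The basic estimator being analysed reweights samples from a source distribution q by w(x) = p(x)/q(x) to estimate an expectation under the target p; the paper's bounds concern how the tail of w affects finite-sample behavior. A minimal sketch of the estimator itself (the bounds are not reproduced; names and signature are illustrative):

```python
import random

def importance_weighted_mean(f, p_pdf, q_pdf, q_sampler, n, seed=0):
    """Estimate E_p[f(X)] from n samples drawn from q, using importance
    weights w(x) = p(x) / q(x). Unbiased when q covers the support of p,
    but the variance depends on the second moment of w * f, which is why
    unbounded weights need the second-moment condition discussed above.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = q_sampler(rng)
        total += (p_pdf(x) / q_pdf(x)) * f(x)
    return total / n
```

For example, with target p uniform on [0, 1] and proposal q uniform on [0, 2], the weights are 2 on [0, 1] and 0 elsewhere, and the estimate of E_p[X] converges to 0.5.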

6 0.64941263 234 nips-2010-Segmentation as Maximum-Weight Independent Set

7 0.5824458 68 nips-2010-Effects of Synaptic Weight Diffusion on Learning in Decision Making Networks

8 0.53937596 252 nips-2010-SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

9 0.53560865 8 nips-2010-A Log-Domain Implementation of the Diffusion Network in Very Large Scale Integration

10 0.5231548 244 nips-2010-Sodium entry efficiency during action potentials: A novel single-parameter family of Hodgkin-Huxley models

11 0.52105707 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

12 0.51196998 253 nips-2010-Spike timing-dependent plasticity as dynamic filter

13 0.48698363 127 nips-2010-Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics

14 0.48296127 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

15 0.45535943 238 nips-2010-Short-term memory in neuronal networks through dynamical compressed sensing

16 0.41871345 117 nips-2010-Identifying graph-structured activation patterns in networks

17 0.41415757 115 nips-2010-Identifying Dendritic Processing

18 0.40426627 96 nips-2010-Fractionally Predictive Spiking Neurons

19 0.40048075 31 nips-2010-An analysis on negative curvature induced by singularity in multi-layer neural-network learning

20 0.3961747 30 nips-2010-An Inverse Power Method for Nonlinear Eigenproblems with Applications in 1-Spectral Clustering and Sparse PCA