nips nips2003 nips2003-16 knowledge-graph by maker-knowledge-mining

16 nips-2003-A Recurrent Model of Orientation Maps with Simple and Complex Cells


Source: pdf

Author: Paul Merolla, Kwabena A. Boahen

Abstract: We describe a neuromorphic chip that utilizes transistor heterogeneity, introduced by the fabrication process, to generate orientation maps similar to those imaged in vivo. Our model consists of a recurrent network of excitatory and inhibitory cells in parallel with a push-pull stage. Similar to a previous model, the recurrent network displays hotspots of activity that give rise to visual feature maps. Unlike previous work, however, the map for orientation does not depend on the sign of contrast. Instead, sign-independent cells driven by both ON and OFF channels anchor the map, while push-pull interactions give rise to sign-preserving cells. These two groups of orientation-selective cells are similar to complex and simple cells observed in V1.

1 Orientation Maps

Neurons in visual areas 1 and 2 (V1 and V2) are selectively tuned for a number of visual features, the most pronounced being orientation. Orientation preference of individual cells varies across the two-dimensional surface of the cortex in a stereotyped manner, as revealed by electrophysiology [1] and optical imaging studies [2]. The origin of these preferred orientation (PO) maps is debated, but experiments demonstrate that they exist in the absence of visual experience [3]. To the dismay of advocates of Hebbian learning, these results suggest that the initial appearance of PO maps relies on neural mechanisms oblivious to input correlations. Here, we propose a model that accounts for observed PO maps based on innate noise in neuron thresholds and synaptic currents. The network is implemented in silicon, where heterogeneity is as ubiquitous as it is in biology.

2 Patterned Activity Model

Ernst et al. have previously described a 2D rate model that can account for the origin of visual maps [4]. Individual units in their network receive isotropic feedforward input from the geniculate and recurrent connections from neighboring units in a Mexican hat profile, described by short-range excitation and long-range inhibition. If the recurrent connections are sufficiently strong, hotspots of activity (or 'bumps') form periodically across space. In a homogeneous network, these bumps of activity are equally stable at any position in the network and are free to wander. Introducing random jitter to the Mexican hat connectivity profiles breaks the symmetry and reduces the number of stable states for the bumps. Subsequently, the bumps are pinned down at the locations that maximize their net local recurrent feedback. In this regime, moving gratings are able to shift the bumps away from their stability points such that the responses of the network resemble PO maps. Therefore, the recurrent network, given an ample amount of noise, can innately generate its own orientation specificity without the need for specific hardwired connections or visually driven learning rules.
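The following is a minimal sketch of this patterned-activity mechanism, not the authors' code or the chip itself: threshold-linear rate units on a periodic 2D grid, coupled through a Mexican-hat (difference-of-Gaussians) kernel with random jitter. All grid sizes, gains, and time constants below are illustrative assumptions.

```python
# A minimal sketch (not the authors' implementation) of the Ernst-et-al.-style
# rate model described above. With uniform feedforward drive and a jittered
# Mexican-hat kernel, activity condenses into bumps pinned by the heterogeneity.
import numpy as np

N = 32                                   # grid of N x N units, periodic edges
rng = np.random.default_rng(0)

# Pairwise toroidal distances (squared) between all N*N units.
x = np.arange(N)
d1 = np.minimum(np.abs(x[:, None] - x[None, :]), N - np.abs(x[:, None] - x[None, :]))
d2 = (d1[:, None, :, None] ** 2 + d1[None, :, None, :] ** 2).reshape(N * N, N * N)

# Mexican hat: short-range excitation, long-range inhibition, plus innate jitter.
W = np.exp(-d2 / (2 * 1.5 ** 2)) - 0.5 * np.exp(-d2 / (2 * 4.0 ** 2))
W *= 1.0 + 0.15 * rng.standard_normal(W.shape)

def settle(r0, steps=400, dt=0.1, gain=0.3, I_ff=1.0):
    """Relax rates under tau*dr/dt = -r + [gain*W r + I_ff]+ (threshold-linear)."""
    r = r0.copy()
    for _ in range(steps):
        r += dt * (-r + np.maximum(gain * (W @ r) + I_ff, 0.0))
    return r

# With the jitter fixed, different initial conditions typically settle onto
# similar pinned bump patterns, illustrating how heterogeneity anchors the map.
rA = settle(rng.random(N * N))
rB = settle(rng.random(N * N))
print("pattern correlation between runs:", np.corrcoef(rA, rB)[0, 1])
```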
2.1 Criticisms of the Bump model

We might posit that the brain uses a similar opportunistic model to derive and organize its feature maps, but the parallels between the primary visual cortex and the Ernst et al. bump model are unconvincing. For instance, the units in their model represent the collective activity of a column, reducing the network dynamics to a firing-rate approximation. But this simplification ignores the rich temporal dynamics of spiking networks, which are known to affect bump stability. More fundamentally, there is no role for functionally distinct neuron types.

The primary criticism of the Ernst et al. bump model is that its input consists only of a luminance channel, and it is not obvious how to replace this channel with ON and OFF rectified channels to account for simple and complex cells. One possibility would be to segregate ON-driven and OFF-driven cells (referred to as simple cells in this paper) into two distinct recurrent networks. Because each network would have its own innate noise profile, bumps would form independently. Consequently, there is no guarantee that ON-driven maps would line up with OFF-driven maps, which would result in conflicting orientation signals when these simple cells converge onto sign-independent (complex) cells.

2.2 Simple Cells Solve a Complex Problem

To ensure that both ON-driven and OFF-driven simple cells have the same orientation maps, both ON and OFF bumps must be computed in the same recurrent network so that they are subjected to the same noise profile. We achieve this by building our recurrent network out of cells that are sign-independent; that is, both ON and OFF channels drive the network. These cells exhibit complex cell-like behavior (and are referred to as complex cells in this paper) because they are modulated at double the spatial frequency of a sinusoidal grating input. The simple cells subsequently derive their responses from two separate signals: an orientation-selective feedback signal from the complex cells indicating the presence of either an ON or an OFF bump, and an ON–OFF selection signal that chooses the appropriate response flavor.

Figure 1, left, illustrates the formation of bumps (highlighted cells) by a recurrent network with a Mexican hat connectivity profile. Extending the Ernst et al. model, these complex bumps seed simple bumps when driven by a grating. Simple bumps that match the sign of the input survive, whereas out-of-phase bumps are extinguished (faded cells) by push-pull inhibition. Figure 1, right, shows the local connections within a microcircuit. An EXC (excitatory) cell receives excitatory input from both ON and OFF channels, and projects to other EXC (not shown) and INH (inhibitory) cells. The INH cell projects back in a reciprocal configuration to EXC cells. The divergence is indicated in the left panel. ON-driven and OFF-driven simple cells receive input in a push-pull configuration (i.e., ON cells are excited by ON inputs and inhibited by OFF inputs, and vice versa), while additionally receiving input from the EXC–INH recurrent network. In this model, we implement our push-pull circuit using monosynaptic inhibitory connections, despite the fact that geniculate input is strictly excitatory. This simplification, while anatomically incorrect, yields a more efficient implementation that is functionally equivalent.

Figure 1: left, Complex and simple cell responses to a sinusoidal grating input. Luminance is transformed into ON (green) and OFF (red) pathways by retinal processing. Complex cells form a recurrent network through excitatory and inhibitory projections (yellow and blue lines, respectively), and clusters of activity occur at twice the spatial frequency of the grating. ON input activates ON-driven simple cells (bright green) and suppresses OFF-driven simple cells (faded red), and vice versa. right, The bump model's local microcircuit: circles represent neurons, curved lines represent axon arbors that end in excitatory synapses (v shape) or inhibitory synapses (open circles). For simplicity, inhibitory interneurons were omitted in our push-pull circuit.
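A small numerical illustration of the frequency-doubling argument above (a sketch under stated assumptions, not the chip's circuitry): half-wave-rectified ON and OFF channels of a sinusoidal grating sum to a signal at twice the grating's spatial frequency, while their push-pull difference keeps the fundamental.

```python
# Sign-independent (ON + OFF) drive doubles the spatial frequency of a grating;
# push-pull (ON - OFF) drive preserves the fundamental. Idealized rectification,
# no noise or recurrence: an illustration only.
import numpy as np

x = np.linspace(0.0, 1.0, 1024, endpoint=False)   # one spatial period
luminance = np.sin(2 * np.pi * x)                 # sinusoidal grating, zero mean
on = np.maximum(luminance, 0.0)                   # ON channel (half-wave rectified)
off = np.maximum(-luminance, 0.0)                 # OFF channel

complex_drive = on + off        # sign-independent drive (complex-like cells)
simple_drive = on - off         # push-pull drive (ON-driven simple-like cells)

def dominant_harmonic(signal):
    """Index of the strongest nonzero spatial-frequency component."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    return int(np.argmax(spectrum[1:]) + 1)

print("complex-like drive peaks at harmonic", dominant_harmonic(complex_drive))  # 2
print("simple-like drive peaks at harmonic", dominant_harmonic(simple_drive))    # 1
```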
2.3 Mathematical Description

The neurons in our network follow the equation

$$C\dot{V} = -\sum_n \delta(t - t_n) + I_{\mathrm{syn}} - I_{\mathrm{KCa}} - I_{\mathrm{leak}},$$

where C is the membrane capacitance, $\dot{V}$ is the temporal derivative of the membrane voltage, $\delta(\cdot)$ is the Dirac delta function, which resets the membrane at the times $t_n$ when it crosses threshold, $I_{\mathrm{syn}}$ is the synaptic current from the network, $I_{\mathrm{KCa}}$ is the spike-rate adaptation current defined below, and $I_{\mathrm{leak}}$ is a constant leak current. Neurons receive synaptic current of the form

$$I_{\mathrm{syn}}^{\mathrm{ON}} = w_+ I_{\mathrm{ON}} - w_- I_{\mathrm{OFF}} + w_{EE} I_{\mathrm{EXC}} - w_{EI} I_{\mathrm{INH}},$$
$$I_{\mathrm{syn}}^{\mathrm{OFF}} = w_+ I_{\mathrm{OFF}} - w_- I_{\mathrm{ON}} + w_{EE} I_{\mathrm{EXC}} - w_{EI} I_{\mathrm{INH}},$$
$$I_{\mathrm{syn}}^{\mathrm{EXC}} = w_+ (I_{\mathrm{ON}} + I_{\mathrm{OFF}}) + w_{EE} I_{\mathrm{EXC}} - w_{EI} I_{\mathrm{INH}} + I_{\mathrm{back}},$$
$$I_{\mathrm{syn}}^{\mathrm{INH}} = w_{IE} I_{\mathrm{EXC}},$$

where $w_+$ is the excitatory synaptic strength for ON and OFF input synapses, $w_-$ is the strength of the push-pull inhibition, $w_{EE}$ is the synaptic strength of EXC cell projections to other EXC cells, $w_{EI}$ is the strength of INH cell projections to EXC cells, $w_{IE}$ is the strength of EXC cell projections to INH cells, $I_{\mathrm{back}}$ is a constant background input current, and $I_{\mathrm{ON}}$, $I_{\mathrm{OFF}}$, $I_{\mathrm{EXC}}$, and $I_{\mathrm{INH}}$ account for all impinging synapses from each of the four cell types. These terms are calculated for cell i using an arbor function that consists of a spatial weighting J(r) and a post-synaptic current waveform $\alpha(t)$:

$$\sum_{k,n} J(i - k)\,\alpha(t - t_n),$$

where k spans all cells of a given type and n indexes their spike times. The spatial weighting function is described by $J(i - k) = \exp(-|i - k| / \sigma)$, with $\sigma$ as the space constant. The current waveform, which is non-zero for t > 0, convolves a 1/t function with a decaying exponential:

$$\alpha(t) = (t/\tau_c + \alpha_0)^{-1} \ast \exp(-t/\tau_e),$$

where $\tau_c$ sets the decay rate and $\tau_e$ is the time constant of the exponential. Finally, we model spike-rate adaptation with a calcium-dependent potassium channel (KCa), which integrates the calcium triggered by spikes at times $t_n$ with a gain K and a time constant $\tau_k$, as described by

$$I_{\mathrm{KCa}} = \sum_n K \exp\big((t_n - t)/\tau_k\big).$$
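To make these equations concrete, here is a deliberately simplified simulation sketch of a single EXC-type cell driven by constant $I_{\mathrm{ON}}$, $I_{\mathrm{OFF}}$, $I_{\mathrm{EXC}}$, and $I_{\mathrm{INH}}$. All numerical values (capacitance, weights, threshold, time constants) are assumptions chosen only to make the adaptation visible; they are not the chip's parameters.

```python
# One EXC-type cell from the equations above: integrate I_syn - I_KCa - I_leak,
# reset on threshold crossing (the delta term), and accumulate KCa adaptation.
import numpy as np

dt, T = 1e-4, 1.0          # time step and duration (s)
C = 1.0                    # membrane capacitance (arbitrary units, assumed)
V_thresh = 0.05            # spike threshold (assumed)
I_leak = 0.2
w_plus, w_EE, w_EI = 1.0, 0.5, 0.8
I_ON, I_OFF, I_EXC, I_INH, I_back = 0.6, 0.6, 0.4, 0.2, 0.1
K, tau_k = 0.15, 0.2       # KCa gain and time constant (assumed)

V, I_KCa = 0.0, 0.0
spike_times = []
for step in range(int(T / dt)):
    # EXC-cell synaptic current: w+(I_ON + I_OFF) + wEE*I_EXC - wEI*I_INH + I_back
    I_syn = w_plus * (I_ON + I_OFF) + w_EE * I_EXC - w_EI * I_INH + I_back
    V += dt * (I_syn - I_KCa - I_leak) / C
    I_KCa -= dt * I_KCa / tau_k            # calcium decays exponentially
    if V >= V_thresh:                      # the delta term: reset at spike time t_n
        V = 0.0
        I_KCa += K                         # each spike adds K to the KCa current
        spike_times.append(step * dt)

isis = np.diff(spike_times)
print("first ISI %.3f s, last ISI %.3f s (spike-rate adaptation)" % (isis[0], isis[-1]))
```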
3 Silicon Implementation

We implemented our model in silicon using the TSMC (Taiwan Semiconductor Manufacturing Company) 0.25µm 5-metal-layer CMOS process. The final chip consists of a 2-D core of 48x48 pixels, surrounded by asynchronous digital circuitry that transmits and receives spikes in real time. Neurons that reach threshold within the array are encoded as address-events and sent off-chip, and concurrently, incoming address-events are sent to their appropriate synapse locations. This interface is compatible with other spike-based chips that use address-events [5]. The fabricated bump chip has close to 460,000 transistors packed in 10 mm² of silicon area for a total of 9,216 neurons.

3.1 Circuit Design

Our neural circuit was morphed into hardware using four building blocks. Figure 2 shows the transistor implementations for synapses, axonal arbors (diffusers), KCa analogs, and neurons. The circuits are designed to operate in the subthreshold region (except for the spiking mechanism of the neuron). Noise is not purposely designed into the circuits. Instead, random variations from the fabrication process introduce significant deviations in the I-V curves of theoretically identical MOS transistors.

The function of the synapse circuit is to convert a brief voltage pulse (neuron spike) into a postsynaptic current with biologically realistic temporal dynamics. Our synapse achieves this by cascading a current-mirror integrator with a log-domain low-pass filter. The current-mirror integrator has a current impulse response that decays as 1/t (with a decay rate set by the voltage τc and an amplitude set by A). This time-extended current pulse is fed into a log-domain low-pass filter (equivalent to a current-domain RC circuit) that imposes a rise time on the post-synaptic current set by τe. ON and OFF input synapses receive presynaptic spikes from the off-chip link, whereas EXC and INH synapses receive presynaptic spikes from local on-chip neurons.

Figure 2: Transistor implementations are shown for a synapse, diffuser, KCa analog, and neuron (simplified), with circuit insignias in the top-left of each box. The circuits they interact with are indicated (e.g., the neuron receives synaptic current from the diffuser as well as adaptation current from the KCa analog; the neuron in turn drives the KCa analog). The far right shows the layout for one pixel of the bump chip (vertical dimension is 83µm, horizontal is 30µm).

The diffuser circuit models axonal arbors that project to a local region of space with an exponential weighting. Analogous to resistive divider networks, diffusers [6] efficiently distribute synaptic currents to multiple targets. We use four diffusers to implement axonal projections for: the ON pathway, which excites ON and EXC cells and inhibits OFF cells; the OFF pathway, which excites OFF and EXC cells and inhibits ON cells; the EXC cells, which excite all cell types; and the INH cells, which inhibit EXC, ON, and OFF cells. Each diffuser node connects to its six neighbors through transistors that have a pseudo-conductance set by σr, and to its target site through a pseudo-conductance set by σg; the space constant of the exponential synaptic decay is set by the relative levels of σr and σg.

The neuron circuit integrates diffuser currents on its membrane capacitance. Diffusers either directly inject current (excitatory) or siphon off current (inhibitory) through a current mirror. Spikes are generated by an inverter with positive feedback (modified from [7]), and the membrane is subsequently reset by the spike signal. We model a calcium concentration in the cell with a KCa analog. K controls the amount of calcium that enters the cell per spike; the concentration decays exponentially with a time constant set by τk. Elevated charge levels activate a KCa-like current that throttles the spike rate of the neuron.
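The post-synaptic current waveform produced by this cascade can be sketched numerically: a 1/t-decaying impulse response from the current-mirror integrator, convolved with a decaying exponential that imposes the rise time, as in the α(t) expression of Section 2.3. This is an assumption-laden numerical illustration, not a transistor-level model, and the time constants below are arbitrary.

```python
# alpha(t) = (t/tau_c + a0)^-1 * exp(-t/tau_e): 1/t decay from the integrator,
# smoothed by a first-order low-pass that sets the rise time.
import numpy as np

dt = 1e-4                       # s
t = np.arange(0.0, 0.2, dt)     # 200 ms window after a presynaptic spike
tau_c, a0 = 0.05, 0.02          # current-mirror-integrator parameters (assumed)
tau_e = 0.005                   # low-pass (rise-time) time constant (assumed)

mirror = 1.0 / (t / tau_c + a0)             # 1/t-like decay from the integrator
lowpass = np.exp(-t / tau_e) / tau_e        # normalized first-order low-pass kernel
alpha = np.convolve(mirror, lowpass)[: t.size] * dt   # post-synaptic current waveform

peak = int(alpha.argmax())
half = peak + int(np.argmax(alpha[peak:] <= alpha[peak] / 2))
print("peaks at %.1f ms, falls to half-peak by %.1f ms" % (t[peak] * 1e3, t[half] * 1e3))
```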
3.2 Experimental Setup

Our setup uses either a silicon retina [8] or a National Instruments DIO (digital input–output) card as input to the bump chip. This allows us to test our V1 model with real-time visual stimuli, similar to the experimental paradigm of electrophysiologists. More specifically, the setup uses an address-event link [5] to establish virtual point-to-point connectivity between ON or OFF ganglion cells from the retina chip (or DIO card) and ON or OFF synapses on the bump chip. Both the input activity and the output activity of the bump chip are displayed in real time using receiver chips, which integrate incoming spikes and display their rates as pixel intensities on a monitor. A logic analyzer is used to capture spike output from the bump chip so it can be further analyzed. We investigated responses of the bump chip to gratings moving in sixteen different directions, both qualitatively and quantitatively.

For the qualitative aspect, we created a PO map by taking each cell's average activity for each stimulus direction and computing the vector sum. To obtain a quantitative measure, we looked at the normalized vector magnitude (NVM), which reveals the sharpness of a cell's tuning. The NVM is calculated by dividing the vector sum by the magnitude sum for each cell. The NVM is 0 if a cell responds equally to all orientations, and 1 if a cell's orientation selectivity is perfect such that it responds only at a single orientation.

4 Results

We presented sixteen moving gratings to the network, with directions ranging from 0 to 360 degrees. The spatial frequency of the grating is tuned to match the size of the average bump, and the temporal frequency is 1 Hz. Figure 3a shows a resulting PO map for directions from 180 to 360 degrees, looking at the inhibitory cell population (the data look similar for other cell types). Black contours represent stable bump regions, or equivalently, the regions that exceed a prescribed threshold (90 spikes) for all directions. The PO map from the bump chip reveals structure that resembles data from real cortex. Nearby cells tend to prefer similar orientations, except at fractures. There are even regions that are similar to pinwheels (delimited by a white rectangle).

A PO map is a useful tool to describe a network's selectivity, but it only paints part of the picture. So we have additionally computed an NVM map and an NVM histogram, shown in Figures 3b and 3c, respectively. The NVM map shows that cells with sharp selectivity tend to cluster, particularly around the edges of the bumps. The histogram also reveals that the distribution of cell selectivity across the network varies considerably, skewed towards broadly tuned cells.

We also looked at spike rasters from different cell types to gain insight into their phase relationship with the stimulus. In particular, we present recordings for the site indicated by the arrow (see Figure 3a) for gratings moving in eight directions ranging from 0 to 360 degrees in 45-degree increments (this location was chosen because it is in the vicinity of a pinwheel, is reasonably selective, and shows considerable modulation in its firing rate). Figure 4 shows the luminance of the stimulus (bottom sinusoids), ON- (cyan) and OFF-input (magenta) spike trains, and the resulting spike trains from EXC (yellow), INH (blue), ON- (green), and OFF-driven (red) cell types for each of the eight directions. The center polar plot summarizes the orientation selectivity for each cell type by showing the normalized number of spikes for each stimulus. Data are shown for one period. Even though all cell types are selective for the same orientation (regardless of grating direction), complex cell responses tend to be phase-insensitive while the simple cell responses are modulated at the fundamental frequency. It is worth noting that the simple cells have sharper orientation selectivity compared to the complex cells. This trend is characteristic of our data.

Figure 3: (a) PO map for the inhibitory cell population stimulated with eight different directions from 180 to 360 degrees (black represents no activity; contours delineate regions that exceed 90 spikes for all stimuli). Normalized vector magnitude (NVM) data are presented as (b) a map and (c) a histogram.
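As a concrete illustration of the PO and NVM measures described above, here is a sketch applied to synthetic tuning curves. Following a common convention for orientation statistics, the direction angles are doubled before the vector sum; the exact convention used for the chip data is not spelled out here, so treat that, and all the synthetic responses, as assumptions.

```python
# PO = angle of the (doubled-angle) vector sum of responses across directions;
# NVM = |vector sum| / magnitude sum. Synthetic cells stand in for chip data.
import numpy as np

rng = np.random.default_rng(1)
directions = np.deg2rad(np.arange(0, 360, 22.5))     # sixteen grating directions

def po_and_nvm(rates, angles):
    """Preferred orientation (deg) and normalized vector magnitude for one cell."""
    vec = np.sum(rates * np.exp(2j * angles))         # doubled-angle vector sum
    po = 0.5 * np.angle(vec) % np.pi                  # preferred orientation, 0..pi
    nvm = np.abs(vec) / np.sum(rates)                 # vector sum / magnitude sum
    return np.rad2deg(po), nvm

# A sharply tuned synthetic cell and a broadly tuned one, both preferring ~30 deg.
sharp = np.exp(4.0 * np.cos(2 * (directions - np.deg2rad(30))))
broad = 1.0 + 0.3 * np.cos(2 * (directions - np.deg2rad(30))) + 0.05 * rng.random(16)

for name, rates in [("sharp", sharp), ("broad", broad)]:
    po, nvm = po_and_nvm(rates, directions)
    print("%s cell: PO = %.1f deg, NVM = %.2f" % (name, po, nvm))
```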
Figure 4: Spike rasters and polar plot for eight directions ranging from 0 to 360 degrees. Each set of spike rasters represents, from bottom to top, ON- (cyan) and OFF-input (magenta), INH (yellow), EXC (blue), and ON- (green) and OFF-driven (red) cells. The stimulus period is 1 sec.

5 Discussion

We have implemented a large-scale network of spiking neurons in a silicon chip that is based on layer 4 of the visual cortex. The initial testing of the network reveals a PO map, inherited from innate chip heterogeneities, that resembles cortical maps. Our microcircuit proposes a novel function for complex-like cells: they create a sign-independent, orientation-selective signal, which, through a push-pull circuit, creates sharply tuned simple cells with the same orientation preference.

Recently, Ringach et al. surveyed orientation selectivity in the macaque [9]. They observed that, in a population of V1 neurons (N=308), the distribution of orientation selectivity is quite broad, having a median NVM of 0.39. We have measured median NVMs ranging from 0.25 to 0.32. Additionally, Ringach et al. found a negative correlation between spontaneous firing rate and NVM. This is consistent with our model because cells closer to the center of the bump have higher firing rates and broader tuning.

While the results from the bump chip are promising, our maps are less consistent and noisier than the maps Ernst et al. have reported. We believe this is because our network is tuned to operate in a fluid state where bumps come on, travel a short distance, and disappear (motivated by cortical imaging studies). But excessive fluidity can cause non-dominant bumps to briefly appear and adversely shift the PO maps. We are currently investigating the role of lateral connections between bumps as a means to suppress these spontaneous shifts.

The neural mechanisms that underlie the orientation selectivity of V1 neurons are still highly debated. This may be because neuron responses are not only shaped by feedforward inputs, but are also influenced at the network level. If modeling is going to be a useful guide for electrophysiologists, we must model at the network level while retaining cell-level detail. Our results demonstrate that a spike-based neuromorphic system is well suited to model layer 4 of the visual cortex. The same approach may be used to build large-scale models of other cortical regions.

References

1. Hubel, D. and T. Wiesel, Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol., 1962. 160: p. 106-154.
2. Blasdel, G.G., Orientation selectivity, preference, and continuity in monkey striate cortex. J Neurosci, 1992. 12(8): p. 3139-61.
3. Crair, M.C., D.C. Gillespie, and M.P. Stryker, The role of visual experience in the development of columns in cat visual cortex. Science, 1998. 279(5350): p. 566-70.
4. Ernst, U.A., et al., Intracortical origin of visual maps. Nat Neurosci, 2001. 4(4): p. 431-6.
5. Boahen, K., Point-to-point connectivity. IEEE Transactions on Circuits & Systems II, 2000. 47(5): p. 416-434.
6. Boahen, K. and A. Andreou, A contrast sensitive silicon retina with reciprocal synapses, in NIPS 1991. 1992: IEEE.
7. Culurciello, E., R. Etienne-Cummings, and K. Boahen, A biomorphic digital image sensor. IEEE Journal of Solid-State Circuits, 2003. 38(2): p. 281-294.
8. Zaghloul, K., A silicon implementation of a novel model for retinal processing, in Neuroscience. 2002, University of Pennsylvania: Philadelphia.
9. Ringach, D.L., R.M. Shapley, and M.J. Hawken, Orientation selectivity in macaque V1: diversity and laminar dependence. J Neurosci, 2002. 22(13): p. 5639-51.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 edu Abstract We describe a neuromorphic chip that utilizes transistor heterogeneity, introduced by the fabrication process, to generate orientation maps similar to those imaged in vivo. [sent-3, score-0.464]

2 Our model consists of a recurrent network of excitatory and inhibitory cells in parallel with a push-pull stage. [sent-4, score-0.724]

3 Similar to a previous model the recurrent network displays hotspots of activity that give rise to visual feature maps. [sent-5, score-0.417]

4 Unlike previous work, however, the map for orientation does not depend on the sign of contrast. [sent-6, score-0.21]

5 Instead, signindependent cells driven by both ON and OFF channels anchor the map, while push-pull interactions give rise to sign-preserving cells. [sent-7, score-0.382]

6 These two groups of orientation-selective cells are similar to complex and simple cells observed in V1. [sent-8, score-0.682]

7 1 Orientation Maps Neurons in visual areas 1 and 2 (V1 and V2) are selectively tuned for a number of visual features, the most pronounced feature being orientation. [sent-9, score-0.197]

8 Orientation preference of individual cells varies across the two-dimensional surface of the cortex in a stereotyped manner, as revealed by electrophysiology [1] and optical imaging studies [2]. [sent-10, score-0.316]

9 The origin of these preferred orientation (PO) maps is debated, but experiments demonstrate that they exist in the absence of visual experience [3]. [sent-11, score-0.356]

10 To the dismay of advocates of Hebbian learning, these results suggest that the initial appearance of PO maps rely on neural mechanisms oblivious to input correlations. [sent-12, score-0.127]

11 Here, we propose a model that accounts for observed PO maps based on innate noise in neuron thresholds and synaptic currents. [sent-13, score-0.289]

12 The network is implemented in silicon where heterogeneity is as ubiquitous as it is in biology. [sent-14, score-0.2]

13 have previously described a 2D rate model that can account for the origin of visual maps [4]. [sent-16, score-0.192]

14 Individual units in their network receive isotropic feedforward input from the geniculate and recurrent connections from neighboring units in a Mexican hat profile, described by short-range excitation and long-range inhibition. [sent-17, score-0.454]

15 If the recurrent connections are sufficiently strong, hotspots of activity (or ‘bumps’) form periodically across space. [sent-18, score-0.294]

16 In a homogeneous network, these bumps of activity are equally stable at any position in the network and are free to wander. [sent-19, score-0.369]

17 Subsequently, the bumps are pinned down at the locations that maximize their net local recurrent feedback. [sent-21, score-0.349]

18 In this regime, moving gratings are able to shift the bumps away from their stability points such that the responses of the network resemble PO maps. [sent-22, score-0.422]

19 Therefore, the recurrent network, given an ample amount of noise, can innately generate its own orientation specificity without the need for specific hardwired connections or visually driven learning rules. [sent-23, score-0.385]

20 1 Criticisms of the Bump model We might posit that the brain uses a similar opportunistic model to derive and organize its feature maps – but the parallels between the primary visual cortex and the Ernst et al. [sent-25, score-0.161]

21 For instance, the units in their model represent the collective activity of a column, reducing the network dynamics to a firing-rate approximation. [sent-27, score-0.162]

22 But this simplification ignores the rich temporal dynamics of spiking networks, which are known to affect bump stability. [sent-28, score-0.335]

23 ’s bump model is that its input only consists of a luminance channel, and it is not obvious how to replace this channel with ON and OFF rectified channels to account for simple and complex cells. [sent-31, score-0.461]

24 One possibility would be to segregate ON-driven and OFF-driven cells (referred to as simple cells in this paper) into two distinct recurrent networks. [sent-32, score-0.774]

25 Because each network would have its own innate noise profile, bumps would form independently. [sent-33, score-0.358]

26 Consequently, there is no guarantee that ON-driven maps would line up with OFF-driven maps, which would result in conflicting orientation signals when these simple cells converge onto sign-independent (complex) cells. [sent-34, score-0.568]

27 2 Simple Cells Solve a Complex Problem To ensure that both ON-driven and OFF-driven simple cells have the same orientation maps, both ON and OFF bumps must be computed in the same recurrent network so that they are subjected to the same noise profile. [sent-36, score-0.928]

28 We achieve this by building our recurrent network out of cells that are sign-independent; that is both ON and OFF channels drive the network. [sent-37, score-0.593]

29 These cells exhibit complex cell-like behavior (and are referred to as complex cells in this paper) because they are modulated at double the spatial frequency of a sinusoidal grating input. [sent-38, score-0.852]

30 The simple cells subsequently derive their responses from two separate signals: an orientation selective feedback signal from the complex cells indicating the presence of either an ON or an OFF bump, and an ON–OFF selection signal that chooses the appropriate response flavor. [sent-39, score-0.975]

31 Figure 1 left illustrates the formation of bumps (highlighted cells) by a recurrent network with a Mexican hat connectivity profile. [sent-40, score-0.532]

32 model, these complex bumps seed simple bumps when driven by a grating. [sent-42, score-0.494]

33 Simple bumps that match the sign of the input survive, whereas out-of-phase bumps are extinguished (faded cells) by push-pull inhibition. [sent-43, score-0.453]

34 An EXC (excitatory) cell receives excitatory input from both ON and OFF channels, and projects to other EXC (not shown) and INH (inhibitory) cells. [sent-45, score-0.277]

35 The INH cell projects back in a reciprocal configuration to EXC cells. [sent-46, score-0.191]

36 ON-driven and OFF-driven simple cells receive input in a push-pull configuration (i. [sent-48, score-0.396]

37 , ON cells are excited by ON inputs and inhibited by OFF inputs, and vise-versa), while additionally receiving input from the EXC–INH recurrent network. [sent-50, score-0.497]

38 In this model, we implement our push-pull circuit using monosynaptic inhibitory connections, despite the fact that geniculate input is strictly excitatory. [sent-51, score-0.224]

39 ON Input Luminance OFF Input left right EXC EXC Divergence INH INH Simple Cells Complex Cells ON & OFF Input ON OFF OFF Space Figure 1: left, Complex and simple cell responses to a sinusoidal grating input. [sent-53, score-0.303]

40 Complex cells form a recurrent network through excitatory and inhibitory projections (yellow and blue lines, respectively), and clusters of activity occur at twice the spatial frequency of the grating. [sent-55, score-0.887]

41 ON input activates ON-driven simple cells (bright green) and suppresses OFF-driven simple cells (faded red), and vise-versa. [sent-56, score-0.671]

42 right, The bump model’s local microcircuit: circles represent neurons, curved lines represent axon arbors that end in excitatory synapses (v shape) or inhibitory synapses (open circles). [sent-57, score-0.645]

43 For simplicity, inhibitory interneurons were omitted in our push-pull circuit. [sent-58, score-0.09]

44 These terms are calculated for cell i using an arbor function that consists of a spatial weighting J(r) and a post-synaptic current waveform α(t): k ∑ J (i − k ) ⋅ α (t − t n ) , where k spans all cells of a given type and n indexes their spike k ,n times. [sent-62, score-0.603]

45 Finally, we model spike-rate adaptation with a calcium-dependent potassium-channel (KCa), which integrates Ca triggered by spikes at times tn with a gain K and a time constant τk, as described by I KCa = ∑ K exp(tn − t τ k ) . [sent-65, score-0.144]

46 n 3 Silicon Implementation We implemented our model in silicon using the TSMC (Taiwan Semiconductor Manufacturing Company) 0. [sent-66, score-0.101]

47 The final chip consists of a 2-D core of 48x48 pixels, surrounded by asynchronous digital circuitry that transmits and receives spikes in real-time. [sent-68, score-0.229]

48 The fabricated bump chip has close to 460,000 transistors packed in 10 mm2 of silicon area for a total of 9,216 neurons. [sent-71, score-0.503]

49 1 Circuit Design Our neural circuit was morphed into hardware using four building blocks. [sent-73, score-0.095]

50 Figure 2 shows the transistor implementation for synapses, axonal arbors (diffuser), KCa analogs, and neurons. [sent-74, score-0.148]

51 The function of the synapse circuit is to convert a brief voltage pulse (neuron spike) into a postsynaptic current with biologically realistic temporal dynamics. [sent-78, score-0.183]

52 ON and OFF input synapses receive presynaptic spikes from the off-chip link, whereas EXC and INH synapses receive presynaptic spikes from local on-chip neurons. [sent-82, score-0.527]

53 Synapse Je Diffuser Ir A Ig Jc KCa Analog Neuron Jk Vmem Vspk K Figure 2: Transistor implementations are shown for a synapse, diffuser, KCa analog, and neuron (simplified), with circuit insignias in the top-left of each box. [sent-83, score-0.17]

54 the neuron receives synaptic current from the diffuser as well as adaptation current from the KCa analog; the neuron in turn drives the KCa analog). [sent-86, score-0.364]

55 The far right shows layout for one pixel of the bump chip (vertical dimension is 83µm, horizontal is 30 µm). [sent-87, score-0.402]

56 The diffuser circuit models axonal arbors that project to a local region of space with an exponential weighting. [sent-88, score-0.339]

57 Analogous to resistive divider networks, diffusers [6] efficiently distribute synaptic currents to multiple targets. [sent-89, score-0.134]

58 Each diffuser node connects to its six neighbors through transistors that have a pseudo-conductance set by σr, and to its target site through a pseudo-conductance set by σg; the space-constant of the exponential synaptic decay is set by σr and σg’s relative levels. [sent-91, score-0.214]

59 The neuron circuit integrates diffuser currents on its membrane capacitance. [sent-92, score-0.363]

60 Spikes are generated by an inverter with positive feedback (modified from [7]), and the membrane is subsequently reset by the spike signal. [sent-94, score-0.177]

61 We model a calcium concentration in the cell with a KCa analog. [sent-95, score-0.191]

62 K controls the amount of calcium that enters the cell per spike; the concentration decays exponentially with a time constant set by τk. [sent-96, score-0.191]

63 2 Experimental Setup Our setup uses either a silicon retina [8] or a National Instruments DIO (digital input–output) card as input to the bump chip. [sent-99, score-0.476]

64 More specifically, the setup uses an address-event link [5] to establish virtual point-to-point connectivity between ON or OFF ganglion cells from the retina chip (or DIO card) with ON or OFF synapses on the bump chip. [sent-101, score-0.868]

65 Both the input activity and the output activity of the bump chip is displayed in real-time using receiver chips, which integrate incoming spikes and displays their rates as pixel intensities on a monitor. [sent-102, score-0.66]

66 A logic analyzer is used to capture spike output from the bump chip so it can be further analyzed. [sent-103, score-0.497]

67 We investigated responses of the bump chip to gratings moving in sixteen different directions, both qualitatively and quantitatively. [sent-104, score-0.548]

68 For the qualitative aspect, we created a PO map by taking each cell’s average activity for each stimulus direction and computing the vector sum. [sent-105, score-0.109]

69 The NVM is 0 if a cell responds equally to all orientations, and 1 if a cell’s orientation selectivity is perfect such that it only responds at a single orientation. [sent-108, score-0.471]

70 4 Results We presented sixteen moving gratings to the network, with directions ranging from 0 to 360 degrees. [sent-109, score-0.166]

71 The spatial frequency of the grating is tuned to match the size of the average bump, and the temporal frequency is 1 Hz. [sent-110, score-0.141]

72 Figure 3a shows a resulting PO map for directions from 180 to 360 degrees, looking at the inhibitory cell population (the data looks similar for other cell types). [sent-111, score-0.501]

73 Black contours represent stable bump regions, or equivalently, the regions that exceed a prescribed threshold (90 spikes) for all directions. [sent-112, score-0.266]

74 The PO map from the bump chip reveals structure that resembles data from real cortex. [sent-113, score-0.487]

75 Nearby cells tend to prefer similar orientations except at fractures. [sent-114, score-0.316]

76 The NVM map shows that cells with sharp selectivity tend to cluster, particularly around the edge of the bumps. [sent-118, score-0.508]

77 The histogram also reveals that the distribution of cell selectivity across the network varies considerably, skewed towards broadly tuned cells. [sent-119, score-0.496]

78 We also looked at spike rasters from different cell-types to gain insight into their phase relationship with the stimulus. [sent-120, score-0.155]

79 Figure 4 shows the luminance of the stimulus (bottom sinusoids), ON- (cyan) and OFF-input (magenta) spike trains, and the resulting spike trains from EXC (yellow), INH (blue), ON- (green), and OFFdriven (red) cell types for each of the eight directions. [sent-122, score-0.421]

80 The center polar plot summarizes the orientation selectivity for each cell-type by showing the normalized number of spikes for each stimulus. [sent-123, score-0.403]

81 Even though all cells-types are selective for the same orientation (regardless of grating direction), complex cell responses tend to be phase-insensitive while the simple cell responses are modulated at the fundamental frequency. [sent-125, score-0.748]

82 It is worth noting that the simple cells have sharper orientation selectivity compared to the complex cells. [sent-126, score-0.676]

83 9 1 Figure 3: (a) PO map for the inhibitory cell population stimulated with eight different directions from 180 to 360 degrees (black represents no activity, contours delineate regions that exceed 90 spikes for all stimuli). [sent-146, score-0.433]

84 Figure 4: Spike rasters and polar plot for 8 directions ranging from 0 to 360 degrees. [sent-148, score-0.133]

85 Each set of spike rasters represent from bottom to top, ON- (cyan) and OFF-input (magenta), INH (yellow), EXC (blue), and ON- (green) and OFF-driven (red). [sent-149, score-0.155]

86 5 Discussion We have implemented a large-scale network of spiking neurons in a silicon chip that is based on layer 4 of the visual cortex. [sent-151, score-0.493]

87 The initial testing of the network reveals a PO map, inherited from innate chip heterogeneities, resembling cortical maps. [sent-152, score-0.326]

88 Our microcircuit proposes a novel function for complex-like cells; that is they create a sign-independent orientation selective signal, which through a push-pull circuit creates sharply tuned simple cells with the same orientation preference. [sent-153, score-0.872]

89 They observed that, in a population of V1 neurons (N=308) the distribution of orientation selectivity is quite broad, having a median NVM of 0. [sent-156, score-0.365]

90 This is consistent with our model because cells closer to the center of the bump have higher firing rates and broader tuning. [sent-163, score-0.62]

91 While the results from the bump chip are promising, our maps are less consistent and noisier than the maps Ernst et al. [sent-164, score-0.578]

92 We believe this is because our network is tuned to operate in a fluid state where bumps come on, travel a short distance and disappear (motivated by cortical imaging studies). [sent-166, score-0.357]

93 But excessive fluidity can cause non-dominant bumps to briefly appear and adversely shift the PO maps. [sent-167, score-0.207]

94 We are currently investigating the role of lateral connections between bumps as a means to suppress these spontaneous shifts. [sent-168, score-0.256]

95 The neural mechanisms that underlie the orientation selectivity of V1 neurons are still highly debated. [sent-169, score-0.365]

96 This may be because neuron responses are not only shaped by feedforward inputs, but are also influenced at the network level. [sent-170, score-0.259]

97 If modeling is going to be a useful guide for electrophysiologists, we must model at the network level while retaining cell level detail. [sent-171, score-0.26]

98 Stryker, The role of visual experience in the development of columns in cat visual cortex. [sent-196, score-0.146]

99 , A silicon implementation of a novel model for retinal processing, in Neuroscience. [sent-230, score-0.101]

100 Hawken, Orientation selectivity in macaque V1: diversity and laminar dependence. [sent-239, score-0.146]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('exc', 0.383), ('cells', 0.316), ('bump', 0.266), ('inh', 0.261), ('bumps', 0.207), ('kca', 0.18), ('nvm', 0.18), ('orientation', 0.164), ('cell', 0.161), ('po', 0.146), ('selectivity', 0.146), ('recurrent', 0.142), ('diffuser', 0.14), ('chip', 0.136), ('ernst', 0.12), ('silicon', 0.101), ('syn', 0.1), ('network', 0.099), ('circuit', 0.095), ('spike', 0.095), ('spikes', 0.093), ('inhibitory', 0.09), ('maps', 0.088), ('synapses', 0.08), ('wei', 0.08), ('excitatory', 0.077), ('neuron', 0.075), ('synaptic', 0.074), ('visual', 0.073), ('luminance', 0.07), ('wee', 0.07), ('boahen', 0.063), ('gratings', 0.063), ('activity', 0.063), ('diffusers', 0.06), ('inhibits', 0.06), ('mexican', 0.06), ('rasters', 0.06), ('ringach', 0.06), ('grating', 0.059), ('synapse', 0.056), ('neurons', 0.055), ('membrane', 0.053), ('responses', 0.053), ('arbors', 0.052), ('axonal', 0.052), ('hat', 0.052), ('innate', 0.052), ('tuned', 0.051), ('tn', 0.051), ('complex', 0.05), ('connections', 0.049), ('selective', 0.047), ('map', 0.046), ('transistor', 0.044), ('yellow', 0.044), ('directions', 0.043), ('green', 0.042), ('circuits', 0.041), ('receive', 0.041), ('dio', 0.04), ('faded', 0.04), ('hotspots', 0.04), ('magenta', 0.04), ('simplification', 0.04), ('wie', 0.04), ('reveals', 0.039), ('analog', 0.039), ('input', 0.039), ('firing', 0.038), ('retina', 0.038), ('neurosci', 0.038), ('red', 0.037), ('projections', 0.037), ('channels', 0.036), ('excites', 0.035), ('cyan', 0.035), ('microcircuit', 0.035), ('strength', 0.034), ('blue', 0.032), ('voltage', 0.032), ('connectivity', 0.032), ('fabrication', 0.032), ('chips', 0.032), ('card', 0.032), ('feedforward', 0.032), ('spatial', 0.031), ('origin', 0.031), ('driven', 0.03), ('ranging', 0.03), ('sinusoidal', 0.03), ('integrator', 0.03), ('leak', 0.03), ('calcium', 0.03), ('presynaptic', 0.03), ('reciprocal', 0.03), ('sixteen', 0.03), ('vol', 0.03), ('spiking', 0.029), ('subsequently', 0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999923 16 nips-2003-A Recurrent Model of Orientation Maps with Simple and Complex Cells

Author: Paul Merolla, Kwabena A. Boahen

Abstract: We describe a neuromorphic chip that utilizes transistor heterogeneity, introduced by the fabrication process, to generate orientation maps similar to those imaged in vivo. Our model consists of a recurrent network of excitatory and inhibitory cells in parallel with a push-pull stage. Similar to a previous model the recurrent network displays hotspots of activity that give rise to visual feature maps. Unlike previous work, however, the map for orientation does not depend on the sign of contrast. Instead, signindependent cells driven by both ON and OFF channels anchor the map, while push-pull interactions give rise to sign-preserving cells. These two groups of orientation-selective cells are similar to complex and simple cells observed in V1. 1 Orientation Maps Neurons in visual areas 1 and 2 (V1 and V2) are selectively tuned for a number of visual features, the most pronounced feature being orientation. Orientation preference of individual cells varies across the two-dimensional surface of the cortex in a stereotyped manner, as revealed by electrophysiology [1] and optical imaging studies [2]. The origin of these preferred orientation (PO) maps is debated, but experiments demonstrate that they exist in the absence of visual experience [3]. To the dismay of advocates of Hebbian learning, these results suggest that the initial appearance of PO maps rely on neural mechanisms oblivious to input correlations. Here, we propose a model that accounts for observed PO maps based on innate noise in neuron thresholds and synaptic currents. The network is implemented in silicon where heterogeneity is as ubiquitous as it is in biology. 2 Patterned Activity Model Ernst et al. have previously described a 2D rate model that can account for the origin of visual maps [4]. Individual units in their network receive isotropic feedforward input from the geniculate and recurrent connections from neighboring units in a Mexican hat profile, described by short-range excitation and long-range inhibition. If the recurrent connections are sufficiently strong, hotspots of activity (or ‘bumps’) form periodically across space. In a homogeneous network, these bumps of activity are equally stable at any position in the network and are free to wander. Introducing random jitter to the Mexican hat connectivity profiles breaks the symmetry and reduces the number of stable states for the bumps. Subsequently, the bumps are pinned down at the locations that maximize their net local recurrent feedback. In this regime, moving gratings are able to shift the bumps away from their stability points such that the responses of the network resemble PO maps. Therefore, the recurrent network, given an ample amount of noise, can innately generate its own orientation specificity without the need for specific hardwired connections or visually driven learning rules. 2.1 Criticisms of the Bump model We might posit that the brain uses a similar opportunistic model to derive and organize its feature maps – but the parallels between the primary visual cortex and the Ernst et al. bump model are unconvincing. For instance, the units in their model represent the collective activity of a column, reducing the network dynamics to a firing-rate approximation. But this simplification ignores the rich temporal dynamics of spiking networks, which are known to affect bump stability. More fundamentally, there is no role for functionally distinct neuron types. 
The primary criticism of the Ernst et al.’s bump model is that its input only consists of a luminance channel, and it is not obvious how to replace this channel with ON and OFF rectified channels to account for simple and complex cells. One possibility would be to segregate ON-driven and OFF-driven cells (referred to as simple cells in this paper) into two distinct recurrent networks. Because each network would have its own innate noise profile, bumps would form independently. Consequently, there is no guarantee that ON-driven maps would line up with OFF-driven maps, which would result in conflicting orientation signals when these simple cells converge onto sign-independent (complex) cells. 2.2 Simple Cells Solve a Complex Problem To ensure that both ON-driven and OFF-driven simple cells have the same orientation maps, both ON and OFF bumps must be computed in the same recurrent network so that they are subjected to the same noise profile. We achieve this by building our recurrent network out of cells that are sign-independent; that is both ON and OFF channels drive the network. These cells exhibit complex cell-like behavior (and are referred to as complex cells in this paper) because they are modulated at double the spatial frequency of a sinusoidal grating input. The simple cells subsequently derive their responses from two separate signals: an orientation selective feedback signal from the complex cells indicating the presence of either an ON or an OFF bump, and an ON–OFF selection signal that chooses the appropriate response flavor. Figure 1 left illustrates the formation of bumps (highlighted cells) by a recurrent network with a Mexican hat connectivity profile. Extending the Ernst et al. model, these complex bumps seed simple bumps when driven by a grating. Simple bumps that match the sign of the input survive, whereas out-of-phase bumps are extinguished (faded cells) by push-pull inhibition. Figure 1 right shows the local connections within a microcircuit. An EXC (excitatory) cell receives excitatory input from both ON and OFF channels, and projects to other EXC (not shown) and INH (inhibitory) cells. The INH cell projects back in a reciprocal configuration to EXC cells. The divergence is indicated in left. ON-driven and OFF-driven simple cells receive input in a push-pull configuration (i.e., ON cells are excited by ON inputs and inhibited by OFF inputs, and vise-versa), while additionally receiving input from the EXC–INH recurrent network. In this model, we implement our push-pull circuit using monosynaptic inhibitory connections, despite the fact that geniculate input is strictly excitatory. This simplification, while anatomically incorrect, yields a more efficient implementation that is functionally equivalent. ON Input Luminance OFF Input left right EXC EXC Divergence INH INH Simple Cells Complex Cells ON & OFF Input ON OFF OFF Space Figure 1: left, Complex and simple cell responses to a sinusoidal grating input. Luminance is transformed into ON (green) and OFF (red) pathways by retinal processing. Complex cells form a recurrent network through excitatory and inhibitory projections (yellow and blue lines, respectively), and clusters of activity occur at twice the spatial frequency of the grating. ON input activates ON-driven simple cells (bright green) and suppresses OFF-driven simple cells (faded red), and vise-versa. 
right, The bump model’s local microcircuit: circles represent neurons, curved lines represent axon arbors that end in excitatory synapses (v shape) or inhibitory synapses (open circles). For simplicity, inhibitory interneurons were omitted in our push-pull circuit. 2.3 Mathematical Description • The neurons in our network follow the equation CV = −∑ ∂(t − tn) + I syn − I KCa − I leak , • n where C is membrane capacitance, V is the temporal derivative of the membrane voltage, δ(·) is the Dirac delta function, which resets the membrane at the times tn when it crosses threshold, Isyn is synaptic current from the network, and Ileak is a constant leak current. Neurons receive synaptic current of the form: ON I syn = w+ I ON − w− I OFF + wEE I EXC − wEI I INH , EXC I syn = w+ ( I ON + I OFF ) + wEE I EXC − wEI I INH + I back , OFF INH I syn = w+ I OFF − w− I ON + wEE I EXC − wEI I INH , I syn = wIE I EXC where w+ is the excitatory synaptic strength for ON and OFF input synapses, w- is the strength of the push-pull inhibition, wEE is the synaptic strength for EXC cell projections to other EXC cells, wEI is the strength of INH cell projections to EXC cells, wIE is the strength of EXC cell projections to INH cells, Iback is a constant input current, and I{ON,OFF,EXC,INH} account for all impinging synapses from each of the four cell types. These terms are calculated for cell i using an arbor function that consists of a spatial weighting J(r) and a post-synaptic current waveform α(t): k ∑ J (i − k ) ⋅ α (t − t n ) , where k spans all cells of a given type and n indexes their spike k ,n times. The spatial weighting function is described by J (i − k ) = exp( − i − k σ ) , with σ as the space constant. The current waveform, which is non-zero for t>0, convolves a 1 t function with a decaying exponential: α (t ) = (t τ c + α 0 ) −1 ∗ exp(− t τ e ) , where τc is the decay-rate, and τe is the time constant of the exponential. Finally, we model spike-rate adaptation with a calcium-dependent potassium-channel (KCa), which integrates Ca triggered by spikes at times tn with a gain K and a time constant τk, as described by I KCa = ∑ K exp(tn − t τ k ) . n 3 Silicon Implementation We implemented our model in silicon using the TSMC (Taiwan Semiconductor Manufacturing Company) 0.25µm 5-metal layer CMOS process. The final chip consists of a 2-D core of 48x48 pixels, surrounded by asynchronous digital circuitry that transmits and receives spikes in real-time. Neurons that reach threshold within the array are encoded as address-events and sent off-chip, and concurrently, incoming address-events are sent to their appropriate synapse locations. This interface is compatible with other spike-based chips that use address-events [5]. The fabricated bump chip has close to 460,000 transistors packed in 10 mm2 of silicon area for a total of 9,216 neurons. 3.1 Circuit Design Our neural circuit was morphed into hardware using four building blocks. Figure 2 shows the transistor implementation for synapses, axonal arbors (diffuser), KCa analogs, and neurons. The circuits are designed to operate in the subthreshold region (except for the spiking mechanism of the neuron). Noise is not purposely designed into the circuits. Instead, random variations from the fabrication process introduce significant deviations in I-V curves of theoretically identical MOS transistors. The function of the synapse circuit is to convert a brief voltage pulse (neuron spike) into a postsynaptic current with biologically realistic temporal dynamics. 
Our synapse achieves this by cascading a current-mirror integrator with a log-domain low-pass filter. The current-mirror integrator has a current impulse response that decays as 1 t (with a decay rate set by the voltage τc and an amplitude set by A). This time-extended current pulse is fed into a log-domain low-pass filter (equivalent to a current-domain RC circuit) that imposes a rise-time on the post-synaptic current set by τe. ON and OFF input synapses receive presynaptic spikes from the off-chip link, whereas EXC and INH synapses receive presynaptic spikes from local on-chip neurons. Synapse Je Diffuser Ir A Ig Jc KCa Analog Neuron Jk Vmem Vspk K Figure 2: Transistor implementations are shown for a synapse, diffuser, KCa analog, and neuron (simplified), with circuit insignias in the top-left of each box. The circuits they interact with are indicated (e.g. the neuron receives synaptic current from the diffuser as well as adaptation current from the KCa analog; the neuron in turn drives the KCa analog). The far right shows layout for one pixel of the bump chip (vertical dimension is 83µm, horizontal is 30 µm). The diffuser circuit models axonal arbors that project to a local region of space with an exponential weighting. Analogous to resistive divider networks, diffusers [6] efficiently distribute synaptic currents to multiple targets. We use four diffusers to implement axonal projections for: the ON pathway, which excites ON and EXC cells and inhibits OFF cells; the OFF pathway, which excites OFF and EXC cells and inhibits ON cells; the EXC cells, which excite all cell types; and the INH cells, which inhibits EXC, ON, and OFF cells. Each diffuser node connects to its six neighbors through transistors that have a pseudo-conductance set by σr, and to its target site through a pseudo-conductance set by σg; the space-constant of the exponential synaptic decay is set by σr and σg’s relative levels. The neuron circuit integrates diffuser currents on its membrane capacitance. Diffusers either directly inject current (excitatory), or siphon off current (inhibitory) through a current-mirror. Spikes are generated by an inverter with positive feedback (modified from [7]), and the membrane is subsequently reset by the spike signal. We model a calcium concentration in the cell with a KCa analog. K controls the amount of calcium that enters the cell per spike; the concentration decays exponentially with a time constant set by τk. Elevated charge levels activate a KCa-like current that throttles the spike-rate of the neuron. 3.2 Experimental Setup Our setup uses either a silicon retina [8] or a National Instruments DIO (digital input–output) card as input to the bump chip. This allows us to test our V1 model with real-time visual stimuli, similar to the experimental paradigm of electrophysiologists. More specifically, the setup uses an address-event link [5] to establish virtual point-to-point connectivity between ON or OFF ganglion cells from the retina chip (or DIO card) with ON or OFF synapses on the bump chip. Both the input activity and the output activity of the bump chip is displayed in real-time using receiver chips, which integrate incoming spikes and displays their rates as pixel intensities on a monitor. A logic analyzer is used to capture spike output from the bump chip so it can be further analyzed. We investigated responses of the bump chip to gratings moving in sixteen different directions, both qualitatively and quantitatively. 
For the qualitative aspect, we created a PO map by taking each cell’s average activity for each stimulus direction and computing the vector sum. To obtain a quantitative measure, we looked at the normalized vector magnitude (NVM), which reveals the sharpness of a cell’s tuning. The NVM is calculated by dividing the vector sum by the magnitude sum for each cell. The NVM is 0 if a cell responds equally to all orientations, and 1 if a cell’s orientation selectivity is perfect such that it only responds at a single orientation. 4 Results We presented sixteen moving gratings to the network, with directions ranging from 0 to 360 degrees. The spatial frequency of the grating is tuned to match the size of the average bump, and the temporal frequency is 1 Hz. Figure 3a shows a resulting PO map for directions from 180 to 360 degrees, looking at the inhibitory cell population (the data looks similar for other cell types). Black contours represent stable bump regions, or equivalently, the regions that exceed a prescribed threshold (90 spikes) for all directions. The PO map from the bump chip reveals structure that resembles data from real cortex. Nearby cells tend to prefer similar orientations except at fractures. There are even regions that are similar to pinwheels (delimited by a white rectangle). A PO is a useful tool to describe a network’s selectivity, but it only paints part of the picture. So we have additionally computed a NVM map and a NVM histogram, shown in Figure 3b and 3c respectively. The NVM map shows that cells with sharp selectivity tend to cluster, particularly around the edge of the bumps. The histogram also reveals that the distribution of cell selectivity across the network varies considerably, skewed towards broadly tuned cells. We also looked at spike rasters from different cell-types to gain insight into their phase relationship with the stimulus. In particular, we present recordings for the site indicated by the arrow (see Figure 3a) for gratings moving in eight directions ranging from 0 to 360 degrees in 45-degree increments (this location was chosen because it is in the vicinity of a pinwheel, is reasonably selective, and shows considerable modulation in its firing rate). Figure 4 shows the luminance of the stimulus (bottom sinusoids), ON- (cyan) and OFF-input (magenta) spike trains, and the resulting spike trains from EXC (yellow), INH (blue), ON- (green), and OFFdriven (red) cell types for each of the eight directions. The center polar plot summarizes the orientation selectivity for each cell-type by showing the normalized number of spikes for each stimulus. Data is shown for one period. Even though all cells-types are selective for the same orientation (regardless of grating direction), complex cell responses tend to be phase-insensitive while the simple cell responses are modulated at the fundamental frequency. It is worth noting that the simple cells have sharper orientation selectivity compared to the complex cells. This trend is characteristic of our data. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 300 250 200 150 100 50 20 40 60 80 100 120 140 160 180 0 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Figure 3: (a) PO map for the inhibitory cell population stimulated with eight different directions from 180 to 360 degrees (black represents no activity, contours delineate regions that exceed 90 spikes for all stimuli). Normalized vector magnitude (NVM) data is presented as (b) a map and (c) a histogram. 
Figure 4: Spike rasters and polar plot for eight directions ranging from 0 to 360 degrees. Each set of spike rasters represents, from bottom to top, ON- (cyan) and OFF-input (magenta), INH (yellow), EXC (blue), and ON- (green) and OFF-driven (red). The stimulus period is 1 sec.

5 Discussion

We have implemented a large-scale network of spiking neurons in a silicon chip that is based on layer 4 of the visual cortex. The initial testing of the network reveals a PO map, inherited from innate chip heterogeneities, that resembles cortical maps. Our microcircuit proposes a novel function for complex-like cells: they create a sign-independent, orientation-selective signal, which through a push-pull circuit creates sharply tuned simple cells with the same orientation preference. Recently, Ringach et al. surveyed orientation selectivity in the macaque [9]. They observed that, in a population of V1 neurons (N=308), the distribution of orientation selectivity is quite broad, with a median NVM of 0.39. We have measured median NVMs ranging from 0.25 to 0.32. Additionally, Ringach et al. found a negative correlation between spontaneous firing rate and NVM. This is consistent with our model because cells closer to the center of the bump have higher firing rates and broader tuning. While the results from the bump chip are promising, our maps are less consistent and noisier than the maps Ernst et al. have reported. We believe this is because our network is tuned to operate in a fluid state where bumps form, travel a short distance, and disappear (motivated by cortical imaging studies). But excessive fluidity can cause non-dominant bumps to appear briefly and adversely shift the PO maps. We are currently investigating the role of lateral connections between bumps as a means to suppress these spontaneous shifts. The neural mechanisms that underlie the orientation selectivity of V1 neurons are still highly debated. This may be because neuron responses are not only shaped by feedforward inputs, but are also influenced at the network level. If modeling is going to be a useful guide for electrophysiologists, we must model at the network level while retaining cell-level detail. Our results demonstrate that a spike-based neuromorphic system is well suited to model layer 4 of the visual cortex. The same approach may be used to build large-scale models of other cortical regions.

References
1. Hubel, D. and T. Wiesel, Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol., 1962. 160: p. 106-154.
2. Blasdel, G.G., Orientation selectivity, preference, and continuity in monkey striate cortex. J Neurosci, 1992. 12(8): p. 3139-61.
3. Crair, M.C., D.C. Gillespie, and M.P. Stryker, The role of visual experience in the development of columns in cat visual cortex. Science, 1998. 279(5350): p. 566-70.
4. Ernst, U.A., et al., Intracortical origin of visual maps. Nat Neurosci, 2001. 4(4): p. 431-6.
5. Boahen, K., Point-to-Point Connectivity. IEEE Transactions on Circuits & Systems II, 2000. 47(5): p. 416-434.
6. Boahen, K. and A. Andreou, A contrast sensitive silicon retina with reciprocal synapses, in NIPS91. 1992: IEEE.
7. Culurciello, E., R. Etienne-Cummings, and K. Boahen, A Biomorphic Digital Image Sensor. IEEE Journal of Solid-State Circuits, 2003. 38(2): p. 281-294.
8. Zaghloul, K., A silicon implementation of a novel model for retinal processing, in Neuroscience. 2002, University of Pennsylvania: Philadelphia.
9. Ringach, D.L., R.M. Shapley, and M.J. Hawken, Orientation selectivity in macaque V1: diversity and laminar dependence. J Neurosci, 2002. 22(13): p. 5639-51.

2 0.84168595 7 nips-2003-A Functional Architecture for Motion Pattern Processing in MSTd

Author: Scott A. Beardsley, Lucia M. Vaina

Abstract: Psychophysical studies suggest the existence of specialized detectors for component motion patterns (radial, circular, and spiral) that are consistent with the visual motion properties of cells in the dorsal medial superior temporal area (MSTd) of non-human primates. Here we use a biologically constrained model of visual motion processing in MSTd, in conjunction with psychophysical performance on two motion pattern tasks, to elucidate the computational mechanisms associated with the processing of wide-field motion patterns encountered during self-motion. In both tasks discrimination thresholds varied significantly with the type of motion pattern presented, suggesting perceptual correlates to the preferred motion bias reported in MSTd. Through the model we demonstrate that while independently responding motion pattern units are capable of encoding information relevant to the visual motion tasks, equivalent psychophysical performance can only be achieved using interconnected neural populations that systematically inhibit non-responsive units. These results suggest that the cyclic trends in psychophysical performance may be mediated, in part, by recurrent connections within motion pattern responsive areas whose structure is a function of the similarity in preferred motion patterns and receptive field locations between units.

1 Introduction A major challenge in computational neuroscience is to elucidate the architecture of the cortical circuits for sensory processing and their effective role in mediating behavior. In the visual motion system, biologically constrained models are playing an increasingly important role in this endeavor by providing an explanatory substrate linking perceptual performance and the visual properties of single cells. Single-cell studies indicate the presence of complex interconnected structures in middle temporal and primary visual cortex whose most basic horizontal connections can impart considerable computational power to the underlying neural population [1, 2]. Combined psychophysical and computational studies support these findings and suggest that recurrent connections may play a significant role in encoding the visual motion properties associated with various psychophysical tasks [3, 4]. Using this methodology, our goal is to elucidate the computational mechanisms associated with the processing of wide-field motion patterns encountered during self-motion. In the human visual motion system, psychophysical studies suggest the existence of specialized detectors for the motion pattern components (i.e., radial, circular and spiral motions) associated with self-motion [5, 6].

Figure 1: a) Schematic of the graded motion pattern (GMP) task. Discrimination pairs of stimuli were created by perturbing the flow angle (φ) of each 'test' motion (with average dot speed v_av) by ±φ_p in the stimulus space spanned by radial and circular motions. b) Schematic of the shifted center-of-motion (COM) task. Discrimination pairs of stimuli were created by shifting the COM of the 'test' motion to the left and right of a central fixation point. For each motion pattern the COM was shifted within the illusory inner aperture and was never explicitly visible.
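As a rough illustration of the radial/circular stimulus space referred to above, the sketch below derives a dot's velocity from a flow angle φ. The function name, the sign convention for rotation, and the 2-D geometry are ours; the radial speed gradient and the annular aperture used in the actual displays are omitted.

```python
import numpy as np

def dot_velocity(x, y, phi_deg, speed):
    """Velocity of a dot at (x, y), relative to the centre of motion (COM),
    for a motion pattern with flow angle phi.

    phi = 0 deg  -> expansion, 180 deg -> contraction;
    phi = 90 deg -> one rotation sense, 270 deg -> the other (convention assumed);
    intermediate angles give the four spiral motions.
    """
    phi = np.deg2rad(phi_deg)
    r = np.hypot(x, y)
    if r == 0.0:
        return 0.0, 0.0
    radial = np.array([x, y]) / r        # unit vector pointing away from the COM
    circular = np.array([-y, x]) / r     # unit vector tangent to circles around the COM
    vx, vy = speed * (np.cos(phi) * radial + np.sin(phi) * circular)
    return vx, vy
```

Perturbing φ by ±φ_p, as in the GMP task, rotates each dot's velocity within this space; shifting the COM, as in the COM task, changes the (x, y) origin against which the radial and circular unit vectors are computed.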
Neurophysiological studies reporting neurons sensitive to motion patterns in the dorsal medial superior temporal area (MSTd) support the existence of such mechanisms [7-10], and in conjunction with psychophysical studies suggest a strong link between the patterns of neural activity and motion-based perceptual performance [11, 12]. Through the combination of human psychophysical performance and biologically constrained modeling we investigate the computational role of simple recurrent connections within a population of MSTd-like units. Based on the known visual motion properties within MSTd we ask what neural structures are computationally sufficient to encode psychophysical performance on a series of motion pattern tasks.

2 Motion pattern discrimination Using motion pattern stimuli consistent with previous studies [5, 6], we have developed a set of novel psychophysical tasks designed to facilitate a more direct comparison between human perceptual performance and the visual motion properties of cells in MSTd that have been found to underlie the discrimination of motion patterns [11, 12]. The psychophysical tasks, referred to as the graded motion pattern (GMP) and shifted center-of-motion (COM) tasks, are outlined in Fig. 1. Using a temporal two-alternative forced-choice task we measured discrimination thresholds to global changes in the patterns of complex motion (GMP task) [13], and to shifts in the center-of-motion (COM task). Stimuli were presented with central fixation using a constant stimulus paradigm and consisted of dynamic random dot displays presented in a 24° annular region (central 4° removed). In each task, the stimulus duration was randomly perturbed across presentations (440 ± 40 msec) to control for timing-based cues, and dots moved coherently through a radial speed gradient in directions consistent with the global motion pattern presented. Discrimination thresholds were obtained across eight 'test' motions corresponding to expansion, contraction, CW and CCW rotation, and the four intermediate spiral motions. To minimize adaptation to specific motion patterns, opposing motions (e.g., expansion/contraction) were interleaved across paired presentations.

Figure 2: a) GMP thresholds across 8 'test' motions at two mean dot speeds for two observers. Performance varied continuously, with thresholds for radial motions (φ = 0, 180°) significantly lower than those for circular motions (φ = 90, 270°), (p < 0.001; t(37) = 3.39). b) COM thresholds at three mean dot speeds for two observers. As with the GMP task, performance varied continuously, with thresholds for radial motions significantly lower than those for circular motions, (p < 0.001; t(37) = 4.47).

2.1 Results Discrimination thresholds are reported here from a subset of the observer population consisting of three experienced psychophysical observers, one of whom was naïve to the purpose of the psychophysical tasks. For each condition, performance is reported as the mean and standard error averaged across 8-12 thresholds. Across observers and dot speeds, GMP thresholds followed a distinct trend in the stimulus space [13], with radial motions (expansion/contraction) significantly lower than circular motions (CW/CCW rotation), (p < 0.001; t(37) = 3.39), (Fig. 2a).
While thresholds for the intermediate spiral motions were not significantly different from circular motions (p = 0.223, t(60) = 0.74), the trends across 'test' motions were well fit within the stimulus space (SB: r > 0.82, SC: r > 0.77) by sinusoids whose period and phase were 196 ± 10° and -72 ± 20° respectively (Fig. 1a). When the radial speed gradient was removed by randomizing the spatial distribution of dot speeds, threshold performance increased significantly across observers (p < 0.05; t(17) = 1.91), particularly for circular motions (p < 0.005; t(25) = 3.31), (data not shown). Such performance suggests a perceptual contribution associated with the presence of the speed gradient and is particularly interesting given the fact that the speed gradient did not contribute computationally relevant information to the task. However, the speed gradient did convey information regarding the integrative structure of the global motion field and as such suggests a preference of the underlying motion mechanisms for spatially structured speed information. Similar trends in performance were observed in the COM task across observers and dot speeds. Discrimination thresholds varied continuously as a function of the 'test' motion, with thresholds for radial motions significantly lower than those for circular motions (p < 0.001; t(37) = 4.47), and could be well fit by a sinusoidal trend line (e.g. SB at 3 deg/s: r > 0.91, period = 178 ± 10° and phase = -70 ± 25°), (Fig. 2b).

2.2 A local or global task? The consistency of the cyclic threshold profile in stimuli that restricted the temporal integration of individual dot motions [13], and simultaneously contained all directions of motion, generally argues against a primary role for local motion mechanisms in the psychophysical tasks. While the psychophysical literature has reported a wide variety of "local" motion direction anisotropies whose properties are reminiscent of the results observed here, e.g. [14], all would predict equivalent thresholds for radial and circular motions for a set of uniformly distributed and/or spatially restricted motion direction mechanisms. Together with the computational impact of the speed gradient and psychophysical studies supporting the existence of wide-field motion pattern mechanisms [5, 6], these results suggest that the threshold differences across the GMP and COM tasks may be associated with variations in the computational properties across a series of specialized motion pattern mechanisms.

3 A computational model The similarities between the motion pattern stimuli used to quantify human perception and the visual motion properties of cells in MSTd suggest that MSTd may play a computational role in the psychophysical tasks. To examine this hypothesis, we constructed a population of MSTd-like units whose visual motion properties were consistent with the reported neurophysiology (see [13] for details). Across the population, the distribution of receptive field centers was uniform across polar angle and followed a gamma distribution Γ(5,6) across eccentricity [7]. For each unit, visual motion responses followed a Gaussian tuning profile as a function of the stimulus flow angle, G(φ), (σ_i = 60 ± 30°; [10]), and the distance of the stimulus COM from the unit's receptive field center, G_sat(x_i, y_i, σ_s = 19°), Eq. 1, such that its preferred motion response was position invariant to small shifts in the COM [10] and degraded continuously for large shifts [9]. A sketch of such a population is given below.
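The following is a minimal sketch of how such a population of MSTd-like units could be instantiated. The dictionary layout, the clipping of tuning widths, and the exact shape of the expansion-biased density (the text only states that it decreases symmetrically from expansion to contraction) are our assumptions.

```python
import numpy as np

def build_population(n_units=1000, expansion_biased=True, seed=0):
    """MSTd-like units: RF centres uniform in polar angle and gamma-distributed in
    eccentricity (shape 5, scale 6 deg); flow-angle tuning widths drawn as 60 +/- 30 deg."""
    rng = np.random.default_rng(seed)

    theta = rng.uniform(0.0, 2.0 * np.pi, n_units)          # polar angle of RF centre
    ecc = rng.gamma(shape=5.0, scale=6.0, size=n_units)     # eccentricity (deg)
    x, y = ecc * np.cos(theta), ecc * np.sin(theta)

    if expansion_biased:
        # Rejection-sample preferred motions with density proportional to 1 + cos(phi):
        # highest at expansion (phi = 0 deg), lowest at contraction (phi = 180 deg).
        phi = np.empty(0)
        while phi.size < n_units:
            cand = rng.uniform(0.0, 360.0, 4 * n_units)
            keep = rng.uniform(0.0, 2.0, cand.size) < 1.0 + np.cos(np.deg2rad(cand))
            phi = np.concatenate([phi, cand[keep]])
        phi = phi[:n_units]
    else:
        phi = rng.uniform(0.0, 360.0, n_units)               # uniform control population

    sigma_t = np.clip(rng.normal(60.0, 30.0, n_units), 10.0, None)   # tuning width (deg)
    return {'x': x, 'y': y, 'phi': phi, 'sigma_t': sigma_t}
```

The same dictionary of unit properties is reused by the response, decoding, and connectivity sketches that follow.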
Within the model, simulations were categorized according to the distribution of preferred motions represented across the population (one reported in MSTd and a uniform control). The first distribution simulated an expansion bias in which the density of preferred motions decreased symmetrically from expansion to contraction [10]. The second distribution simulated a uniform preference for all motions and was used as a control to quantify the effects of an expansion bias on psychophysical performance. Throughout the paper we refer to simulations containing these distributions as 'Expansion-biased' and 'Uniform' respectively.

3.1 Extracting perceptual estimates from the neural code For each stimulus presentation, the ith unit's response was calculated as the average firing rate, R_i, from the product of its motion pattern and spatial tuning profiles,

R_i = R_max G(min[φ − φ_i], σ_ti) G_sat,i(x − x_i, y − y_i, σ_s) + P(λ = 12),   (1)

where R_max is the maximum preferred stimulus response (spikes/s), min[ ] refers to the minimum angular distance between the stimulus flow angle φ and the unit's preferred motion φ_i, G_sat is the unit's spatial tuning profile saturated within the central 5 ± 3°, σ_ti and σ_s are the standard deviations of the unit's motion pattern and spatial tuning profiles respectively, (x_i, y_i) is the spatial location of the unit's receptive field center, (x, y) is the spatial location of the stimulus COM, and P(λ = 12) is the background activity simulated as an uncorrelated Poisson process.

Figure 3: Model vs. psychophysical performance for independently responding units. Model thresholds are reported as the average (±1 S.E.) across five simulated populations. a) GMP thresholds were highest for contracting motions and lowest for expanding motions across all Expansion-biased populations. b) Comparable trends in performance were observed for COM thresholds. Comparison with the Uniform control simulations in both tasks (2000 units shown here) indicates that thresholds closely followed the distribution of preferred motions simulated within the model.

The psychophysical tasks were simulated using a modified center-of-gravity approach to decode estimates of the stimulus properties, i.e. the flow angle (φ̂) and the COM location in the visual field (x̂, ŷ), from the neural population,

(x̂, ŷ, φ̂) = ( Σ_i x_i R_i / Σ_i R_i,  Σ_i y_i R_i / Σ_i R_i,  Σ_i φ⃗_i R_i / Σ_i R_i ),   (2)

where φ⃗_i is the unit vector in the stimulus space (Fig. 1a) corresponding to the unit's preferred motion. For each set of paired stimuli, psychophysical judgments were made by comparing the estimated stimulus properties according to the discrimination criteria specified in the psychophysical tasks. As with the psychophysical experiments, discrimination thresholds were computed using a least-squares fit to percent correct performance across constant stimulus levels.

3.2 Simulation 1: Independent neural responses In the first series of simulations, GMP and COM thresholds were quantified across three populations (500, 1000, and 2000 units) of independently responding units for each simulated distribution (Expansion-biased and Uniform). Across simulations, both the range in thresholds and their trends across 'test' motions were compared with human psychophysical performance to quantify the effects of population size and an expansion-biased preferred motion distribution on model performance.
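Before turning to the simulation results, here is a minimal sketch of Eq. 1 and Eq. 2 in Python, reusing the population dictionary from the earlier sketch. The treatment of the saturated spatial profile G_sat (the text only says it saturates within the central ~5°), the default parameter values, and reading the flow-angle estimate out as the angle of the population vector are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ang_dist(a_deg, b_deg):
    """min[a - b] in the paper's notation: minimum angular distance in degrees."""
    return np.abs((np.asarray(a_deg) - np.asarray(b_deg) + 180.0) % 360.0 - 180.0)

def unit_responses(phi_deg, com_xy, units, r_max=100.0, sigma_s=19.0, sat_deg=5.0, lam=12.0):
    """Eq. 1: firing rate of every unit for one stimulus (flow angle phi, COM at com_xy)."""
    g_motion = np.exp(-ang_dist(phi_deg, units['phi'])**2 / (2.0 * units['sigma_t']**2))

    d = np.hypot(com_xy[0] - units['x'], com_xy[1] - units['y'])
    # Saturated spatial profile: flat within sat_deg of the RF centre, Gaussian fall-off beyond.
    g_spatial = np.exp(-np.maximum(d - sat_deg, 0.0)**2 / (2.0 * sigma_s**2))

    background = rng.poisson(lam, size=units['phi'].size)   # uncorrelated Poisson background
    return r_max * g_motion * g_spatial + background

def decode(rates, units):
    """Eq. 2: centre-of-gravity estimates (x_hat, y_hat) and population-vector flow angle."""
    total = rates.sum()
    x_hat = np.sum(rates * units['x']) / total
    y_hat = np.sum(rates * units['y']) / total
    vec = np.sum(rates * np.exp(1j * np.deg2rad(units['phi']))) / total
    phi_hat = np.rad2deg(np.angle(vec)) % 360.0
    return x_hat, y_hat, phi_hat
```

A discrimination trial can then be simulated by decoding the two paired stimuli and comparing the estimates against the task's criterion, with thresholds fit to percent correct across constant stimulus levels as described above.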
Over the psychophysical range of interest (φ_p ± 7°), GMP thresholds for contracting motions were at chance across all Expansion-biased populations (Fig. 3a). While thresholds for expanding motions were generally consistent with those for human observers, those for circular motions remained significantly higher for all but the largest populations. Similar trends in performance were observed for the COM task (Fig. 3b). Here the range of COM thresholds was well matched with human performance for simulations containing 1000 units; however, the trends across motion patterns remained inconsistent even for the largest populations.

Figure 4: Proposed recurrent connection profile between motion pattern units. a) Across the motion pattern space, connection strength followed an inverse Gaussian profile such that the ith unit (with preferred motion φ_i) systematically inhibited units with anti-preferred motions centered at 180° + φ_i. b) Across the visual field, connection strength followed a difference-of-Gaussians profile as a function of the relative distance between receptive field centers, such that spatially local units were mutually excitatory (σ_Re = 10°) and more distant units were mutually inhibitory (σ_Ri = 80°).

For simulations containing a uniform distribution of preferred motions, the threshold range was consistent with human performance on both tasks; however, the trend across motion patterns was generally flat. What variability did occur was due primarily to the discrete sampling of preferred motions across the population. Comparison of the discrimination thresholds for the Expansion-biased and Uniform populations indicates that the trend across thresholds was closely matched to the underlying distributions of preferred motions. This result is due in part to the near-equal weighting of independently responding units and can be explained to a first approximation by the proportional increase in the signal-to-noise ratio across the population as a function of the density of units responsive to a given 'test' motion.

3.3 Simulation 2: An interconnected neural structure In a second series of simulations, we examined the computational effect of adding recurrent connections between units. If the distribution of preferred motions in MSTd is in fact biased towards expansions, as the neurophysiology suggests, it seems unlikely that independent estimates of the visual motion information would be sufficient to yield the threshold profiles observed in the psychophysical tasks. We hypothesize that a simple fixed architecture of excitatory and/or inhibitory connections is sufficient to account for the cyclic trends in discrimination thresholds. Specifically, we propose that a recurrent connection profile whose strength varies as a function of (a) the similarity between preferred motion patterns and (b) the distance between receptive field centers is computationally sufficient to recover the trends in GMP/COM performance (Fig. 4),

w_ij = S_R e^(−[(x_i − x_j)² + (y_i − y_j)²] / (2σ_Re²)) − S_R e^(−[(x_i − x_j)² + (y_i − y_j)²] / (2σ_Ri²)) − S_φ e^(−(180° − min[φ_i − φ_j])² / (2σ_I²)),   (3)

where w_ij is the strength of the recurrent connection between the ith and jth units, (x_i, y_i) and (x_j, y_j) denote the spatial locations of their receptive field centers, σ_Re (= 10°) and σ_Ri (= 80°) together define the spatial extent of a difference-of-Gaussians interaction between receptive field centers, and S_R and S_φ scale the connection strength.

Figure 5: Model vs. psychophysical performance for populations containing recurrent connections (σ_I = 80°). As the number of units increased for Expansion-biased populations, discrimination thresholds decreased to psychophysical levels and the sinusoidal trend in thresholds emerged for both the (a) GMP and (b) COM tasks. Sinusoidal trends were established for as few as 1000 units and were well fit (r > 0.9) by sinusoids whose periods and phases were (193.8 ± 11.7°, −70.0 ± 22.6°) and (168.2 ± 13.7°, −118.8 ± 31.8°) for the GMP and COM tasks respectively.

To examine the effects of the spread of motion pattern-specific inhibition and connection strength in the model, σ_I, S_φ, and S_R were considered free parameters. Within the parameter space used to define recurrent connections (i.e., σ_I, S_φ and S_R), Monte Carlo simulations of Expansion-biased model performance (1000 units) yielded regions of high correlation on both tasks (with respect to the psychophysical thresholds, r > 0.7) that were consistent across independently simulated populations. Typically these regions were well defined over a broad range such that there was significant overlap between tasks (e.g., for the GMP task (S_R = 0.03), σ_I = [45°, 120°], S_φ = [0.03, 0.3]; and for the COM task (σ_I = 80°), S_φ = [0.03, 0.08], S_R = [0.005, 0.04]). Fig. 5 shows averaged threshold performance for simulations of interconnected units drawn from the highly correlated regions of the (σ_I, S_φ, S_R) parameter space. For populations not explicitly examined in the Monte Carlo simulations, connection strengths (S_φ, S_R) were scaled inversely with population size to maintain an equivalent level of recurrent activity. With the incorporation of recurrent connections, the sinusoidal trend in GMP and COM thresholds emerged for Expansion-biased populations as the number of units increased. In both tasks the cyclic threshold profiles were established for 1000 units and were well fit (r > 0.9) by sinusoids whose periods and phases were consistent with human performance. Unlike the Expansion-biased populations, Uniform populations were not significantly affected by the presence of recurrent connections (Fig. 5). Both the range in thresholds and the flat trend across motion patterns were well matched to those in Section 3.2. Together these results suggest that the sinusoidal trends in GMP and COM performance may be mediated by the combined contribution of the recurrent interconnections and the bias in preferred motions across the population.
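A minimal sketch of the recurrent connection profile of Eq. 3 follows, again reusing the population dictionary from above. The centring of the motion-pattern inhibition on the anti-preferred motion follows the Figure 4 description, the default parameter values are taken from the ranges quoted in the text, and the zeroed diagonal is our assumption.

```python
import numpy as np

def recurrent_weights(units, s_r=0.03, s_phi=0.05,
                      sigma_re=10.0, sigma_ri=80.0, sigma_i=80.0):
    """Eq. 3: recurrent weights w_ij between motion-pattern units.

    Spatial part: difference of Gaussians (local excitation, broader inhibition).
    Motion-pattern part: inhibition strongest between units with anti-preferred motions.
    """
    dx = units['x'][:, None] - units['x'][None, :]
    dy = units['y'][:, None] - units['y'][None, :]
    d2 = dx**2 + dy**2

    # Angular distance between preferred motions, in [0, 180] degrees.
    dphi = np.abs((units['phi'][:, None] - units['phi'][None, :] + 180.0) % 360.0 - 180.0)

    w = (s_r * np.exp(-d2 / (2.0 * sigma_re**2))
         - s_r * np.exp(-d2 / (2.0 * sigma_ri**2))
         - s_phi * np.exp(-(180.0 - dphi)**2 / (2.0 * sigma_i**2)))
    np.fill_diagonal(w, 0.0)   # no self-connection (assumed)
    return w
```

Unit responses could then be iterated against this weight matrix until they settle, and (σ_I, S_φ, S_R) swept Monte Carlo-style as in the text; how the recurrent input is combined with the feedforward drive of Eq. 1 is not spelled out here, so any such update rule is an additional assumption.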
While such structures have not been explicitly examined in MSTd and other higher visual motion areas, there is anecdotal support for the presence of inhibitory connections [8]. Together, these results suggest that robust processing of the motion patterns associated with self-motion and optic flow may be mediated, in part, by recurrent structures in extrastriate visual motion areas whose distributions of preferred motions are biased strongly in favor of expanding motions.

Acknowledgments This work was supported by National Institutes of Health grant EY-2R01-07861-13 to L.M.V.

References
[1] Malach, R., Schirman, T., Harel, M., Tootell, R., & Malonek, D., (1997), Cerebral Cortex, 7(4): 386-393.
[2] Gilbert, C. D., (1992), Neuron, 9: 1-13.
[3] Koechlin, E., Anton, J., & Burnod, Y., (1999), Biological Cybernetics, 80: 25-44.
[4] Stemmler, M., Usher, M., & Niebur, E., (1995), Science, 269: 1877-1880.
[5] Burr, D. C., Morrone, M. C., & Vaina, L. M., (1998), Vision Research, 38(12): 1731-1743.
[6] Meese, T. S. & Harris, S. J., (2002), Vision Research, 42: 1073-1080.
[7] Tanaka, K. & Saito, H. A., (1989), Journal of Neurophysiology, 62(3): 626-641.
[8] Duffy, C. J. & Wurtz, R. H., (1991), Journal of Neurophysiology, 65(6): 1346-1359.
[9] Duffy, C. J. & Wurtz, R. H., (1995), Journal of Neuroscience, 15(7): 5192-5208.
[10] Graziano, M. S., Andersen, R. A., & Snowden, R., (1994), Journal of Neuroscience, 14(1): 54-67.
[11] Celebrini, S. & Newsome, W., (1994), Journal of Neuroscience, 14(7): 4109-4124.
[12] Celebrini, S. & Newsome, W. T., (1995), Journal of Neurophysiology, 73(2): 437-448.
[13] Beardsley, S. A. & Vaina, L. M., (2001), Journal of Computational Neuroscience, 10: 255-280.
[14] Matthews, N. & Qian, N., (1999), Vision Research, 39: 2205-2211.

3 0.50066578 183 nips-2003-Synchrony Detection by Analogue VLSI Neurons with Bimodal STDP Synapses

Author: Adria Bofill-i-petit, Alan F. Murray

Abstract: We present test results from spike-timing correlation learning experiments carried out with silicon neurons with STDP (Spike Timing Dependent Plasticity) synapses. The weight change scheme of the STDP synapses can be set to either weight-independent or weight-dependent mode. We present results that characterise the learning window implemented for both modes of operation. When presented with spike trains with different types of synchronisation the neurons develop bimodal weight distributions. We also show that a 2-layered network of silicon spiking neurons with STDP synapses can perform hierarchical synchrony detection.
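As a generic illustration of the two weight-update modes mentioned in this abstract, here is a pair-based STDP rule in Python. The time constants, amplitudes, and bounds are textbook-style placeholders, not the chip's measured learning window.

```python
import numpy as np

def stdp_dw(dt, w, a_plus=0.01, a_minus=0.012, tau=20.0,
            w_max=1.0, weight_dependent=False):
    """Weight change for one pre/post spike pair separated by dt = t_post - t_pre (ms).

    weight_dependent=False : additive updates (with hard bounds, weights tend to
                             pile up at the rails, giving bimodal distributions).
    weight_dependent=True  : updates scaled by the remaining headroom / current weight.
    """
    if dt > 0:             # pre before post -> potentiation
        dw = a_plus * np.exp(-dt / tau)
        if weight_dependent:
            dw *= (w_max - w)
    else:                  # post before (or with) pre -> depression
        dw = -a_minus * np.exp(dt / tau)
        if weight_dependent:
            dw *= w
    return np.clip(w + dw, 0.0, w_max) - w
```

With additive (weight-independent) updates and a mix of synchronised and unsynchronised input spike trains, the bounded weights segregate towards the two rails, which is one way the bimodal weight distributions described in the abstract can arise.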

4 0.49716926 13 nips-2003-A Neuromorphic Multi-chip Model of a Disparity Selective Complex Cell

Author: Bertram E. Shi, Eric K. Tsang

Abstract: The relative depth of objects causes small shifts in the left and right retinal positions of these objects, called binocular disparity. Here, we describe a neuromorphic implementation of a disparity selective complex cell using the binocular energy model, which has been proposed to model the response of disparity selective cells in the visual cortex. Our system consists of two silicon chips containing spiking neurons with monocular Gabor-type spatial receptive fields (RF) and circuits that combine the spike outputs to compute a disparity selective complex cell response. The disparity selectivity of the cell can be adjusted by both position and phase shifts between the monocular RF profiles, which are both used in biology. Our neuromorphic system performs better with phase encoding, because the relative responses of neurons tuned to different disparities by phase shifts are better matched than the responses of neurons tuned by position shifts.
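For reference, here is a minimal 1-D sketch of the binocular energy model mentioned in this abstract: quadrature Gabor filters for each eye are summed binocularly and squared, with disparity tuning set by a position and/or phase shift of the right-eye filter. The Gabor parameters and the restriction to a single image row are our simplifications, not the chip's actual circuitry.

```python
import numpy as np

def complex_cell_response(img_left, img_right, x0, sigma=5.0, k=0.3,
                          pos_shift=0.0, phase_shift=0.0):
    """Binocular energy model response of a disparity-selective complex cell
    for one image row (1-D arrays img_left, img_right)."""
    x = np.arange(img_left.shape[-1], dtype=float)

    def gabor(center, phase):
        env = np.exp(-(x - center) ** 2 / (2.0 * sigma ** 2))
        return (env * np.cos(k * (x - center) + phase),
                env * np.sin(k * (x - center) + phase))

    gL_even, gL_odd = gabor(x0, 0.0)
    gR_even, gR_odd = gabor(x0 + pos_shift, phase_shift)   # disparity tuning via position/phase

    # Simple-cell subunits: binocular sums of quadrature-pair filter outputs.
    s_even = img_left @ gL_even + img_right @ gR_even
    s_odd = img_left @ gL_odd + img_right @ gR_odd

    # Complex cell: sum of squared quadrature responses (the "energy").
    return s_even ** 2 + s_odd ** 2
```

A phase shift changes only the carrier of the right-eye filter within the same envelope, while a position shift moves the envelope as well; that is the distinction between the two encodings compared in the abstract.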

5 0.49092776 177 nips-2003-Simplicial Mixtures of Markov Chains: Distributed Modelling of Dynamic User Profiles

Author: Mark Girolami, Ata Kabán

Abstract: To provide a compact generative representation of the sequential activity of a number of individuals within a group, there is a tradeoff between the definition of individual-specific and global models. This paper proposes a linear-time distributed model for finite state symbolic sequences representing traces of individual user activity by making the assumption that heterogeneous user behavior may be ‘explained’ by a relatively small number of common, structurally simple behavioral patterns which may interleave randomly in a user-specific proportion. The results of an empirical study on three different sources of user traces indicate that this modelling approach provides an efficient representation scheme, reflected by improved prediction performance as well as low-complexity and intuitively interpretable representations.
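As a point of reference for the modelling idea in this abstract, here is the log-likelihood of one symbol sequence under a plain mixture of Markov chains in Python. This is only a simplification: the paper's simplicial mixture lets the behavioural patterns interleave within a single sequence and fits user-specific mixing proportions, and initial-state probabilities are omitted here.

```python
import numpy as np
from scipy.special import logsumexp

def sequence_log_likelihood(seq, log_pi, log_T):
    """Log-likelihood of one symbol sequence under a K-component mixture of Markov chains.

    seq    : list of integer states, e.g. [0, 2, 2, 1]
    log_pi : array of shape (K,), log mixing proportions
    log_T  : array of shape (K, S, S), log transition matrices, one per behavioural pattern
    """
    per_chain = np.asarray(log_pi, dtype=float).copy()
    for i, j in zip(seq[:-1], seq[1:]):
        per_chain += log_T[:, i, j]      # accumulate transition log-probabilities per pattern
    return logsumexp(per_chain)          # marginalise over which pattern generated the sequence
```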

6 0.45850366 185 nips-2003-The Doubly Balanced Network of Spiking Neurons: A Memory Model with High Capacity

7 0.45830333 107 nips-2003-Learning Spectral Clustering

8 0.4571839 125 nips-2003-Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Model

9 0.45558214 61 nips-2003-Entrainment of Silicon Central Pattern Generators for Legged Locomotory Control

10 0.45277545 101 nips-2003-Large Margin Classifiers: Convex Loss, Low Noise, and Convergence Rates

11 0.45177037 93 nips-2003-Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons

12 0.45075208 73 nips-2003-Feature Selection in Clustering Problems

13 0.44904071 81 nips-2003-Geometric Analysis of Constrained Curves

14 0.44870791 43 nips-2003-Bounded Invariance and the Formation of Place Fields

15 0.44831222 79 nips-2003-Gene Expression Clustering with Functional Mixture Models

16 0.4468776 80 nips-2003-Generalised Propagation for Fast Fourier Transforms with Partial or Missing Data

17 0.44408214 86 nips-2003-ICA-based Clustering of Genes from Microarray Expression Data

18 0.44370279 112 nips-2003-Learning to Find Pre-Images

19 0.44269153 65 nips-2003-Extending Q-Learning to General Adaptive Multi-Agent Systems

20 0.44256282 18 nips-2003-A Summating, Exponentially-Decaying CMOS Synapse for Spiking Neural Systems