nips nips2001 nips2001-142 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Neil Burgess, Tom Hartley
Abstract: We present a model of the firing of place and head-direction cells in rat hippocampus. The model can predict the response of individual cells and populations to parametric manipulations of both geometric (e.g. O'Keefe & Burgess, 1996) and orientational (Fenton et al., 2000a) cues, extending a previous geometric model (Hartley et al., 2000). It provides a functional description of how these cells' spatial responses are derived from the rat's environment and makes easily testable quantitative predictions. Consideration of the phenomenon of remapping (Muller & Kubie, 1987; Bostock et al., 1991) indicates that the model may also be consistent with nonparametric changes in firing, and provides constraints for its future development.
Reference: text
sentIndex sentText sentNum sentScore
1 Orientational and geometric determinants of place and head-direction cell firing. Neil Burgess & Tom Hartley, Institute of Cognitive Neuroscience & Department of Anatomy, UCL, 17 Queen Square, London WC1N 3AR, UK. [sent-1, score-0.349]
2 Abstract: We present a model of the firing of place and head-direction cells in rat hippocampus. [sent-8, score-1.014]
3 The model can predict the response of individual cells and populations to parametric manipulations of both geometric (e. [sent-9, score-0.538]
4 O'Keefe & Burgess, 1996) and orientational (Fenton et al. [sent-11, score-0.288]
5 , 2000a) cues, extending a previous geometric model (Hartley et al. [sent-12, score-0.313]
6 It provides a functional description of how these cells' spatial responses are derived from the rat's environment and makes easily testable quantitative predictions. [sent-14, score-0.449]
7 Consideration of the phenomenon of remapping (Muller & Kubie, 1987; Bostock et al. [sent-15, score-0.177]
8 , 1991) indicates that the model may also be consistent with nonparametric changes in firing, and provides constraints for its future development. [sent-16, score-0.142]
9 1 Introduction 'Place cells' recorded in the hippocampus of freely moving rats encode the rat's current location (O'Keefe & Dostrovsky, 1971; Wilson & McNaughton, 1993). [sent-17, score-0.354]
10 In open environments a place cell will fire whenever the rat enters a specific portion of the environment (the 'place field'), independent of the rat's orientation (Muller et al. [sent-18, score-1.679]
11 This location-specific firing appears to be present on the rat's first visit to an environment (e. [sent-20, score-0.412]
12 Hill, 1978), and does not depend on the presence of local cues such as odors on the floor or walls. [sent-22, score-0.309]
13 The complementary pattern of firing has also been found in related brain areas: 'head-direction cells' that fire whenever the rat faces in a particular direction independent of its location (Taube et al. [sent-23, score-1.288]
14 Experiments involving consistent rotation of cues at or beyond the edge of the environment (referred to as 'distal' cues) produce rotation of the entire place (O'Keefe & Speakman, 1987; Muller et al. [sent-25, score-1.133]
15 Rotating cues within the environment does not produce this effect (Cressant et al. [sent-28, score-0.625]
16 Here we suggest a predictive model of the mechanisms underlying these spatial responses. [sent-30, score-0.082]
17 2 Geometric influences given consistent orientation Given a stable directional reference (e. [sent-31, score-0.395]
18 stable distal cues across trials), fields are determined by inputs tuned to detect extended obstacles or boundaries at particular bearings. [sent-33, score-0.859]
19 That is, they respond whenever a boundary or obstacle occurs at a given distance along a given allocentric direction, independent of the rat's orientation. [sent-34, score-0.409]
20 These inputs are referred to below as putative 'boundary vector cells' (BVCs). [sent-35, score-0.205]
21 The functional form of these inputs has been estimated by recording from the same place cell in several environments of differing geometry within the same set of distal orientation cues (O'Keefe & Burgess, 1996; Hartley et al. [sent-36, score-1.039]
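The boundary vector cell (BVC) mechanism described in sentences 17-21 can be sketched as a product of Gaussian tuning curves in boundary distance and allocentric direction, in the spirit of Hartley et al. (2000). The tuning widths, parameter names, and the thresholded-sum place-cell readout below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def bvc_firing(d, phi, d_pref, phi_pref, sigma_rad=0.1, sigma_ang=0.2):
    """Peak-normalized firing of a putative boundary vector cell.

    The cell fires maximally when a boundary lies at distance d_pref
    along allocentric direction phi_pref (radians), independent of the
    rat's own heading. Tuning widths here are illustrative.
    """
    # Wrap the angular difference into [-pi, pi] before applying the tuning.
    dphi = np.angle(np.exp(1j * (phi - phi_pref)))
    radial = np.exp(-(d - d_pref) ** 2 / (2 * sigma_rad ** 2))
    angular = np.exp(-dphi ** 2 / (2 * sigma_ang ** 2))
    return radial * angular

def place_cell_rate(boundary_points, bvc_params, threshold=0.0):
    """Hypothetical place-cell readout: thresholded sum of BVC inputs
    over boundary points, each point given as (distance, direction)."""
    total = sum(bvc_firing(d, phi, *p)
                for d, phi in boundary_points
                for p in bvc_params)
    return max(total - threshold, 0.0)
```

A BVC defined this way responds whenever any boundary segment falls near its preferred distance and direction, which reproduces the orientation-independence of the inputs described above.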
wordName wordTfidf (topN-words)
[('rat', 0.444), ('burgess', 0.305), ('cues', 0.279), ('hartley', 0.263), ('cells', 0.214), ('firing', 0.176), ('distal', 0.176), ('taube', 0.176), ('environment', 0.171), ('orientational', 0.153), ('place', 0.144), ('muller', 0.139), ('fire', 0.139), ('et', 0.135), ('geometric', 0.135), ('cell', 0.117), ('orientation', 0.117), ('whenever', 0.114), ('rotation', 0.09), ('environments', 0.088), ('obstacle', 0.076), ('obstacles', 0.076), ('wcin', 0.076), ('boundary', 0.073), ('tuned', 0.071), ('inputs', 0.07), ('referred', 0.07), ('mcnaughton', 0.07), ('testable', 0.07), ('determinants', 0.07), ('populations', 0.07), ('ucl', 0.07), ('anatomy', 0.07), ('stable', 0.065), ('putative', 0.065), ('cu', 0.065), ('freely', 0.065), ('hill', 0.065), ('neil', 0.065), ('visit', 0.065), ('rats', 0.061), ('tom', 0.061), ('complementary', 0.061), ('rotating', 0.061), ('enters', 0.061), ('consistent', 0.061), ('hippocampus', 0.058), ('wilson', 0.058), ('influences', 0.058), ('differing', 0.058), ('location', 0.056), ('manipulations', 0.055), ('directional', 0.055), ('queen', 0.053), ('faces', 0.053), ('spatial', 0.053), ('nonparametric', 0.051), ('functional', 0.051), ('portion', 0.05), ('recording', 0.048), ('respond', 0.046), ('consideration', 0.045), ('direction', 0.045), ('fields', 0.044), ('extending', 0.043), ('phenomenon', 0.042), ('quantitative', 0.042), ('detect', 0.041), ('recorded', 0.04), ('produce', 0.04), ('reference', 0.039), ('encode', 0.038), ('parametric', 0.038), ('distance', 0.038), ('boundaries', 0.037), ('independent', 0.037), ('uk', 0.036), ('involving', 0.036), ('moving', 0.036), ('trials', 0.036), ('geometry', 0.035), ('open', 0.034), ('di', 0.034), ('neuroscience', 0.032), ('responses', 0.032), ('edge', 0.031), ('areas', 0.031), ('london', 0.031), ('square', 0.031), ('beyond', 0.03), ('provides', 0.03), ('cognitive', 0.03), ('presence', 0.03), ('mechanisms', 0.029), ('field', 0.029), ('brain', 0.028), ('specific', 0.028), ('entire', 0.026), ('predict', 0.026), ('occurs', 0.025)]
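The word weights above appear to be tf-idf-style scores over the mined paper text. A minimal sketch of tf-idf weighting, under that assumption (the function and variable names are mine, and real mining pipelines typically add smoothing and normalization):

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute raw tf-idf weights for a list of tokenized documents.

    tf = term count / document length; idf = log(N / document frequency).
    """
    n = len(docs)
    df = Counter()  # in how many documents each word appears
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({w: (tf[w] / total) * math.log(n / df[w])
                        for w in tf})
    return weights
```

With this raw formulation, a word that appears in every document gets idf = log(1) = 0 and so drops out, while corpus-rare words like 'rat' or 'burgess' above are up-weighted.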
simIndex simValue paperId paperTitle
same-paper 1 0.99999988 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction
Author: Neil Burgess, Tom Hartley
2 0.13989533 96 nips-2001-Information-Geometric Decomposition in Spike Analysis
Author: Hiroyuki Nakahara, Shun-ichi Amari
Abstract: We present an information-geometric measure to systematically investigate neuronal firing patterns, taking account not only of the second-order but also of higher-order interactions. We begin with the case of two neurons for illustration and show how to test whether or not any pairwise correlation in one period is significantly different from that in the other period. In order to test such a hypothesis of different firing rates, the correlation term needs to be singled out 'orthogonally' to the firing rates, where the null hypothesis might not be of independent firing. This method is also shown to directly associate neural firing with behavior via their mutual information, which is decomposed into two types of information, conveyed by mean firing rate and coincident firing, respectively. Then, we show that these results, using the 'orthogonal' decomposition, are naturally extended to the case of three neurons and n neurons in general. 1
3 0.10421465 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity
Author: Antonino Casile, Michele Rucci
Abstract: Neural activity appears to be a crucial component for shaping the receptive fields of cortical simple cells into adjacent, oriented subregions alternately receiving ON- and OFF-center excitatory geniculate inputs. It is known that the orientation selective responses of V1 neurons are refined by visual experience. After eye opening, the spatiotemporal structure of neural activity in the early stages of the visual pathway depends both on the visual environment and on how the environment is scanned. We have used computational modeling to investigate how eye movements might affect the refinement of the orientation tuning of simple cells in the presence of a Hebbian scheme of synaptic plasticity. Levels of correlation between the activity of simulated cells were examined while natural scenes were scanned so as to model sequences of saccades and fixational eye movements, such as microsaccades, tremor and ocular drift. The specific patterns of activity required for a quantitatively accurate development of simple cell receptive fields with segregated ON and OFF subregions were observed during fixational eye movements, but not in the presence of saccades or with static presentation of natural visual input. These results suggest an important role for the eye movements occurring during visual fixation in the refinement of orientation selectivity.
4 0.1019899 37 nips-2001-Associative memory in realistic neuronal networks
Author: Peter E. Latham
Abstract: Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractors is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The
5 0.099343285 42 nips-2001-Bayesian morphometry of hippocampal cells suggests same-cell somatodendritic repulsion
Author: Giorgio A. Ascoli, Alexei V. Samsonovich
Abstract: Visual inspection of neurons suggests that dendritic orientation may be determined both by internal constraints (e.g. membrane tension) and by external vector fields (e.g. neurotrophic gradients). For example, basal dendrites of pyramidal cells appear to fan out nicely. This regular orientation is hard to justify completely with a general tendency to grow straight, given the zigzags observed experimentally. Instead, dendrites could (A) favor a fixed (“external”) direction, or (B) repel from their own soma. To investigate these possibilities quantitatively, reconstructed hippocampal cells were subjected to Bayesian analysis. The statistical model combined linearly factors A and B, as well as the tendency to grow straight. For all morphological classes, B was found to be significantly positive and consistently greater than A. In addition, when dendrites were artificially re-oriented according to this model, the resulting structures closely resembled real morphologies. These results suggest that somatodendritic repulsion may play a role in determining dendritic orientation. Since hippocampal cells are very densely packed and their dendritic trees highly overlap, the repulsion must be cell-specific. We discuss possible mechanisms underlying such specificity. 1 Introduction The study of brain dynamics and development at the cellular level would greatly benefit from a standardized, accurate and yet succinct statistical model characterizing the morphology of major neuronal classes. Such a model could also provide a basis for simulation of anatomically realistic virtual neurons [1]. The model should accurately distinguish among different neuronal classes: a morphological difference between classes would be captured by a difference in model parameters and reproduced in generated virtual neurons.
In addition, the model should be self-consistent: there should be no statistical difference in model parameters measured from real neurons of a given class and from virtual neurons of the same class. The assumption that a simple statistical model of this sort exists relies on the similarity of average environmental and homeostatic conditions encountered by individual neurons during development and on the limited amount of genetic information that underlies differentiation of neuronal classes. Previous research in computational neuroanatomy has mainly focused on the topology and internal geometry of dendrites (i.e., the properties described in “dendrograms”) [2,3]. Recently, we attempted to include spatial orientation in the models, thus generating 2 virtual neurons in 3D [4]. Dendritic growth was assumed to deviate from the straight direction both randomly and based on a constant bias in a given direction, or “tropism”. Different models of tropism (e.g. along a fixed axis, towards a plane, or away from the soma) had dramatic effects on the shape of virtual neurons [5]. Our current strategy is to split the problem of finding a statistical model describing neuronal morphology in two parts. First, we maintain that the topology and the internal geometry of a particular dendritic tree can be described independently of its 3D embedding (i.e., the set of local dendritic orientations). At the same time, one and the same internal geometry (e.g., the experimental dendrograms obtained from real neurons) may have many equally plausible 3D embeddings that are statistically consistent with the anatomical characteristics of that neuronal class. The present work aims at finding a minimal statistical model describing local dendritic orientation in experimentally reconstructed hippocampal principal cells. Hippocampal neurons have a polarized shape: their dendrites tend to grow from the soma as if enclosed in cones. 
In pyramidal cells, basal and apical dendrites invade opposite hemispaces (fig. 1A), while granule cell dendrites all invade the same hemispace. This behavior could be caused by a tendency to grow towards the layers of incoming fibers to establish synapses. Such tendency would correspond to a tropism in a direction roughly parallel to the cell main axis. Alternatively, dendrites could initially stem in the appropriate (possibly genetically determined) directions, and then continue to grow approximately in a radial direction from the soma. A close inspection of pyramidal (basal) trees suggests that dendrites may indeed be repelled from their soma (Fig. 1B). A typical dendrite may reorient itself (arrow) to grow nearly straight along a radius from the soma. Remarkably, this happens even after many turns, when the initial direction is lost. Such behavior may be hard to explain without tropism. If the deviations from straight growth were random, one should be able to “remodel” the trees by measuring and reproducing the statistics of local turn angles, assuming their independence of dendritic orientation and location. Figure 1C shows the cell from 1A after such remodeling. In this case basal and apical dendrites retain only their initial (stemming) orientations from the original data. The resulting “cotton ball” suggests that dendritic turns are not independent of dendritic orientation. In this paper, we use Bayesian analysis to quantify the dendritic tropism. 2 Methods Digital files of fully reconstructed rat hippocampal pyramidal cells (24 CA3 and 23 CA1 neurons) were kindly provided by Dr. D. Amaral. The overall morphology of these cells, as well as the experimental acquisition methods, were extensively described [6]. In these files, dendrites are represented as (branching) chains of cylindrical sections.
Each section is connected to one other section in the path to the soma, and may be connected on the other extremity to two other sections (bifurcation), one other section (continuation point), or no other section (terminal tip). Each section is described in the file by its ending point coordinates, its diameter and its
6 0.090379529 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex
7 0.076906979 10 nips-2001-A Hierarchical Model of Complex Cells in Visual Cortex for the Binocular Perception of Motion-in-Depth
8 0.075119838 2 nips-2001-3 state neurons for contextual processing
9 0.061762758 12 nips-2001-A Model of the Phonological Loop: Generalization and Binding
10 0.057191212 78 nips-2001-Fragment Completion in Humans and Machines
11 0.053935129 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway
12 0.053209037 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules
13 0.050073661 28 nips-2001-Adaptive Nearest Neighbor Classification Using Support Vector Machines
14 0.043722231 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes
15 0.043642178 111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation
16 0.042348094 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision
17 0.038528841 23 nips-2001-A theory of neural integration in the head-direction system
18 0.036693238 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons
19 0.036455244 19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network
20 0.034711912 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement
topicId topicWeight
[(0, -0.087), (1, -0.147), (2, -0.087), (3, 0.025), (4, 0.023), (5, 0.027), (6, -0.027), (7, -0.004), (8, 0.016), (9, 0.041), (10, 0.024), (11, 0.038), (12, 0.004), (13, -0.013), (14, 0.124), (15, 0.008), (16, -0.009), (17, -0.014), (18, -0.152), (19, -0.057), (20, 0.108), (21, 0.145), (22, 0.061), (23, 0.07), (24, -0.017), (25, 0.053), (26, -0.124), (27, 0.152), (28, 0.03), (29, -0.118), (30, 0.204), (31, 0.143), (32, -0.208), (33, 0.042), (34, -0.036), (35, 0.095), (36, -0.026), (37, 0.116), (38, -0.009), (39, -0.011), (40, -0.04), (41, -0.008), (42, 0.014), (43, -0.085), (44, 0.016), (45, 0.061), (46, -0.015), (47, -0.05), (48, 0.008), (49, -0.056)]
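The (index, weight) vectors above are per-paper embedding coordinates, and the simValue entries in the surrounding blocks look like similarity scores computed between such vectors. A hedged sketch of cosine similarity over sparse vectors of this form (assuming, without confirmation from the source, that cosine is the measure used):

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two sparse weight vectors given as
    {index: weight} dicts; returns a value in [-1, 1]."""
    dot = sum(w * v.get(i, 0.0) for i, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0  # an all-zero vector has no defined direction
    return dot / (norm_u * norm_v)
```

Under this reading, the "same-paper" rows scoring ~0.99 are a paper compared against itself, and the remaining rows rank the other papers by decreasing similarity.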
simIndex simValue paperId paperTitle
same-paper 1 0.99233043 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction
Author: Neil Burgess, Tom Hartley
2 0.6468488 42 nips-2001-Bayesian morphometry of hippocampal cells suggests same-cell somatodendritic repulsion
Author: Giorgio A. Ascoli, Alexei V. Samsonovich
3 0.59657001 96 nips-2001-Information-Geometric Decomposition in Spike Analysis
Author: Hiroyuki Nakahara, Shun-ichi Amari
4 0.45746744 37 nips-2001-Associative memory in realistic neuronal networks
Author: Peter E. Latham
5 0.43232459 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity
Author: Antonino Casile, Michele Rucci
6 0.42421362 10 nips-2001-A Hierarchical Model of Complex Cells in Visual Cortex for the Binocular Perception of Motion-in-Depth
7 0.36758333 2 nips-2001-3 state neurons for contextual processing
8 0.3251974 158 nips-2001-Receptive field structure of flow detectors for heading perception
9 0.30013257 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex
10 0.29567766 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway
11 0.25994304 78 nips-2001-Fragment Completion in Humans and Machines
12 0.25887227 12 nips-2001-A Model of the Phonological Loop: Generalization and Binding
13 0.24878919 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision
14 0.24797019 176 nips-2001-Stochastic Mixed-Signal VLSI Architecture for High-Dimensional Kernel Machines
15 0.24503432 145 nips-2001-Perceptual Metamers in Stereoscopic Vision
16 0.22540037 111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation
17 0.22099902 93 nips-2001-Incremental A*
18 0.20755734 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement
19 0.20614563 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules
20 0.20046212 19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network
topicId topicWeight
[(19, 0.06), (27, 0.055), (30, 0.068), (37, 0.429), (38, 0.04), (59, 0.03), (72, 0.018), (79, 0.026), (91, 0.158)]
simIndex simValue paperId paperTitle
same-paper 1 0.84944296 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction
Author: Neil Burgess, Tom Hartley
Abstract: We present a model of the firing of place and head-direction cells in rat hippocampus. The model can predict the response of individual cells and populations to parametric manipulations of both geometric (e.g. O'Keefe & Burgess, 1996) and orientational (Fenton et al., 2000a) cues, extending a previous geometric model (Hartley et al., 2000). It provides a functional description of how these cells' spatial responses are derived from the rat's environment and makes easily testable quantitative predictions. Consideration of the phenomenon of remapping (Muller & Kubie, 1987; Bostock et al., 1991) indicates that the model may also be consistent with nonparametric changes in firing, and provides constraints for its future development.
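The geometric model cited here (Hartley et al., 2000) predicts a place field as a thresholded sum of inputs tuned to the distances of environmental boundaries. The sketch below is a loose illustrative reconstruction of that idea — the tuning widths, preferred distances, and threshold are invented for the example, not the paper's fitted values — showing a model cell whose field peaks where the preferred wall distances are simultaneously satisfied.

```python
import numpy as np

def boundary_input(d, d_pref, sigma0=0.08, beta=0.5):
    """Gaussian tuning to the distance of a wall; tuning width grows
    with the preferred distance (illustrative parameter values)."""
    sigma = sigma0 * (d_pref / beta + 1.0)
    return np.exp(-(d - d_pref) ** 2 / (2.0 * sigma ** 2))

def place_rate(x, y, box_w, box_h):
    """Model place cell: thresholded sum of four boundary-tuned inputs,
    one per wall of a box of size box_w x box_h (metres)."""
    drive = (boundary_input(x, 0.3)            # west wall
             + boundary_input(box_w - x, 0.6)  # east wall
             + boundary_input(y, 0.3)          # south wall
             + boundary_input(box_h - y, 0.6)) # north wall
    return max(0.0, drive - 1.0)               # threshold nonlinearity

# Firing-rate map in a 0.9 m square box: the field peaks where the
# preferred wall distances (0.3 m west, 0.6 m east, ...) are all met.
xs = np.linspace(0.01, 0.89, 45)
rate = np.array([[place_rate(x, y, 0.9, 0.9) for x in xs] for y in xs])
iy, ix = np.unravel_index(rate.argmax(), rate.shape)
print(xs[ix], xs[iy])
```

Because each input is tied to a wall rather than to an absolute position, stretching the box shifts the field with the walls — the signature of the parametric geometric manipulations (O'Keefe & Burgess, 1996) the abstract says the model predicts.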
2 0.43880671 137 nips-2001-On the Convergence of Leveraging
Author: Gunnar Rätsch, Sebastian Mika, Manfred K. Warmuth
Abstract: We give a unified convergence analysis of ensemble learning methods including, e.g., AdaBoost, Logistic Regression and the Least-SquareBoost algorithm for regression. These methods have in common that they iteratively call a base learning algorithm which returns hypotheses that are then linearly combined. We show that these methods are related to the Gauss-Southwell method known from numerical optimization and state non-asymptotic convergence results for all these methods. Our analysis includes ℓ1-norm regularized cost functions, leading to a clean and general way to regularize ensemble learning.
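The "iteratively call a base learner, then linearly combine its hypotheses" scheme analyzed here can be made concrete with a bare-bones AdaBoost over decision stumps (a generic textbook sketch, not the paper's algorithm or notation; the dataset is a toy one-dimensional interval problem):

```python
import numpy as np

def stump_predict(X, j, t, s):
    """Decision stump: predict s where x_j <= t, and -s elsewhere."""
    return s * np.where(X[:, j] <= t, 1, -1)

def train_adaboost(X, y, n_rounds=40):
    """AdaBoost: each round the base learner (exhaustive stump search)
    returns a hypothesis, which joins a linear combination with
    coefficient alpha derived from its weighted error."""
    n = len(y)
    w = np.full(n, 1.0 / n)               # example weights
    ensemble = []                         # (feature, threshold, sign, alpha)
    for _ in range(n_rounds):
        best = None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = stump_predict(X, j, t, s)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s, pred)
        err, j, t, s, pred = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        ensemble.append((j, t, s, alpha))
        w *= np.exp(-alpha * y * pred)    # exponential-loss reweighting
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for j, t, s, alpha in ensemble:
        score += alpha * stump_predict(X, j, t, s)
    return np.sign(score)

# An interval concept no single stump can represent; the boosted
# linear combination of stumps classifies it perfectly.
X = np.array([[0.1], [0.4], [0.5], [0.6], [0.9]])
y = np.array([-1, 1, 1, 1, -1])
model = train_adaboost(X, y)
print((predict(model, X) == y).all())
```

Each round is a coordinate-wise step on the exponential loss — picking the base hypothesis (coordinate) with the largest weighted edge — which is exactly the Gauss-Southwell connection the abstract refers to.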
3 0.38576517 111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation
Author: Heiko Wersing
Abstract: We present a new approach to the supervised learning of lateral interactions for the competitive layer model (CLM) dynamic feature binding architecture. The method is based on consistency conditions, which were recently shown to characterize the attractor states of this linear threshold recurrent network. For a given set of training examples the learning problem is formulated as a convex quadratic optimization problem in the lateral interaction weights. An efficient dimension reduction of the learning problem can be achieved by using a linear superposition of basis interactions. We show the successful application of the method to a medical image segmentation problem of fluorescence microscope cell images.
4 0.38351366 66 nips-2001-Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms
Author: Roni Khardon, Dan Roth, Rocco A. Servedio
Abstract: We study online learning in Boolean domains using kernels which capture feature expansions equivalent to using conjunctions over basic features. We demonstrate a tradeoff between the computational efficiency with which these kernels can be computed and the generalization ability of the resulting classifier. We first describe several kernel functions which capture either limited forms of conjunctions or all conjunctions. We show that these kernels can be used to efficiently run the Perceptron algorithm over an exponential number of conjunctions; however we also prove that using such kernels the Perceptron algorithm can make an exponential number of mistakes even when learning simple functions. We also consider an analogous use of kernel functions to run the multiplicative-update Winnow algorithm over an expanded feature space of exponentially many conjunctions. While known upper bounds imply that Winnow can learn DNF formulae with a polynomial mistake bound in this setting, we prove that it is computationally hard to simulate Winnow’s behavior for learning DNF over such a feature set, and thus that such kernel functions for Winnow are not efficiently computable.
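One kernel of the kind this abstract describes — for the class of all conjunctions over Boolean literals and their negations — counts the conjunctions (including the empty one) satisfied by both inputs: K(x, y) = 2^same(x, y), where same(x, y) is the number of agreeing coordinates. A small sketch of a kernelized Perceptron using it (the target concept and data are invented for illustration):

```python
import numpy as np

def conj_kernel(x, y):
    """Number of conjunctions over literals (including the empty one)
    satisfied by both Boolean vectors: 2 ** (# agreeing coordinates)."""
    return 2.0 ** np.sum(x == y)

def kernel_perceptron(X, y, epochs=100):
    """Perceptron run implicitly over the exponentially large feature
    space of all conjunctions, via the kernel trick."""
    n = len(y)
    alpha = np.zeros(n)                   # per-example mistake counts
    K = np.array([[conj_kernel(a, b) for b in X] for a in X])
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            if y[i] * np.sum(alpha * y * K[:, i]) <= 0:
                alpha[i] += 1.0           # mistake-driven update
                mistakes += 1
        if mistakes == 0:                 # converged on training set
            break
    return alpha, K

# Target concept: x1 AND NOT x3, over 4 Boolean features.
X = np.array([[1, 0, 0, 1], [1, 1, 0, 0], [0, 1, 0, 1],
              [1, 0, 1, 0], [0, 0, 1, 1], [1, 1, 0, 1]])
y = np.array([1 if (r[0] == 1 and r[2] == 0) else -1 for r in X])
alpha, K = kernel_perceptron(X, y)
print((np.sign((alpha * y) @ K) == y).all())
```

The kernel evaluation costs O(n) per pair while the implicit feature space has 3^n conjunction coordinates — the efficiency side of the tradeoff; the paper's negative results concern how many mistakes the Perceptron can make in that space despite the cheap kernel.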
5 0.38237557 123 nips-2001-Modeling Temporal Structure in Classical Conditioning
Author: Aaron C. Courville, David S. Touretzky
Abstract: The Temporal Coding Hypothesis of Miller and colleagues [7] suggests that animals integrate related temporal patterns of stimuli into single memory representations. We formalize this concept using quasi-Bayes estimation to update the parameters of a constrained hidden Markov model. This approach allows us to account for some surprising temporal effects in the second order conditioning experiments of Miller et al. [1, 2, 3], which other models are unable to explain.
6 0.38184637 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds
7 0.379758 144 nips-2001-Partially labeled classification with Markov random walks
8 0.37955785 148 nips-2001-Predictive Representations of State
9 0.3778092 107 nips-2001-Latent Dirichlet Allocation
10 0.37731051 18 nips-2001-A Rational Analysis of Cognitive Control in a Speeded Discrimination Task
11 0.37691736 182 nips-2001-The Fidelity of Local Ordinal Encoding
12 0.3763417 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments
13 0.37564629 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway
14 0.37527275 100 nips-2001-Iterative Double Clustering for Unsupervised and Semi-Supervised Learning
15 0.37466657 54 nips-2001-Contextual Modulation of Target Saliency
16 0.3743335 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks
17 0.37259659 68 nips-2001-Entropy and Inference, Revisited
18 0.37199816 96 nips-2001-Information-Geometric Decomposition in Spike Analysis
19 0.37172824 183 nips-2001-The Infinite Hidden Markov Model
20 0.37088439 169 nips-2001-Small-World Phenomena and the Dynamics of Information