nips nips2000 nips2000-87 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Suzanna Becker, Neil Burgess
Abstract: We present a computational model of the neural mechanisms in the parietal and temporal lobes that support spatial navigation, recall of scenes and imagery of the products of recall. Long term representations are stored in the hippocampus, and are associated with local spatial and object-related features in the parahippocampal region. Viewer-centered representations are dynamically generated from long term memory in the parietal part of the model. The model thereby simulates recall and imagery of locations and objects in complex environments. After parietal damage, the model exhibits hemispatial neglect in mental imagery that rotates with the imagined perspective of the observer, as in the famous Milan Square experiment [1]. Our model makes novel predictions for the neural representations in the parahippocampal and parietal regions and for behavior in healthy volunteers and neuropsychological patients.
Reference: text
sentIndex sentText sentNum sentScore
1 Modelling spatial recall, mental imagery and neglect Suzanna Becker Department of Psychology McMaster University 1280 Main Street West Hamilton, Ont. [sent-1, score-0.389]
2 Abstract: We present a computational model of the neural mechanisms in the parietal and temporal lobes that support spatial navigation, recall of scenes and imagery of the products of recall. [sent-6, score-0.723]
3 Long term representations are stored in the hippocampus, and are associated with local spatial and object-related features in the parahippocampal region. [sent-7, score-0.418]
4 Viewer-centered representations are dynamically generated from long term memory in the parietal part of the model. [sent-8, score-0.549]
5 The model thereby simulates recall and imagery of locations and objects in complex environments. [sent-9, score-0.327]
6 After parietal damage, the model exhibits hemispatial neglect in mental imagery that rotates with the imagined perspective of the observer, as in the famous Milan Square experiment [1]. [sent-10, score-0.82]
7 Our model makes novel predictions for the neural representations in the parahippocampal and parietal regions and for behavior in healthy volunteers and neuropsychological patients. [sent-11, score-0.736]
8 These representations, and the ability to translate between them, have been accounted for in several computational models of the parietal cortex, e.g. [sent-14, score-0.513]
9 In other situations, such as route planning or recall and imagery of scenes and events, one must also rely upon representations of spatial layouts from long-term memory. [sent-17, score-0.383]
10 Neuropsychological and neuroimaging studies implicate both the parietal and hippocampal regions in such tasks [4, 5], with the long-term memory component associated with the hippocampus. [sent-18, score-0.664]
11 The discovery of "place cells" in the hippocampus [6] provides evidence that hippocampal representations are allocentric, in that absolute locations in open spaces are encoded irrespective of viewing direction. [sent-19, score-0.418]
12 This paper addresses the nature and source of the spatial representations in the hippocampal and parietal regions, and how they interact during recall and navigation. [sent-20, score-0.835]
13 We assume that in the hippocampus proper, long-term spatial memories are stored allocentrically, whereas in the parietal cortex view-based images are created on-the-fly during perception or recall. [sent-21, score-0.697]
14 Intuitively it makes sense to use an allocentric representation for long-term storage as the position of the body will have changed before recall. [sent-22, score-0.318]
15 However, to act on the environment (e.g. to reach with the hand) or to imagine a scene, an egocentric representation is required. [sent-25, score-0.429]
16 A study of hemispatial neglect patients throws some light on the interaction of long-term memory with mental imagery. [sent-28, score-0.325]
17 Bisiach and Luzzatti [1] asked two patients to recall the buildings from the familiar Cathedral Square in Milan, after being asked to imagine (i) facing the cathedral, and (ii) facing in the opposite direction. [sent-29, score-0.389]
18 Both patients, in both (i) and (ii), predominantly recalled buildings that would have appeared on their right from the specified viewpoint. [sent-30, score-0.235]
19 Since the buildings recalled in (i) were located physically on the opposite side of the square to those recalled in (ii), the patients' long-term memory for all of the buildings in the square was apparently intact. [sent-31, score-0.597]
20 Further, the area neglected rotated according to the patient's imagined viewpoint, suggesting that their impairment relates to the generation of egocentric mental images from a non-egocentric long-term store. [sent-32, score-0.526]
21 Object information from the ventral visual processing stream enters the hippocampal formation (medial entorhinal cortex) via the perirhinal cortex, while visuospatial information from the dorsal pathways enters lateral entorhinal cortex primarily via the parahippocampal cortex [9]. [sent-36, score-0.962]
22 We extend the O'Keefe & Burgess [10] hippocampal model to include object-place associations by encoding object features in perirhinal cortex (we refer to these features as texture, but they could also be attributes such as colour, shape or size). [sent-37, score-0.571]
23 Reciprocal connections to the parahippocampus allow object features to cue the hippocampus to activate a remembered location in an environment, and conversely, a remembered location can be used to reactivate the feature information of objects at that location. [sent-38, score-0.93]
24 The connections from parietal to parahippocampal areas allow the remembered location to be specified in egocentric imagery. [sent-39, score-1.265]
25 [Figure 1 labels: parietal ego-allo translation; medial parietal egocentric locations] [sent-41, score-1.319]
26 Note the allocentric encoding of direction (NSEW) in parahippocampus, and the egocentric encoding of directions (LR) in medial parietal cortex. [sent-60, score-1.274]
27 An allocentric representation of object location is extracted from the ventral visual stream in the parahippocampus, and feeds into the hippocampus. [sent-62, score-0.501]
28 The dorsal visual stream provides an egocentric representation of object location in medial parietal areas and makes bi-directional contact with the parahippocampus via posterior parietal area 7a. [sent-63, score-1.741]
29 Inputs carrying allocentric heading direction information [11] project to both parietal and parahippocampal regions, allowing bidirectional translation from allocentric to egocentric directions. [sent-64, score-1.745]
30 Recurrent connections in the hippocampus allow recall from long-term memory via the parahippocampus, and egocentric imagery in the medial parietal areas. [sent-65, score-1.356]
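The translation this architecture performs reduces, for directions, to subtracting or adding the current heading. A minimal sketch of that relationship (the function names are illustrative, not from the paper):

```python
import numpy as np

def allo_to_ego(allo_dir, heading):
    """Allocentric direction (e.g. a compass bearing, in radians) to an
    egocentric direction relative to the current heading."""
    return np.mod(allo_dir - heading, 2 * np.pi)

def ego_to_allo(ego_dir, heading):
    """Inverse mapping: egocentric direction back to allocentric."""
    return np.mod(ego_dir + heading, 2 * np.pi)

# A landmark due north (0 rad) seen while heading east (pi/2)
# appears at 3*pi/2 egocentrically, i.e. 90 degrees to the left.
assert np.isclose(allo_to_ego(0.0, np.pi / 2), 3 * np.pi / 2)
```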
31 The hippocampal formation (HF) consists of several regions - the entorhinal cortex, dentate gyrus, CA3, and CA1, each of which appears to code for space with varying degrees of sparseness. [sent-69, score-0.285]
32 To simplify, in our model the HF is represented by a single layer of "place cells", each tuned to random, fixed configurations of spatial features as in [10, 12]. [sent-70, score-0.265]
33 The HF receives spatial and object-feature inputs from the parahippocampal cortex (PH) and perirhinal cortex (PR), respectively. [sent-72, score-0.496]
34 The parahippocampal representation of object locations is simulated as a layer of neurons, each of which is tuned to respond whenever there is a landmark at a given distance and allocentric direction from the subject. [sent-73, score-0.996]
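A hedged sketch of this parahippocampal tuning: each unit's response falls off as a Gaussian in landmark distance and as a circular (von Mises style) function of allocentric direction. The tuning widths are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ph_unit_response(lm_dist, lm_dir, pref_dist, pref_dir,
                     sigma_dist=0.5, kappa=4.0):
    """Response of one parahippocampal unit to a landmark at distance
    lm_dist and allocentric direction lm_dir (radians)."""
    dist_tuning = np.exp(-(lm_dist - pref_dist) ** 2 / (2 * sigma_dist ** 2))
    dir_tuning = np.exp(kappa * (np.cos(lm_dir - pref_dir) - 1.0))  # circular
    return dist_tuning * dir_tuning

# A landmark 2 m away, due east, maximally drives a unit tuned to (2 m, east).
print(ph_unit_response(2.0, np.pi / 2, 2.0, np.pi / 2))  # -> 1.0
```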
35 Projections from this representation into the hippocampus drive the firing of place cells. [sent-74, score-0.27]
36 This representation has been shown to account for the properties of place cells recorded across environments of varying shape and size [10, 12]. [sent-75, score-0.269]
37 Recurrent connections between place cells allow subsequent pattern completion in the place cell layer. [sent-76, score-0.528]
38 Return projections from the place cells to the parahippocampus allow reactivation of all landmark location information consistent with the current location. [sent-77, score-0.528]
39 The perirhinal representation in our model consists of a layer of neurons, each tuned to a particular textural feature. [sent-78, score-0.331]
40 Thus, in our model, object features can be used to cue the hippocampal system to activate a remembered location in an environment, and conversely, a remembered location can activate all associated object textures. [sent-80, score-0.935]
41 Further, each allocentric spatial feature unit in the parahippocampus projects to the perirhinal object feature units so that attention to one location can activate a particular object's features. [sent-81, score-0.884]
42 2 Parietal cortex. Neurons responding to specific egocentric stimulus locations (e.g. [sent-83, score-0.563]
43 relative to the eye, head or hand) have been recorded in several parietal areas. [sent-85, score-0.511]
44 Tasks involving imagery of the products of retrieval tend to activate medial parietal areas (precuneus, posterior cingulate, retrosplenial cortex) in neuroimaging studies [14]. [sent-86, score-0.741]
45 We hypothesize that there is a medial parietal egocentric map of space, coding for the locations of objects organised by distance and angle from the body midline. [sent-87, score-1.155]
46 In this representation cells are tuned to respond to the presence of an object at a specific distance in a specific egocentric direction. [sent-88, score-0.71]
47 Cells have also been reported in posterior parietal areas with egocentrically tuned responses that are modulated by variables such as eye position [15] or body orientation (in area 7a [16]). [sent-89, score-0.496]
48 We hypothesize that area 7a performs the translation between allocentric and egocentric representations so that, as well as being driven directly by perception, the medial parietal egocentric map can be driven by recalled allocentric parahippocampal representations. [sent-91, score-2.257]
49 We consider here simply the translation between allocentric and view-dependent representations, requiring a modulatory input from the head direction system. [sent-92, score-0.445]
50 A more detailed model would include translations between allocentric and body, head and eye centered representations, and possibly use of retrosplenial areas to buffer these intermediate representations [18]. [sent-93, score-0.447]
51 The translation between parahippocampal and parietal representations occurs via a hardwired mapping of each to an expanded set of egocentric representations, each modulated by head direction so that one is fully activated for each (coarse coded) head direction (see Figure 1). [sent-94, score-1.488]
52 Given activation from the appropriate head direction unit, activity in the parahippocampal or parietal representation can drive the corresponding cell in the other representation via this expanded representation. [sent-95, score-1.124]
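A minimal sketch of this expanded-layer translation, assuming coarse polar maps and one hard-wired copy of the egocentric map per discrete head direction; the rotation along the direction axis implements ego = allo - heading (array shapes and the gating scheme are illustrative assumptions):

```python
import numpy as np

N_DIRS, N_DISTS, N_HEADINGS = 8, 5, 8  # coarse polar maps; one sub-map per heading

def ph_to_mp(ph_map, head_dir_act):
    """Translate a parahippocampal (allocentric) polar map into a medial
    parietal (egocentric) map, gated by head-direction activity.

    ph_map:       (N_DIRS, N_DISTS) activity over allocentric direction x distance
    head_dir_act: (N_HEADINGS,) coarse-coded head-direction activity, summing to 1
    """
    mp_map = np.zeros_like(ph_map)
    for h, gate in enumerate(head_dir_act):
        if gate > 0.0:
            # The sub-map for heading h is the PH map rotated by -h steps
            # along the direction axis (egocentric = allocentric - heading).
            mp_map += gate * np.roll(ph_map, -h, axis=0)
    return mp_map

# A landmark to the allocentric north (row 0), with heading east (unit 2 of 8),
# lands at egocentric direction index 6, i.e. to the left.
ph = np.zeros((N_DIRS, N_DISTS)); ph[0, 2] = 1.0
hd = np.zeros(N_HEADINGS); hd[2] = 1.0
print(np.argwhere(ph_to_mp(ph, hd) == 1.0))  # [[6 2]]
```

Running the same mapping with the opposite roll (+h) gives the inverse, parietal-to-parahippocampal direction, matching the bidirectional translation described above.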
53 3 Simulation details. The hippocampal component of the model was trained on the spatial environment shown in the top-left panel of Figure 2, representing the buildings of the Milan square. [sent-97, score-0.431]
54 The HF place cells were preassigned to cover a grid of locations in the environment, with each cell's activation falling off as a Gaussian of the distance to its preferred location. [sent-102, score-0.431]
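A sketch of these preassigned Gaussian place fields (the grid resolution and field width are illustrative assumptions):

```python
import numpy as np

def place_cell_activations(pos, centres, sigma=0.1):
    """Activation of each HF place cell at location pos: a Gaussian of the
    distance between pos and the cell's preferred location."""
    d2 = np.sum((centres - pos) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

# A 10x10 grid of preferred locations covering a unit-square environment.
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
centres = np.stack([xs.ravel(), ys.ravel()], axis=1)
activity = place_cell_activations(np.array([0.35, 0.6]), centres)
```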
55 The weights to the perirhinal (PR) object feature units - on the HF-to-PR and PH-to-PR connections - were trained by simulating sequential attention to each visible object, from each training location. [sent-104, score-0.367]
56 Thus, a single object's textural features in the PR layer were associated with the corresponding PH location features and HF place cell activations via Hebbian learning. [sent-105, score-0.639]
57 The PR-to-HF weights were trained to associate each training location with the single predominant texture - either that of a nearby object or that of the background. [sent-106, score-0.329]
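A hedged sketch of the Hebbian association step: the attended object's PR texture activity is associated with the co-active PH location features and HF place-cell activity via an outer-product update (the learning rate and toy layer sizes are assumptions, not from the paper):

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.1):
    """Hebbian outer-product update: W[i, j] grows when post-synaptic
    unit i and pre-synaptic unit j are co-active."""
    return W + lr * np.outer(post, pre)

n_ph, n_hf, n_pr = 100, 64, 10          # toy layer sizes
W_ph_pr = np.zeros((n_pr, n_ph))        # PH -> PR weights
W_hf_pr = np.zeros((n_pr, n_hf))        # HF -> PR weights

# One simulated attention step: the attended object's texture (PR unit 3)
# is paired with the current PH features and place-cell activity.
ph_act = np.random.rand(n_ph)
hf_act = np.random.rand(n_hf)
pr_act = np.zeros(n_pr); pr_act[3] = 1.0

W_ph_pr = hebbian_update(W_ph_pr, ph_act, pr_act)
W_hf_pr = hebbian_update(W_hf_pr, hf_act, pr_act)
```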
58 The connections to and within the parietal component of the model were hard-wired to implement the bidirectional allocentric-egocentric mappings (these are functionally equivalent to a rotation by adding or subtracting the heading angle). [sent-107, score-0.605]
59 The 2-layer parietal circuit in Figure 1 essentially encodes separate transformation matrices for each of a discrete set of head directions in the first layer. [sent-108, score-0.552]
60 A right parietal lesion causing left neglect was simulated with graded, random knockout to units in the egocentric map of the left side of space. [sent-109, score-1.033]
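A sketch of the simulated lesion, assuming the knockout probability grows gradedly toward the far egocentric left (the gradient shape is an illustrative assumption):

```python
import numpy as np

def lesion_egocentric_map(mp_map, ego_dirs, rng, max_knockout=0.9):
    """Graded random knockout of MP units coding left egocentric space.

    mp_map:   (n_dirs, n_dists) egocentric map activity
    ego_dirs: (n_dirs,) egocentric direction per row, in (-pi, pi]; negative = left
    """
    damaged = mp_map.copy()
    for i, d in enumerate(ego_dirs):
        if d < 0:  # left hemifield only
            p_knockout = max_knockout * (-d / np.pi)  # more severe further left
            survives = rng.random(mp_map.shape[1]) > p_knockout
            damaged[i] *= survives
    return damaged

rng = np.random.default_rng(0)
ego_dirs = np.linspace(-np.pi * 7 / 8, np.pi, 8)  # 8 coarse egocentric directions
lesioned = lesion_egocentric_map(np.ones((8, 5)), ego_dirs, rng)
```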
61 In simulation 1, the model was required to recall the allocentric representation of the Milan square after being cued with the texture and direction (θ_j) of each of the visible buildings in turn, at a short distance r_j. [sent-114, score-0.941]
62 The initial input to the HF, I^HF(t = 0), was the sum of an externally provided texture cue from the PR cell layer and a distance and direction cue from the PH cell layer, obtained by initializing the PH states using equation 1 with r_j = 2. [sent-115, score-0.577]
63 A place was then recalled by repeatedly updating the HF cells' inputs I^HF(t) and activations A^HF(t) until convergence. [sent-116, score-0.248]
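A minimal sketch of settling dynamics of this kind, assuming standard recurrent pattern completion with collateral weights $W$ and a saturating activation function $f$ (the exact update in the paper may differ):

$$
I^{HF}_i(t) = I^{HF}_i(0) + \sum_j W_{ij}\, A^{HF}_j(t-1),
\qquad
A^{HF}_i(t) = f\!\big(I^{HF}_i(t)\big),
$$

iterated until $A^{HF}(t)$ stops changing.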
64 First, the PH cells and HF place cells were initialized to the states of the retrieved spatial location (obtained after settling in simulation 1). [sent-128, score-0.552]
65 The model was then asked what it "saw" in various directions by simulating focused attention on the egocentric map, and requiring the model to retrieve the object texture at that location via activation of the PR region. [sent-129, score-0.904]
66 The egocentric medial parietal (MP) activation was calculated from the PH-to-MP mapping, as described above. [sent-130, score-0.958]
67 Attention to a queried egocentric direction was simulated by modulating the pattern of activation across the MP layer with a Gaussian filter centered on that location. [sent-131, score-0.627]
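A sketch of that attentional query, assuming a Gaussian window over the egocentric direction axis of the MP map (the window width is an illustrative assumption):

```python
import numpy as np

def attend_direction(mp_map, ego_dirs, query_dir, sigma=0.3):
    """Modulate the MP map with a Gaussian filter centred on the queried
    egocentric direction, then pool over directions to get the distance
    profile of whatever lies that way."""
    # circular difference between each row's direction and the query
    delta = np.angle(np.exp(1j * (ego_dirs - query_dir)))
    gain = np.exp(-delta ** 2 / (2 * sigma ** 2))
    return (mp_map * gain[:, None]).sum(axis=0)
```

The pooled profile can then drive the PR layer through the learned weights to retrieve the texture of the attended object.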
68 4 Results and discussion. In simulation 1, when cued with the textures of each of the 5 buildings around the training region, the model settled on an appropriate place cell activation. [sent-133, score-0.477]
69 The model was cued with the texture of the cathedral front, and settled to a place representation near to its southwest corner. [sent-135, score-0.432]
70 The resulting PH layer activations show correct recall of the locations of the other landmarks around the square. [sent-136, score-0.308]
71 In simulation 2, shown in the lower panel, the model rotated the PH map according to the cued heading direction, and was able to retrieve correctly the texture of each building when queried with its egocentric direction. [sent-137, score-0.868]
72 In the lesioned model, buildings to the egocentric left were usually not identified correctly. [sent-138, score-0.554]
73 The building to the left has texture 5, and the building to the right has texture 7. [sent-141, score-0.332]
74 After a simulated parietal lesion, the model neglects building 5. [sent-142, score-0.503]
75 3 Predictions and future directions. We have demonstrated how egocentric spatial representations may be formed from allocentric ones and vice versa. [sent-143, score-0.856]
76 The entorhinal cortex (EC) is the major cortical input zone to the hippocampus, and both the parahippocampal and perirhinal regions project to it [13]. [sent-145, score-0.492]
77 Single cell recordings in EC indicate tuning curves that are broadly similar to those of place cells, but are much more coarsely tuned and less specific to individual episodes [21, 9]. [sent-146, score-0.26]
78 Additionally, EC cells can hold state information, such as a spatial location or object identity, over long time delays and even across intervening items [9]. [sent-147, score-0.398]
79 An allocentric representation could emerge if the EC is under pressure to use a more compressed, temporally stable code to reconstruct the rapidly changing visuospatial input. [sent-148, score-0.316]
80 An egocentric map is altered dramatically after changes in viewpoint, whereas an allocentric map is not. [sent-149, score-0.731]
81 Thus, the PH and hippocampal representations could evolve via an unsupervised learning procedure that discovers a temporally stable, generative model of the parietal input. [sent-150, score-0.703]
82 The inverse mapping from allocentric PH features to egocentric parietal features could be learned by training the back-projections similarly. [sent-151, score-1.142]
83 But how could the egocentric map in the parietal region be learned in the first place? [sent-152, score-0.86]
84 We note that our parietal imagery system might also support the short-term visuospatial working memory required in more perceptual tasks (e.g. [sent-154, score-0.631]
85 In our model, sparse random connections from the object layer to the place layer ensure a high degree of initial place-tuning that should generalize across similar environments. [sent-163, score-0.474]
86 Plasticity in the HF-PR connections will allow unique textures of walls, buildings, etc. to be associated with particular places; thus, after extensive exposure, environment-specific place firing patterns should emerge. [sent-164, score-0.353]
87 A selective lesion to the parahippocampus should abolish the ability to make allocentric object-place associations altogether, thereby severely disrupting both landmark-based and memory-based navigation. [sent-165, score-0.413]
88 In contrast, a pure hippocampal lesion would spare the ability to represent a single object's distance and allocentric directions from a location, so navigation based on a single landmark should be spared. [sent-166, score-0.584]
89 If an arrangement of objects is viewed in a 3-D environment, the recall or recognition of the arrangement from a new viewpoint will be facilitated by having formed an allocentric representation of their locations. [sent-167, score-0.425]
90 [Figure panels: PR activations (control); MP activations with neglect; PR activations (lesioned); x-axis: texture neuron index, 0-10] [sent-175, score-0.24]
91 Middle: HF place cell activations, after being cued that building #1 is nearby and to the north. [sent-179, score-0.361]
92 Place cells are arranged in a polar coordinate grid according to the distance and direction of their preferred locations relative to the centre of the environment (bright white spot). [sent-180, score-0.357]
93 Right: PH inputs to place cell layer are plotted in polar coordinates, representing the recalled distances and directions of visible edges associated with the maximally activated location. [sent-183, score-0.505]
94 The externally cued heading direction is also shown here. [sent-184, score-0.301]
95 Left: An imagined view in the egocentric map layer (MP), given that the heading direction is south; the visible edges shown above have been rotated by 180 degrees. [sent-187, score-0.836]
96 Mid-left: the recalled texture features in the PR layer are plotted in two different conditions, simulating attention to the right (circles) and left (stars). [sent-188, score-0.377]
97 Mid-right and right: Similarly, the MP and PR activations are shown after damage to the left side of the egocentric map. [sent-189, score-0.493]
98 If the stimulus evokes erroneous vestibular or somatosensory inputs that shift the perceived head direction leftward, then all objects will be mapped further rightward in egocentric space, into the 'good side' of the parietal map in a lesioned model. [sent-191, score-1.13]
99 O'Keefe, editors, The Hippocampal and Parietal Foundations of Spatial Cognition. [sent-263, score-0.684]
100 O'Keefe, editors, The Hippocampal and Parietal Foundations of Spatial Cognition. [sent-404, score-0.684]
wordName wordTfidf (topN-words)
[('parietal', 0.423), ('egocentric', 0.389), ('allocentric', 0.246), ('ph', 0.207), ('parahippocampal', 0.196), ('burgess', 0.166), ('hippocampal', 0.164), ('hf', 0.153), ('place', 0.143), ('imagery', 0.135), ('buildings', 0.13), ('parahippocampus', 0.12), ('perirhinal', 0.12), ('texture', 0.114), ('object', 0.113), ('milan', 0.105), ('recalled', 0.105), ('heading', 0.104), ('location', 0.102), ('neglect', 0.098), ('medial', 0.098), ('spatial', 0.097), ('cortex', 0.09), ('cued', 0.09), ('pr', 0.088), ('head', 0.088), ('hippocampus', 0.087), ('cells', 0.086), ('layer', 0.085), ('locations', 0.084), ('representations', 0.083), ('direction', 0.077), ('cell', 0.076), ('remembered', 0.075), ('activations', 0.071), ('recall', 0.068), ('mp', 0.065), ('patients', 0.065), ('hemispatial', 0.06), ('jeffery', 0.06), ('mental', 0.059), ('activate', 0.055), ('visible', 0.055), ('building', 0.052), ('entorhinal', 0.052), ('activation', 0.048), ('connections', 0.048), ('map', 0.048), ('lesion', 0.047), ('ahf', 0.045), ('cathedral', 0.045), ('imagined', 0.045), ('landmark', 0.045), ('maguire', 0.045), ('textural', 0.045), ('memory', 0.043), ('square', 0.042), ('features', 0.042), ('tuned', 0.041), ('directions', 0.041), ('distance', 0.041), ('environment', 0.04), ('objects', 0.04), ('representation', 0.04), ('cue', 0.039), ('simulation', 0.038), ('lesioned', 0.035), ('ec', 0.035), ('formation', 0.035), ('translation', 0.034), ('regions', 0.034), ('via', 0.033), ('rotated', 0.033), ('damage', 0.033), ('asked', 0.033), ('body', 0.032), ('allow', 0.032), ('attention', 0.031), ('viewpoint', 0.031), ('act', 0.03), ('ang', 0.03), ('becker', 0.03), ('bidirectional', 0.03), ('bisiach', 0.03), ('cacucd', 0.03), ('externally', 0.03), ('facing', 0.03), ('frackowiak', 0.03), ('frith', 0.03), ('guariglia', 0.03), ('ifr', 0.03), ('lever', 0.03), ('retrosplenial', 0.03), ('spiers', 0.03), ('suzuki', 0.03), ('udir', 0.03), ('vestibular', 0.03), ('visuospatial', 0.03), ('preferred', 0.029), ('simulated', 0.028)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000001 87 nips-2000-Modelling Spatial Recall, Mental Imagery and Neglect
Author: Suzanna Becker, Neil Burgess
Abstract: We present a computational model of the neural mechanisms in the parietal and temporal lobes that support spatial navigation, recall of scenes and imagery of the products of recall. Long term representations are stored in the hippocampus, and are associated with local spatial and object-related features in the parahippocampal region. Viewer-centered representations are dynamically generated from long term memory in the parietal part of the model. The model thereby simulates recall and imagery of locations and objects in complex environments. After parietal damage, the model exhibits hemispatial neglect in mental imagery that rotates with the imagined perspective of the observer, as in the famous Milan Square experiment [1]. Our model makes novel predictions for the neural representations in the parahippocampal and parietal regions and for behavior in healthy volunteers and neuropsychological patients.
Author: Angelo Arleo, Fabrizio Smeraldi, Stéphane Hug, Wulfram Gerstner
Abstract: We model hippocampal place cells and head-direction cells by combining allothetic (visual) and idiothetic (proprioceptive) stimuli. Visual input, provided by a video camera on a miniature robot, is preprocessed by a set of Gabor filters on 31 nodes of a log-polar retinotopic graph. Unsupervised Hebbian learning is employed to incrementally build a population of localized overlapping place fields. Place cells serve as basis functions for reinforcement learning. Experimental results for goal-oriented navigation of a mobile robot are presented.
3 0.18380463 8 nips-2000-A New Model of Spatial Representation in Multimodal Brain Areas
Author: Sophie Denève, Jean-René Duhamel, Alexandre Pouget
Abstract: Most models of spatial representations in the cortex assume cells with limited receptive fields that are defined in a particular egocentric frame of reference. However, cells outside of primary sensory cortex are either gain modulated by postural input or partially shifting. We show that solving classical spatial tasks, like sensory prediction, multi-sensory integration, sensory-motor transformation and motor control, requires more complicated intermediate representations that are not invariant in one frame of reference. We present an iterative basis function map that performs these spatial tasks optimally with gain modulated and partially shifting units, and tests it against neurophysiological and neuropsychological data. In order to perform an action directed toward an object, it is necessary to have a representation of its spatial location. The brain must be able to use spatial cues coming from different modalities (e.g. vision, audition, touch, proprioception), combine them to infer the position of the object, and compute the appropriate movement. These cues are in different frames of reference corresponding to different sensory or motor modalities. Visual inputs are primarily encoded in retinotopic maps, auditory inputs are encoded in head centered maps and tactile cues are encoded in skin-centered maps. Going from one frame of reference to the other might seem easy. For example, the head-centered position of an object can be approximated by the sum of its retinotopic position and the eye position. However, positions are represented by population codes in the brain, and computing a head-centered map from a retinotopic map is a more complex computation than the underlying sum. Moreover, as we get closer to sensory-motor areas it seems reasonable to assume that the representations should be useful for sensory-motor transformations, rather than encode an [Figure 1: Response of a VIP cell to visual stimuli appearing in different parts of the screen, for three different eye positions. The level of grey represents the frequency of discharge (in spikes per second). The white cross is the fixation point (the head is fixed). The cell's receptive field moves with the eyes, but only partially; here the receptive field shift is 60% of the total gaze shift. Moreover, this cell is gain modulated by eye position (adapted from Duhamel et al).]
4 0.1118425 10 nips-2000-A Productive, Systematic Framework for the Representation of Visual Structure
Author: Shimon Edelman, Nathan Intrator
Abstract: We describe a unified framework for the understanding of structure representation in primate vision. A model derived from this framework is shown to be effectively systematic in that it has the ability to interpret and associate together objects that are related through a rearrangement of common
5 0.086689278 66 nips-2000-Hippocampally-Dependent Consolidation in a Hierarchical Model of Neocortex
Author: Szabolcs Káli, Peter Dayan
Abstract: In memory consolidation, declarative memories which initially require the hippocampus for their recall, ultimately become independent of it. Consolidation has been the focus of numerous experimental and qualitative modeling studies, but only little quantitative exploration. We present a consolidation model in which hierarchical connections in the cortex, that initially instantiate purely semantic information acquired through probabilistic unsupervised learning, come to instantiate episodic information as well. The hippocampus is responsible for helping complete partial input patterns before consolidation is complete, while also training the cortex to perform appropriate completion by itself.
6 0.079406798 19 nips-2000-Adaptive Object Representation with Hierarchically-Distributed Memory Sites
7 0.071448445 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli
8 0.070076481 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
9 0.051919263 102 nips-2000-Position Variance, Recurrence and Perceptual Learning
10 0.05174144 23 nips-2000-An Adaptive Metric Machine for Pattern Classification
11 0.051495064 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition
12 0.048885219 2 nips-2000-A Comparison of Image Processing Techniques for Visual Speech Recognition Applications
13 0.046078157 15 nips-2000-Accumulator Networks: Suitors of Local Probability Propagation
14 0.045343708 117 nips-2000-Shape Context: A New Descriptor for Shape Matching and Object Recognition
15 0.043858491 108 nips-2000-Recognizing Hand-written Digits Using Hierarchical Products of Experts
16 0.041585799 56 nips-2000-Foundations for a Circuit Complexity Theory of Sensory Processing
17 0.040757582 135 nips-2000-The Manhattan World Assumption: Regularities in Scene Statistics which Enable Bayesian Inference
18 0.040354367 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization
19 0.039298974 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images
20 0.038925767 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics
topicId topicWeight
[(0, 0.135), (1, -0.138), (2, -0.062), (3, 0.022), (4, -0.063), (5, 0.068), (6, 0.186), (7, -0.134), (8, 0.266), (9, -0.029), (10, 0.031), (11, 0.12), (12, -0.064), (13, 0.001), (14, 0.048), (15, 0.025), (16, 0.09), (17, -0.029), (18, 0.11), (19, 0.087), (20, -0.095), (21, 0.032), (22, 0.016), (23, 0.121), (24, -0.213), (25, -0.084), (26, -0.047), (27, -0.021), (28, -0.114), (29, 0.099), (30, -0.068), (31, -0.127), (32, 0.023), (33, 0.112), (34, 0.122), (35, 0.024), (36, 0.148), (37, 0.102), (38, 0.031), (39, -0.179), (40, -0.025), (41, -0.03), (42, 0.04), (43, 0.004), (44, -0.014), (45, -0.037), (46, -0.122), (47, 0.195), (48, -0.098), (49, -0.039)]
simIndex simValue paperId paperTitle
same-paper 1 0.97686529 87 nips-2000-Modelling Spatial Recall, Mental Imagery and Neglect
Author: Suzanna Becker, Neil Burgess
Abstract: We present a computational model of the neural mechanisms in the parietal and temporal lobes that support spatial navigation, recall of scenes and imagery of the products of recall. Long term representations are stored in the hippocampus, and are associated with local spatial and object-related features in the parahippocampal region. Viewer-centered representations are dynamically generated from long term memory in the parietal part of the model. The model thereby simulates recall and imagery of locations and objects in complex environments. After parietal damage, the model exhibits hemispatial neglect in mental imagery that rotates with the imagined perspective of the observer, as in the famous Milan Square experiment [1]. Our model makes novel predictions for the neural representations in the parahippocampal and parietal regions and for behavior in healthy volunteers and neuropsychological patients.
Author: Angelo Arleo, Fabrizio Smeraldi, Stéphane Hug, Wulfram Gerstner
Abstract: We model hippocampal place cells and head-direction cells by combining allothetic (visual) and idiothetic (proprioceptive) stimuli. Visual input, provided by a video camera on a miniature robot, is preprocessed by a set of Gabor filters on 31 nodes of a log-polar retinotopic graph. Unsupervised Hebbian learning is employed to incrementally build a population of localized overlapping place fields. Place cells serve as basis functions for reinforcement learning. Experimental results for goal-oriented navigation of a mobile robot are presented.
3 0.71729058 8 nips-2000-A New Model of Spatial Representation in Multimodal Brain Areas
Author: Sophie Denève, Jean-René Duhamel, Alexandre Pouget
Abstract: Most models of spatial representations in the cortex assume cells with limited receptive fields that are defined in a particular egocentric frame of reference. However, cells outside of primary sensory cortex are either gain modulated by postural input or partially shifting. We show that solving classical spatial tasks, like sensory prediction, multi-sensory integration, sensory-motor transformation and motor control, requires more complicated intermediate representations that are not invariant in one frame of reference. We present an iterative basis function map that performs these spatial tasks optimally with gain modulated and partially shifting units, and tests it against neurophysiological and neuropsychological data. In order to perform an action directed toward an object, it is necessary to have a representation of its spatial location. The brain must be able to use spatial cues coming from different modalities (e.g. vision, audition, touch, proprioception), combine them to infer the position of the object, and compute the appropriate movement. These cues are in different frames of reference corresponding to different sensory or motor modalities. Visual inputs are primarily encoded in retinotopic maps, auditory inputs are encoded in head centered maps and tactile cues are encoded in skin-centered maps. Going from one frame of reference to the other might seem easy. For example, the head-centered position of an object can be approximated by the sum of its retinotopic position and the eye position. However, positions are represented by population codes in the brain, and computing a head-centered map from a retinotopic map is a more complex computation than the underlying sum. Moreover, as we get closer to sensory-motor areas it seems reasonable to assume that the representations should be useful for sensory-motor transformations, rather than encode an [Figure 1: Response of a VIP cell to visual stimuli appearing in different parts of the screen, for three different eye positions. The level of grey represents the frequency of discharge (in spikes per second). The white cross is the fixation point (the head is fixed). The cell's receptive field moves with the eyes, but only partially; here the receptive field shift is 60% of the total gaze shift. Moreover, this cell is gain modulated by eye position (adapted from Duhamel et al).]
4 0.43276086 10 nips-2000-A Productive, Systematic Framework for the Representation of Visual Structure
Author: Shimon Edelman, Nathan Intrator
Abstract: We describe a unified framework for the understanding of structure representation in primate vision. A model derived from this framework is shown to be effectively systematic in that it has the ability to interpret and associate together objects that are related through a rearrangement of common
5 0.40345648 66 nips-2000-Hippocampally-Dependent Consolidation in a Hierarchical Model of Neocortex
Author: Szabolcs Káli, Peter Dayan
Abstract: In memory consolidation, declarative memories which initially require the hippocampus for their recall, ultimately become independent of it. Consolidation has been the focus of numerous experimental and qualitative modeling studies, but only little quantitative exploration. We present a consolidation model in which hierarchical connections in the cortex, that initially instantiate purely semantic information acquired through probabilistic unsupervised learning, come to instantiate episodic information as well. The hippocampus is responsible for helping complete partial input patterns before consolidation is complete, while also training the cortex to perform appropriate completion by itself.
6 0.30240428 19 nips-2000-Adaptive Object Representation with Hierarchically-Distributed Memory Sites
8 0.22345513 34 nips-2000-Competition and Arbors in Ocular Dominance
9 0.21412458 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
10 0.19490854 23 nips-2000-An Adaptive Metric Machine for Pattern Classification
12 0.16232587 132 nips-2000-The Interplay of Symbolic and Subsymbolic Processes in Anagram Problem Solving
13 0.16199858 15 nips-2000-Accumulator Networks: Suitors of Local Probability Propagation
14 0.15785493 102 nips-2000-Position Variance, Recurrence and Perceptual Learning
15 0.15503946 56 nips-2000-Foundations for a Circuit Complexity Theory of Sensory Processing
16 0.14973482 18 nips-2000-Active Support Vector Machine Classification
17 0.14918429 125 nips-2000-Stability and Noise in Biochemical Switches
18 0.14268152 57 nips-2000-Four-legged Walking Gait Control Using a Neuromorphic Chip Interfaced to a Support Vector Learning Algorithm
19 0.13956524 71 nips-2000-Interactive Parts Model: An Application to Recognition of On-line Cursive Script
20 0.13800129 29 nips-2000-Bayes Networks on Ice: Robotic Search for Antarctic Meteorites
topicId topicWeight
[(0, 0.016), (10, 0.022), (17, 0.108), (32, 0.015), (33, 0.02), (36, 0.018), (38, 0.055), (42, 0.018), (55, 0.045), (59, 0.307), (62, 0.025), (65, 0.022), (67, 0.045), (76, 0.019), (79, 0.014), (81, 0.045), (84, 0.037), (90, 0.015), (97, 0.023)]
simIndex simValue paperId paperTitle
same-paper 1 0.87072176 87 nips-2000-Modelling Spatial Recall, Mental Imagery and Neglect
Author: Suzanna Becker, Neil Burgess
Abstract: We present a computational model of the neural mechanisms in the parietal and temporal lobes that support spatial navigation, recall of scenes and imagery of the products of recall. Long term representations are stored in the hippocampus, and are associated with local spatial and object-related features in the parahippocampal region. Viewer-centered representations are dynamically generated from long term memory in the parietal part of the model. The model thereby simulates recall and imagery of locations and objects in complex environments. After parietal damage, the model exhibits hemispatial neglect in mental imagery that rotates with the imagined perspective of the observer, as in the famous Milan Square experiment [1]. Our model makes novel predictions for the neural representations in the parahippocampal and parietal regions and for behavior in healthy volunteers and neuropsychological patients.
2 0.86205 11 nips-2000-A Silicon Primitive for Competitive Learning
Author: David Hsu, Miguel Figueroa, Chris Diorio
Abstract: Competitive learning is a technique for training classification and clustering networks. We have designed and fabricated an 11transistor primitive, that we term an automaximizing bump circuit, that implements competitive learning dynamics. The circuit performs a similarity computation, affords nonvolatile storage, and implements simultaneous local adaptation and computation. We show that our primitive is suitable for implementing competitive learning in VLSI, and demonstrate its effectiveness in a standard clustering task.
Author: Angelo Arleo, Fabrizio Smeraldi, Stéphane Hug, Wulfram Gerstner
Abstract: We model hippocampal place cells and head-direction cells by combining allothetic (visual) and idiothetic (proprioceptive) stimuli. Visual input, provided by a video camera on a miniature robot, is preprocessed by a set of Gabor filters on 31 nodes of a log-polar retinotopic graph. Unsupervised Hebbian learning is employed to incrementally build a population of localized overlapping place fields. Place cells serve as basis functions for reinforcement learning. Experimental results for goal-oriented navigation of a mobile robot are presented.
4 0.38622597 8 nips-2000-A New Model of Spatial Representation in Multimodal Brain Areas
Author: Sophie Denève, Jean-René Duhamel, Alexandre Pouget
Abstract: Most models of spatial representations in the cortex assume cells with limited receptive fields that are defined in a particular egocentric frame of reference. However, cells outside of primary sensory cortex are either gain modulated by postural input or partially shifting. We show that solving classical spatial tasks, like sensory prediction, multi-sensory integration, sensory-motor transformation and motor control, requires more complicated intermediate representations that are not invariant in one frame of reference. We present an iterative basis function map that performs these spatial tasks optimally with gain modulated and partially shifting units, and tests it against neurophysiological and neuropsychological data. In order to perform an action directed toward an object, it is necessary to have a representation of its spatial location. The brain must be able to use spatial cues coming from different modalities (e.g. vision, audition, touch, proprioception), combine them to infer the position of the object, and compute the appropriate movement. These cues are in different frames of reference corresponding to different sensory or motor modalities. Visual inputs are primarily encoded in retinotopic maps, auditory inputs are encoded in head centered maps and tactile cues are encoded in skin-centered maps. Going from one frame of reference to the other might seem easy. For example, the head-centered position of an object can be approximated by the sum of its retinotopic position and the eye position. However, positions are represented by population codes in the brain, and computing a head-centered map from a retinotopic map is a more complex computation than the underlying sum. Moreover, as we get closer to sensory-motor areas it seems reasonable to assume that the representations should be useful for sensory-motor transformations, rather than encode an [Figure 1: Response of a VIP cell to visual stimuli appearing in different parts of the screen, for three different eye positions. The level of grey represents the frequency of discharge (in spikes per second). The white cross is the fixation point (the head is fixed). The cell's receptive field moves with the eyes, but only partially; here the receptive field shift is 60% of the total gaze shift. Moreover, this cell is gain modulated by eye position (adapted from Duhamel et al).]
5 0.38575053 92 nips-2000-Occam's Razor
Author: Carl Edward Rasmussen, Zoubin Ghahramani
Abstract: The Bayesian paradigm apparently only sometimes gives rise to Occam's Razor; at other times very large models perform well. We give simple examples of both kinds of behaviour. The two views are reconciled when measuring complexity of functions, rather than of the machinery used to implement them. We analyze the complexity of functions for some linear in the parameter models that are equivalent to Gaussian Processes, and always find Occam's Razor at work.
6 0.38244882 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition
7 0.38093019 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
8 0.37989011 10 nips-2000-A Productive, Systematic Framework for the Representation of Visual Structure
9 0.37901407 2 nips-2000-A Comparison of Image Processing Techniques for Visual Speech Recognition Applications
10 0.37760216 60 nips-2000-Gaussianization
11 0.3738192 146 nips-2000-What Can a Single Neuron Compute?
12 0.37250096 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
13 0.3700299 122 nips-2000-Sparse Representation for Gaussian Process Models
14 0.36902025 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks
15 0.36876836 130 nips-2000-Text Classification using String Kernels
16 0.36563984 102 nips-2000-Position Variance, Recurrence and Perceptual Learning
17 0.36552265 49 nips-2000-Explaining Away in Weight Space
18 0.36449984 71 nips-2000-Interactive Parts Model: An Application to Recognition of On-line Cursive Script
19 0.36372843 79 nips-2000-Learning Segmentation by Random Walks
20 0.36221021 74 nips-2000-Kernel Expansions with Unlabeled Examples