nips nips2001 nips2001-124 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Laurent Itti, Jochen Braun, Christof Koch
Abstract: We present new simulation results, in which a computational model of interacting visual neurons simultaneously predicts the modulation of spatial vision thresholds by focal visual attention, for five dual-task human psychophysics experiments. This new study complements our previous findings that attention activates a winner-take-all competition among early visual neurons within one cortical hypercolumn. This
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract We present new simulation results, in which a computational model of interacting visual neurons simultaneously predicts the modulation of spatial vision thresholds by focal visual attention, for five dual-task human psychophysics experiments. [sent-9, score-1.084]
2 This new study complements our previous findings that attention activates a winner-take-all competition among early visual neurons within one cortical hypercolumn. [sent-10, score-0.65]
3 This "intensified competition" hypothesis assumed that attention equally affects all neurons, and yielded two single-unit predictions: an increase in gain and a sharpening of tuning with attention. [sent-11, score-0.661]
4 Hence, we here explore whether our model could still predict our data if attention were only to modulate neuronal gain, but did so non-uniformly across neurons and tasks. [sent-13, score-0.433]
5 Specifically, we investigate whether modulating the gain of only the neurons that are loudest, best-tuned, or most informative about the stimulus, or of all neurons equally but in a task-dependent manner, may account for the data. [sent-14, score-0.494]
6 We find that none of these hypotheses yields predictions as plausible as the intensified competition hypothesis, hence providing additional support for our original findings. [sent-15, score-0.382]
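The alternative hypotheses above can be made concrete as rules selecting which filters receive an attentional gain boost. The sketch below is an illustration, not the paper's implementation: the 3.3 gain factor is the value quoted later in the text, while the task-dependent gain values and the "informativeness" proxy (local response slope) are invented for this example.

```python
ATTENTION_GAIN = 3.3   # gain factor fit in the original study (used here for illustration)

def apply_gain(responses, tuning_errors, hypothesis, task=None):
    """Attended responses under one gain-modulation hypothesis.

    responses     : linear response of each filter to the stimulus
    tuning_errors : |preferred value - stimulus value| for each filter
    """
    n = len(responses)
    if hypothesis == "loudest":             # only the filter responding most
        target = {max(range(n), key=lambda i: responses[i])}
    elif hypothesis == "best_tuned":        # only the filter tuned closest to the stimulus
        target = {min(range(n), key=lambda i: tuning_errors[i])}
    elif hypothesis == "most_informative":  # crude proxy: steepest local response change
        slopes = [abs(responses[(i + 1) % n] - responses[i - 1]) for i in range(n)]
        target = {max(range(n), key=lambda i: slopes[i])}
    elif hypothesis == "task_dependent":    # all filters, but gain depends on the task
        gains = {"contrast": 1.5, "orientation": 3.3, "frequency": 2.0}  # assumed values
        return [gains[task] * r for r in responses]
    else:
        raise ValueError(hypothesis)
    return [ATTENTION_GAIN * r if i in target else r for i, r in enumerate(responses)]
```

Each branch changes only the gain of some subset of filters, which is exactly what distinguishes these hypotheses from the intensified-competition account, where tuning widths change as well.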
7 1 INTRODUCTION Psychophysical studies as well as introspection indicate that we are not blind outside the focus of attention, and that we can perform simple judgments on objects not being attended to [1], though those judgments are less accurate than in the presence of attention [2, 3]. [sent-16, score-0.645]
8 In the visual domain, this modulation can be either spatially-defined (i.e., neuronal activity only at the retinotopic location attended to is modulated) or feature-based (i.e., neurons with stimulus preference matching the stimulus attended to are enhanced throughout the visual field), or a combination of both [7, 10, 24]. [sent-18, score-0.328] [sent-20, score-0.478] [sent-22, score-0.930]
11 One theoretical difficulty in trying to understand the modulatory effect of attention in computational terms is that, although attention profoundly alters visual perception, it is not equally important to all aspects of vision. [sent-24, score-0.714]
12 While most existing theories are associated with a specific body of data, and a specific experimental task used to engage attention in a given experiment, we have recently proposed a unified computational account [2] that spans five such tasks (32 thresholds under two attentional conditions). [sent-26, score-0.749]
13 This theory predicts that attention activates a winner-take-all competition among neurons tuned to different orientations within a single hypercolumn in primary visual cortex (area V1). [sent-29, score-0.597]
14 It is rooted in new information-theoretic advances [27], which allowed us to quantitatively relate single-unit activity in a computational model to human psychophysical thresholds. [sent-30, score-0.23]
15 A consequence of our "intensified competition hypothesis" is that attention both increases the gain of early visual neurons (by a factor 3.3), and sharpens their tuning for orientation (by 40%) and spatial frequency (by 30%). [sent-31, score-0.763] [sent-32, score-0.399]
17 In the present study, we thus investigate alternatives to our intensified competition hypothesis which only involve gain modulation. [sent-34, score-0.499]
18 Our previous results [2] have shown that both increased gain and sharper tuning were necessary to simultaneously account for our five pattern discrimination tasks, if those modulatory effects were to equally affect all visual neurons at the location of the stimulus and to be equal for all tasks. [sent-35, score-0.926]
19 Thus, we here extend our computational search space under two new hypotheses: First, we investigate whether attention might only modulate the gain of selected sub-populations of neurons (responding the loudest, best tuned, or most informative about the stimulus) in a task-independent manner. [sent-36, score-0.608]
20 Second, we investigate whether attention might equally modulate the gain of all visual neurons responding to the stimulus, but in a task-dependent manner. [sent-37, score-0.749]
21 Although attention certainly affects most stages of visual processing, we here continue to focus on early vision, since electrophysiological and fMRI evidence widely indicates that some modulation does happen very early in the processing hierarchy [5, 8, 9, 23]. [sent-39, score-0.781]
22 2 PSYCHOPHYSICAL DATA Our recent study [2] measured psychophysical thresholds for three pattern discrimination tasks (contrast, orientation and spatial frequency discriminations), and two spatial masking tasks (32 thresholds). [sent-40, score-1.069]
23 Peripheral targets appeared for 250 ms at 4° eccentricity, in a circular aperture of 1. [sent-110, score-0.036]
24 Mask patterns were generated by superimposing 100 Gabor filters, positioned randomly within the circular aperture (A, D, E). [sent-112, score-0.114]
25 A complex pattern of effects is observed, with a strong modulation of orientation and spatial frequency discriminations (B, C), smaller modulation of contrast discriminations (A), and modulation of contrast masking that depends on stimulus configurations (D, E). [sent-114, score-1.61]
26 These complex observations can be simultaneously accounted for by our computational model of one hypercolumn in primary visual cortex. [sent-115, score-0.137]
27 3 COMPUTATIONAL MODEL The model developed to quantitatively account for this data comprises three successive stages [14, 27]. [sent-116, score-0.087]
28 In the first stage, a bank of Gabor-like linear filters (12 orientations and 5 spatial scales) analyzes a given visual location, similarly to a cortical hypercolumn. [sent-117, score-0.268]
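This first stage can be illustrated by approximating the filter bank with Gaussian tuning curves in orientation and log spatial frequency. The 12 x 5 layout follows the text, but the tuning widths, preferred frequencies, and test grating below are hypothetical choices for the sketch.

```python
import math

N_ORI, N_SCALES = 12, 5          # 12 orientations x 5 spatial scales (from the text)
SIGMA_ORI = 20.0                 # orientation tuning width in degrees (assumed)
SIGMA_SF = 0.5                   # spatial-frequency tuning width in octaves (assumed)

def linear_responses(stim_ori, stim_sf, contrast=1.0):
    """Linear filter responses of one model hypercolumn to a grating stimulus."""
    resp = {}
    for i in range(N_ORI):
        pref_ori = i * 180.0 / N_ORI
        # circular orientation difference, folded into [0, 90] degrees
        d_ori = abs((stim_ori - pref_ori + 90.0) % 180.0 - 90.0)
        for s in range(N_SCALES):
            pref_sf = 2.0 ** s   # preferred frequencies one octave apart (assumed)
            d_sf = abs(math.log2(stim_sf / pref_sf))
            resp[(i, s)] = contrast * math.exp(-d_ori ** 2 / (2 * SIGMA_ORI ** 2)
                                               - d_sf ** 2 / (2 * SIGMA_SF ** 2))
    return resp

# A vertical grating at 4 cycles/deg drives the matching filter hardest:
E = linear_responses(stim_ori=0.0, stim_sf=4.0)
best = max(E, key=E.get)
```

Here `best = (0, 2)` corresponds to the filter preferring 0 degrees and 4 cycles/deg, i.e. the filter whose preferences match the test grating.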
29 In the second stage, filters nonlinearly interact through both a self-excitation component, and a divisive inhibition component that is derived from a pool of similarly-tuned units. [sent-118, score-0.114]
30 With E_λθ being the linear response from a unit tuned to spatial period λ and orientation θ, the response R_λθ after interactions is given by (see [27] for additional details): R_λθ = (E_λθ)^γ / (S^δ + Σ_{λ',θ'} W_λθ(λ',θ') (E_λ'θ')^δ). [sent-120, score-0.273] [sent-121, score-0.078]
32 The neurons are assumed to be noisy, with noise variance V_λθ given by a generalized Poisson model: V_λθ = β(R_λθ). [sent-136, score-0.104]
33 The third stage relates activity in the population of interacting noisy units to behavioral discrimination performance. [sent-138, score-0.196]
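A minimal version of this third stage pools squared response differences over the noise variance into a population d', and reads out a threshold as the smallest stimulus change reaching d' = 1. The tuning curve and the noise scale β are illustrative; the actual model uses the ideal-observer formalism of [27].

```python
import math

BETA = 0.01   # noise scale in V = beta * R (assumed, not the fitted value)

def population(theta, prefs, sigma=20.0):
    """Mean responses of an orientation-tuned population (toy tuning curves)."""
    return [math.exp(-(((theta - p + 90.0) % 180.0 - 90.0) ** 2) / (2 * sigma ** 2))
            for p in prefs]

def dprime(theta, dtheta, prefs):
    """Population d' between theta and theta + dtheta; noise variance = BETA * R."""
    R0, R1 = population(theta, prefs), population(theta + dtheta, prefs)
    acc = 0.0
    for r0, r1 in zip(R0, R1):
        var = BETA * 0.5 * (r0 + r1) + 1e-12   # generalized-Poisson-style variance
        acc += (r1 - r0) ** 2 / var
    return math.sqrt(acc)

def threshold(theta, prefs, step=0.01):
    """Smallest orientation change reaching d' = 1 (coarse scan up to 90 deg)."""
    d = step
    while dprime(theta, d, prefs) < 1.0 and d < 90.0:
        d += step
    return d

prefs = [i * 15.0 for i in range(12)]
th = threshold(0.0, prefs)   # discrimination threshold of the toy population
```

In this noise model, scaling all responses by an attentional gain g scales d'^2 by g, so thresholds shrink as gain grows, which is the effect the gain-only hypotheses rely on.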
34 This methodology (described further in [27]) allows us to quantitatively compute thresholds in any behavioral situation, and eliminates the need for task-dependent assumptions about the decision strategy used by the observers. [sent-140, score-0.267]
35 2) were automatically adjusted to best fit the psychophysical data from all experiments, using a multidimensional downhill simplex with simulated annealing overhead (see [27]), running on our 16-CPU Linux Beowulf system (16 x 733 MHz, 4 GB RAM, 0. [sent-142, score-0.173]
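The fitting step can be sketched with a bare-bones greedy random search standing in for the multidimensional downhill simplex with annealing. The toy "model" and target thresholds below are placeholders; only the structure (parameters, predicted log thresholds, squared error) mirrors the text.

```python
import math
import random

random.seed(0)

# Placeholder "data": log thresholds the model should reproduce.
observed = [math.log(0.05), math.log(0.12), math.log(0.30)]

def model_thresholds(params):
    """Toy stand-in for the full hypercolumn model: parameters -> log thresholds."""
    gain, noise = params
    return [math.log(noise * (i + 1) / gain) for i in range(3)]

def loss(params):
    """Squared error in log-threshold space, pooled over all 'experiments'."""
    return sum((p - o) ** 2 for p, o in zip(model_thresholds(params), observed))

def fit(start, n_iter=2000, step=0.1):
    """Greedy random search (the annealing overhead of the real fit is omitted)."""
    best, best_loss = list(start), loss(start)
    for _ in range(n_iter):
        cand = [max(1e-6, b + random.gauss(0.0, step)) for b in best]
        cand_loss = loss(cand)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss

params, final_loss = fit([1.0, 0.1])
```

Fitting in log-threshold space, as here, weights proportional errors equally across the very different threshold magnitudes of the five tasks.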
36 Thus, no bias was given to either of the two attentional conditions. [sent-147, score-0.138]
37 2), all parameters were allowed to differ with attention [2], while only the interaction parameters (γ, δ, S) could differ in the "intensified competition" case. [sent-149, score-0.243]
[Figure 2 residue. Recoverable hypothesis labels: "stimulus-dependent; only affects filter most informative about target stimulus"; "stimulus-dependent; only affects filter best tuned to target stimulus"; "stimulus-dependent; only affects filter responding most to whole (target+mask) stimulus"; "top-down attention". Recoverable assessments: very good fit overall, all parameters biologically plausible, modulation of orientation thresholds slightly underestimated, contrast masking with variable mask orientation not perfectly predicted; very good fit overall, all parameters biologically plausible, attention significantly modulates interactions and noise. sent-198, sent-206, sent-213, sent-217, sent-229]
43 See next page for the corresponding model predictions on our five tasks, for the hypotheses shown. [sent-264, score-0.167]
44 The middle column shows which parameters were allowed to differ with attention, and the best-fit values for both attentional conditions. [sent-265, score-0.138]
[Figure 3 plot residue: panels plot thresholds against mask contrast, contrast, mask orientation minus target orientation, and mask ω / target ω; legends distinguish "Fully attended" vs. "Poorly attended"; star ratings appear below each panel. sent-280, sent-306, sent-327, sent-344, sent-348, sent-373]
51 Figure 3: Model predictions for the different attentional modulation hypotheses studied. [sent-376, score-0.443]
52 The different rows correspond to the different attentional manipulations studied, as labeled in the previous figure. [sent-377, score-0.204]
53 Ratings (stars below the plots) were derived from the residual error of the fits. [sent-378, score-0.073]
54 Finally, in the "task-dependent" case, the gain of all filters was affected equally (parameter γ), but with three different values for the contrast (discrimination and masking), orientation, and spatial frequency tasks. [sent-380, score-0.698]
55 Overall, very good fits were obtained in the "separate fits" and "intensified competition" conditions (as previously reported), as well as in the "most informative filter" and "task-dependent" conditions (Fig. 3), while the two remaining hypotheses yielded very poor predictions of orientation and spatial frequency discriminations. [sent-381, score-0.138] [sent-382, score-0.361]
57 More importantly, a careful analysis of the very promising results for the "task-dependent" case also revealed their low biological plausibility, with a gain modulation in excess of 20-fold being necessary to explain the orientation discrimination data (Fig. [sent-384, score-0.641]
58 Two of the four new manipulations studied yielded good quantitative model predictions: affecting the gain of the filter most informative about the target stimulus, and affecting the gain of all filters in a task-dependent manner. [sent-387, score-0.719]
59 In both cases, however, some of the internal model parameters associated with the fits were biologically unrealistic, thus reducing the plausibility of these two hypotheses. [sent-388, score-0.107]
60 In all manipulations studied, the greatest difficulty was in trying to account for the orientation and spatial frequency discrimination data without unrealistically high gain changes (greater than 20-fold). [sent-389, score-0.689]
61 Our results hence provide additional evidence for the hypothesis that sharpening of tuning may be necessary to account for these thresholds, as was originally suggested by our separate fits and our intensified competition hypothesis and has been recently supported by new investigations [16]. [sent-390, score-0.616]
wordName wordTfidf (topN-words)
[('attended', 0.402), ('poorly', 0.277), ('attention', 0.243), ('modulation', 0.226), ('thresholds', 0.212), ('mask', 0.212), ('intensified', 0.188), ('masking', 0.168), ('contrast', 0.149), ('orientation', 0.148), ('gain', 0.146), ('stimulus', 0.142), ('attentional', 0.138), ('nat', 0.127), ('filter', 0.126), ('discrimination', 0.121), ('competition', 0.115), ('neurosci', 0.106), ('oct', 0.105), ('neurons', 0.104), ('visual', 0.102), ('psychophysical', 0.102), ('fully', 0.097), ('braun', 0.091), ('spatial', 0.088), ('loudest', 0.084), ('maskwltargetw', 0.084), ('modulatory', 0.083), ('filters', 0.078), ('tuning', 0.075), ('fits', 0.073), ('itti', 0.073), ('desimone', 0.073), ('fit', 0.071), ('affects', 0.071), ('manipulations', 0.066), ('informative', 0.065), ('oci', 0.063), ('responding', 0.061), ('vision', 0.056), ('koch', 0.055), ('quantitatively', 0.055), ('electrophysiology', 0.055), ('early', 0.053), ('psychophysics', 0.05), ('modulate', 0.05), ('hypothesis', 0.05), ('tasks', 0.048), ('lee', 0.046), ('affecting', 0.046), ('cu', 0.046), ('dk', 0.046), ('frequency', 0.046), ('equally', 0.043), ('five', 0.043), ('chelazzi', 0.042), ('dipper', 0.042), ('discriminations', 0.042), ('fullyattended', 0.042), ('heses', 0.042), ('hypot', 0.042), ('maskwftarget', 0.042), ('plymouth', 0.042), ('rees', 0.042), ('sharpens', 0.042), ('targetor', 0.042), ('treue', 0.042), ('underestimated', 0.042), ('unrealistically', 0.042), ('res', 0.042), ('unrealistic', 0.042), ('activity', 0.04), ('predictions', 0.04), ('ul', 0.04), ('hypotheses', 0.039), ('enhanced', 0.038), ('period', 0.037), ('divisive', 0.036), ('carrasco', 0.036), ('qj', 0.036), ('aperture', 0.036), ('bia', 0.036), ('duncan', 0.036), ('moran', 0.036), ('rev', 0.036), ('neuronal', 0.036), ('simultaneously', 0.035), ('stage', 0.035), ('biologically', 0.034), ('fmri', 0.033), ('engage', 0.033), ('sharpening', 0.033), ('electrophysiological', 0.033), ('eccentricity', 0.033), ('activates', 0.033), ('focal', 0.033), 
('human', 0.033), ('account', 0.032), ('jm', 0.031), ('modulates', 0.031)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999934 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision
Author: Laurent Itti, Jochen Braun, Christof Koch
Abstract: We present new simulation results , in which a computational model of interacting visual neurons simultaneously predicts the modulation of spatial vision thresholds by focal visual attention, for five dual-task human psychophysics experiments. This new study complements our previous findings that attention activates a winnertake-all competition among early visual neurons within one cortical hypercolumn. This
2 0.12666042 14 nips-2001-A Neural Oscillator Model of Auditory Selective Attention
Author: Stuart N. Wrigley, Guy J. Brown
Abstract: A model of auditory grouping is described in which auditory attention plays a key role. The model is based upon an oscillatory correlation framework, in which neural oscillators representing a single perceptual stream are synchronised, and are desynchronised from oscillators representing other streams. The model suggests a mechanism by which attention can be directed to the high or low tones in a repeating sequence of tones with alternating frequencies. In addition, it simulates the perceptual segregation of a mistuned harmonic from a complex tone. 1
3 0.1053911 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes
Author: Thomas P. Trappenberg, Edmund T. Rolls, Simon M. Stringer
Abstract: Inferior temporal cortex (IT) neurons have large receptive fields when a single effective object stimulus is shown against a blank background, but have much smaller receptive fields when the object is placed in a natural scene. Thus, translation invariant object recognition is reduced in natural scenes, and this may help object selection. We describe a model which accounts for this by competition within an attractor in which the neurons are tuned to different objects in the scene, and the fovea has a higher cortical magnification factor than the peripheral visual field. Furthermore, we show that top-down object bias can increase the receptive field size, facilitating object search in complex visual scenes, and providing a model of object-based attention. The model leads to the prediction that introduction of a second object into a scene with blank background will reduce the receptive field size to values that depend on the closeness of the second object to the target stimulus. We suggest that mechanisms of this type enable the output of IT to be primarily about one object, so that the areas that receive from IT can select the object as a potential target for action.
4 0.10507582 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons
Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas
Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
5 0.098135009 37 nips-2001-Associative memory in realistic neuronal networks
Author: Peter E. Latham
Abstract: Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractors is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The
6 0.093168564 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity
7 0.081522748 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections
8 0.080374606 151 nips-2001-Probabilistic principles in unsupervised learning of visual structure: human data and a model
9 0.075145461 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway
10 0.074799113 57 nips-2001-Correlation Codes in Neuronal Populations
11 0.07365448 48 nips-2001-Characterizing Neural Gain Control using Spike-triggered Covariance
12 0.069397993 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons
13 0.067618966 54 nips-2001-Contextual Modulation of Target Saliency
14 0.065263622 123 nips-2001-Modeling Temporal Structure in Classical Conditioning
15 0.064121649 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement
16 0.061392032 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds
17 0.060424451 18 nips-2001-A Rational Analysis of Cognitive Control in a Speeded Discrimination Task
18 0.058615215 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes
19 0.057728279 189 nips-2001-The g Factor: Relating Distributions on Features to Distributions on Images
20 0.056970652 23 nips-2001-A theory of neural integration in the head-direction system
topicId topicWeight
[(0, -0.128), (1, -0.19), (2, -0.108), (3, 0.029), (4, 0.016), (5, 0.023), (6, -0.057), (7, 0.007), (8, -0.006), (9, 0.003), (10, -0.015), (11, 0.152), (12, -0.035), (13, 0.056), (14, 0.045), (15, -0.038), (16, 0.049), (17, 0.117), (18, -0.08), (19, -0.001), (20, -0.062), (21, -0.018), (22, 0.04), (23, -0.062), (24, 0.065), (25, -0.002), (26, -0.016), (27, 0.027), (28, 0.008), (29, 0.034), (30, 0.007), (31, 0.015), (32, 0.01), (33, -0.072), (34, 0.078), (35, 0.061), (36, -0.014), (37, -0.002), (38, -0.014), (39, -0.098), (40, 0.037), (41, -0.09), (42, 0.132), (43, 0.18), (44, -0.159), (45, 0.131), (46, 0.031), (47, -0.067), (48, -0.025), (49, -0.092)]
simIndex simValue paperId paperTitle
same-paper 1 0.98079854 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision
Author: Laurent Itti, Jochen Braun, Christof Koch
Abstract: We present new simulation results , in which a computational model of interacting visual neurons simultaneously predicts the modulation of spatial vision thresholds by focal visual attention, for five dual-task human psychophysics experiments. This new study complements our previous findings that attention activates a winnertake-all competition among early visual neurons within one cortical hypercolumn. This
2 0.705118 14 nips-2001-A Neural Oscillator Model of Auditory Selective Attention
Author: Stuart N. Wrigley, Guy J. Brown
Abstract: A model of auditory grouping is described in which auditory attention plays a key role. The model is based upon an oscillatory correlation framework, in which neural oscillators representing a single perceptual stream are synchronised, and are desynchronised from oscillators representing other streams. The model suggests a mechanism by which attention can be directed to the high or low tones in a repeating sequence of tones with alternating frequencies. In addition, it simulates the perceptual segregation of a mistuned harmonic from a complex tone. 1
3 0.58425248 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement
Author: H. Colonius, A. Diederich
Abstract: Multisensory response enhancement (MRE) is the augmentation of the response of a neuron to sensory input of one modality by simultaneous input from another modality. The maximum likelihood (ML) model presented here modifies the Bayesian model for MRE (Anastasio et al.) by incorporating a decision strategy to maximize the number of correct decisions. Thus the ML model can also deal with the important tasks of stimulus discrimination and identification in the presence of incongruent visual and auditory cues. It accounts for the inverse effectiveness observed in neurophysiological recording data, and it predicts a functional relation between uni- and bimodal levels of discriminability that is testable both in neurophysiological and behavioral experiments. 1
4 0.52518648 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway
Author: Gal Chechik, Amir Globerson, M. J. Anderson, E. D. Young, Israel Nelken, Naftali Tishby
Abstract: The way groups of auditory neurons interact to code acoustic information is investigated using an information theoretic approach. We develop measures of redundancy among groups of neurons, and apply them to the study of collaborative coding efficiency in two processing stations in the auditory pathway: the inferior colliculus (IC) and the primary auditory cortex (AI). Under two schemes for the coding of the acoustic content, acoustic segments coding and stimulus identity coding, we show differences both in information content and group redundancies between IC and AI neurons. These results provide for the first time a direct evidence for redundancy reduction along the ascending auditory pathway, as has been hypothesized for theoretical considerations [Barlow 1959,2001]. The redundancy effects under the single-spikes coding scheme are significant only for groups larger than ten cells, and cannot be revealed with the redundancy measures that use only pairs of cells. The results suggest that the auditory system transforms low level representations that contain redundancies due to the statistical structure of natural stimuli, into a representation in which cortical neurons extract rare and independent component of complex acoustic signals, that are useful for auditory scene analysis. 1
5 0.49801326 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity
Author: Antonino Casile, Michele Rucci
Abstract: Neural activity appears to be a crucial component for shaping the receptive fields of cortical simple cells into adjacent, oriented subregions alternately receiving ON- and OFF-center excitatory geniculate inputs. It is known that the orientation selective responses of V1 neurons are refined by visual experience. After eye opening, the spatiotemporal structure of neural activity in the early stages of the visual pathway depends both on the visual environment and on how the environment is scanned. We have used computational modeling to investigate how eye movements might affect the refinement of the orientation tuning of simple cells in the presence of a Hebbian scheme of synaptic plasticity. Levels of correlation between the activity of simulated cells were examined while natural scenes were scanned so as to model sequences of saccades and fixational eye movements, such as microsaccades, tremor and ocular drift. The specific patterns of activity required for a quantitatively accurate development of simple cell receptive fields with segregated ON and OFF subregions were observed during fixational eye movements, but not in the presence of saccades or with static presentation of natural visual input. These results suggest an important role for the eye movements occurring during visual fixation in the refinement of orientation selectivity.
6 0.47606614 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes
7 0.46182549 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons
8 0.43507728 57 nips-2001-Correlation Codes in Neuronal Populations
9 0.42339399 37 nips-2001-Associative memory in realistic neuronal networks
10 0.40739682 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections
11 0.38639134 151 nips-2001-Probabilistic principles in unsupervised learning of visual structure: human data and a model
12 0.3643246 48 nips-2001-Characterizing Neural Gain Control using Spike-triggered Covariance
13 0.33854169 18 nips-2001-A Rational Analysis of Cognitive Control in a Speeded Discrimination Task
14 0.31878042 3 nips-2001-ACh, Uncertainty, and Cortical Inference
15 0.28214929 123 nips-2001-Modeling Temporal Structure in Classical Conditioning
16 0.2759023 54 nips-2001-Contextual Modulation of Target Saliency
17 0.26810056 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments
18 0.26687586 96 nips-2001-Information-Geometric Decomposition in Spike Analysis
19 0.26537886 126 nips-2001-Motivated Reinforcement Learning
20 0.26021376 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes
topicId topicWeight
[(14, 0.015), (19, 0.522), (27, 0.063), (30, 0.059), (38, 0.025), (59, 0.016), (72, 0.035), (79, 0.029), (91, 0.141)]
simIndex simValue paperId paperTitle
1 0.96364772 93 nips-2001-Incremental A*
Author: S. Koenig, M. Likhachev
Abstract: Incremental search techniques find optimal solutions to series of similar search tasks much faster than is possible by solving each search task from scratch. While researchers have developed incremental versions of uninformed search methods, we develop an incremental version of A*. The first search of Lifelong Planning A* is the same as that of A* but all subsequent searches are much faster because it reuses those parts of the previous search tree that are identical to the new search tree. We then present experimental results that demonstrate the advantages of Lifelong Planning A* for simple route planning tasks. 1 Overview Artificial intelligence has investigated knowledge-based search techniques that allow one to solve search tasks in large domains. Most of the research on these methods has studied how to solve one-shot search problems. However, search is often a repetitive process, where one needs to solve a series of similar search tasks, for example, because the actual situation turns out to be slightly different from the one initially assumed or because the situation changes over time. An example for route planning tasks are changing traffic conditions. Thus, one needs to replan for the new situation, for example if one always wants to display the least time-consuming route from the airport to the conference center on a web page. In these situations, most search methods replan from scratch, that is, solve the search problems independently. Incremental search techniques share with case-based planning, plan adaptation, repair-based planning, and learning search-control knowledge the property that they find solutions to series of similar search tasks much faster than is possible by solving each search task from scratch. Incremental search techniques, however, differ from the other techniques in that the quality of their solutions is guaranteed to be as good as the quality of the solutions obtained by replanning from scratch. 
Although incremental search methods are not widely known in artificial intelligence and control, different researchers have developed incremental search versions of uninformed search methods in the algorithms literature. An overview can be found in [FMSN00]. We, on the other hand, develop an incremental version of A*, thus combining ideas from the algorithms literature and the artificial intelligence literature. We call the algorithm Lifelong Planning A* (LPA*), in analogy to “lifelong learning” [Thr98], because it reuses £ We thank Anthony Stentz for his support. The Intelligent Decision-Making Group is partly supported by NSF awards under contracts IIS9984827, IIS-0098807, and ITR/AP-0113881. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsoring organizations and agencies or the U.S. government. information from previous searches. LPA* uses heuristics to focus the search and always finds a shortest path for the current edge costs. The first search of LPA* is the same as that of A* but all subsequent searches are much faster. LPA* produces at least the search tree that A* builds. However, it achieves a substantial speedup over A* because it reuses those parts of the previous search tree that are identical to the new search tree. 2 The Route Planning Task Lifelong Planning A* (LPA*) solves the following search task: It applies to finite graph search problems on known graphs whose edge costs can increase or decrease over time. denotes the finite set of vertices of the graph. denotes the set of successors of vertex . Similarly, denotes the set of predecessors of vertex . denotes the cost of moving from vertex to vertex . LPA* always determines a shortest path from a given start vertex to a given goal vertex , knowing both the topology of the graph and the current edge costs. 
We use g*(s) to denote the start distance of vertex s ∈ S, that is, the length of a shortest path from s_start to s.
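The start distances of all vertices can be computed at once with Dijkstra's algorithm, i.e., an uninformed (zero-heuristic) version of the search above. A minimal sketch under the same assumed graph encoding as before:

```python
import heapq

def start_distances(succ, cost, start):
    """Dijkstra's algorithm: returns a dict mapping each reachable
    vertex s to its start distance, the length of a shortest path
    from `start` to s."""
    g = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > g[u]:
            continue            # stale queue entry
        for v in succ.get(u, ()):
            nd = d + cost[(u, v)]
            if nd < g.get(v, float("inf")):
                g[v] = nd
                heapq.heappush(heap, (nd, v))
    return g
```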
same-paper 2 0.95029277 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision
Author: Laurent Itti, Jochen Braun, Christof Koch
Abstract: We present new simulation results, in which a computational model of interacting visual neurons simultaneously predicts the modulation of spatial vision thresholds by focal visual attention, for five dual-task human psychophysics experiments. This new study complements our previous findings that attention activates a winner-take-all competition among early visual neurons within one cortical hypercolumn. This
3 0.9149996 83 nips-2001-Geometrical Singularities in the Neuromanifold of Multilayer Perceptrons
Author: Shun-ichi Amari, Hyeyoung Park, Tomoko Ozeki
Abstract: Singularities are ubiquitous in the parameter space of hierarchical models such as multilayer perceptrons. At singularities, the Fisher information matrix degenerates, and the Cramer-Rao paradigm no longer holds, implying that classical model selection theory such as AIC and MDL cannot be applied. It is important to study the relation between the generalization error and the training error at singularities. The present paper demonstrates a method of analyzing these errors both for the maximum likelihood estimator and the Bayesian predictive distribution in terms of Gaussian random fields, by using simple models. 1
4 0.83848685 119 nips-2001-Means, Correlations and Bounds
Author: Martijn Leisink, Bert Kappen
Abstract: The partition function for a Boltzmann machine can be bounded from above and below. We can use this to bound the means and the correlations. For networks with small weights, the values of these statistics can be restricted to non-trivial regions (i.e. a subset of [-1 , 1]). Experimental results show that reasonable bounding occurs for weight sizes where mean field expansions generally give good results. 1
5 0.81360805 109 nips-2001-Learning Discriminative Feature Transforms to Low Dimensions in Low Dimensions
Author: Kari Torkkola
Abstract: The marriage of Renyi entropy with Parzen density estimation has been shown to be a viable tool in learning discriminative feature transforms. However, it suffers from computational complexity proportional to the square of the number of samples in the training data. This sets a practical limit to using large databases. We suggest immediate divorce of the two methods and remarriage of Renyi entropy with a semi-parametric density estimation method, such as a Gaussian Mixture Model (GMM). This allows all of the computation to take place in the low-dimensional target space, and it reduces the computational complexity to being proportional to the square of the number of components in the mixtures. Furthermore, a convenient extension to Hidden Markov Models as commonly used in speech recognition becomes possible.
6 0.50839072 1 nips-2001-(Not) Bounding the True Error
7 0.46143699 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks
8 0.46093974 186 nips-2001-The Noisy Euclidean Traveling Salesman Problem and Learning
9 0.45897296 13 nips-2001-A Natural Policy Gradient
10 0.45605949 192 nips-2001-Tree-based reparameterization for approximate inference on loopy graphs
11 0.43964428 68 nips-2001-Entropy and Inference, Revisited
12 0.43678719 103 nips-2001-Kernel Feature Spaces and Nonlinear Blind Souce Separation
13 0.43247521 54 nips-2001-Contextual Modulation of Target Saliency
14 0.42197773 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction
15 0.42008433 64 nips-2001-EM-DD: An Improved Multiple-Instance Learning Technique
16 0.41930705 169 nips-2001-Small-World Phenomena and the Dynamics of Information
17 0.41699946 3 nips-2001-ACh, Uncertainty, and Cortical Inference
18 0.41478819 147 nips-2001-Pranking with Ranking
20 0.41200665 24 nips-2001-Active Information Retrieval