nips nips2001 nips2001-158 knowledge-graph by maker-knowledge-mining

158 nips-2001-Receptive field structure of flow detectors for heading perception


Source: pdf

Author: J. A. Beintema, M. Lappe, A. V. van den Berg

Abstract: Observer translation relative to the world creates image flow that expands from the observer's direction of translation (heading), from which the observer can recover heading direction. Yet, the image flow is often more complex, depending on rotation of the eye, scene layout and translation velocity. A number of models [1-4] have been proposed for how the human visual system extracts heading from flow in a neurophysiologically plausible way. These models represent heading by a set of neurons that respond to large image flow patterns and receive input from motion sensed at different image locations. We analysed these models to determine the exact receptive field of these heading detectors. We find most models predict that, contrary to widespread belief, the contributing motion sensors have a preferred motion directed circularly rather than radially around the detector's preferred heading. Moreover, the results suggest looking for more refined structure within the circular flow, such as bi-circularity or local motion-opponency.
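To make the class of models concrete, here is a minimal toy sketch in Python, not a reimplementation of the models in [1-4]: a template-style heading detector sums, over a grid of motion-sensor locations, the dot products between the sensed flow and a template of preferred directions. The names radial_flow, circular_template and detector_response are illustrative and introduced here.

```python
# Toy sketch only (not the models of [1-4]): a template-style heading detector.
import numpy as np

def radial_flow(points, heading):
    """Unit flow vectors expanding from `heading` (pure observer translation)."""
    d = points - heading
    norms = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
    return d / norms

def circular_template(points, heading):
    """Preferred directions rotated by 90 degrees, i.e. circular around `heading`."""
    r = radial_flow(points, heading)
    return np.stack([-r[:, 1], r[:, 0]], axis=1)

def detector_response(flow, template):
    """Sum over sensor locations of the dot product between flow and preference."""
    return float(np.sum(flow * template))

# Motion-sensor locations on a grid, and a flow field expanding from (0, 0).
xs, ys = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
points = np.stack([xs.ravel(), ys.ravel()], axis=1)
flow = radial_flow(points, heading=np.zeros(2))

print("radial template:  ", detector_response(flow, radial_flow(points, np.zeros(2))))
print("circular template:", detector_response(flow, circular_template(points, np.zeros(2))))
```

On a purely expanding flow field the radial template responds maximally while the circular template responds near zero; the paper's question is which of these preferred-direction structures the contributing motion sensors of the published models actually imply.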

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Receptive field structure of flow detectors for heading perception Jaap A. [sent-1, score-1.293]

2 Abstract Observer translation relative to the world creates image flow that expands from the observer's direction of translation (heading), from which the observer can recover heading direction. [sent-14, score-1.952]

3 Yet, the image flow is often more complex, depending on rotation of the eye, scene layout and translation velocity. [sent-15, score-1.155]

4 A number of models [1-4] have been proposed for how the human visual system extracts heading from flow in a neurophysiologically plausible way. [sent-16, score-1.354]

5 These models represent heading by a set of neurons that respond to large image flow patterns and receive input from motion sensed at different image locations. [sent-17, score-1.707]

6 We analysed these models to determine the exact receptive field of these heading detectors. [sent-18, score-0.777]

7 We find most models predict that, contrary to widespread belief, the contributing motion sensors have a preferred motion directed circularly rather than radially around the detector's preferred heading. [sent-19, score-0.867]

8 Moreover, the results suggest looking for more refined structure within the circular flow, such as bi-circularity or local motion-opponency. [sent-20, score-0.217]

9 Introduction The image flow can be considerably more complicated than merely an expanding pattern of motion vectors centered on the heading direction (Fig. [sent-21, score-1.664]

10 1b) causes the center of flow to be displaced (compare Fig. [sent-24, score-0.668]

11 The effect of rotation depends on the ratio of rotation and translation speed. [sent-26, score-0.336]
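As a hedged illustration of sentences 9-11, the sketch below uses the standard pinhole-camera flow equations (illustrative, not code from the paper): image flow is the sum of a translational component that expands from the heading and scales with inverse depth, and a rotational component that is independent of depth. Evaluating the field at the heading point shows how eye rotation displaces the center of flow, with a displacement that grows with the rotation/translation speed ratio.

```python
# Sketch of the standard pinhole-camera flow equations (illustrative, not from the paper).

def image_flow(x, y, Z, T, omega, f=1.0):
    """Flow (u, v) at image point (x, y): translational part (depends on depth Z
    and translation T) plus rotational part (depends only on rotation omega)."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u_t = (-f * Tx + x * Tz) / Z
    v_t = (-f * Ty + y * Tz) / Z
    u_r = wx * x * y / f - wy * (f + x**2 / f) + wz * y
    v_r = wx * (f + y**2 / f) - wy * x * y / f - wz * x
    return u_t + u_r, v_t + v_r

# Pure forward translation: the flow vanishes at the heading point (image center).
print(image_flow(0.0, 0.0, Z=5.0, T=(0, 0, 1), omega=(0, 0, 0)))      # (0.0, 0.0)

# A small eye rotation about the vertical axis makes the flow at that point non-zero,
# so the center of flow is displaced away from the heading; doubling the rotation
# (or halving the translation speed) doubles the rotational term.
print(image_flow(0.0, 0.0, Z=5.0, T=(0, 0, 1), omega=(0, 0.02, 0)))   # (-0.02, 0.0)
```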


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('flow', 0.612), ('heading', 0.505), ('observer', 0.184), ('lappe', 0.166), ('ruhr', 0.166), ('zoology', 0.166), ('motion', 0.158), ('translation', 0.154), ('bochum', 0.144), ('rotation', 0.128), ('image', 0.121), ('neurobiology', 0.11), ('receptive', 0.085), ('eye', 0.083), ('preferred', 0.077), ('helmholtz', 0.072), ('radially', 0.072), ('berg', 0.072), ('markus', 0.072), ('germany', 0.069), ('albert', 0.066), ('analysed', 0.066), ('refined', 0.066), ('detectors', 0.066), ('den', 0.061), ('layout', 0.061), ('ally', 0.058), ('netherlands', 0.058), ('field', 0.055), ('circular', 0.055), ('widespread', 0.055), ('expands', 0.052), ('detector', 0.052), ('creates', 0.05), ('sensors', 0.05), ('extracts', 0.048), ('merely', 0.045), ('expanding', 0.045), ('respond', 0.044), ('contrary', 0.044), ('direction', 0.042), ('ing', 0.042), ('scene', 0.04), ('considerably', 0.04), ('receive', 0.039), ('centered', 0.037), ('directed', 0.035), ('caused', 0.034), ('recover', 0.034), ('plausible', 0.033), ('perception', 0.033), ('complicated', 0.032), ('van', 0.032), ('causes', 0.032), ('world', 0.029), ('believe', 0.029), ('yet', 0.028), ('depending', 0.028), ('neurons', 0.028), ('look', 0.027), ('human', 0.026), ('models', 0.026), ('predict', 0.024), ('center', 0.024), ('ratio', 0.024), ('visual', 0.023), ('exact', 0.023), ('institute', 0.023), ('structure', 0.022), ('patterns', 0.022), ('suggest', 0.021), ('moreover', 0.021), ('find', 0.02), ('around', 0.019), ('determine', 0.017), ('ii', 0.016), ('compare', 0.016), ('effect', 0.016), ('complex', 0.015), ('relative', 0.015), ('pattern', 0.015), ('local', 0.014), ('depends', 0.014), ('represent', 0.014), ('vectors', 0.012), ('proposed', 0.012), ('often', 0.011), ('within', 0.011), ('rather', 0.01), ('input', 0.01), ('university', 0.01), ('system', 0.009), ('large', 0.003), ('different', 0.003), ('number', 0.002), ('introduction', 0.001), ('set', 0.001), ('results', 0.001), ('abstract', 0.0)]
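For context on how the (wordName, wordTfidf) pairs above and the simValue scores below could be produced, here is a hypothetical sketch using tfidf vectors and cosine similarity; the toy abstracts corpus and the vectorizer settings are assumptions, not the actual pipeline behind this page.

```python
# Hypothetical sketch; the real pipeline behind this page is not documented here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [  # toy stand-ins for paper abstracts
    "observer translation creates image flow that expands from the heading",
    "eye movements refine the orientation selectivity of cortical simple cells",
    "receptive fields of inferior temporal cortex neurons shrink in natural scenes",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)              # one tfidf vector per paper

# Top-weighted words for the first paper, analogous to the (wordName, wordTfidf) list.
terms = vectorizer.get_feature_names_out()
weights = X[0].toarray().ravel()
print(sorted(zip(terms, weights), key=lambda t: -t[1])[:5])

# Cosine similarity of paper 0 to every paper, analogous to the simValue column.
print(cosine_similarity(X[0], X).ravel())
```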

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 158 nips-2001-Receptive field structure of flow detectors for heading perception

Author: J. A. Beintema, M. Lappe, A. V. van den Berg

Abstract: Observer translation relative to the world creates image flow that expands from the observer's direction of translation (heading), from which the observer can recover heading direction. Yet, the image flow is often more complex, depending on rotation of the eye, scene layout and translation velocity. A number of models [1-4] have been proposed for how the human visual system extracts heading from flow in a neurophysiologically plausible way. These models represent heading by a set of neurons that respond to large image flow patterns and receive input from motion sensed at different image locations. We analysed these models to determine the exact receptive field of these heading detectors. We find most models predict that, contrary to widespread belief, the contributing motion sensors have a preferred motion directed circularly rather than radially around the detector's preferred heading. Moreover, the results suggest looking for more refined structure within the circular flow, such as bi-circularity or local motion-opponency.

2 0.064346731 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

Author: Antonino Casile, Michele Rucci

Abstract: Neural activity appears to be a crucial component for shaping the receptive fields of cortical simple cells into adjacent, oriented subregions alternately receiving ON- and OFF-center excitatory geniculate inputs. It is known that the orientation selective responses of V1 neurons are refined by visual experience. After eye opening, the spatiotemporal structure of neural activity in the early stages of the visual pathway depends both on the visual environment and on how the environment is scanned. We have used computational modeling to investigate how eye movements might affect the refinement of the orientation tuning of simple cells in the presence of a Hebbian scheme of synaptic plasticity. Levels of correlation between the activity of simulated cells were examined while natural scenes were scanned so as to model sequences of saccades and fixational eye movements, such as microsaccades, tremor and ocular drift. The specific patterns of activity required for a quantitatively accurate development of simple cell receptive fields with segregated ON and OFF subregions were observed during fixational eye movements, but not in the presence of saccades or with static presentation of natural visual input. These results suggest an important role for the eye movements occurring during visual fixation in the refinement of orientation selectivity.

3 0.062333308 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

Author: Thomas P. Trappenberg, Edmund T. Rolls, Simon M. Stringer

Abstract: Inferior temporal cortex (IT) neurons have large receptive fields when a single effective object stimulus is shown against a blank background, but have much smaller receptive fields when the object is placed in a natural scene. Thus, translation invariant object recognition is reduced in natural scenes, and this may help object selection. We describe a model which accounts for this by competition within an attractor in which the neurons are tuned to different objects in the scene, and the fovea has a higher cortical magnification factor than the peripheral visual field. Furthermore, we show that top-down object bias can increase the receptive field size, facilitating object search in complex visual scenes, and providing a model of object-based attention. The model leads to the prediction that introduction of a second object into a scene with blank background will reduce the receptive field size to values that depend on the closeness of the second object to the target stimulus. We suggest that mechanisms of this type enable the output of IT to be primarily about one object, so that the areas that receive from IT can select the object as a potential target for action.

4 0.051987361 193 nips-2001-Unsupervised Learning of Human Motion Models

Author: Yang Song, Luis Goncalves, Pietro Perona

Abstract: This paper presents an unsupervised learning algorithm that can derive the probabilistic dependence structure of parts of an object (a moving human body in our examples) automatically from unlabeled data. The distinguished part of this work is that it is based on unlabeled data, i.e., the training features include both useful foreground parts and background clutter, and the correspondence between the parts and detected features is unknown. We use decomposable triangulated graphs to depict the probabilistic independence of parts, but the unsupervised technique is not limited to this type of graph. In the new approach, labeling of the data (part assignments) is taken as hidden variables and the EM algorithm is applied. A greedy algorithm is developed to select parts and to search for the optimal structure based on the differential entropy of these variables. The success of our algorithm is demonstrated by applying it to generate models of human motion automatically from unlabeled real image sequences.

5 0.047723882 37 nips-2001-Associative memory in realistic neuronal networks

Author: Peter E. Latham

Abstract: Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractors is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks.

6 0.037302524 75 nips-2001-Fast, Large-Scale Transformation-Invariant Clustering

7 0.034964811 10 nips-2001-A Hierarchical Model of Complex Cells in Visual Cortex for the Binocular Perception of Motion-in-Depth

8 0.03425394 108 nips-2001-Learning Body Pose via Specialized Maps

9 0.032836001 145 nips-2001-Perceptual Metamers in Stereoscopic Vision

10 0.032229494 19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network

11 0.030441474 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

12 0.02947423 54 nips-2001-Contextual Modulation of Target Saliency

13 0.026516944 111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation

14 0.025877576 161 nips-2001-Reinforcement Learning with Long Short-Term Memory

15 0.025381973 148 nips-2001-Predictive Representations of State

16 0.024975007 189 nips-2001-The g Factor: Relating Distributions on Features to Distributions on Images

17 0.02466237 164 nips-2001-Sampling Techniques for Kernel Methods

18 0.024510622 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

19 0.024415633 84 nips-2001-Global Coordination of Local Linear Models

20 0.023986571 46 nips-2001-Categorization by Learning and Combining Object Parts


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.05), (1, -0.057), (2, -0.04), (3, -0.009), (4, -0.011), (5, 0.002), (6, -0.087), (7, 0.015), (8, 0.036), (9, 0.013), (10, 0.003), (11, 0.038), (12, 0.067), (13, -0.014), (14, 0.001), (15, 0.013), (16, -0.021), (17, 0.03), (18, -0.036), (19, 0.004), (20, -0.01), (21, 0.042), (22, 0.011), (23, -0.068), (24, 0.017), (25, 0.002), (26, -0.043), (27, -0.015), (28, -0.066), (29, -0.044), (30, 0.089), (31, 0.077), (32, -0.037), (33, -0.051), (34, -0.034), (35, -0.004), (36, -0.002), (37, -0.055), (38, 0.032), (39, -0.063), (40, -0.062), (41, -0.016), (42, 0.14), (43, -0.062), (44, 0.031), (45, -0.162), (46, -0.076), (47, -0.022), (48, 0.026), (49, 0.083)]
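A hypothetical sketch of an LSI-style computation that yields a dense topic-weight vector like the one above: truncated SVD applied to tfidf vectors. The toy corpus and the number of components are illustrative assumptions, not the configuration used for this page.

```python
# Hypothetical sketch; corpus and component count are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "observer translation creates image flow that expands from the heading",
    "eye movements refine the orientation selectivity of cortical simple cells",
    "attractor networks provide a substrate for associative memory",
    "unsupervised learning of human motion models from unlabeled image sequences",
]

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
lsi = TruncatedSVD(n_components=3, random_state=0)   # LSI = truncated SVD of tfidf
Z = lsi.fit_transform(X)                             # one topic-weight vector per paper

print(Z[0])                                          # cf. the topicWeight vector above
print(cosine_similarity(Z[:1], Z).ravel())           # LSI-space similarities (cf. simValue)
```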

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97811848 158 nips-2001-Receptive field structure of flow detectors for heading perception

Author: J. A. Beintema, M. Lappe, A. V. van den Berg

Abstract: Observer translation relative to the world creates image flow that expands from the observer's direction of translation (heading), from which the observer can recover heading direction. Yet, the image flow is often more complex, depending on rotation of the eye, scene layout and translation velocity. A number of models [1-4] have been proposed for how the human visual system extracts heading from flow in a neurophysiologically plausible way. These models represent heading by a set of neurons that respond to large image flow patterns and receive input from motion sensed at different image locations. We analysed these models to determine the exact receptive field of these heading detectors. We find most models predict that, contrary to widespread belief, the contributing motion sensors have a preferred motion directed circularly rather than radially around the detector's preferred heading. Moreover, the results suggest looking for more refined structure within the circular flow, such as bi-circularity or local motion-opponency.

2 0.46188259 108 nips-2001-Learning Body Pose via Specialized Maps

Author: Rómer Rosales, Stan Sclaroff

Abstract: A nonlinear supervised learning model, the Specialized Mappings Architecture (SMA), is described and applied to the estimation of human body pose from monocular images. The SMA consists of several specialized forward mapping functions and an inverse mapping function. Each specialized function maps certain domains of the input space (image features) onto the output space (body pose parameters). The key algorithmic problems faced are those of learning the specialized domains and mapping functions in an optimal way, as well as performing inference given inputs and knowledge of the inverse function. Solutions to these problems employ the EM algorithm and alternating choices of conditional independence assumptions. Performance of the approach is evaluated with synthetic and real video sequences of human motion.

3 0.44800916 193 nips-2001-Unsupervised Learning of Human Motion Models

Author: Yang Song, Luis Goncalves, Pietro Perona

Abstract: This paper presents an unsupervised learning algorithm that can derive the probabilistic dependence structure of parts of an object (a moving human body in our examples) automatically from unlabeled data. The distinguished part of this work is that it is based on unlabeled data, i.e., the training features include both useful foreground parts and background clutter, and the correspondence between the parts and detected features is unknown. We use decomposable triangulated graphs to depict the probabilistic independence of parts, but the unsupervised technique is not limited to this type of graph. In the new approach, labeling of the data (part assignments) is taken as hidden variables and the EM algorithm is applied. A greedy algorithm is developed to select parts and to search for the optimal structure based on the differential entropy of these variables. The success of our algorithm is demonstrated by applying it to generate models of human motion automatically from unlabeled real image sequences.

4 0.43019724 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

Author: Antonino Casile, Michele Rucci

Abstract: Neural activity appears to be a crucial component for shaping the receptive fields of cortical simple cells into adjacent, oriented subregions alternately receiving ON- and OFF-center excitatory geniculate inputs. It is known that the orientation selective responses of V1 neurons are refined by visual experience. After eye opening, the spatiotemporal structure of neural activity in the early stages of the visual pathway depends both on the visual environment and on how the environment is scanned. We have used computational modeling to investigate how eye movements might affect the refinement of the orientation tuning of simple cells in the presence of a Hebbian scheme of synaptic plasticity. Levels of correlation between the activity of simulated cells were examined while natural scenes were scanned so as to model sequences of saccades and fixational eye movements, such as microsaccades, tremor and ocular drift. The specific patterns of activity required for a quantitatively accurate development of simple cell receptive fields with segregated ON and OFF subregions were observed during fixational eye movements, but not in the presence of saccades or with static presentation of natural visual input. These results suggest an important role for the eye movements occurring during visual fixation in the refinement of orientation selectivity.

5 0.42210251 10 nips-2001-A Hierarchical Model of Complex Cells in Visual Cortex for the Binocular Perception of Motion-in-Depth

Author: Silvio P. Sabatini, Fabio Solari, Giulia Andreani, Chiara Bartolozzi, Giacomo M. Bisio

Abstract: A cortical model for motion-in-depth selectivity of complex cells in the visual cortex is proposed. The model is based on a time extension of the phase-based techniques for disparity estimation. We consider the computation of the total temporal derivative of the time-varying disparity through the combination of the responses of disparity energy units. To take into account the physiological plausibility, the model is based on the combinations of binocular cells characterized by different ocular dominance indices. The resulting cortical units of the model show a sharp selectivity for motion-in-depth that has been compared with that reported in the literature for real cortical cells.

6 0.34533015 145 nips-2001-Perceptual Metamers in Stereoscopic Vision

7 0.32641307 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

8 0.32062531 151 nips-2001-Probabilistic principles in unsupervised learning of visual structure: human data and a model

9 0.30540872 19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network

10 0.2929728 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction

11 0.28527614 75 nips-2001-Fast, Large-Scale Transformation-Invariant Clustering

12 0.27952707 93 nips-2001-Incremental A*

13 0.26729065 96 nips-2001-Information-Geometric Decomposition in Spike Analysis

14 0.25895932 91 nips-2001-Improvisation and Learning

15 0.24684595 37 nips-2001-Associative memory in realistic neuronal networks

16 0.24398576 182 nips-2001-The Fidelity of Local Ordinal Encoding

17 0.22758214 54 nips-2001-Contextual Modulation of Target Saliency

18 0.21436614 153 nips-2001-Product Analysis: Learning to Model Observations as Products of Hidden Variables

19 0.20902464 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement

20 0.20773937 176 nips-2001-Stochastic Mixed-Signal VLSI Architecture for High-Dimensional Kernel Machines


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(19, 0.038), (27, 0.046), (30, 0.061), (43, 0.562), (59, 0.014), (72, 0.027), (79, 0.015), (91, 0.091)]
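Similarly, a hypothetical sketch of an LDA-style topic model whose per-document topic distribution resembles the (topicId, topicWeight) pairs above; the corpus, topic count and reporting threshold are illustrative assumptions.

```python
# Hypothetical sketch; corpus, topic count and threshold are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "observer translation creates image flow that expands from the heading",
    "eye movements refine the orientation selectivity of cortical simple cells",
    "attractor networks provide a substrate for associative memory",
    "unsupervised learning of human motion models from unlabeled image sequences",
]

counts = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=4, random_state=0)
theta = lda.fit_transform(counts)                    # per-paper topic distributions

# Keep only topics with non-negligible weight, as in the (topicId, topicWeight) list.
print([(k, round(w, 3)) for k, w in enumerate(theta[0]) if w > 0.05])
```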

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.85642648 158 nips-2001-Receptive field structure of flow detectors for heading perception

Author: J. A. Beintema, M. Lappe, A. V. van den Berg

Abstract: Observer translation relative to the world creates image flow that expands from the observer's direction of translation (heading), from which the observer can recover heading direction. Yet, the image flow is often more complex, depending on rotation of the eye, scene layout and translation velocity. A number of models [1-4] have been proposed for how the human visual system extracts heading from flow in a neurophysiologically plausible way. These models represent heading by a set of neurons that respond to large image flow patterns and receive input from motion sensed at different image locations. We analysed these models to determine the exact receptive field of these heading detectors. We find most models predict that, contrary to widespread belief, the contributing motion sensors have a preferred motion directed circularly rather than radially around the detector's preferred heading. Moreover, the results suggest looking for more refined structure within the circular flow, such as bi-circularity or local motion-opponency.

2 0.4523485 188 nips-2001-The Unified Propagation and Scaling Algorithm

Author: Yee W. Teh, Max Welling

Abstract: In this paper we will show that a restricted class of constrained minimum divergence problems, named generalized inference problems, can be solved by approximating the KL divergence with a Bethe free energy. The algorithm we derive is closely related to both loopy belief propagation and iterative scaling. This unified propagation and scaling algorithm reduces to a convergent alternative to loopy belief propagation when no constraints are present. Experiments show the viability of our algorithm.

3 0.44339392 79 nips-2001-Gaussian Process Regression with Mismatched Models

Author: Peter Sollich

Abstract: Learning curves for Gaussian process regression are well understood when the 'student' model happens to match the 'teacher' (true data generation process). I derive approximations to the learning curves for the more generic case of mismatched models, and find very rich behaviour: For large input space dimensionality, where the results become exact, there are universal (student-independent) plateaux in the learning curve, with transitions in between that can exhibit arbitrarily many over-fitting maxima; over-fitting can occur even if the student estimates the teacher noise level correctly. In lower dimensions, plateaux also appear, and the learning curve remains dependent on the mismatch between student and teacher even in the asymptotic limit of a large number of training examples. Learning with excessively strong smoothness assumptions can be particularly dangerous: For example, a student with a standard radial basis function covariance function will learn a rougher teacher function only logarithmically slowly. All predictions are confirmed by simulations. 1

4 0.23603672 192 nips-2001-Tree-based reparameterization for approximate inference on loopy graphs

Author: Martin J. Wainwright, Tommi Jaakkola, Alan S. Willsky

Abstract: We develop a tree-based reparameterization framework that provides a new conceptual view of a large class of iterative algorithms for computing approximate marginals in graphs with cycles. It includes belief propagation (BP), which can be reformulated as a very local form of reparameterization. More generally, we consider algorithms that perform exact computations over spanning trees of the full graph. On the practical side, we find that such tree reparameterization (TRP) algorithms have convergence properties superior to BP. The reparameterization perspective also provides a number of theoretical insights into approximate inference, including a new characterization of fixed points; and an invariance intrinsic to TRP/BP. These two properties enable us to analyze and bound the error between the TRP/BP approximations and the actual marginals. While our results arise naturally from the TRP perspective, most of them apply in an algorithm-independent manner to any local minimum of the Bethe free energy. Our results also have natural extensions to more structured approximations [e.g., 1, 2].

5 0.20919222 149 nips-2001-Probabilistic Abstraction Hierarchies

Author: Eran Segal, Daphne Koller, Dirk Ormoneit

Abstract: Many domains are naturally organized in an abstraction hierarchy or taxonomy, where the instances in “nearby” classes in the taxonomy are similar. In this paper, we provide a general probabilistic framework for clustering data into a set of classes organized as a taxonomy, where each class is associated with a probabilistic model from which the data was generated. The clustering algorithm simultaneously optimizes three things: the assignment of data instances to clusters, the models associated with the clusters, and the structure of the abstraction hierarchy. A unique feature of our approach is that it utilizes global optimization algorithms for both of the last two steps, reducing the sensitivity to noise and the propensity to local maxima that are characteristic of algorithms such as hierarchical agglomerative clustering that only take local steps. We provide a theoretical analysis for our algorithm, showing that it converges to a local maximum of the joint likelihood of model and data. We present experimental results on synthetic data, and on real data in the domains of gene expression and text.

6 0.20842853 100 nips-2001-Iterative Double Clustering for Unsupervised and Semi-Supervised Learning

7 0.20827188 102 nips-2001-KLD-Sampling: Adaptive Particle Filters

8 0.20793316 162 nips-2001-Relative Density Nets: A New Way to Combine Backpropagation with HMM's

9 0.20615438 123 nips-2001-Modeling Temporal Structure in Classical Conditioning

10 0.20599756 182 nips-2001-The Fidelity of Local Ordinal Encoding

11 0.20555261 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

12 0.20522875 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

13 0.20516485 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

14 0.2048935 39 nips-2001-Audio-Visual Sound Separation Via Hidden Markov Models

15 0.20455733 169 nips-2001-Small-World Phenomena and the Dynamics of Information

16 0.20449576 46 nips-2001-Categorization by Learning and Combining Object Parts

17 0.20424762 161 nips-2001-Reinforcement Learning with Long Short-Term Memory

18 0.20297652 68 nips-2001-Entropy and Inference, Revisited

19 0.20280373 1 nips-2001-(Not) Bounding the True Error

20 0.20259777 3 nips-2001-ACh, Uncertainty, and Cortical Inference