
109 nips-2005-Learning Cue-Invariant Visual Responses


Source: pdf

Author: Jarmo Hurri

Abstract: Multiple visual cues are used by the visual system to analyze a scene; achromatic cues include luminance, texture, contrast and motion. Single-cell recordings have shown that the mammalian visual cortex contains neurons that respond similarly to scene structure (e.g., orientation of a boundary), regardless of the cue type conveying this information. This paper shows that cue-invariant response properties of simple- and complex-type cells can be learned from natural image data in an unsupervised manner. In order to do this, we also extend a previous conceptual model of cue invariance so that it can be applied to model simple- and complex-cell responses. Our results relate cue-invariant response properties to natural image statistics, thereby showing how the statistical modeling approach can be used to model processing beyond the elemental response properties of visual neurons. This work also demonstrates how to learn, from natural image data, more sophisticated feature detectors than those based on changes in mean luminance, thereby paving the way for new data-driven approaches to image processing and computer vision.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Box 68, FIN-00014 University of Helsinki, Finland. Abstract: Multiple visual cues are used by the visual system to analyze a scene; achromatic cues include luminance, texture, contrast and motion. [sent-3, score-0.479]

2 Single-cell recordings have shown that the mammalian visual cortex contains neurons that respond similarly to scene structure (e. [sent-4, score-0.264]

3 , orientation of a boundary), regardless of the cue type conveying this information. [sent-6, score-0.392]

4 This paper shows that cue-invariant response properties of simple- and complex-type cells can be learned from natural image data in an unsupervised manner. [sent-7, score-0.787]

5 In order to do this, we also extend a previous conceptual model of cue invariance so that it can be applied to model simple- and complex-cell responses. [sent-8, score-0.293]

6 Our results relate cue-invariant response properties to natural image statistics, thereby showing how the statistical modeling approach can be used to model processing beyond the elemental response properties of visual neurons. [sent-9, score-0.871]

7 This work also demonstrates how to learn, from natural image data, more sophisticated feature detectors than those based on changes in mean luminance, thereby paving the way for new data-driven approaches to image processing and computer vision. [sent-10, score-0.411]

8 Spatiotemporal variations in the mean luminance level – which are also called first-order cues – are computationally the simplest of these; the name ’first-order’ comes from the idea that a single linear filtering operation can detect these cues. [sent-12, score-0.324]

9 Other types of visual cues include contrast, texture and motion; in general, cues related to variations in other characteristics than mean luminance are called higher-order (also called non-Fourier) cues; the analysis of these is thought to involve more than one level of processing/filtering. [sent-13, score-0.612]
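The first-order versus higher-order distinction described here can be illustrated with a small numeric sketch (a 1-D signal and simple averaging in place of filtering; all signals and parameters are illustrative, not from the paper): a single linear stage separates a luminance step but not a contrast step, while a second rectify-then-average level exposes the contrast cue.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
mid = n // 2
carrier = rng.standard_normal(n)                    # high-frequency texture carrier

# First-order cue: a step in mean luminance at the midpoint.
luminance_edge = np.where(np.arange(n) < mid, 0.0, 1.0)

# Higher-order cue: a step in contrast (carrier amplitude), zero mean on both sides.
contrast_edge = carrier * np.where(np.arange(n) < mid, 0.2, 1.0)

# A single linear operation (here: averaging each half) separates the
# luminance-defined halves ...
print(luminance_edge[mid:].mean() - luminance_edge[:mid].mean())   # 1.0

# ... but not the contrast-defined halves: both halves have near-zero mean.
print(abs(contrast_edge[mid:].mean() - contrast_edge[:mid].mean()) < 0.3)  # True

# A second processing level (rectify, then average) detects the contrast cue.
rectified = np.abs(contrast_edge)
print(rectified[mid:].mean() > 2 * rectified[:mid].mean())         # True
```

This mirrors the standard filter-rectify-filter view of higher-order cue detection: more than one level of processing is needed.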

10 Single-cell recordings have shown that the mammalian visual cortex contains neurons that are selective to both first- and higher-order cues. [sent-14, score-0.271]

11 For example, a neuron may exhibit similar selectivity to the orientation of a boundary, regardless of whether the boundary is a result of spatial changes in mean luminance or contrast [1]. [sent-15, score-0.518]

12 Monkey cortical areas V1 and V2, and cat cortical areas 17 and 18, contain both simple- (orientation-, frequency- and phase-selective) and complex-type (orientation- and frequency-selective, phase-invariant) cells that exhibit such cue-invariant response properties [2, 1, 3, 4, 5]. [sent-16, score-0.532]

13 Recent computational modeling of the visual system has produced fundamental results relating stimulus statistics to first-order response properties of simple and complex cells (see, e. [sent-18, score-0.689]

14 The linear stream responds to first-order cues, while the nonlinear stream responds to higher-order cues. [sent-22, score-0.262]

15 The model consists of simple cells, complex cells and a feedback path leading from a population of high-frequency first-order complex cells to low-frequency cue-invariant simple cells. [sent-26, score-1.114]

16 In a cue-invariant simple cell, the feedback is filtered with a filter whose spatial characteristics are similar to those of the feedforward filter of the cell. [sent-27, score-0.88]

17 Note that while our model results in cue-invariant response properties, it is not a model of cue integration, because in the sum the two paths can cancel out. [sent-29, score-0.49]

18 In this instance of the model, the high-frequency cells prefer horizontal stimuli, while the low-frequency cue-invariant cells prefer vertical stimuli; in other instances, this relationship can be different. [sent-31, score-0.427]

19 statistics-based framework for cue-invariant responses of both simple and complex cells. [sent-34, score-0.284]

20 In order to achieve this, we also extend the two-stream model of cue-invariant responses (Figure 1A) to account for cue-invariant responses at both simple- and complex-cell levels. [sent-35, score-0.398]

21 In Section 2 we describe our version of the two-stream model of cue-invariant responses, which is based on feedback from complex cells to simple cells. [sent-37, score-0.792]

22 In Section 3 we formulate an unsupervised learning rule for learning these feedback connections. [sent-38, score-0.568]

23 We apply our learning rule to natural image data, and show that this results in the emergence of connections that give rise to cue-invariant responses at both simple- and complex-cell levels. [sent-39, score-0.482]

24 (Section 2: A model of cue-invariant responses.) The most prominent model of cue-invariant responses introduced in previous research is the two-stream model (see, e. [sent-41, score-0.45]

25 In this research we have extended this model so that it can be applied directly to model the cue-invariant responses of simple and complex cells. [sent-44, score-0.336]

26 (a) The feedforward filter (Gabor function [10]) of a high-frequency first-order simple cell; the filter has size 19 × 19 pixels, which is the size of the image data in our experiments. [sent-47, score-0.482]

27 (b) The feedforward filter of another first-order simple cell. [sent-48, score-0.348]

28 This feedforward filter is otherwise similar to the one in (a), except that there is a phase difference of π/2 between the two; together, the feedforward filters in (a) and (b) are used to implement an energy model of a complex cell. [sent-49, score-0.834]
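The quadrature pair in (a) and (b) and the energy model built from it can be sketched as follows (the Gabor parameters here are illustrative choices, not the ones used in the paper):

```python
import numpy as np

def gabor(size, freq, theta, phase, sigma):
    """2-D Gabor function: a sinusoidal grating under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate across the grating
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr + phase)

size, freq, theta, sigma = 19, 0.25, 0.0, 3.0
g_even = gabor(size, freq, theta, 0.0, sigma)        # filter as in panel (a)
g_odd = gabor(size, freq, theta, np.pi / 2, sigma)   # pi/2 phase shift, panel (b)

def complex_cell_energy(patch):
    """Energy model: sum of squared responses of the quadrature pair."""
    return np.dot(patch.ravel(), g_even.ravel())**2 + \
           np.dot(patch.ravel(), g_odd.ravel())**2

# The energy is approximately invariant to the phase of a matching grating:
patch0 = gabor(size, freq, theta, 0.3, 100.0)   # large sigma: nearly pure grating
patch1 = gabor(size, freq, theta, 1.7, 100.0)   # same grating, different phase
e0, e1 = complex_cell_energy(patch0), complex_cell_energy(patch1)
print(abs(e0 - e1) / max(e0, e1) < 0.05)        # True: phase-invariant response
```

The individual dot products (simple-cell responses) do vary with phase; only their summed squares are phase-invariant, which is the complex-cell property items 12 and 28 refer to.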

29 (c) A lattice of size 7 × 7 of high-frequency filters of the type shown in (a); these filters are otherwise identical, except that their spatial locations vary. [sent-50, score-0.209]

30 Together, the lattices shown in (c) and (d) are used to implement a 7 × 7 lattice of energy-model complex cells with different spatial positions; the output of this lattice is the feedback relayed to the low-frequency cue-invariant cells. [sent-52, score-1.176]

31 (g) A feedback filter of size 7 × 7 for the simple cell whose feedforward filter is shown in (e); in order to avoid confusion between feedforward filters and feedback filters, the latter are visualized as lattices of slightly rounded rectangles. [sent-54, score-1.847]

32 (h) A feedback filter for the simple cell whose feedforward filter is shown in (f). [sent-55, score-0.948]

33 The feedback filters in (g) and (h) have been obtained by applying the learning algorithm introduced in this paper (see Section 3 for details). [sent-56, score-0.492]

34 models of simple cells and energy models of complex cells [10], and a feedback path from the complex-cell level to the simple-cell level. [sent-57, score-1.04]

35 This feedback path introduces a second, nonlinear input stream to cue-invariant cells, and gives rise to cue-invariant responses in these cells. [sent-58, score-0.87]

36 To avoid confusion between the two types of filters – one type operating on the input image and the other on the feedback – we will use the term ’feedforward filter’ for the former and the term ’feedback filter’ for the latter. [sent-59, score-0.681]

37 Figure 2 shows the feedforward and feedback filters of a concrete instance (implementation) of our model. [sent-60, score-0.811]

38 Gabor functions [10] are used to model simple-cell feedforward filters. [sent-61, score-0.345]

39 Figure 3 illustrates the design of higher-order gratings, and shows how the complex-cell lattice of the model transforms higher-order cues into feedback activity patterns that resemble corresponding first-order cues. [sent-62, score-0.895]

40 These measurements show that our model possesses the fundamental cue-invariant response properties: in our model, a cue-invariant neuron has similar selectivity to the orientation, frequency and phase of a grating stimulus, regardless of cue type (see figure caption for details). [sent-64, score-0.923]

41 We now proceed to show how the feedback filters of our model (Figures 2g and h) can be learned from natural image data. [sent-65, score-0.799]

42 (Section 3: Learning feedback connections in an unsupervised manner. Subsection: The objective function and the learning algorithm.) In this section we introduce an unsupervised algorithm for learning feedback connection weights from complex cells to simple cells. [sent-67, score-1.457]

43 Design of grating stimuli: Each row illustrates how, for a particular cue, a grating stimulus is composed of sinusoidal constituents; the equation of each stimulus (B, G, K) as a function of the constituents is shown under the stimulus. [sent-69, score-0.48]

44 Note that the orientation, frequency and phase of each grating is determined by the first sinusoidal constituent (A, D, I); here these parameters are the same for all stimuli. [sent-70, score-0.31]

45 Feedback activity: The rightmost column shows the feedback activity – that is, response of the complex-cell lattice (see Figures 2c and d) – for the three types of stimuli. [sent-72, score-0.927]

46 (C) There is no response to the luminance stimuli, since the orientation and frequency of the stimulus are different from those of the high-frequency feedforward filters. [sent-73, score-0.953]

47 (H, L) For other cue types, the lattice detects the locations of energy of the vertical high-frequency constituent (E, J), thereby resulting in feedback activity that has a spatial pattern similar to a corresponding luminance pattern (A). [sent-74, score-1.27]

48 Thus, the complex-cell lattice transforms higher-order cues into activity patterns that resemble first-order cues, and these can subsequently produce a strong response in a feedback filter (compare (H) and (L) with the feedback filter in Figure 2g). [sent-75, score-1.649]
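The transformation described here can be sketched in one dimension, with local window energies standing in for the energy-model complex cells (the signal and lattice geometry are illustrative):

```python
import numpy as np

# 1-D sketch of one image row: a high-frequency carrier whose contrast is
# modulated by a low-frequency envelope (a contrast-defined grating).
n = 196
x = np.arange(n)
carrier = np.sin(2 * np.pi * x / 4)                 # high-frequency carrier
envelope = 0.5 * (1 + np.sin(2 * np.pi * x / 98))   # low-frequency modulation
stimulus = envelope * carrier                       # zero mean: no first-order cue

# "Complex cell" at each of 7 lattice positions: local carrier energy,
# approximated by the mean squared signal in a window around the position.
positions = np.linspace(14, n - 14, 7).astype(int)

def local_energy(s, p, half=10):
    return np.mean(s[p - half:p + half]**2)

feedback = np.array([local_energy(stimulus, p) for p in positions])

# The lattice output follows the low-frequency envelope: the higher-order
# cue has been turned into a first-order-like activity pattern.
print(np.corrcoef(feedback, envelope[positions]**2)[0, 1] > 0.9)   # True
```

A feedback filter tuned to the low-frequency pattern can then respond to this activity, even though no low-frequency luminance structure is present in the stimulus itself.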

49 was shown in Figure 4, these feedback filters give rise to cue-invariant response properties. [sent-77, score-0.734]

50 The intuitive idea behind the learning algorithm is the following: in natural images, higher-order cues tend to coincide with first-order cues. [sent-78, score-0.345]

51 For example, when two different textures are adjacent, there is often also a luminance border between them; two examples of this phenomenon are shown in Figure 5. [sent-79, score-0.214]

52 Therefore, cue-invariant response properties could be a result of learning in which large responses in the feedforward channel (first-order responses) have become associated with large responses in the feedback channel (higher-order responses). [sent-80, score-1.437]

53 Let the vector c(n) = [c1(n) c2(n) ··· cK(n)]ᵀ denote the responses of a set of K first-order high-frequency complex cells for the input image with index n. [sent-85, score-0.565]

54 In our case the number of these complex cells is K = 7 × 7 = 49 (see Figures 2c and d), so the dimension of this vector is 49. [sent-86, score-0.245]

55 This vectorization can be done in a standard manner [15] by scanning values from the 2D lattice column-wise into a vector; when the learned feedback filter is visualized, the filter is “unvectorized” with a reverse procedure. [sent-87, score-0.71]
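The column-wise scan and the reverse "unvectorization" amount to Fortran-order reshapes; a minimal sketch (the arange lattice is a stand-in for real complex-cell responses):

```python
import numpy as np

lattice = np.arange(49).reshape(7, 7)   # 7 x 7 lattice of complex-cell responses

# Vectorize by scanning the 2-D lattice column-wise into a 49-vector ...
c = lattice.flatten(order="F")          # Fortran order = column-wise scan
print(c[:8].tolist())                   # [0, 7, 14, 21, 28, 35, 42, 1]

# ... and "unvectorize" a learned feedback filter with the reverse procedure.
w = c.astype(float)                     # stand-in for a learned 49-dim filter
w_image = w.reshape(7, 7, order="F")    # back to the 7 x 7 lattice for display
print(np.array_equal(w_image, lattice)) # True: the round trip is lossless
```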

56 Let s(n) denote the response of a single low-frequency simple cell for the input image with index n. [sent-88, score-0.721]

[Summary items 57–60 were plot residue from Figure 4: axis labels and tick values of panels A–L, showing tuning curves over cue orientation, cue frequency, cue phase and carrier frequency for a cue-invariant simple cell (with feedback), a cue-invariant complex cell (with feedback), and a standard simple cell (without feedback).]

61 3 carrier frequency Figure 4: Our model fulfills the fundamental properties of cue-invariant responses. [sent-107, score-0.279]

62 The plots show tuning curves for a cue-invariant simple cell – corresponding to the filters of Figures 2e and g – and complex cell of our new model (two leftmost columns), and a standard simple-cell model without feedback processing (rightmost column). [sent-108, score-0.858]

63 Solid lines show responses to luminance-defined gratings (Figure 3B), dotted lines show responses to texture-defined gratings (Figure 3G), and dashed lines show responses to contrast-defined gratings (Figure 3K). [sent-109, score-0.897]

64 (A–I) In our model, a neuron has similar selectivity to the orientation, frequency and phase of a grating stimulus, regardless of cue type; in contrast, a standard simple-cell model, without the feedback path, is only selective to the parameters of a luminance-defined grating. [sent-110, score-1.064]

65 The preferred frequency is lower for higher-order gratings than for first-order gratings; similar observations have been made in single-cell recordings [4]. [sent-111, score-0.225]

66 (J–M) In our model, the neurons are also selective to the orientation and frequency of the carrier (Figure 3J) of a contrast-defined grating (Figure 3K), thus conforming with single-cell recordings [1]. [sent-112, score-0.528]

67 Note that these measurements were made with the feedback filters learned by our unsupervised algorithm (see Section 3); thus, these measurements confirm that learning results in cue-invariant response properties. [sent-113, score-0.909]

68 The image in (A) contains a near-vertical luminance boundary across the image; the boundary in (B) is near-horizontal. [sent-115, score-0.32]

69 In both (A) and (B), texture is different on different sides of the luminance border. [sent-116, score-0.244]

70 ) B C D E F A G H I J Figure 6: (A-D, F-I) Feedback filters (top row) learned from natural image data by using our unsupervised learning algorithm; the bottom row shows the corresponding feedforward filters. [sent-118, score-0.676]

71 For a quantitative evaluation of the cue-invariant response properties resulting from the learned filters (A) and (B), see Figure 4. [sent-119, score-0.336]

72 (E, J) The result of a control experiment, in which Gaussian white noise was used as input data; (J) shows the feedforward filter used in this control experiment. [sent-120, score-0.319]

73 frequency simple cell for the input image with index n. [sent-121, score-0.339]

74 In our learning algorithm all the feedforward filters are fixed and only a feedback filter is learned; this means that c(n) and s(n) can be computed for all n (all images) prior to applying the learning algorithm. [sent-122, score-0.811]

75 Let us denote the K-dimensional feedback filter with w; this filter is learned by our algorithm. [sent-123, score-0.546]

76 Let b(n) = wᵀc(n); that is, b(n) is the signal obtained when the feedback activity from the complex-cell lattice is filtered with the feedback filter; the overall activity of a cue-invariant simple cell is then s(n) + b(n). [sent-124, score-1.48]

77 To keep the output of the feedback filter b(n) bounded, we enforce a unit energy constraint on b(n), leading to the constraint h(w) = E{b²(n)} = wᵀE{c(n)c(n)ᵀ}w = wᵀCw = 1, (2) where C = E{c(n)c(n)ᵀ} is positive-semidefinite and can be computed prior to learning. [sent-126, score-0.59]
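A sketch of the constraint and a learning run on synthetic stand-in data. The exact objective function is not given in this excerpt; following the description in the conclusions (maximizing the correlation of the energies of the feedforward and filtered feedback signals), the sketch assumes projected gradient ascent on the empirical mean of s²(n)b²(n) subject to wᵀCw = 1. The data model (w_star, sign-scrambled s) is an invented illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 5000, 49

# Stand-in data: feedback vectors c(n) and simple-cell responses s(n)
# whose energies (but not signs) are coupled to one feedback direction.
c = rng.standard_normal((N, K))
w_star = rng.standard_normal(K)                      # hidden coupling direction
s = (c @ w_star) * rng.choice([-1.0, 1.0], size=N)   # energy-coupled, sign-scrambled

C = (c.T @ c) / N                                    # estimate of E{c(n)c(n)^T}

def normalize(w):
    """Project onto the unit-energy constraint set w^T C w = 1 (Eq. 2)."""
    return w / np.sqrt(w @ C @ w)

w = normalize(rng.standard_normal(K))
for _ in range(200):
    b = c @ w                                        # b(n) = w^T c(n)
    grad = (c * (s**2 * b)[:, None]).mean(axis=0)    # proportional to grad of E{s^2 b^2}
    w = normalize(w + 0.1 * grad)                    # ascent step, then re-project

b = c @ w
print(abs(w @ C @ w - 1.0) < 1e-9)                   # True: constraint holds
print(np.corrcoef(s**2, b**2)[0, 1] > 0.5)           # True: energies correlated
```

Because all feedforward filters are fixed, s(n) and c(n) (and hence C) are computed once before the iteration, exactly as item 74 notes.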

78 2 Experiments The algorithm described above was applied to natural image data, which was sampled from a set of over 4,000 natural images [8]. [sent-144, score-0.372]

79 Simple-cell feedforward responses s(n) were computed using the filter shown in Figure 2e, and the set of high-frequency complex-cell lattice activities c(n) was computed using the filters shown in Figures 2c and d. [sent-147, score-0.645]

80 This preprocessing tends to weaken contrast borders, implying that in our experiments, learning higher-order responses is mostly based on texture boundaries that coincide with luminance boundaries. [sent-149, score-0.548]

81 It should be noted, however, that in spite of this preprocessing step, the resulting feedback filters produce cue-invariant responses to both texture- and contrast-defined cues (see Figure 4). [sent-150, score-0.866]

82 In order to make the components of c(n) have zero mean, and focus on the structure of feedback activity patterns instead of overall constant activation, the local mean (DC component) was removed from each c(n). [sent-151, score-0.558]

83 The resulting feedback filter is shown in Figure 6A (see also Figure 2g). [sent-156, score-0.492]

84 Data sampling, preprocessing and the learning algorithm were then repeated, but this time using the feedforward filter shown in Figure 2f; the feedback filter obtained from this run is shown in Figure 6B (see also Figure 2h). [sent-157, score-0.857]

85 The measurements in Figure 4 show that these feedback filters result in cue-invariant response properties at both simple- and complex-cell levels. [sent-158, score-0.877]

86 Thus, our unsupervised algorithm learns cue-invariant response properties from natural image data. [sent-159, score-0.557]

87 This verifies that our original results do reflect the statistics of natural image data. [sent-164, score-0.227]

88 4 Conclusions This paper has shown that cue-invariant response properties can be learned from natural image data in an unsupervised manner. [sent-165, score-0.611]

89 The results were based on a model in which there is a feedback path from complex cells to simple cells, and an unsupervised algorithm which maximizes the correlation of the energies of the feedforward and filtered feedback signals. [sent-166, score-1.753]

90 The intuitive idea behind the algorithm is that in natural visual stimuli, higher-order cues tend to coincide with first-order cues. [sent-167, score-0.364]

91 Simulations were performed to validate that the learned feedback filters give rise to cue-invariant response properties. [sent-168, score-0.788]

92 First, it has been shown for the first time that cue-invariant response properties of simple and complex cells emerge from the statistical properties of natural images. [sent-170, score-0.676]

93 Second, our results suggest that cue invariance can result from feedback from complex cells to simple cells; no feedback from higher cortical areas would thus be needed. [sent-171, score-1.531]

94 Third, our research demonstrates how higher-order feature detectors can be learned from natural data in an unsupervised manner; this is an important step towards general-purpose data-driven approaches to image processing and computer vision. [sent-172, score-0.381]

95 A processing stream in mammalian visual cortex neurons for non-Fourier responses. [sent-184, score-0.294]

96 Temporal and spatial response to second-order stimuli in cat area 18. [sent-198, score-0.348]

97 Physiological responses of New World monkey V1 neurons to stimuli defined by coherent motion. [sent-207, score-0.289]

98 Independent component filters of natural images compared with simple cells in primary visual cortex. [sent-225, score-0.41]

99 A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. [sent-231, score-0.358]

100 Temporal and spatiotemporal coherence in simple-cell responses: a generative model of natural image sequences. [sent-253, score-0.305]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('feedback', 0.492), ('feedforward', 0.319), ('filters', 0.245), ('filter', 0.22), ('cue', 0.215), ('response', 0.199), ('responses', 0.186), ('luminance', 0.182), ('cells', 0.176), ('cues', 0.142), ('lattice', 0.14), ('image', 0.134), ('carrier', 0.13), ('gratings', 0.113), ('grating', 0.112), ('cell', 0.108), ('orientation', 0.108), ('stream', 0.101), ('figures', 0.096), ('natural', 0.093), ('cueinvariant', 0.087), ('hyv', 0.086), ('visual', 0.084), ('stimulus', 0.077), ('unsupervised', 0.076), ('stimuli', 0.071), ('boundary', 0.069), ('complex', 0.069), ('frequency', 0.068), ('filtered', 0.067), ('activity', 0.066), ('higherorder', 0.065), ('hurri', 0.065), ('texture', 0.062), ('constituents', 0.057), ('properties', 0.055), ('learned', 0.054), ('rinen', 0.052), ('selectivity', 0.052), ('phase', 0.051), ('energy', 0.05), ('mammalian', 0.048), ('path', 0.048), ('wt', 0.048), ('preprocessing', 0.046), ('sinusoidal', 0.045), ('gabor', 0.045), ('coincide', 0.045), ('recordings', 0.044), ('measurements', 0.044), ('attenuate', 0.043), ('rise', 0.043), ('baker', 0.041), ('spatial', 0.04), ('regardless', 0.04), ('cat', 0.038), ('mareschal', 0.038), ('helsinki', 0.038), ('opt', 0.038), ('constituent', 0.034), ('finland', 0.034), ('selective', 0.034), ('receptive', 0.033), ('neurons', 0.032), ('pseudoinverse', 0.032), ('textures', 0.032), ('lattices', 0.032), ('cortical', 0.032), ('responds', 0.03), ('visualized', 0.03), ('rightmost', 0.03), ('sub', 0.03), ('type', 0.029), ('cortex', 0.029), ('simple', 0.029), ('filters', 0.029), ('resemble', 0.029), ('images', 0.028), ('quantitative', 0.028), ('contrast', 0.027), ('scene', 0.027), ('model', 0.026), ('spatiotemporal', 0.026), ('energies', 0.026), ('emergence', 0.026), ('coherence', 0.026), ('invariance', 0.026), ('confusion', 0.026), ('thereby', 0.026), ('dc', 0.025), ('prefer', 0.025), ('vertical', 0.025), ('manner', 0.024), ('sampled', 0.024), ('detectors', 0.024), ('paths', 0.024), ('subsequently', 0.024), ('constraint', 0.024), ('neuroscience', 0.023), ('objective', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 109 nips-2005-Learning Cue-Invariant Visual Responses


2 0.26302105 157 nips-2005-Principles of real-time computing with feedback applied to cortical microcircuit models

Author: Wolfgang Maass, Prashant Joshi, Eduardo D. Sontag

Abstract: The network topology of neurons in the brain exhibits an abundance of feedback connections, but the computational function of these feedback connections is largely unknown. We present a computational theory that characterizes the gain in computational power achieved through feedback in dynamical systems with fading memory. It implies that many such systems acquire through feedback universal computational capabilities for analog computing with a non-fading memory. In particular, we show that feedback enables such systems to process time-varying input streams in diverse ways according to rules that are implemented through internal states of the dynamical system. In contrast to previous attractor-based computational models for neural networks, these flexible internal states are high-dimensional attractors of the circuit dynamics, that still allow the circuit state to absorb new information from online input streams. In this way one arrives at novel models for working memory, integration of evidence, and reward expectation in cortical circuits. We show that they are applicable to circuits of conductance-based Hodgkin-Huxley (HH) neurons with high levels of noise that reflect experimental data on in vivo conditions. 1

3 0.21197939 28 nips-2005-Analyzing Auditory Neurons by Learning Distance Functions

Author: Inna Weiner, Tomer Hertz, Israel Nelken, Daphna Weinshall

Abstract: We present a novel approach to the characterization of complex sensory neurons. One of the main goals of characterizing sensory neurons is to characterize dimensions in stimulus space to which the neurons are highly sensitive (causing large gradients in the neural responses) or alternatively dimensions in stimulus space to which the neuronal response are invariant (defining iso-response manifolds). We formulate this problem as that of learning a geometry on stimulus space that is compatible with the neural responses: the distance between stimuli should be large when the responses they evoke are very different, and small when the responses they evoke are similar. Here we show how to successfully train such distance functions using rather limited amount of information. The data consisted of the responses of neurons in primary auditory cortex (A1) of anesthetized cats to 32 stimuli derived from natural sounds. For each neuron, a subset of all pairs of stimuli was selected such that the responses of the two stimuli in a pair were either very similar or very dissimilar. The distance function was trained to fit these constraints. The resulting distance functions generalized to predict the distances between the responses of a test stimulus and the trained stimuli. 1

4 0.16980532 101 nips-2005-Is Early Vision Optimized for Extracting Higher-order Dependencies?

Author: Yan Karklin, Michael S. Lewicki

Abstract: Linear implementations of the efficient coding hypothesis, such as independent component analysis (ICA) and sparse coding models, have provided functional explanations for properties of simple cells in V1 [1, 2]. These models, however, ignore the non-linear behavior of neurons and fail to match individual and population properties of neural receptive fields in subtle but important ways. Hierarchical models, including Gaussian Scale Mixtures [3, 4] and other generative statistical models [5, 6], can capture higher-order regularities in natural images and explain nonlinear aspects of neural processing such as normalization and context effects [6,7]. Previously, it had been assumed that the lower level representation is independent of the hierarchy, and had been fixed when training these models. Here we examine the optimal lower-level representations derived in the context of a hierarchical model and find that the resulting representations are strikingly different from those based on linear models. Unlike the the basis functions and filters learned by ICA or sparse coding, these functions individually more closely resemble simple cell receptive fields and collectively span a broad range of spatial scales. Our work unifies several related approaches and observations about natural image structure and suggests that hierarchical models might yield better representations of image structure throughout the hierarchy.

5 0.12469267 146 nips-2005-On the Accuracy of Bounded Rationality: How Far from Optimal Is Fast and Frugal?

Author: Michael Schmitt, Laura Martignon

Abstract: Fast and frugal heuristics are well studied models of bounded rationality. Psychological research has proposed the take-the-best heuristic as a successful strategy in decision making with limited resources. Take-thebest searches for a sufficiently good ordering of cues (features) in a task where objects are to be compared lexicographically. We investigate the complexity of the problem of approximating optimal cue permutations for lexicographic strategies. We show that no efficient algorithm can approximate the optimum to within any constant factor, if P = NP. We further consider a greedy approach for building lexicographic strategies and derive tight bounds for the performance ratio of a new and simple algorithm. This algorithm is proven to perform better than take-the-best. 1

6 0.10861152 149 nips-2005-Optimal cue selection strategy

7 0.10434391 150 nips-2005-Optimizing spatio-temporal filters for improving Brain-Computer Interfacing

8 0.10288386 134 nips-2005-Neural mechanisms of contrast dependent receptive field size in V1

9 0.095770977 170 nips-2005-Scaling Laws in Natural Scenes and the Inference of 3D Shape

10 0.091979221 141 nips-2005-Norepinephrine and Neural Interrupts

11 0.091161594 94 nips-2005-Identifying Distributed Object Representations in Human Extrastriate Visual Cortex

12 0.088282302 129 nips-2005-Modeling Neural Population Spiking Activity with Gibbs Distributions

13 0.082717218 203 nips-2005-Visual Encoding with Jittering Eyes

14 0.077560559 26 nips-2005-An exploration-exploitation model based on norepinepherine and dopamine activity

15 0.077159368 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity

16 0.071338654 110 nips-2005-Learning Depth from Single Monocular Images

17 0.070539616 169 nips-2005-Saliency Based on Information Maximization

18 0.065606259 173 nips-2005-Sensory Adaptation within a Bayesian Framework for Perception

19 0.063453265 25 nips-2005-An aVLSI Cricket Ear Model

20 0.058542505 164 nips-2005-Representing Part-Whole Relationships in Recurrent Neural Networks


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.182), (1, -0.183), (2, -0.017), (3, 0.197), (4, -0.059), (5, 0.141), (6, -0.094), (7, -0.178), (8, -0.001), (9, 0.04), (10, -0.035), (11, -0.022), (12, -0.017), (13, -0.18), (14, -0.087), (15, 0.042), (16, 0.275), (17, -0.036), (18, -0.117), (19, 0.102), (20, 0.084), (21, 0.148), (22, -0.11), (23, -0.103), (24, 0.054), (25, -0.121), (26, 0.085), (27, -0.001), (28, 0.041), (29, 0.079), (30, -0.162), (31, 0.071), (32, 0.232), (33, -0.036), (34, -0.019), (35, 0.061), (36, -0.006), (37, 0.038), (38, -0.042), (39, -0.108), (40, -0.076), (41, -0.026), (42, 0.015), (43, -0.045), (44, -0.011), (45, 0.104), (46, 0.058), (47, -0.038), (48, 0.062), (49, -0.048)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97939998 109 nips-2005-Learning Cue-Invariant Visual Responses


2 0.72052473 157 nips-2005-Principles of real-time computing with feedback applied to cortical microcircuit models

Author: Wolfgang Maass, Prashant Joshi, Eduardo D. Sontag

Abstract: The network topology of neurons in the brain exhibits an abundance of feedback connections, but the computational function of these feedback connections is largely unknown. We present a computational theory that characterizes the gain in computational power achieved through feedback in dynamical systems with fading memory. It implies that many such systems acquire through feedback universal computational capabilities for analog computing with a non-fading memory. In particular, we show that feedback enables such systems to process time-varying input streams in diverse ways according to rules that are implemented through internal states of the dynamical system. In contrast to previous attractor-based computational models for neural networks, these flexible internal states are high-dimensional attractors of the circuit dynamics that still allow the circuit state to absorb new information from online input streams. In this way one arrives at novel models for working memory, integration of evidence, and reward expectation in cortical circuits. We show that they are applicable to circuits of conductance-based Hodgkin-Huxley (HH) neurons with high levels of noise that reflect experimental data on in vivo conditions.

3 0.67186522 28 nips-2005-Analyzing Auditory Neurons by Learning Distance Functions

Author: Inna Weiner, Tomer Hertz, Israel Nelken, Daphna Weinshall

Abstract: We present a novel approach to the characterization of complex sensory neurons. One of the main goals of characterizing sensory neurons is to characterize dimensions in stimulus space to which the neurons are highly sensitive (causing large gradients in the neural responses) or, alternatively, dimensions in stimulus space to which the neuronal responses are invariant (defining iso-response manifolds). We formulate this problem as that of learning a geometry on stimulus space that is compatible with the neural responses: the distance between stimuli should be large when the responses they evoke are very different, and small when the responses they evoke are similar. Here we show how to successfully train such distance functions using a rather limited amount of information. The data consisted of the responses of neurons in primary auditory cortex (A1) of anesthetized cats to 32 stimuli derived from natural sounds. For each neuron, a subset of all pairs of stimuli was selected such that the responses of the two stimuli in a pair were either very similar or very dissimilar. The distance function was trained to fit these constraints. The resulting distance functions generalized to predict the distances between the responses of a test stimulus and the trained stimuli.

4 0.5777998 203 nips-2005-Visual Encoding with Jittering Eyes

Author: Michele Rucci

Abstract: Under natural viewing conditions, small movements of the eye and body prevent the maintenance of a steady direction of gaze. It is known that stimuli tend to fade when they are stabilized on the retina for several seconds. However, it is unclear whether the physiological self-motion of the retinal image serves a visual purpose during the brief periods of natural visual fixation. This study examines the impact of fixational instability on the statistics of visual input to the retina and on the structure of neural activity in the early visual system. Fixational instability introduces fluctuations in the retinal input signals that, in the presence of natural images, lack spatial correlations. These input fluctuations strongly influence neural activity in a model of the LGN. They decorrelate cell responses, even if the contrast sensitivity functions of simulated cells are not perfectly tuned to counter-balance the power-law spectrum of natural images. A decorrelation of neural activity has been proposed to be beneficial for discarding statistical redundancies in the input signals. Fixational instability might, therefore, contribute to establishing efficient representations of natural stimuli.

5 0.57402438 146 nips-2005-On the Accuracy of Bounded Rationality: How Far from Optimal Is Fast and Frugal?

Author: Michael Schmitt, Laura Martignon

Abstract: Fast and frugal heuristics are well studied models of bounded rationality. Psychological research has proposed the take-the-best heuristic as a successful strategy in decision making with limited resources. Take-the-best searches for a sufficiently good ordering of cues (features) in a task where objects are to be compared lexicographically. We investigate the complexity of the problem of approximating optimal cue permutations for lexicographic strategies. We show that no efficient algorithm can approximate the optimum to within any constant factor, unless P = NP. We further consider a greedy approach for building lexicographic strategies and derive tight bounds for the performance ratio of a new and simple algorithm. This algorithm is proven to perform better than take-the-best.

6 0.49180835 101 nips-2005-Is Early Vision Optimized for Extracting Higher-order Dependencies?

7 0.47187057 94 nips-2005-Identifying Distributed Object Representations in Human Extrastriate Visual Cortex

8 0.45689023 25 nips-2005-An aVLSI Cricket Ear Model

9 0.37764302 170 nips-2005-Scaling Laws in Natural Scenes and the Inference of 3D Shape

10 0.35948697 134 nips-2005-Neural mechanisms of contrast dependent receptive field size in V1

11 0.33468479 40 nips-2005-CMOL CrossNets: Possible Neuromorphic Nanoelectronic Circuits

12 0.33152851 169 nips-2005-Saliency Based on Information Maximization

13 0.32234237 17 nips-2005-Active Bidirectional Coupling in a Cochlear Chip

14 0.32099774 7 nips-2005-A Cortically-Plausible Inverse Problem Solving Method Applied to Recognizing Static and Kinematic 3D Objects

15 0.30283016 141 nips-2005-Norepinephrine and Neural Interrupts

16 0.29714069 149 nips-2005-Optimal cue selection strategy

17 0.29189399 129 nips-2005-Modeling Neural Population Spiking Activity with Gibbs Distributions

18 0.28126952 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity

19 0.28079128 150 nips-2005-Optimizing spatio-temporal filters for improving Brain-Computer Interfacing

20 0.27334329 26 nips-2005-An exploration-exploitation model based on norepinepherine and dopamine activity


similar papers computed by the LDA model

LDA for this paper:

topicId topicWeight

[(3, 0.021), (10, 0.038), (12, 0.01), (27, 0.113), (31, 0.049), (34, 0.057), (39, 0.018), (55, 0.04), (57, 0.025), (60, 0.05), (69, 0.078), (73, 0.035), (75, 0.236), (88, 0.082), (91, 0.036)]
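Unlike the dense LSI vector above, the LDA weights are stored sparsely as (topicId, topicWeight) pairs, with absent topics implicitly zero. Under the same (assumed, not stated) cosine-similarity scoring, a sketch for such sparse vectors; the second paper's weights here are hypothetical, chosen only for illustration:

```python
import math

def sparse_cosine(p, q):
    """Cosine similarity between sparse {topicId: weight} maps;
    topics absent from a map are treated as zero-weight."""
    dot = sum(w * q.get(t, 0.0) for t, w in p.items())
    norm_p = math.sqrt(sum(w * w for w in p.values()))
    norm_q = math.sqrt(sum(w * w for w in q.values()))
    if norm_p == 0.0 or norm_q == 0.0:
        return 0.0
    return dot / (norm_p * norm_q)

# A few of this paper's LDA weights, taken from the list above.
this_paper = {3: 0.021, 10: 0.038, 27: 0.113, 75: 0.236}
other = {10: 0.02, 27: 0.10, 55: 0.30}  # hypothetical second paper

# Only topics 10 and 27 overlap, so the similarity is well below 1.
print(sparse_cosine(this_paper, other))
```

Only topics shared by both papers contribute to the dot product, which is why papers with disjoint topic support get a simValue of zero.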

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.80538338 109 nips-2005-Learning Cue-Invariant Visual Responses

Author: Jarmo Hurri

Abstract: Multiple visual cues are used by the visual system to analyze a scene; achromatic cues include luminance, texture, contrast and motion. Single-cell recordings have shown that the mammalian visual cortex contains neurons that respond similarly to scene structure (e.g., orientation of a boundary), regardless of the cue type conveying this information. This paper shows that cue-invariant response properties of simple- and complex-type cells can be learned from natural image data in an unsupervised manner. In order to do this, we also extend a previous conceptual model of cue invariance so that it can be applied to model simple- and complex-cell responses. Our results relate cue-invariant response properties to natural image statistics, thereby showing how the statistical modeling approach can be used to model processing beyond the elemental response properties of visual neurons. This work also demonstrates how to learn, from natural image data, more sophisticated feature detectors than those based on changes in mean luminance, thereby paving the way for new data-driven approaches to image processing and computer vision.

2 0.63461405 32 nips-2005-Augmented Rescorla-Wagner and Maximum Likelihood Estimation

Author: Alan L. Yuille

Abstract: We show that linear generalizations of Rescorla-Wagner can perform Maximum Likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables to deal with conjunctions of causes, similar to the augmented model of Rescorla. Our results involve genericity assumptions on the distributions of causes. If these assumptions are violated, for example for the Cheng causal power theory, then we show that a linear Rescorla-Wagner can estimate the parameters of the model up to a nonlinear transformation. Moreover, a nonlinear Rescorla-Wagner is able to estimate the parameters directly to within arbitrary accuracy. Previous results can be used to determine convergence and to estimate convergence rates.

3 0.58745754 155 nips-2005-Predicting EMG Data from M1 Neurons with Variational Bayesian Least Squares

Author: Jo-anne Ting, Aaron D'souza, Kenji Yamamoto, Toshinori Yoshioka, Donna Hoffman, Shinji Kakei, Lauren Sergio, John Kalaska, Mitsuo Kawato

Abstract: An increasing number of projects in neuroscience require the statistical analysis of high dimensional data sets, as, for instance, in predicting behavior from neural firing or in operating artificial devices from brain recordings in brain-machine interfaces. Linear analysis techniques remain prevalent in such cases, but classical linear regression approaches are often numerically too fragile in high dimensions. In this paper, we address the question of whether EMG data collected from arm movements of monkeys can be faithfully reconstructed with linear approaches from neural activity in primary motor cortex (M1). To achieve robust data analysis, we develop a full Bayesian approach to linear regression that automatically detects and excludes irrelevant features in the data, regularizing against overfitting. In comparison with ordinary least squares, stepwise regression, partial least squares, LASSO regression and a brute force combinatorial search for the most predictive input features in the data, we demonstrate that the new Bayesian method offers a superior mixture of characteristics in terms of regularization against overfitting, computational efficiency and ease of use, demonstrating its potential as a drop-in replacement for other linear regression techniques. As neuroscientific results, our analyses demonstrate that EMG data can be well predicted from M1 neurons, further opening the path for possible real-time interfaces between brains and machines.

4 0.57308811 87 nips-2005-Goal-Based Imitation as Probabilistic Inference over Graphical Models

Author: Deepak Verma, Rajesh P. Rao

Abstract: Humans are extremely adept at learning new skills by imitating the actions of others. A progression of imitative abilities has been observed in children, ranging from imitation of simple body movements to goalbased imitation based on inferring intent. In this paper, we show that the problem of goal-based imitation can be formulated as one of inferring goals and selecting actions using a learned probabilistic graphical model of the environment. We first describe algorithms for planning actions to achieve a goal state using probabilistic inference. We then describe how planning can be used to bootstrap the learning of goal-dependent policies by utilizing feedback from the environment. The resulting graphical model is then shown to be powerful enough to allow goal-based imitation. Using a simple maze navigation task, we illustrate how an agent can infer the goals of an observed teacher and imitate the teacher even when the goals are uncertain and the demonstration is incomplete.

5 0.56745982 101 nips-2005-Is Early Vision Optimized for Extracting Higher-order Dependencies?

Author: Yan Karklin, Michael S. Lewicki

Abstract: Linear implementations of the efficient coding hypothesis, such as independent component analysis (ICA) and sparse coding models, have provided functional explanations for properties of simple cells in V1 [1, 2]. These models, however, ignore the non-linear behavior of neurons and fail to match individual and population properties of neural receptive fields in subtle but important ways. Hierarchical models, including Gaussian Scale Mixtures [3, 4] and other generative statistical models [5, 6], can capture higher-order regularities in natural images and explain nonlinear aspects of neural processing such as normalization and context effects [6, 7]. Previously, it had been assumed that the lower-level representation is independent of the hierarchy, and it had been fixed when training these models. Here we examine the optimal lower-level representations derived in the context of a hierarchical model and find that the resulting representations are strikingly different from those based on linear models. Unlike the basis functions and filters learned by ICA or sparse coding, these functions individually more closely resemble simple cell receptive fields and collectively span a broad range of spatial scales. Our work unifies several related approaches and observations about natural image structure and suggests that hierarchical models might yield better representations of image structure throughout the hierarchy.

6 0.55619806 72 nips-2005-Fast Online Policy Gradient Learning with SMD Gain Vector Adaptation

7 0.54769337 185 nips-2005-Subsequence Kernels for Relation Extraction

8 0.53978544 36 nips-2005-Bayesian models of human action understanding

9 0.53061765 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity

10 0.52876854 29 nips-2005-Analyzing Coupled Brain Sources: Distinguishing True from Spurious Interaction

11 0.52686906 28 nips-2005-Analyzing Auditory Neurons by Learning Distance Functions

12 0.52560335 48 nips-2005-Context as Filtering

13 0.52549994 100 nips-2005-Interpolating between types and tokens by estimating power-law generators

14 0.52463377 169 nips-2005-Saliency Based on Information Maximization

15 0.5235126 153 nips-2005-Policy-Gradient Methods for Planning

16 0.5206663 203 nips-2005-Visual Encoding with Jittering Eyes

17 0.52050936 45 nips-2005-Conditional Visual Tracking in Kernel Space

18 0.52004677 173 nips-2005-Sensory Adaptation within a Bayesian Framework for Perception

19 0.5192185 200 nips-2005-Variable KD-Tree Algorithms for Spatial Pattern Search and Discovery

20 0.51899695 181 nips-2005-Spiking Inputs to a Winner-take-all Network