nips nips2008 nips2008-27 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Mehmet K. Muezzinoglu, Alexander Vergara, Ramon Huerta, Thomas Nowotny, Nikolai Rulkov, Henry Abarbanel, Allen Selverston, Mikhail Rabinovich
Abstract: The odor transduction process has a large time constant and is susceptible to various types of noise. Therefore, the olfactory code at the sensor/receptor level is in general a slow and highly variable indicator of the input odor in both natural and artificial situations. Insects overcome this problem by using a neuronal device in their Antennal Lobe (AL), which transforms the identity code of olfactory receptors to a spatio-temporal code. This transformation improves the decision of the Mushroom Bodies (MBs), the subsequent classifier, in both speed and accuracy. Here we propose a rate model based on two intrinsic mechanisms in the insect AL, namely integration and inhibition. Then we present a MB classifier model that resembles the sparse and random structure of insect MB. A local Hebbian learning procedure governs the plasticity in the model. These formulations not only help to understand the signal conditioning and classification methods of insect olfactory systems, but also can be leveraged in synthetic problems. Among them, we consider here the discrimination of odor mixtures from pure odors. We show on a set of records from metal-oxide gas sensors that the cascade of these two new models facilitates fast and accurate discrimination of even highly imbalanced mixtures from pure odors. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Centre for Computational Neuroscience and Robotics, Department of Informatics, University of Sussex, Falmer, Brighton, BN1 9QJ, UK. Abstract: The odor transduction process has a large time constant and is susceptible to various types of noise. [sent-7, score-0.568]
2 Therefore, the olfactory code at the sensor/receptor level is in general a slow and highly variable indicator of the input odor in both natural and artificial situations. [sent-8, score-0.776]
3 Insects overcome this problem by using a neuronal device in their Antennal Lobe (AL), which transforms the identity code of olfactory receptors to a spatio-temporal code. [sent-9, score-0.18]
4 Here we propose a rate model based on two intrinsic mechanisms in the insect AL, namely integration and inhibition. [sent-11, score-0.237]
5 Then we present a MB classifier model that resembles the sparse and random structure of insect MB. [sent-12, score-0.18]
6 These formulations not only help to understand the signal conditioning and classification methods of insect olfactory systems, but also can be leveraged in synthetic problems. [sent-14, score-0.434]
7 Among them, we consider here the discrimination of odor mixtures from pure odors. [sent-15, score-0.708]
8 We show on a set of records from metal-oxide gas sensors that the cascade of these two new models facilitates fast and accurate discrimination of even highly imbalanced mixtures from pure odors. [sent-16, score-0.376]
9 1 Introduction Odor sensors are diverse in terms of their sensitivity to odor identity and concentrations. [sent-17, score-0.655]
10 When arranged in parallel arrays, they may provide a rich representation of the odor space. [sent-18, score-0.568]
11 Biological olfactory systems owe the bulk of their success to employing a large number of olfactory receptor neurons (ORNs) of various phenotypes. [sent-19, score-0.54]
12 Identifying and quantifying an odor accurately in a short time is an impressive characteristic of insect olfaction. [sent-21, score-0.748]
13 Our motivation in this study is the potential of the skillful feature extraction and classification methods of insect olfactory systems for synthetic applications, which likewise must deal with slow and noisy sensory data. [sent-25, score-0.399]
14 The particular problem we address is the discrimination of two-component odor mixtures from pure odors. [sent-26, score-0.596]
15 Figure 1: The considered biomimetic framework to identify whether an applied gas is a pure odor or a mixture (pipeline recovered from the figure labels: odor, sensor array, dynamical antennal lobe model, snapshot, mushroom body classifier model, odor identity). [sent-29, score-0.801]
16 The input is transduced by 16 parallel metal-oxide gas sensors of different type generating slow and noisy resistance time series. [sent-30, score-0.206]
17 The signal conditioning in the antennal lobe is achieved by the interaction of an excitatory Projection Neuron (PN) population (white nodes) with an inhibitory Local Neuron (LN) population (black nodes). [sent-31, score-0.478]
18 We study the problem on two mixture datasets recorded from metal-oxide gas sensors (included in the supplementary material). [sent-36, score-0.267]
19 We propose in the next section a dynamical rate model mimicking the AL’s signal conditioning function. [sent-37, score-0.145]
20 By testing the model first with a generic Support Vector Machine (SVM) classifier, we validate the substantial improvement that the AL adds to the classificatory value of the raw sensory signal (Section 2). [sent-38, score-0.122]
21 The MB model exploits the structural organization of the insect MB. [sent-41, score-0.18]
22 When subjected to a constant odor concentration, the settling time of ORN activity is on the order of hundreds of milliseconds to seconds [3], whereas recognition is known to occur earlier [7]. [sent-50, score-0.654]
23 This is a clear indicator that the AL makes extensive use of the ORN transient, since instantaneous activity is less odor-specific during the transient than during the steady state. [sent-51, score-0.214]
24 To provide high accuracy under such a temporal constraint, the classificatory information during this period must somehow be accumulated, which means that the AL has to be a dynamical system utilizing memory. [sent-52, score-0.12]
25 Strong experimental evidence suggests that the insect AL representation of odors is a transient, yet reproducible, spatio-temporal encoding [8]. [sent-54, score-0.27]
26 The AL is a dynamical network that is formed by the coupling of an excitatory neuron population (projection neurons, PNs) with an inhibitory one (local neurons, LNs). [sent-55, score-0.281]
27 The fruit fly has about 50 glomeruli as chemotopic clusters of synapses from nearly 50,000 ORNs. [sent-57, score-0.119]
28 In our framework (see Fig. 1), the 16 artificial gas sensors actually correspond to glomeruli (rather than individual ORNs) so that the AL has direct access to sensor resistances. [sent-62, score-0.327]
29 This setting is capable of evaluating the transient portion of the sensory signal effectively. [sent-65, score-0.177]
30 In particular, the novelty gained due to observing consecutive samples during the transient is on average greater than the informational gain obtained during the steady-state. [sent-68, score-0.133]
31 As a device that extracts and integrates odor-specific information in ORN signals, the AL provides an enriched transient to the subsequent MB so that it can achieve accurate classification early in the odor period. [sent-70, score-0.669]
32 Rate models have been employed before, e.g., [9, 10], to illustrate the sharpening effect of inhibition in the olfactory system. [sent-73, score-0.18]
33 The neural activity corresponding to the rate of action potential generation of the biological neurons is given by x_i(t), i = 1, 2, …, N_E, for the PNs and y_j(t), j = 1, 2, …, N_I, for the LNs. [sent-77, score-0.182]
34 The rate of change in these activities is stimulated by a weighted sum over both populations and a set of input signals S_i^E(t) and S_i^I(t) indicating the activity in the glomeruli stimulating the PNs and the LNs, respectively. [sent-85, score-0.2]
35 Our formulation of these ideas is through a Wilson-Cowan-like population model [11]:
$$\beta_i^{E}\,\frac{dx_i(t)}{dt} = K_i^{E}\,\Theta\!\left(-\sum_{j=1}^{N_I} w_{ij}^{EI}\, y_j(t) + g_{\mathrm{inp}}^{E}\, S_i^{E}(t)\right) - x_i(t) + \mu^{E}(t), \qquad i \in \{1, \dots, N_E\},$$
with an analogous equation governing the LN activities y_i(t). [sent-87, score-0.149]
36 The network topology is formed through a random process of Bernoulli type:
$$w_{ij}^{XY} = g^{Y} \cdot \begin{cases} 1, & \text{with probability } p^{XY},\\ 0, & \text{with probability } 1 - p^{XY}, \end{cases}$$
where g^Y is a fixed coupling strength. [sent-99, score-0.118]
37 Each unit, regardless of its type, accepts external input from exactly one sensor in the form of a raw resistance time series. [sent-101, score-0.163]
38 This sensor is assigned randomly among all 16 available sensors, ensuring that all sensors are covered. [sent-102, score-0.176]
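For concreteness, a minimal Python sketch of the Bernoulli topology and the one-sensor-per-unit input assignment might look as follows; the coupling probabilities and strengths (p_xy, g_y) are placeholders rather than the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
N_E = N_I = 75
N_SENSORS = 16

def bernoulli_coupling(n_post, n_pre, p_xy, g_y):
    """w_ij^{XY} = g_y with probability p_xy, and 0 otherwise."""
    return g_y * rng.binomial(1, p_xy, size=(n_post, n_pre))

# Placeholder probabilities/strengths; the paper's values are not reproduced here.
W_EI = bernoulli_coupling(N_E, N_I, p_xy=0.5, g_y=1e-3)  # LN -> PN (inhibitory)
W_IE = bernoulli_coupling(N_I, N_E, p_xy=0.5, g_y=1e-3)  # PN -> LN (excitatory)

def assign_sensors(n_units, n_sensors):
    """Assign each unit one random sensor; resample until every sensor is covered."""
    while True:
        idx = rng.integers(0, n_sensors, size=n_units)
        if np.unique(idx).size == n_sensors:
            return idx

sensor_of_pn = assign_sensors(N_E, N_SENSORS)
sensor_of_ln = assign_sensors(N_I, N_SENSORS)
```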
39 Figure 2: (a) A record from Dataset 1, where 100 ppm acetaldehyde was applied to the sensor array for 0 ≤ t ≤ 100 s. [sent-109, score-0.228]
40 (b) Activity of N_E = 75 excitatory PN units of the sample AL model in response to the (time-scaled version of the) record shown in panel (a). [sent-112, score-0.152]
41 For the mixture identification problem of this study, we consider a network with N_E = N_I = 75 and g_inp^E = g_inp^I = 10^{-2}. [sent-115, score-0.25]
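A minimal Euler-integration sketch of these dynamics, reusing the connectivity and sensor assignments above, is given below. The time constant beta, gain K, noise amplitude, and the rectifier standing in for the threshold function Θ are assumptions of the sketch, not the paper's calibrated values.

```python
import numpy as np

def simulate_al(S, W_EI, W_IE, sensor_of_pn, sensor_of_ln,
                g_inp=1e-2, beta=0.1, K=1.0, noise=1e-4, dt=1e-3, rng=None):
    """Euler integration of the Wilson-Cowan-like AL rate model.
    S: (n_steps, 16) preprocessed sensor input, one row per time step."""
    rng = rng or np.random.default_rng(0)
    n_e, n_i = W_EI.shape[0], W_IE.shape[0]
    x, y = np.zeros(n_e), np.zeros(n_i)    # PN and LN rates
    xs = np.empty((len(S), n_e))
    for k, s in enumerate(S):
        # Theta is approximated by a rectifier; the LN equation mirrors the PN one.
        drive_x = K * np.maximum(-W_EI @ y + g_inp * s[sensor_of_pn], 0.0)
        drive_y = K * np.maximum( W_IE @ x + g_inp * s[sensor_of_ln], 0.0)
        x += (dt / beta) * (drive_x - x + noise * rng.standard_normal(n_e))
        y += (dt / beta) * (drive_y - y + noise * rng.standard_normal(n_i))
        xs[k] = x                           # log PN activity for downstream snapshots
    return xs
```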
42 We confirm this in all simulations with the selected parameter values, both during and after the sensory input (odor) period (see Fig. 2(b)). [sent-131, score-0.142]
43 3 Validation We consider the activity in the PN population as the only piece of information regarding the input odor that is passed on to higher-order layers of the olfactory system. [sent-134, score-0.908]
44 Access to this activity by those layers can be modeled as instantaneous sampling of a selected brief window of temporal behavior of PNs [7]. [sent-135, score-0.113]
45 Therefore, the recognition system in our model utilizes such snapshots from the spiking activity in the excitatory population x_i(t). [sent-136, score-0.252]
46 1 Dataset The model is driven by responses recorded from 16 metal-oxide gas sensors in parallel. [sent-143, score-0.178]
47 We have made 80 recordings and grouped them into two sets based on vapor concentration: records for 100 ppm vapor in Dataset 1 and 50 ppm in Dataset 2. [sent-144, score-0.148]
48 Each dataset contains 40 records from three classes: 10 pure acetaldehyde, 10 pure toluene, and 20 mixture records. [sent-145, score-0.372]
49 The mixture class contains records from imbalanced acetaldehyde-toluene mixtures with 96%-4%, 98%-2%, 2%-98%, and 4%-96% partial concentrations, five from each. [sent-146, score-0.132]
50 We removed the offset from each sensor record and scaled the odor period to 1s. [sent-149, score-0.781]
51 This was done by mapping the odor period, which has a fixed length of 100 s in the original records, to 1 s by reindexing the time series. [sent-150, score-0.568]
52 These one-second long raw time series, included in the supplementary material, constitute the pool of raw inputs to be applied to the AL network during the time interval 0.5 s ≤ t ≤ 1.5 s. [sent-151, score-0.189]
53 The input is set to zero outside of this odor period. [sent-154, score-0.596]
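A sketch of this preprocessing step (per-sensor offset removal plus reindexing of the 100 s odor period into one second of model time, zero-padded elsewhere) might look as follows; the pre-odor baseline window, the 1 ms time grid, and the function name are assumptions of the sketch.

```python
import numpy as np

def preprocess_record(raw, odor_start, odor_end, dt=1e-3, t_on=0.5, t_total=2.0):
    """raw: (T, 16) resistance record; [odor_start, odor_end) is the 100 s odor period.
    Returns a (n_steps, 16) input that is nonzero only during [t_on, t_on + 1) s."""
    baseline = raw[:odor_start].mean(axis=0)   # offset estimated from pre-odor samples
    odor = raw[odor_start:odor_end] - baseline
    n_steps = int(t_total / dt)
    t = np.arange(n_steps) * dt
    out = np.zeros((n_steps, raw.shape[1]))
    in_odor = (t >= t_on) & (t < t_on + 1.0)
    frac = t[in_odor] - t_on                   # fraction of the odor period elapsed
    src = np.linspace(0.0, 1.0, len(odor))     # rescaled original time axis
    for ch in range(raw.shape[1]):
        out[in_odor, ch] = np.interp(frac, src, odor[:, ch])
    return out
```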
[Figure 3 legend labels recovered from residue: "without AL network", "with optimized AL network", "without AL network using SVM", "with optimized AL network using SVM", "with optimized AL network using MB"; the axes show success rate (%) versus snapshot time, and panels (c) and (d) sweep the coupling strength gE.]
56 Figure 3: (a) Estimated correct classification profile versus snapshot time ts during the normalized odor period for Dataset 1. [sent-186, score-0.781]
57 The black baseline profile is obtained by discarding the AL and directly classifying snapshots from the raw sensor responses by SVM. [sent-188, score-0.187]
58 We use a Support Vector Machine (SVM) classifier with a linear kernel to map the snapshots from PN activity to odor identity. [sent-194, score-0.706]
59 We present each record in the dataset to the network and then log the network response from the excitatory population in the form of N_E simultaneous time series (see Fig. 2(b)). [sent-205, score-0.315]
60 Then, at each percentile of the odor period, ts ∈ {0.5 + k/100}, k = 0, 1, …, 100, we take a snapshot from each N_E-dimensional time series and label it by the odor identity (pure acetaldehyde, pure toluene, or mixture). [sent-207, score-0.69] [sent-208, score-0.771]
62 The classification profile versus time is extracted when the ts sweep through the odor period is complete. [sent-213, score-0.732]
63 This pair is determined as the one maximizing the classification success rate when samples from the end of the odor period (ts = 1.5 s) are used. [sent-216, score-0.748]
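The snapshot-and-classify sweep can be sketched as below; five-fold cross-validation is an assumption here, since the exact train/test protocol is not spelled out in the text.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def classification_profile(pn_traces, labels, t_on=0.5, dt=1e-3):
    """Linear-SVM accuracy on PN snapshots at t_s = t_on + k/100, k = 0..100.
    pn_traces: (n_records, n_steps, N_E) logged excitatory activity."""
    accuracies = []
    for k in range(101):
        idx = int(round((t_on + k / 100.0) / dt))
        snapshots = pn_traces[:, idx, :]   # one N_E-dimensional snapshot per record
        accuracies.append(cross_val_score(LinearSVC(), snapshots, labels, cv=5).mean())
    return np.array(accuracies)
```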
64 Dataset 1 induces an easier instance of the identification problem toward the end of the odor period, which can be resolved reasonably well using raw sensor data at the steady state. [sent-221, score-0.703]
65 Therefore, the gain over baseline due to AL processing is not so significant in later portions of the odor period for Dataset 1. [sent-222, score-0.675]
66 Again, this is because the former is an easier problem when the sensors reach the steady state at ts = 1.5 s. [sent-224, score-0.134]
67 It is visible in Fig. 3(c) that there are actually periods early in the odor interval where the raw sensor data can be fairly indicative of the class information; however, it is not possible to predict these intervals in advance. [sent-228, score-0.21]
68 The apparently successful classifications very early in the period, before ts ≈ 0.55 s, are artifacts (due to classification of pure noise), since we know that there is hardly any vapor in the measurement chamber during that period (see Fig. 2(a)). [sent-230, score-0.232]
69 In any case, in both problems, the suggested AL dynamics (with adjusted parameters) contributes substantially to the classification performance during the transient of the sensory signal. [sent-232, score-0.14]
70 3 Mushroom Body Classifier The MBs of insects employ a large number of identical small intrinsic cells, the so-called Kenyon cells, and fewer output neurons in the MB lobes. [sent-236, score-0.133]
71 It has been observed that, unlike in the AL, the activity in the KCs is very sparse, both across the population and for individual cells over time. [sent-237, score-0.163]
72 Theoretical work suggests that a large number of cells with sparse activity enables efficient classification with random connectivity [4]. [sent-238, score-0.199]
73 The power of this architecture lies in its versatility: The connectivity is not optimized for any specific task and can, therefore, accommodate a variety of input types. [sent-239, score-0.11]
74 1 The Model The insect MB consists of four crucial elements (see Fig. [sent-241, score-0.18]
75 It has been shown in locusts that the activity patterns in the AL are practically discretized by a periodic feedforward inhibition onto the MB calyces and that the activity levels in KCs are very low [7]. [sent-243, score-0.172]
76 Based on the observed discrete and sparse activity pattern in the insect MB, we choose to represent KC units as simple algebraic McCulloch-Pitts 'neurons'. [sent-244, score-0.301]
77 The neural activity values taken by this neural model are binary (0 = no spike and 1 = spike):
$$\mu_j = \Phi\!\left(\sum_{i=1}^{N_E} c_{ji}\, x_i - \theta^{KC}\right), \qquad j = 1, 2, \dots, N_{KC}.$$ [sent-245, score-0.116]
78 The vector x is the representation of the odor that is received as a snapshot from the excitatory PN units of the AL model. [sent-249, score-0.762]
79 Since the degree of connectivity from the input neurons to the KC neurons did not appear to be critical for the performance of the system, we made it uniform by setting the connection probability as pc = 0. [sent-258, score-0.25]
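A minimal sketch of this divergent random input layer follows; the KC population size, the threshold θ^KC, and the connection probability p_c = 0.1 are hypothetical stand-ins, since the exact values are elided above.

```python
import numpy as np

rng = np.random.default_rng(1)
N_E, N_KC = 75, 1000    # N_KC is a placeholder for a large KC population
p_c = 0.1               # hypothetical connection probability

# Uniform random binary PN -> KC connectivity c_ji
C = rng.binomial(1, p_c, size=(N_KC, N_E))

def kc_drive(x_snapshot, theta_kc=1.0):
    """Raw KC input h_j = sum_i c_ji x_i - theta^KC; thresholding h_j > 0
    yields the binary McCulloch-Pitts spikes mu_j (before gain control)."""
    return C @ x_snapshot - theta_kc
```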
80 The plasticity of the output layer is due to a binary learning signal that rewards the weights of output units responding to the correct stimulus. [sent-265, score-0.227]
81 Although the basic system described so far implements the divergent (and static) input layer observed in the insect calyx, it is very unstable against fluctuations in the total number of active input neurons due to the divergence of connectivity. [sent-266, score-0.35]
82 In our model, the output units in the MB lobes are again McCulloch-Pitts neurons:
$$z_l = \Phi\!\left(\sum_{j=1}^{N_{KC}} w_{lj}\, \mu_j - \theta^{LB}\right), \qquad l = 1, 2, \dots, N_{LB}.$$ [sent-271, score-0.113]
83 The output vector z of the MB lobes has dimension N_LB (equal to 3 in our problem) and θ^LB is the threshold for the decision neurons in the MB lobes. [sent-276, score-0.148]
84 The N_LB × N_KC connectivity matrix W = (w_lj) has integer entries. [sent-277, score-0.142]
85 Similar to the above-mentioned gain control, we allow only the decision neuron that receives the highest synaptic input to fire. [sent-278, score-0.132]
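The gain control and the winner-take-all readout can be sketched together as follows; the top-k form of the gain control is an assumed implementation of the stabilization mechanism described above, not the paper's exact rule.

```python
import numpy as np

def mb_forward(h, W, active_frac=0.05):
    """h: raw KC drive from kc_drive(); W: (N_LB, N_KC) integer output weights.
    Returns binary KC spikes mu and the one-hot decision vector z."""
    k = max(1, int(active_frac * h.size))
    mu = np.zeros(h.size, dtype=int)
    mu[np.argsort(h)[-k:]] = 1     # gain control: only the k most driven KCs spike
    drive = W @ mu
    z = np.zeros(W.shape[0], dtype=int)
    z[np.argmax(drive)] = 1        # only the decision neuron with maximal input fires
    return mu, z
```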
86 These synaptic strengths w_lj are subject to changes during learning according to a Hebbian-type plasticity rule described next. [sent-279, score-0.138]
87 2 Training The hypothesis of locating reinforcement learning in the mushroom bodies goes back to Montague and collaborators [6]. [sent-281, score-0.144]
88 Every odor class is associated with an output neuron of the MB, so there are three output nodes firing for either pure toluene, pure acetaldehyde, or mixture type of input. [sent-282, score-0.943]
89 The plasticity rule is applied on the connectivity matrix W , whose entries are randomly and independently initialized within [0, 10]. [sent-283, score-0.127]
90 The entries of the connectivity matrix at the time of the nth input are denoted by w_lj(n). [sent-286, score-0.17]
91 This learning rule strengthens a synaptic connection with probability p+ if presynaptic activity is accompanied by postsynaptic activity. [sent-288, score-0.119]
92 For p+ = p− = 1, we trained the output layer of the MB using the labelled AL outputs sampled at 10 points in the odor period. [sent-294, score-0.645]
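One way to realize this probabilistic Hebbian rule with a binary reward signal is sketched below. Only the potentiation branch (strengthen with probability p+ on pre/post coactivity) is fully specified in the text; the reward gating and the depression branch used here are assumed complements.

```python
import numpy as np

def hebbian_step(W, mu, z, target, p_plus=1.0, p_minus=1.0, rng=None):
    """One update of the integer KC -> output weights W (N_LB x N_KC)."""
    rng = rng or np.random.default_rng(2)
    reward = 1 if np.argmax(z) == target else -1    # binary learning signal (assumed form)
    for l in np.flatnonzero(z):                     # only firing output units are updated
        p = p_plus if reward > 0 else p_minus
        coins = rng.random(W.shape[1])
        W[l] += reward * ((mu == 1) & (coins < p))  # potentiate or depress coactive synapses
    np.clip(W, 0, None, out=W)                      # keep weights non-negative (assumed)
    return W
```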
93 4 Conclusions We have presented a complete odor identification scheme based on the key principles of insect olfaction, and demonstrated its validity in discriminating mixtures of odors from pure odors using actual records from metal-oxide gas sensors. [sent-300, score-1.217]
94 By exploiting the dynamical nature of the AL stage and the sparsity in MB representation, the overall model provides an explanation for the high speed and accuracy of odor identification in insect olfactory processing. [sent-305, score-0.973]
95 The mixture identification problem investigated here is in general more difficult than the traditional problem of discriminating pure odors, since the mixture class can be made arbitrarily close to the pure odor classes. [sent-308, score-0.884]
96 Sensory processing in the Drosophila antennal lobe increases reliability and separability of ensemble odor representations. [sent-321, score-0.829]
97 Odor coding in a model olfactory organ: The Drosophila maxillary palp. [sent-335, score-0.18]
98 Oscillations and sparsening of odor representations in the mushroom body. [sent-372, score-0.673]
99 Chemosensory processing in a spiking model of the olfactory bulb: Chemotopic convergence and center surround inhibition. [sent-384, score-0.18]
100 Processing and classification of chemical data inspired by insect olfaction. [sent-394, score-0.18]
wordName wordTfidf (topN-words)
[('odor', 0.568), ('al', 0.329), ('mb', 0.243), ('insect', 0.18), ('olfactory', 0.18), ('antennal', 0.149), ('ge', 0.138), ('nkc', 0.12), ('gi', 0.112), ('lobe', 0.112), ('pure', 0.112), ('kcs', 0.105), ('mushroom', 0.105), ('orn', 0.105), ('pns', 0.105), ('transient', 0.101), ('classi', 0.101), ('gas', 0.091), ('snapshot', 0.091), ('acetaldehyde', 0.09), ('odors', 0.09), ('sensor', 0.089), ('sensors', 0.087), ('activity', 0.086), ('connectivity', 0.082), ('kc', 0.078), ('period', 0.075), ('ginp', 0.075), ('lns', 0.075), ('toluene', 0.075), ('neurons', 0.07), ('excitatory', 0.068), ('hebbian', 0.061), ('catory', 0.06), ('glomeruli', 0.06), ('mbs', 0.06), ('orns', 0.06), ('tgs', 0.06), ('wlj', 0.06), ('records', 0.058), ('pn', 0.056), ('network', 0.054), ('er', 0.053), ('receptor', 0.052), ('snapshots', 0.052), ('ni', 0.05), ('record', 0.049), ('lb', 0.048), ('ts', 0.047), ('mixture', 0.046), ('raw', 0.046), ('population', 0.046), ('dynamical', 0.045), ('plasticity', 0.045), ('calyx', 0.045), ('lobes', 0.045), ('nlb', 0.045), ('vapor', 0.045), ('layer', 0.044), ('dataset', 0.044), ('supplementary', 0.043), ('sweep', 0.042), ('cation', 0.042), ('bodies', 0.039), ('pxy', 0.039), ('neuron', 0.039), ('sensory', 0.039), ('ki', 0.038), ('signal', 0.037), ('conditioning', 0.037), ('topology', 0.036), ('units', 0.035), ('svm', 0.035), ('conductances', 0.034), ('output', 0.033), ('synaptic', 0.033), ('gain', 0.032), ('success', 0.032), ('le', 0.031), ('cells', 0.031), ('integration', 0.031), ('biomimetic', 0.03), ('chemotopic', 0.03), ('cji', 0.03), ('insects', 0.03), ('reproducibility', 0.03), ('xne', 0.03), ('pro', 0.029), ('inhibitory', 0.029), ('synapses', 0.029), ('wij', 0.028), ('mixtures', 0.028), ('input', 0.028), ('material', 0.027), ('si', 0.027), ('instantaneous', 0.027), ('rate', 0.026), ('bulk', 0.026), ('cooperation', 0.026), ('downstream', 0.026)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999994 27 nips-2008-Artificial Olfactory Brain for Mixture Identification
Author: Mehmet K. Muezzinoglu, Alexander Vergara, Ramon Huerta, Thomas Nowotny, Nikolai Rulkov, Henry Abarbanel, Allen Selverston, Mikhail Rabinovich
Abstract: The odor transduction process has a large time constant and is susceptible to various types of noise. Therefore, the olfactory code at the sensor/receptor level is in general a slow and highly variable indicator of the input odor in both natural and artificial situations. Insects overcome this problem by using a neuronal device in their Antennal Lobe (AL), which transforms the identity code of olfactory receptors to a spatio-temporal code. This transformation improves the decision of the Mushroom Bodies (MBs), the subsequent classifier, in both speed and accuracy. Here we propose a rate model based on two intrinsic mechanisms in the insect AL, namely integration and inhibition. Then we present a MB classifier model that resembles the sparse and random structure of insect MB. A local Hebbian learning procedure governs the plasticity in the model. These formulations not only help to understand the signal conditioning and classification methods of insect olfactory systems, but also can be leveraged in synthetic problems. Among them, we consider here the discrimination of odor mixtures from pure odors. We show on a set of records from metal-oxide gas sensors that the cascade of these two new models facilitates fast and accurate discrimination of even highly imbalanced mixtures from pure odors. 1
2 0.11018007 58 nips-2008-Dependence of Orientation Tuning on Recurrent Excitation and Inhibition in a Network Model of V1
Author: Klaus Wimmer, Marcel Stimberg, Robert Martin, Lars Schwabe, Jorge Mariño, James Schummers, David C. Lyon, Mriganka Sur, Klaus Obermayer
Abstract: The computational role of the local recurrent network in primary visual cortex is still a matter of debate. To address this issue, we analyze intracellular recording data of cat V1, which combine measuring the tuning of a range of neuronal properties with a precise localization of the recording sites in the orientation preference map. For the analysis, we consider a network model of Hodgkin-Huxley type neurons arranged according to a biologically plausible two-dimensional topographic orientation preference map. We then systematically vary the strength of the recurrent excitation and inhibition relative to the strength of the afferent input. Each parametrization gives rise to a different model instance for which the tuning of model neurons at different locations of the orientation map is compared to the experimentally measured orientation tuning of membrane potential, spike output, excitatory, and inhibitory conductances. A quantitative analysis shows that the data provides strong evidence for a network model in which the afferent input is dominated by strong, balanced contributions of recurrent excitation and inhibition. This recurrent regime is close to a regime of “instability”, where strong, self-sustained activity of the network occurs. The firing rate of neurons in the best-fitting network is particularly sensitive to small modulations of model parameters, which could be one of the functional benefits of a network operating in this particular regime. 1
3 0.085163601 43 nips-2008-Cell Assemblies in Large Sparse Inhibitory Networks of Biologically Realistic Spiking Neurons
Author: Adam Ponzi, Jeff Wickens
Abstract: Cell assemblies exhibiting episodes of recurrent coherent activity have been observed in several brain regions including the striatum[1] and hippocampus CA3[2]. Here we address the question of how coherent dynamically switching assemblies appear in large networks of biologically realistic spiking neurons interacting deterministically. We show by numerical simulations of large asymmetric inhibitory networks with fixed external excitatory drive that if the network has intermediate to sparse connectivity, the individual cells are in the vicinity of a bifurcation between a quiescent and firing state and the network inhibition varies slowly on the spiking timescale, then cells form assemblies whose members show strong positive correlation, while members of different assemblies show strong negative correlation. We show that cells and assemblies switch between firing and quiescent states with time durations consistent with a power-law. Our results are in good qualitative agreement with the experimental studies. The deterministic dynamical behaviour is related to winner-less competition[3], shown in small closed loop inhibitory networks with heteroclinic cycles connecting saddle-points. 1
4 0.078623042 204 nips-2008-Self-organization using synaptic plasticity
Author: Vicençc Gómez, Andreas Kaltenbrunner, Vicente López, Hilbert J. Kappen
Abstract: Large networks of spiking neurons show abrupt changes in their collective dynamics resembling phase transitions studied in statistical physics. An example of this phenomenon is the transition from irregular, noise-driven dynamics to regular, self-sustained behavior observed in networks of integrate-and-fire neurons as the interaction strength between the neurons increases. In this work we show how a network of spiking neurons is able to self-organize towards a critical state for which the range of possible inter-spike-intervals (dynamic range) is maximized. Self-organization occurs via synaptic dynamics that we analytically derive. The resulting plasticity rule is defined locally so that global homeostasis near the critical state is achieved by local regulation of individual synapses. 1
Author: Christoph Kolodziejski, Bernd Porr, Minija Tamosiunaite, Florentin Wörgötter
Abstract: In this theoretical contribution we provide mathematical proof that two of the most important classes of network learning - correlation-based differential Hebbian learning and reward-based temporal difference learning - are asymptotically equivalent when timing the learning with a local modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation based perspective that is more closely related to the biophysics of neurons. 1
6 0.06414552 230 nips-2008-Temporal Difference Based Actor Critic Learning - Convergence and Neural Implementation
7 0.059676878 38 nips-2008-Bio-inspired Real Time Sensory Map Realignment in a Robotic Barn Owl
8 0.057891618 200 nips-2008-Robust Kernel Principal Component Analysis
9 0.057068728 103 nips-2008-Implicit Mixtures of Restricted Boltzmann Machines
10 0.05356276 214 nips-2008-Sparse Online Learning via Truncated Gradient
11 0.050087031 96 nips-2008-Hebbian Learning of Bayes Optimal Decisions
12 0.049220949 90 nips-2008-Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity
13 0.049155712 130 nips-2008-MCBoost: Multiple Classifier Boosting for Perceptual Co-clustering of Images and Visual Features
14 0.048910674 62 nips-2008-Differentiable Sparse Coding
15 0.048601892 152 nips-2008-Non-stationary dynamic Bayesian networks
16 0.048100971 109 nips-2008-Interpreting the neural code with Formal Concept Analysis
17 0.047963284 118 nips-2008-Learning Transformational Invariants from Natural Movies
18 0.047738608 231 nips-2008-Temporal Dynamics of Cognitive Control
19 0.047486495 160 nips-2008-On Computational Power and the Order-Chaos Phase Transition in Reservoir Computing
20 0.046440255 153 nips-2008-Nonlinear causal discovery with additive noise models
topicId topicWeight
[(0, -0.146), (1, 0.018), (2, 0.104), (3, 0.058), (4, -0.045), (5, 0.008), (6, -0.029), (7, -0.027), (8, 0.041), (9, 0.049), (10, 0.035), (11, 0.116), (12, -0.07), (13, -0.006), (14, 0.02), (15, -0.087), (16, 0.06), (17, -0.047), (18, 0.013), (19, -0.143), (20, 0.0), (21, 0.031), (22, -0.053), (23, 0.05), (24, 0.04), (25, 0.007), (26, -0.068), (27, -0.064), (28, -0.012), (29, -0.002), (30, 0.029), (31, 0.01), (32, -0.031), (33, 0.035), (34, 0.052), (35, 0.094), (36, -0.015), (37, 0.067), (38, -0.033), (39, -0.031), (40, 0.018), (41, -0.016), (42, -0.033), (43, 0.052), (44, 0.047), (45, -0.03), (46, 0.075), (47, -0.008), (48, 0.005), (49, -0.005)]
simIndex simValue paperId paperTitle
same-paper 1 0.91218543 27 nips-2008-Artificial Olfactory Brain for Mixture Identification
Author: Mehmet K. Muezzinoglu, Alexander Vergara, Ramon Huerta, Thomas Nowotny, Nikolai Rulkov, Henry Abarbanel, Allen Selverston, Mikhail Rabinovich
Abstract: The odor transduction process has a large time constant and is susceptible to various types of noise. Therefore, the olfactory code at the sensor/receptor level is in general a slow and highly variable indicator of the input odor in both natural and artificial situations. Insects overcome this problem by using a neuronal device in their Antennal Lobe (AL), which transforms the identity code of olfactory receptors to a spatio-temporal code. This transformation improves the decision of the Mushroom Bodies (MBs), the subsequent classifier, in both speed and accuracy. Here we propose a rate model based on two intrinsic mechanisms in the insect AL, namely integration and inhibition. Then we present a MB classifier model that resembles the sparse and random structure of insect MB. A local Hebbian learning procedure governs the plasticity in the model. These formulations not only help to understand the signal conditioning and classification methods of insect olfactory systems, but also can be leveraged in synthetic problems. Among them, we consider here the discrimination of odor mixtures from pure odors. We show on a set of records from metal-oxide gas sensors that the cascade of these two new models facilitates fast and accurate discrimination of even highly imbalanced mixtures from pure odors. 1
2 0.81863534 58 nips-2008-Dependence of Orientation Tuning on Recurrent Excitation and Inhibition in a Network Model of V1
Author: Klaus Wimmer, Marcel Stimberg, Robert Martin, Lars Schwabe, Jorge Mariño, James Schummers, David C. Lyon, Mriganka Sur, Klaus Obermayer
Abstract: The computational role of the local recurrent network in primary visual cortex is still a matter of debate. To address this issue, we analyze intracellular recording data of cat V1, which combine measuring the tuning of a range of neuronal properties with a precise localization of the recording sites in the orientation preference map. For the analysis, we consider a network model of Hodgkin-Huxley type neurons arranged according to a biologically plausible two-dimensional topographic orientation preference map. We then systematically vary the strength of the recurrent excitation and inhibition relative to the strength of the afferent input. Each parametrization gives rise to a different model instance for which the tuning of model neurons at different locations of the orientation map is compared to the experimentally measured orientation tuning of membrane potential, spike output, excitatory, and inhibitory conductances. A quantitative analysis shows that the data provides strong evidence for a network model in which the afferent input is dominated by strong, balanced contributions of recurrent excitation and inhibition. This recurrent regime is close to a regime of “instability”, where strong, self-sustained activity of the network occurs. The firing rate of neurons in the best-fitting network is particularly sensitive to small modulations of model parameters, which could be one of the functional benefits of a network operating in this particular regime. 1
3 0.6760686 43 nips-2008-Cell Assemblies in Large Sparse Inhibitory Networks of Biologically Realistic Spiking Neurons
Author: Adam Ponzi, Jeff Wickens
Abstract: Cell assemblies exhibiting episodes of recurrent coherent activity have been observed in several brain regions including the striatum[1] and hippocampus CA3[2]. Here we address the question of how coherent dynamically switching assemblies appear in large networks of biologically realistic spiking neurons interacting deterministically. We show by numerical simulations of large asymmetric inhibitory networks with fixed external excitatory drive that if the network has intermediate to sparse connectivity, the individual cells are in the vicinity of a bifurcation between a quiescent and firing state and the network inhibition varies slowly on the spiking timescale, then cells form assemblies whose members show strong positive correlation, while members of different assemblies show strong negative correlation. We show that cells and assemblies switch between firing and quiescent states with time durations consistent with a power-law. Our results are in good qualitative agreement with the experimental studies. The deterministic dynamical behaviour is related to winner-less competition[3], shown in small closed loop inhibitory networks with heteroclinic cycles connecting saddle-points. 1
4 0.67053282 160 nips-2008-On Computational Power and the Order-Chaos Phase Transition in Reservoir Computing
Author: Benjamin Schrauwen, Lars Buesing, Robert A. Legenstein
Abstract: Randomly connected recurrent neural circuits have proven to be very powerful models for online computations when a trained memoryless readout function is appended. Such Reservoir Computing (RC) systems are commonly used in two flavors: with analog or binary (spiking) neurons in the recurrent circuits. Previous work showed a fundamental difference between these two incarnations of the RC idea. The performance of a RC system built from binary neurons seems to depend strongly on the network connectivity structure. In networks of analog neurons such dependency has not been observed. In this article we investigate this apparent dichotomy in terms of the in-degree of the circuit nodes. Our analyses based amongst others on the Lyapunov exponent reveal that the phase transition between ordered and chaotic network behavior of binary circuits qualitatively differs from the one in analog circuits. This explains the observed decreased computational performance of binary circuits of high node in-degree. Furthermore, a novel mean-field predictor for computational performance is introduced and shown to accurately predict the numerically obtained results. 1
5 0.66250277 204 nips-2008-Self-organization using synaptic plasticity
Author: Vicençc Gómez, Andreas Kaltenbrunner, Vicente López, Hilbert J. Kappen
Abstract: Large networks of spiking neurons show abrupt changes in their collective dynamics resembling phase transitions studied in statistical physics. An example of this phenomenon is the transition from irregular, noise-driven dynamics to regular, self-sustained behavior observed in networks of integrate-and-fire neurons as the interaction strength between the neurons increases. In this work we show how a network of spiking neurons is able to self-organize towards a critical state for which the range of possible inter-spike-intervals (dynamic range) is maximized. Self-organization occurs via synaptic dynamics that we analytically derive. The resulting plasticity rule is defined locally so that global homeostasis near the critical state is achieved by local regulation of individual synapses. 1
6 0.6327731 38 nips-2008-Bio-inspired Real Time Sensory Map Realignment in a Robotic Barn Owl
7 0.57287574 158 nips-2008-Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks
8 0.49918881 240 nips-2008-Tracking Changing Stimuli in Continuous Attractor Neural Networks
9 0.46055689 230 nips-2008-Temporal Difference Based Actor Critic Learning - Convergence and Neural Implementation
10 0.45785394 90 nips-2008-Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity
11 0.45228261 152 nips-2008-Non-stationary dynamic Bayesian networks
12 0.44708174 209 nips-2008-Short-Term Depression in VLSI Stochastic Synapse
13 0.43353823 148 nips-2008-Natural Image Denoising with Convolutional Networks
14 0.42214617 74 nips-2008-Estimating the Location and Orientation of Complex, Correlated Neural Activity using MEG
16 0.40266535 156 nips-2008-Nonparametric sparse hierarchical models describe V1 fMRI responses to natural images
17 0.40124759 124 nips-2008-Load and Attentional Bayes
18 0.40043187 222 nips-2008-Stress, noradrenaline, and realistic prediction of mouse behaviour using reinforcement learning
19 0.39665213 109 nips-2008-Interpreting the neural code with Formal Concept Analysis
20 0.37605268 3 nips-2008-A Massively Parallel Digital Learning Processor
topicId topicWeight
[(4, 0.024), (6, 0.093), (7, 0.059), (12, 0.026), (15, 0.015), (25, 0.014), (28, 0.1), (57, 0.303), (59, 0.036), (63, 0.015), (68, 0.106), (71, 0.018), (77, 0.037), (83, 0.036)]
simIndex simValue paperId paperTitle
1 0.91461742 191 nips-2008-Recursive Segmentation and Recognition Templates for 2D Parsing
Author: Leo Zhu, Yuanhao Chen, Yuan Lin, Chenxi Lin, Alan L. Yuille
Abstract: Language and image understanding are two major goals of artificial intelligence which can both be conceptually formulated in terms of parsing the input signal into a hierarchical representation. Natural language researchers have made great progress by exploiting the 1D structure of language to design efficient polynomialtime parsing algorithms. By contrast, the two-dimensional nature of images makes it much harder to design efficient image parsers and the form of the hierarchical representations is also unclear. Attempts to adapt representations and algorithms from natural language have only been partially successful. In this paper, we propose a Hierarchical Image Model (HIM) for 2D image parsing which outputs image segmentation and object recognition. This HIM is represented by recursive segmentation and recognition templates in multiple layers and has advantages for representation, inference, and learning. Firstly, the HIM has a coarse-to-fine representation which is capable of capturing long-range dependency and exploiting different levels of contextual information. Secondly, the structure of the HIM allows us to design a rapid inference algorithm, based on dynamic programming, which enables us to parse the image rapidly in polynomial time. Thirdly, we can learn the HIM efficiently in a discriminative manner from a labeled dataset. We demonstrate that HIM outperforms other state-of-the-art methods by evaluation on the challenging public MSRC image dataset. Finally, we sketch how the HIM architecture can be extended to model more complex image phenomena. 1
2 0.90393919 233 nips-2008-The Gaussian Process Density Sampler
Author: Iain Murray, David MacKay, Ryan P. Adams
Abstract: We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We can also infer the hyperparameters of the Gaussian process. We compare this density modeling technique to several existing techniques on a toy problem and a skullreconstruction task. 1
3 0.90271157 148 nips-2008-Natural Image Denoising with Convolutional Networks
Author: Viren Jain, Sebastian Seung
Abstract: We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind denoising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters. 1 Background Low-level image processing tasks include edge detection, interpolation, and deconvolution. These tasks are useful both in themselves, and as a front-end for high-level visual tasks like object recognition. This paper focuses on the task of denoising, defined as the recovery of an underlying image from an observation that has been subjected to Gaussian noise. One approach to image denoising is to transform an image from pixel intensities into another representation where statistical regularities are more easily captured. For example, the Gaussian scale mixture (GSM) model introduced by Portilla and colleagues is based on a multiscale wavelet decomposition that provides an effective description of local image statistics [1, 2]. Another approach is to try and capture statistical regularities of pixel intensities directly using Markov random fields (MRFs) to define a prior over the image space. Initial work used hand-designed settings of the parameters, but recently there has been increasing success in learning the parameters of such models from databases of natural images [3, 4, 5, 6, 7, 8]. Prior models can be used for tasks such as image denoising by augmenting the prior with a noise model. Alternatively, an MRF can be used to model the probability distribution of the clean image conditioned on the noisy image. This conditional random field (CRF) approach is said to be discriminative, in contrast to the generative MRF approach. Several researchers have shown that the CRF approach can outperform generative learning on various image restoration and labeling tasks [9, 10]. CRFs have recently been applied to the problem of image denoising as well [5]. The present work is most closely related to the CRF approach. Indeed, certain special cases of convolutional networks can be seen as performing maximum likelihood inference on a CRF [11]. The advantage of the convolutional network approach is that it avoids a general difficulty with applying MRF-based methods to image analysis: the computational expense associated with both parameter estimation and inference in probabilistic models.
For example, naive methods of learning MRF-based models involve calculation of the partition function, a normalization factor that is generally intractable for realistic models and image dimensions. As a result, a great deal of research has been devoted to approximate MRF learning and inference techniques that meliorate computational difficulties, generally at the cost of either representational power or theoretical guarantees [12, 13]. Convolutional networks largely avoid these difficulties by posing the computational task within the statistical framework of regression rather than density estimation. Regression is a more tractable computation and therefore permits models with greater representational power than methods based on density estimation. This claim will be argued for with empirical results on the denoising problem, as well as mathematical connections between MRF and convolutional network approaches. 2 Convolutional Networks Convolutional networks have been extensively applied to visual object recognition using architectures that accept an image as input and, through alternating layers of convolution and subsampling, produce one or more output values that are thresholded to yield binary predictions regarding object identity [14, 15]. In contrast, we study networks that accept an image as input and produce an entire image as output. Previous work has used such architectures to produce images with binary targets in image restoration problems for specialized microscopy data [11, 16]. Here we show that similar architectures can also be used to produce images with the analog fluctuations found in the intensity distributions of natural images. Network Dynamics and Architecture A convolutional network is an alternating sequence of linear filtering and nonlinear transformation operations. The input and output layers include one or more images, while intermediate layers contain “hidden
4 0.89643812 236 nips-2008-The Mondrian Process
Author: Daniel M. Roy, Yee W. Teh
Abstract: We describe a novel class of distributions, called Mondrian processes, which can be interpreted as probability distributions over kd-tree data structures. Mondrian processes are multidimensional generalizations of Poisson processes and this connection allows us to construct multidimensional generalizations of the stickbreaking process described by Sethuraman (1994), recovering the Dirichlet process in one dimension. After introducing the Aldous-Hoover representation for jointly and separately exchangeable arrays, we show how the process can be used as a nonparametric prior distribution in Bayesian models of relational data. 1
5 0.89402521 80 nips-2008-Extended Grassmann Kernels for Subspace-Based Learning
Author: Jihun Hamm, Daniel D. Lee
Abstract: Subspace-based learning problems involve data whose elements are linear subspaces of a vector space. To handle such data structures, Grassmann kernels have been proposed and used previously. In this paper, we analyze the relationship between Grassmann kernels and probabilistic similarity measures. Firstly, we show that the KL distance in the limit yields the Projection kernel on the Grassmann manifold, whereas the Bhattacharyya kernel becomes trivial in the limit and is suboptimal for subspace-based problems. Secondly, based on our analysis of the KL distance, we propose extensions of the Projection kernel which can be extended to the set of affine as well as scaled subspaces. We demonstrate the advantages of these extended kernels for classification and recognition tasks with Support Vector Machines and Kernel Discriminant Analysis using synthetic and real image databases. 1
same-paper 6 0.88195199 27 nips-2008-Artificial Olfactory Brain for Mixture Identification
7 0.82883757 100 nips-2008-How memory biases affect information transmission: A rational analysis of serial reproduction
8 0.78322607 208 nips-2008-Shared Segmentation of Natural Scenes Using Dependent Pitman-Yor Processes
9 0.75259143 158 nips-2008-Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks
10 0.70535624 116 nips-2008-Learning Hybrid Models for Image Annotation with Partially Labeled Data
11 0.70403659 234 nips-2008-The Infinite Factorial Hidden Markov Model
12 0.70344669 35 nips-2008-Bayesian Synchronous Grammar Induction
13 0.69303715 200 nips-2008-Robust Kernel Principal Component Analysis
14 0.68827355 66 nips-2008-Dynamic visual attention: searching for coding length increments
15 0.68560106 232 nips-2008-The Conjoint Effect of Divisive Normalization and Orientation Selectivity on Redundancy Reduction
16 0.68175656 192 nips-2008-Reducing statistical dependencies in natural signals using radial Gaussianization
17 0.67942148 42 nips-2008-Cascaded Classification Models: Combining Models for Holistic Scene Understanding
18 0.67192799 197 nips-2008-Relative Performance Guarantees for Approximate Inference in Latent Dirichlet Allocation
19 0.66963404 127 nips-2008-Logistic Normal Priors for Unsupervised Probabilistic Grammar Induction
20 0.66461623 118 nips-2008-Learning Transformational Invariants from Natural Movies