nips nips2003 nips2003-157 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Peter Dayan, Michael Häusser, Michael London
Abstract: Computational mysteries surround the kernels relating the magnitude and sign of changes in efficacy as a function of the time difference between pre- and post-synaptic activity at a synapse. One important idea [34] is that kernels result from filtering, i.e. an attempt by synapses to eliminate noise corrupting learning. This idea has hitherto been applied to trace learning rules; we apply it to experimentally-defined kernels, using it to reverse-engineer assumed signal statistics. We also extend it to consider the additional goal for filtering of weighting learning according to statistical surprise, as in the Z-score transform. This provides a fresh view of observed kernels and can lead to different, and more natural, signal statistics.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract Computational mysteries surround the kernels relating the magnitude and sign of changes in efficacy as a function of the time difference between pre- and post-synaptic activity at a synapse. [sent-8, score-0.381]
2 One important idea [34] is that kernels result from filtering, i.e. an attempt by synapses to eliminate noise corrupting learning. [sent-9, score-0.546]
3 This idea has hitherto been applied to trace learning rules; we apply it to experimentally-defined kernels, using it to reverse-engineer assumed signal statistics. [sent-10, score-0.254]
4 We also extend it to consider the additional goal for filtering of weighting learning according to statistical surprise, as in the Z-score transform. [sent-11, score-0.18]
5 This provides a fresh view of observed kernels and can lead to different, and more natural, signal statistics. [sent-12, score-0.427]
6 These experimentally-determined rules (usually called spike-time dependent plasticity or STDP rules), which are constantly being refined, [18,30] have inspired substantial further theoretical work on their modeling and interpretation. [sent-14, score-0.794]
7 [2,9,10,22,28,29,33] Figure 1(D1-G1)* depicts some of the main STDP findings, of which the best-investigated are shown in figure 1(D1,E1), and are variants of a 'standard' STDP rule. [sent-15, score-0.048]
8 Earlier work considered rate-based rather than spike-based temporal rules, and so we adopt the broader term 'time-dependent plasticity' or TDP. [sent-16, score-0.315]
9 Note the strong temporal asymmetry in both the standard rules. [sent-17, score-0.271]
10 Two main qualitative notions explored in several of the works cited above are that the temporal asymmetries in TDP rules are associated with causality or prediction. [sent-19, score-0.776]
11 However, looking specifically at the standard STDP rules, models interested in prediction (*We refer to graphs in this figure by row and column.) [sent-20, score-0.197]
12 concentrate mostly on the LTP component and have difficulty explaining the precisely-timed nature of the LTD. [sent-21, score-0.175]
13 Why should it be particularly detrimental to the weight of a synapse that the pre-synaptic action potential comes just after a post-synaptic action-potential, rather than 200ms later, for instance? [sent-22, score-0.178]
14 In the case of time-difference or temporal difference rules, [29,32] why might the LTD component be so different from the mirror reflection of the LTP component (figure 1(E1)), at least short of being tied to some particular biophysical characteristic of the post-synaptic cell? [sent-24, score-0.43]
15 Wallis & Baddeley [34] formalized the intuition underlying one class of TDP rules (the so-called trace based rules, figure 1(A1)) in terms of temporal filtering. [sent-26, score-0.785]
16 In their model, the actual output is a noisy version of a 'true' underlying signal. [sent-27, score-0.153]
17 They suggested, and showed in an example, that learning proceeds more proficiently if the output is filtered by an optimal noise-removal filter (in their case, a Wiener filter) before entering into the learning rule. [sent-28, score-0.431]
18 This is like using a prior over the signal, and performing learning based on the mean of the posterior over the signal given the observations (i.e. the output). [sent-29, score-0.192]
19 If objects in the world normally persist for substantial periods, then, under some reasonable assumptions about noise, it turns out to be appropriate to apply a low-pass filter to the output. [sent-30, score-0.328]
20 Of course, as seen in column 1 of figure 1, TDP rules are generally not tracelike. [sent-32, score-0.432]
21 Here, we extend the Wallis-Baddeley (WB) treatment to rate-based versions of the actual rules shown in the figure. [sent-33, score-0.476]
22 We consider two possibilities, which infer optimal signal models from the rules, based on two different assumptions about their computational role. [sent-34, score-0.192]
23 The other, which is closely related to recent work on adaptation and modulation, [3,5,15,36] has the kernel normalize frequency components according to their standard deviations, as well as removing noise. [sent-36, score-0.092]
24 Under this interpretation, the learning signal is a Z-score-transformed version of the output. [sent-37, score-0.192]
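As a rough illustration (our own sketch, not code from the paper; the function name and the exact way the two factors are combined are assumptions), a frequency-domain gain that both removes noise and normalizes each component by its standard deviation, giving a Z-score-like learning signal, could look like:

```python
import numpy as np

def zscore_gain(S2, N2):
    """Frequency-domain gain combining Wiener denoising with
    normalization by the signal's standard deviation |S(w)|, so each
    surviving frequency component is expressed in Z-score units.
    S2, N2: signal and noise power spectra |S(w)|^2, |N(w)|^2."""
    S2 = np.asarray(S2, dtype=float)
    N2 = np.asarray(N2, dtype=float)
    wiener = S2 / (S2 + N2)        # noise removal
    whiten = 1.0 / np.sqrt(S2)     # divide by per-frequency std
    return wiener * whiten         # = |S| / (|S|^2 + |N|^2)
```

With no noise the gain reduces to pure whitening, 1/|S(ω)|; as noise grows the gain falls toward zero, so low-variance ("surprising") components are emphasized only where they are reliably measurable.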
25 In section 3, we extend this model to the case of the observed rules for synaptic plasticity. [sent-39, score-0.535]
26 Consider input synapses {1, . . . , n} with firing rates xi(t) at time t to a neuron with output rate y(t). [sent-43, score-0.104]
27 WB [34] consider the case that the output can be decomposed as y(t) = s(t) + n(t), where s(t) is a 'true' underlying signal and n(t) is noise corrupting the signal. [sent-45, score-0.535]
28 They suggest defining the filter φ so that ŝ(t) = ∫ dt′ y(t′)φ(t − t′) is the optimal least-squares estimate of the signal. [sent-46, score-0.276]
29 Thus, learning would be based on the best available information about the signal s (t). [sent-47, score-0.192]
30 If signal and noise are statistically stationary signals, with power spectra |S(ω)|² and |N(ω)|² respectively at (temporal) frequency ω, then the magnitude of the Fourier transform of the optimal filter is given by the Wiener gain |Φ(ω)| = |S(ω)|² / (|S(ω)|² + |N(ω)|²). [sent-48, score-0.791]
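Concretely, the Wiener gain can be applied per FFT bin. The following sketch (our own illustration, using an idealized known signal spectrum) shows the filter reducing the squared error of a noisy output:

```python
import numpy as np

def wiener_denoise(y, S2, N2):
    """Apply the Wiener gain |S|^2 / (|S|^2 + |N|^2) to each rfft
    frequency bin of y and transform back to the time domain."""
    Y = np.fft.rfft(y)
    gain = S2 / (S2 + N2)
    return np.fft.irfft(Y * gain, n=len(y))

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
s = np.sin(2 * np.pi * t / 256)           # slow 'true' signal
y = s + rng.normal(scale=0.5, size=n)     # noisy observed output
S2 = np.abs(np.fft.rfft(s)) ** 2          # idealized signal spectrum
N2 = np.full_like(S2, 0.25 * n)           # white-noise power per bin
s_hat = wiener_denoise(y, S2, N2)         # filtered estimate of s
```

Because the assumed signal spectrum is concentrated at low frequencies, the resulting filter is effectively a low-pass filter, matching the trace-rule intuition in the text.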
31 The ordinates of the plots have been individually normalized; but the abscissae for all the temporal (t) plots and, separately, all the spectral (ω) plots, are the same, for the purposes of comparison. [sent-61, score-0.342]
32 In the text, we refer to individual graphs in this figure by their row letter (A-G) and column number (1-4). [sent-63, score-0.195]
wordName wordTfidf (topN-words)
[('rules', 0.393), ('tdp', 0.343), ('wiener', 0.253), ('filter', 0.222), ('temporal', 0.22), ('stdp', 0.217), ('plasticity', 0.216), ('signal', 0.192), ('kernels', 0.175), ('wb', 0.163), ('cf', 0.163), ('corrupting', 0.137), ('filtering', 0.133), ('ie', 0.109), ('synaptic', 0.095), ('spectrum', 0.083), ('specifically', 0.08), ('synapse', 0.077), ('spectra', 0.075), ('power', 0.072), ('magnitude', 0.065), ('wi', 0.063), ('trace', 0.062), ('plots', 0.061), ('output', 0.061), ('confirmed', 0.06), ('refined', 0.06), ('asymmetries', 0.06), ('cited', 0.06), ('speculation', 0.06), ('whitening', 0.06), ('bcm', 0.06), ('fresh', 0.06), ('filtered', 0.06), ('detrimental', 0.06), ('gower', 0.06), ('ucl', 0.06), ('frequency', 0.057), ('underlying', 0.056), ('noise', 0.055), ('substantial', 0.055), ('cd', 0.054), ('broader', 0.054), ('surprise', 0.054), ('formalized', 0.054), ('surround', 0.054), ('defining', 0.054), ('difficulty', 0.054), ('dept', 0.054), ('governs', 0.054), ('dl', 0.051), ('michael', 0.051), ('asymmetry', 0.051), ('biophysical', 0.051), ('ltd', 0.051), ('persist', 0.051), ('entering', 0.051), ('depict', 0.048), ('constantly', 0.048), ('dayan', 0.048), ('cp', 0.048), ('extend', 0.047), ('sign', 0.047), ('filters', 0.045), ('street', 0.045), ('possibilities', 0.045), ('mirror', 0.045), ('explaining', 0.045), ('suggested', 0.044), ('firing', 0.043), ('physiology', 0.043), ('notions', 0.043), ('graphs', 0.043), ('governing', 0.042), ('tied', 0.042), ('medium', 0.042), ('action', 0.041), ('modeling', 0.041), ('text', 0.041), ('dependent', 0.041), ('concentrate', 0.04), ('continues', 0.04), ('relating', 0.04), ('modulation', 0.04), ('column', 0.039), ('letter', 0.039), ('proceeds', 0.037), ('el', 0.037), ('timing', 0.037), ('ex', 0.037), ('refer', 0.037), ('row', 0.037), ('component', 0.036), ('eliminate', 0.036), ('actual', 0.036), ('peter', 0.035), ('kernel', 0.035), ('synapses', 0.034), ('periods', 0.034), ('decomposed', 0.034)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000002 157 nips-2003-Plasticity Kernels and Temporal Statistics
Author: Peter Dayan, Michael Häusser, Michael London
Abstract: Computational mysteries surround the kernels relating the magnitude and sign of changes in efficacy as a function of the time difference between pre- and post-synaptic activity at a synapse. One important idea [34] is that kernels result from filtering, i.e. an attempt by synapses to eliminate noise corrupting learning. This idea has hitherto been applied to trace learning rules; we apply it to experimentally-defined kernels, using it to reverse-engineer assumed signal statistics. We also extend it to consider the additional goal for filtering of weighting learning according to statistical surprise, as in the Z-score transform. This provides a fresh view of observed kernels and can lead to different, and more natural, signal statistics.
2 0.15374294 183 nips-2003-Synchrony Detection by Analogue VLSI Neurons with Bimodal STDP Synapses
Author: Adria Bofill-i-petit, Alan F. Murray
Abstract: We present test results from spike-timing correlation learning experiments carried out with silicon neurons with STDP (Spike Timing Dependent Plasticity) synapses. The weight change scheme of the STDP synapses can be set to either weight-independent or weight-dependent mode. We present results that characterise the learning window implemented for both modes of operation. When presented with spike trains with different types of synchronisation the neurons develop bimodal weight distributions. We also show that a 2-layered network of silicon spiking neurons with STDP synapses can perform hierarchical synchrony detection. 1
3 0.14569406 27 nips-2003-Analytical Solution of Spike-timing Dependent Plasticity Based on Synaptic Biophysics
Author: Bernd Porr, Ausra Saudargiene, Florentin Wörgötter
Abstract: Spike timing plasticity (STDP) is a special form of synaptic plasticity where the relative timing of post- and presynaptic activity determines the change of the synaptic weight. On the postsynaptic side, active backpropagating spikes in dendrites seem to play a crucial role in the induction of spike timing dependent plasticity. We argue that postsynaptically the temporal change of the membrane potential determines the weight change. Coming from the presynaptic side induction of STDP is closely related to the activation of NMDA channels. Therefore, we will calculate analytically the change of the synaptic weight by correlating the derivative of the membrane potential with the activity of the NMDA channel. Thus, for this calculation we utilise biophysical variables of the physiological cell. The final result shows a weight change curve which conforms with measurements from biology. The positive part of the weight change curve is determined by the NMDA activation. The negative part of the weight change curve is determined by the membrane potential change. Therefore, the weight change curve should change its shape depending on the distance from the soma of the postsynaptic cell. We find temporally asymmetric weight change close to the soma and temporally symmetric weight change in the distal dendrite. 1
4 0.07989987 18 nips-2003-A Summating, Exponentially-Decaying CMOS Synapse for Spiking Neural Systems
Author: Rock Z. Shi, Timothy K. Horiuchi
Abstract: Synapses are a critical element of biologically-realistic, spike-based neural computation, serving the role of communication, computation, and modification. Many different circuit implementations of synapse function exist with different computational goals in mind. In this paper we describe a new CMOS synapse design that separately controls quiescent leak current, synaptic gain, and time-constant of decay. This circuit implements part of a commonly-used kinetic model of synaptic conductance. We show a theoretical analysis and experimental data for prototypes fabricated in a commercially-available 1.5µm CMOS process. 1
5 0.072645523 162 nips-2003-Probabilistic Inference of Speech Signals from Phaseless Spectrograms
Author: Kannan Achan, Sam T. Roweis, Brendan J. Frey
Abstract: Many techniques for complex speech processing such as denoising and deconvolution, time/frequency warping, multiple speaker separation, and multiple microphone analysis operate on sequences of short-time power spectra (spectrograms), a representation which is often well-suited to these tasks. However, a significant problem with algorithms that manipulate spectrograms is that the output spectrogram does not include a phase component, which is needed to create a time-domain signal that has good perceptual quality. Here we describe a generative model of time-domain speech signals and their spectrograms, and show how an efficient optimizer can be used to find the maximum a posteriori speech signal, given the spectrogram. In contrast to techniques that alternate between estimating the phase and a spectrally-consistent signal, our technique directly infers the speech signal, thus jointly optimizing the phase and a spectrally-consistent signal. We compare our technique with a standard method using signal-to-noise ratios, but we also provide audio files on the web for the purpose of demonstrating the improvement in perceptual quality that our technique offers. 1
6 0.066932768 114 nips-2003-Limiting Form of the Sample Covariance Eigenspectrum in PCA and Kernel PCA
7 0.065813296 165 nips-2003-Reasoning about Time and Knowledge in Neural Symbolic Learning Systems
8 0.060741693 184 nips-2003-The Diffusion-Limited Biochemical Signal-Relay Channel
9 0.058300328 13 nips-2003-A Neuromorphic Multi-chip Model of a Disparity Selective Complex Cell
10 0.057655506 69 nips-2003-Factorization with Uncertainty and Missing Data: Exploiting Temporal Coherence
11 0.055953328 15 nips-2003-A Probabilistic Model of Auditory Space Representation in the Barn Owl
12 0.053197987 49 nips-2003-Decoding V1 Neuronal Activity using Particle Filtering with Volterra Kernels
13 0.052047595 119 nips-2003-Local Phase Coherence and the Perception of Blur
14 0.051989105 110 nips-2003-Learning a World Model and Planning with a Self-Organizing, Dynamic Neural System
15 0.051651105 160 nips-2003-Prediction on Spike Data Using Kernel Algorithms
16 0.051540621 115 nips-2003-Linear Dependent Dimensionality Reduction
17 0.050800152 5 nips-2003-A Classification-based Cocktail-party Processor
18 0.049883071 104 nips-2003-Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks
19 0.047273137 57 nips-2003-Dynamical Modeling with Kernels for Nonlinear Time Series Prediction
20 0.047178179 61 nips-2003-Entrainment of Silicon Central Pattern Generators for Legged Locomotory Control
topicId topicWeight
[(0, -0.148), (1, 0.042), (2, 0.176), (3, 0.031), (4, 0.027), (5, 0.042), (6, 0.005), (7, -0.007), (8, -0.004), (9, 0.02), (10, 0.061), (11, -0.004), (12, -0.031), (13, -0.03), (14, -0.087), (15, -0.101), (16, 0.085), (17, -0.021), (18, 0.012), (19, -0.038), (20, -0.031), (21, 0.122), (22, 0.001), (23, 0.115), (24, -0.041), (25, 0.069), (26, 0.043), (27, 0.104), (28, -0.051), (29, -0.051), (30, 0.114), (31, -0.119), (32, -0.008), (33, -0.098), (34, -0.117), (35, -0.073), (36, 0.057), (37, -0.051), (38, -0.21), (39, 0.125), (40, 0.129), (41, -0.132), (42, -0.05), (43, -0.147), (44, 0.098), (45, -0.04), (46, -0.091), (47, 0.077), (48, -0.14), (49, -0.041)]
simIndex simValue paperId paperTitle
same-paper 1 0.97973019 157 nips-2003-Plasticity Kernels and Temporal Statistics
Author: Peter Dayan, Michael Häusser, Michael London
Abstract: Computational mysteries surround the kernels relating the magnitude and sign of changes in efficacy as a function of the time difference between pre- and post-synaptic activity at a synapse. One important idea [34] is that kernels result from filtering, i.e. an attempt by synapses to eliminate noise corrupting learning. This idea has hitherto been applied to trace learning rules; we apply it to experimentally-defined kernels, using it to reverse-engineer assumed signal statistics. We also extend it to consider the additional goal for filtering of weighting learning according to statistical surprise, as in the Z-score transform. This provides a fresh view of observed kernels and can lead to different, and more natural, signal statistics.
2 0.77673578 27 nips-2003-Analytical Solution of Spike-timing Dependent Plasticity Based on Synaptic Biophysics
Author: Bernd Porr, Ausra Saudargiene, Florentin Wörgötter
Abstract: Spike timing plasticity (STDP) is a special form of synaptic plasticity where the relative timing of post- and presynaptic activity determines the change of the synaptic weight. On the postsynaptic side, active backpropagating spikes in dendrites seem to play a crucial role in the induction of spike timing dependent plasticity. We argue that postsynaptically the temporal change of the membrane potential determines the weight change. Coming from the presynaptic side induction of STDP is closely related to the activation of NMDA channels. Therefore, we will calculate analytically the change of the synaptic weight by correlating the derivative of the membrane potential with the activity of the NMDA channel. Thus, for this calculation we utilise biophysical variables of the physiological cell. The final result shows a weight change curve which conforms with measurements from biology. The positive part of the weight change curve is determined by the NMDA activation. The negative part of the weight change curve is determined by the membrane potential change. Therefore, the weight change curve should change its shape depending on the distance from the soma of the postsynaptic cell. We find temporally asymmetric weight change close to the soma and temporally symmetric weight change in the distal dendrite. 1
3 0.59080124 183 nips-2003-Synchrony Detection by Analogue VLSI Neurons with Bimodal STDP Synapses
Author: Adria Bofill-i-petit, Alan F. Murray
Abstract: We present test results from spike-timing correlation learning experiments carried out with silicon neurons with STDP (Spike Timing Dependent Plasticity) synapses. The weight change scheme of the STDP synapses can be set to either weight-independent or weight-dependent mode. We present results that characterise the learning window implemented for both modes of operation. When presented with spike trains with different types of synchronisation the neurons develop bimodal weight distributions. We also show that a 2-layered network of silicon spiking neurons with STDP synapses can perform hierarchical synchrony detection. 1
4 0.53101087 184 nips-2003-The Diffusion-Limited Biochemical Signal-Relay Channel
Author: Peter J. Thomas, Donald J. Spencer, Sierra K. Hampton, Peter Park, Joseph P. Zurkus
Abstract: Biochemical signal-transduction networks are the biological information-processing systems by which individual cells, from neurons to amoebae, perceive and respond to their chemical environments. We introduce a simplified model of a single biochemical relay and analyse its capacity as a communications channel. A diffusible ligand is released by a sending cell and received by binding to a transmembrane receptor protein on a receiving cell. This receptor-ligand interaction creates a nonlinear communications channel with non-Gaussian noise. We model this channel numerically and study its response to input signals of different frequencies in order to estimate its channel capacity. Stochastic effects introduced in both the diffusion process and the receptor-ligand interaction give the channel low-pass characteristics. We estimate the channel capacity using a water-filling formula adapted from the additive white-noise Gaussian channel. 1 Introduction: The Diffusion-Limited Biochemical Signal-Relay Channel The term signal-transduction network refers to the web of biochemical interactions by which single cells process sensory information about their environment. Just as neural networks underlie the interaction of many multicellular organisms with their environments, these biochemical networks allow cells to perceive, evaluate and react to chemical stimuli [1]. Examples include chemical signaling across the synaptic cleft, calcium signaling within the postsynaptic dendritic spine, pathogen localization by the immune system, growth-cone guidance during neuronal development, phototransduction in the retina, rhythmic chemotactic signaling in social amoebae, and many others. The introduction of quantitative measurements of the distribution and activation of chemical reactants within living cells [2] has prepared the way for detailed quantitative analysis of their properties, aided by numerical simulations.
One of the key questions that can now be addressed concerns the fundamental limits to cell-to-cell communication using chemical signaling. To communicate via chemical signaling, cells must contend with the unreliability inherent in chemical diffusion and in the interactions of limited numbers of signaling molecules and receptors [3]. We study a simplified situation in which one cell secretes a signaling molecule, or ligand, which can be detected by a receptor on another cell. Limiting ourselves to one ligand-receptor interaction allows a treatment of this communications system using elementary concepts from information theory. The information capacity of this fundamental signaling system is the maximum of the mutual information between the ensemble of input signals, the time-varying rate of ligand secretion s(t), and the output signal r(t), a piecewise continuous function taking the values one or zero as the receptor is bound to ligand or unbound. Using numerical simulation we can estimate the channel capacity via a standard "water-filling" information measure [4], as described below.

2 Methods: Numerical Simulation of the Biochemical Relay

We simulate a biochemical relay system as follows: in a two-dimensional rectangular volume V measuring 5 micrometers by 10 micrometers, we locate two cells spaced 5 micrometers apart. Cell A emits ligand molecules from location x_s = [2.5µ, 2.5µ] with rate s(t) ≥ 0; they diffuse with a given diffusion constant D and decay at a rate α. Both secretion and decay occur as random Poisson processes, and diffusion is realized as a discrete random walk with Gaussian-distributed displacements. The boundaries of V are taken to be reflecting. We track the positions of each of N particles {x_i, i = 1, · · · , N} at intervals of ∆t = 1 msec.
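The simulation loop described above can be sketched as follows (a toy reimplementation of our own, not the authors' code; function and parameter names are illustrative): molecules are released as a Poisson process, take Gaussian random-walk steps, reflect at the walls, and decay at rate α.

```python
import numpy as np

def simulate_ligand(rate_fn, n_steps, dt=1e-3, D=0.25, alpha=1.0,
                    source=(2.5, 2.5), box=(10.0, 5.0), seed=0):
    """Toy 2-D secretion/diffusion/decay simulation. Distances in
    microns, times in seconds. rate_fn(t) is cell A's secretion rate
    in molecules/second; returns surviving molecule positions."""
    rng = np.random.default_rng(seed)
    pos = np.empty((0, 2))
    step = np.sqrt(2.0 * D * dt)                   # per-axis step std
    box = np.asarray(box)
    for k in range(n_steps):
        n_new = rng.poisson(rate_fn(k * dt) * dt)  # Poisson secretion
        pos = np.vstack([pos, np.tile(source, (n_new, 1))])
        pos += rng.normal(scale=step, size=pos.shape)
        pos = np.abs(pos)                          # reflect at 0
        pos = np.minimum(pos, 2 * box - pos)       # reflect at walls
        pos = pos[rng.random(len(pos)) > alpha * dt]  # exponential decay
    return pos

# At a constant 1000 molecules/s with alpha = 1/s, the steady-state
# population is on the order of 1000 molecules.
pos = simulate_ligand(lambda t: 1000.0, n_steps=3000)
```

The steady-state population for a constant rate s and decay rate α is s/α, which is why ~1000 molecules accumulate in this toy run.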
The local concentration in a neighborhood of size σ around a location x is given by the convolution

ĉ(x, t) = ∫_V Σ_{i=1}^{N} δ(x′ − x_i) g(x − x′, σ) dx′    (1)

where g(·, σ) is a normalized Gaussian distribution in the plane, with mean 0 and variance σ². The motions of the individual particles cause ĉ(x, t) to fluctuate about the mean concentration, causing the local concentration at cell B, ĉ(x_r, t), to be a noisy, low-pass filtered version of the original signal s(t) (see Figure 1). Cell B, located at x_r = [7.5µ, 2.5µ], registers the presence of ligand through binding and unbinding transitions, which form a two-state Markov process with time-varying transition rates. Given an unbound receptor, the binding transition happens at a rate that depends on the ligand concentration around the receptor: k₊ĉ(x_r, t). The size of the neighborhood σ reflects the range of the receptor, with binding most likely in a small region close to x_r. Once the receptor is bound to a ligand molecule, no more binding events occur until the receptor releases the ligand. The receiver is insensitive to fluctuations in ĉ(x_r, t) while it is in the bound state (see Figure 1). The unbinding transition occurs with a fixed rate k₋. For concreteness, we take values for D, α, k₋, k₊, and σ appropriate for cyclic AMP signaling between Dictyostelium amoebae, a model organism for chemical communication: D = 0.25µ² msec⁻¹, α = 1 sec⁻¹, σ = 0.1µ, k₋ = 1 sec⁻¹, k₊ = 1/(2πσ²) sec⁻¹. Kd = k₋/k₊ is the dissociation constant, the concentration at which the receptor on average is bound half the time. For the chosen values of the reaction constants k±, we have Kd ≈ 15.9 molecules/µ² ≈ 26.4 nMol, comparable to the most sensitive values reported for the cyclic AMP receptor [2]. At this concentration the volume V = 50µ² contains about 800 signaling molecules, assuming a nominal depth of 1µ.

Figure 1: Biochemical Signaling Simulation. Top: Cell A secretes a signaling molecule (red dots) with a time-varying rate s(t). Molecules diffuse throughout the two-dimensional volume, leading to locally fluctuating concentrations that carry a corrupted version of the signal. Molecules within a neighborhood of cell B can bind to a receptor molecule, giving a received signal r(t) ∈ {0, 1}. Bottom Left: Input signal. Mean instantaneous rate of molecule release (thousands of molecules per second). Molecule release is a Poisson process with time-varying rate. Bottom Center: Local concentration fluctuations, as seen by cell B, indicated by the number of molecules within 0.2 microns of the receptor. The receptor is sensitive to fluctuations in local concentrations only while it is unbound. While the receptor is bound, it does not register changes in the local concentration (indicated by constant plateaus corresponding to intervals when r(t) = 1 in the bottom right panel). Bottom Right: Output signal r(t). At each moment the receptor is either bound (1) or unbound (0). The receiver output is a piecewise constant function with a finite number of transitions.

3 Results: Estimating Information Capacity via Frequency Response

Communications channels mediated by diffusion and ligand-receptor interaction are nonlinear with non-Gaussian noise. The expected value of the output signal, 0 ≤ E[r] < 1, is a sigmoidal function of the log concentration for a constant concentration c:

E[r] = c/(c + Kd) = 1/(1 + e^{−(y−y₀)})    (2)

where y = ln(c), y₀ = ln(Kd). The mean response saturates for high concentrations, c ≫ Kd, and the noise statistics become pronouncedly Poissonian (rather than Gaussian) for low concentrations. Several different kinds of stimuli can be used to characterize such a channel. The steady-state response to constant input reflects the static (equilibrium) transfer function. Concentrations ranging from 100Kd to 0.01Kd occupy 98% of the steady-state operating range, 0.99 > E[r] > 0.01 [5].
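The receptor itself can be sketched as a two-state Markov chain driven by a concentration trace (our own illustration; rates are converted to per-step switching probabilities k₊c·∆t and k₋·∆t, and the rate constants used below are illustrative rather than the paper's fitted values):

```python
import numpy as np

def receptor_occupancy(conc, dt=1e-3, k_plus=1.0, k_minus=1.0, seed=1):
    """Two-state receptor: unbound -> bound at rate k_plus * c(t),
    bound -> unbound at rate k_minus. Returns a 0/1 trace r(t)."""
    rng = np.random.default_rng(seed)
    bound = 0
    r = np.empty(len(conc), dtype=np.int8)
    for i, c in enumerate(conc):
        rate = k_minus if bound else k_plus * c
        if rng.random() < rate * dt:   # switch state this timestep
            bound = 1 - bound
        r[i] = bound
    return r

# At c = Kd = k_minus / k_plus the receptor is bound about half the
# time, matching the equilibrium curve E[r] = c / (c + Kd) = 1/2.
r = receptor_occupancy(np.ones(200_000))   # 200 s at c = Kd
```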
For a finite observation time T, the actual fraction of time spent bound, r̄_T, is distributed about E[r] with a variance that depends on T. The biochemical relay may be used as a binary symmetric channel, randomly selecting a 'high' or 'low' secretion rate, and 'decoding' by setting a suitable threshold for r̄_T. As T increases, the variance of r̄_T and the probability of error decrease. The binary symmetric channel makes only crude use of this signaling mechanism. Other possible communication schemes include sending all-or-none bursts of signaling molecule, as in synaptic transmission, or detecting discrete stepped responses. Here we use the frequency response of the channel as a way of estimating the information capacity of the biochemical channel. For an idealized linear channel with additive white Gaussian noise (AWGN channel) the channel capacity under a mean input power constraint P is given by the so-called "water-filling formula" [4],

C = (1/2) ∫_{ω_min}^{ω_max} log₂(1 + (ν − N(ω))₊ / N(ω)) dω    (3)

given the constraining condition

∫_{ω_min}^{ω_max} (ν − N(ω))₊ dω ≤ P    (4)

where the constant ν is the sum of the noise and the signal power in the usable frequency range, N(ω) is the power of the additive noise at frequency ω and (X)₊ indicates the positive part of X. The formula applies when each frequency band (ω, ω + dω) is subject to noise of power N(ω) independently of all other frequency bands, and reflects the optimal allocation of signal power S(ω) = (ν − N(ω))₊, with greater signal power invested in frequencies at which the noise power is smallest. The capacity C is in bits/second. For an input signal of finite duration T = 100 sec, we can independently specify the amplitudes and phases of its frequency components at ω = [0.01 Hz, 0.02 Hz, · · · , 500 Hz], where 500 Hz is the Nyquist frequency given a 1 msec simulation timestep.
Because the population of secreted signaling molecules decays exponentially with a time constant of 1/α = 1 sec, the concentration signal is unable to pass frequencies ω ≥ 1 Hz (see Figure 2), providing a natural high-frequency cutoff. For the AWGN channel the input and output signals share the same units (e.g. rms voltage); for the biological relay the input s(t) is in molecules/second while the output r(t) is a function with binary range {r = 0, r = 1}. The maximum of the mean output power for a binary function r(t) is

(1/T) ∫_{t=0}^{T} |r(t) − r̄|² dt ≤ 1/4.

This total possible output power will be distributed between different frequencies depending on the frequency of the input.

Figure 2: Frequency Response of Biochemical Relay Channel. The sending cell secreted signaling molecules at a mean rate of 1000 + 1000 sin(2πωt) molecules per second. From top to bottom, the input frequencies were 1.0, 0.5, 0.2, 0.1, 0.05, 0.02 and 0.01 Hz. The total signal duration was T = 100 seconds. Left Column: Total number of molecules in the volume. Attenuation of the original signal results from exponential decay of the signaling molecule population. Right Column: A one-second moving average of the output signal r(t), which takes the value one when the receptor molecule is bound to ligand, and zero when the receptor is unbound.

Figure 3: Frequency Transmission Spectrum. Noise power N(ω), calculated as the total power in r(t) − r̄ in all frequency components save the input frequency ω. Frequencies were binned in intervals of 0.01 Hz = 1/T. The maximum possible power in r(t) over all frequencies is 0.25; the power successfully transmitted by the channel is given by 0.25/N(ω). The lower curve is N(ω) for input signals of the form s(t) = 1000 + 1000 sin 2πωt, which uses the full dynamic range of the receptor. Decreasing the dynamic range used reduces the amount of power transmitted at the sending frequency: the upper curve is N(ω) for signals of the form s(t) = 1000 + 500 sin 2πωt.
We wish to estimate the channel capacity by comparing the portion of the output power present in the sending frequency ω to the limiting output power 0.25. Therefore we set the total output power constant to ν = 0.25. Given a pure sinusoidal input signal s(t) = a₀ + a₁ sin(2πωt), we consider the power in the output spectrum at ω Hz to be the residual power from the input, and the rest of the power in the spectrum of r(t) to be analogous to the additive noise power spectrum N(ω) in the AWGN channel. We calculate N(ω) to be the total power of r(t) − r̄ in all frequency bands except ω. For signals of length T = 100 sec, the possible frequencies are discretized at intervals ∆ω = 0.01 Hz. Because the noise power N(ω) ≤ 0.25, the water-filling formula (3) for the capacity reduces to

C_est = (1/2) ∫_{0.01 Hz}^{1 Hz} log₂(0.25 / N(ω)) dω.    (5)

As mentioned above, frequencies ω ≥ 1 Hz do not transmit any information about the signal (see Figure 2) and do not contribute to the capacity. We approximate this integral using linear interpolation of log₂(N(ω)) between the measured values at ω = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0] Hz (see Figure 3). This procedure gives an estimate of the channel capacity, C_est = 0.087 bits/second.

4 Discussion & Conclusions

Diffusion and the Markov switching between bound and unbound states create a low-pass filter that removes high-frequency information in the biochemical relay channel. A general Poisson-type communications channel, such as commonly encountered in optical communications engineering, can achieve an arbitrarily large capacity by transmitting high frequencies and high amplitudes, unless bounded by a max or mean amplitude constraint [6]. In the biochemical channel, the effective input amplitude is naturally constrained by the saturation of the receptor at concentrations above the Kd. And the high frequency transmission is limited by the inherent dynamics of the Markov process. Therefore this channel has a finite capacity.
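Equation (5), with the linear interpolation of log₂ N(ω) described in the text, can be sketched as follows (our own helper; the sample N(ω) values below are hypothetical placeholders, not the measured ones):

```python
import numpy as np

def capacity_estimate(freqs, N, total=0.25, n_grid=10_001):
    """C_est = 0.5 * integral of log2(total / N(w)) dw over the band,
    with log2(N) linearly interpolated between measured frequencies
    and the integral taken by the trapezoid rule. Bits/second."""
    freqs = np.asarray(freqs, dtype=float)
    logN = np.log2(np.asarray(N, dtype=float))
    w = np.linspace(freqs[0], freqs[-1], n_grid)
    integrand = np.log2(total) - np.interp(w, freqs, logN)
    dw = w[1] - w[0]
    return 0.5 * np.sum((integrand[1:] + integrand[:-1]) / 2) * dw

# Hypothetical noise-power measurements at the probe frequencies (Hz):
freqs = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0]
N     = [0.10, 0.12, 0.15, 0.18, 0.21, 0.24, 0.25]
C_est = capacity_estimate(freqs, N)        # bits/second
```

When N(ω) equals the limiting power 0.25 everywhere, no band carries information and the estimate is zero, matching the reasoning behind setting ν = 0.25.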
The channel capacity estimate we derived, Cest = 0.087 bits/second, seems quite low compared to signaling rates in the nervous system, and implies that long signaling times are required to transfer information successfully. However, temporal dynamics in cellular systems can be quite deliberate; cell-cell communication in the social amoeba Dictyostelium, for example, is achieved by means of a carrier wave with a period of seven minutes. In addition, cells typically possess thousands of copies of the receptors for important signaling molecules, allowing for more complex detection schemes than those investigated here. Our simplified treatment suggests several avenues for further work. For example, signal-transducing receptors often form Markov chains with more complicated dynamics, reflecting many more than two states [7]. Also, the nonlinear nature of the channel is probably not well served by our additive noise approximation, and might be better suited to a treatment via multiplicative noise [8]. Whether cells engage in complicated temporal coding/decoding schemes, as has been proposed for neural information processing, or whether instead they achieve efficient communication by evolutionary matching of the noise characteristics of sender and receiver, remains to be investigated. We note that the dependence of the channel capacity C on such parameters as the system geometry, the diffusion and decay constants, the binding constants and the range of the receptor may shed light on evolutionary mechanisms and constraints on communication within cellular biological systems.
Acknowledgments
This work would not have been possible without the generous support of the Howard Hughes Medical Institute and the resources of the Computational Neurobiology Laboratory, Terrence J. Sejnowski, Director.
References
[1] Rappel, W.M., Thomas, P.J., Levine, H. & Loomis, W.F. (2002) Establishing Direction during Chemotaxis in Eukaryotic Cells. Biophysical Journal 83:1361-1367.
[2] Ueda, M., Sako, Y., Tanaka, T., Devreotes, P. & Yanagida, T. (2001) Single Molecule Analysis of Chemotactic Signaling in Dictyostelium Cells. Science 294:864-867.
[3] Detwiler, P.B., Ramanathan, S., Sengupta, A. & Shraiman, B.I. (2000) Engineering Aspects of Enzymatic Signal Transduction: Photoreceptors in the Retina. Biophysical Journal 79:2801-2817.
[4] Cover, T.M. & Thomas, J.A. (1991) Elements of Information Theory. New York: Wiley.
[5] Getz, W.M. & Lansky, P. (2001) Receptor Dissociation Constants and the Information Entropy of Membranes Coding Ligand Concentration. Chem. Senses 26:95-104.
[6] Frey, R.M. (1991) Information Capacity of the Poisson Channel. IEEE Transactions on Information Theory 37(2):244-256.
[7] Uteshev, V.V. & Pennefather, P.S. (1997) Analytical Description of the Activation of Multi-State Receptors by Continuous Neurotransmitter Signals at Brain Synapses. Biophysical Journal 72:1127-1134.
[8] Mitra, P.P. & Stark, J.B. (2001) Nonlinear limits to the information capacity of optical fibre communications. Nature 411:1027-1030.
5 0.44748017 162 nips-2003-Probabilistic Inference of Speech Signals from Phaseless Spectrograms
Author: Kannan Achan, Sam T. Roweis, Brendan J. Frey
Abstract: Many techniques for complex speech processing such as denoising and deconvolution, time/frequency warping, multiple speaker separation, and multiple microphone analysis operate on sequences of short-time power spectra (spectrograms), a representation which is often well-suited to these tasks. However, a significant problem with algorithms that manipulate spectrograms is that the output spectrogram does not include a phase component, which is needed to create a time-domain signal that has good perceptual quality. Here we describe a generative model of time-domain speech signals and their spectrograms, and show how an efficient optimizer can be used to find the maximum a posteriori speech signal, given the spectrogram. In contrast to techniques that alternate between estimating the phase and a spectrally-consistent signal, our technique directly infers the speech signal, thus jointly optimizing the phase and a spectrally-consistent signal. We compare our technique with a standard method using signal-to-noise ratios, but we also provide audio files on the web for the purpose of demonstrating the improvement in perceptual quality that our technique offers. 1
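The "techniques that alternate between estimating the phase and a spectrally-consistent signal" that this abstract contrasts itself with are exemplified by the classic Griffin-Lim iteration. A minimal sketch of that alternating baseline follows (an illustration only, not the authors' generative-model approach; the STFT window, hop size, and iteration count are arbitrary choices):

```python
import numpy as np

def stft(x, hop, window, n_frames):
    """Short-time Fourier transform: one rfft per windowed frame."""
    win = len(window)
    frames = [np.fft.rfft(window * x[i * hop : i * hop + win])
              for i in range(n_frames)]
    return np.array(frames).T            # shape (freq_bins, n_frames)

def istft(S, hop, window):
    """Overlap-add inverse with window-squared normalization."""
    win = len(window)
    n_frames = S.shape[1]
    x = np.zeros(hop * (n_frames - 1) + win)
    norm = np.zeros_like(x)
    for i in range(n_frames):
        x[i * hop : i * hop + win] += window * np.fft.irfft(S[:, i], n=win)
        norm[i * hop : i * hop + win] += window ** 2
    return x / np.maximum(norm, 1e-8)

def griffin_lim(mag, hop=128, window=None, n_iter=50, seed=0):
    """Alternate between a spectrally-consistent time-domain signal and a
    phase estimate, keeping the target magnitude fixed at every step."""
    if window is None:
        window = np.hanning(512)
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))   # random initial phase
    for _ in range(n_iter):
        x = istft(mag * phase, hop, window)
        phase = np.exp(1j * np.angle(stft(x, hop, window, mag.shape[1])))
    return istft(mag * phase, hop, window)
```

The paper's point of contrast is that this loop only updates the phase indirectly, via repeated resynthesis, whereas their generative model infers the MAP speech signal directly from the phaseless spectrogram.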
6 0.36087465 61 nips-2003-Entrainment of Silicon Central Pattern Generators for Legged Locomotory Control
7 0.36010307 165 nips-2003-Reasoning about Time and Knowledge in Neural Symbolic Learning Systems
8 0.35168001 89 nips-2003-Impact of an Energy Normalization Transform on the Performance of the LF-ASD Brain Computer Interface
9 0.34721798 144 nips-2003-One Microphone Blind Dereverberation Based on Quasi-periodicity of Speech Signals
10 0.3403337 69 nips-2003-Factorization with Uncertainty and Missing Data: Exploiting Temporal Coherence
11 0.33749181 119 nips-2003-Local Phase Coherence and the Perception of Blur
12 0.32733846 18 nips-2003-A Summating, Exponentially-Decaying CMOS Synapse for Spiking Neural Systems
13 0.32208943 110 nips-2003-Learning a World Model and Planning with a Self-Organizing, Dynamic Neural System
14 0.30625904 104 nips-2003-Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks
15 0.30101895 187 nips-2003-Training a Quantum Neural Network
16 0.27688101 15 nips-2003-A Probabilistic Model of Auditory Space Representation in the Barn Owl
17 0.27572924 114 nips-2003-Limiting Form of the Sample Covariance Eigenspectrum in PCA and Kernel PCA
18 0.26347616 159 nips-2003-Predicting Speech Intelligibility from a Population of Neurons
19 0.25754845 76 nips-2003-GPPS: A Gaussian Process Positioning System for Cellular Networks
20 0.2545543 175 nips-2003-Sensory Modality Segregation
topicId topicWeight
[(29, 0.013), (30, 0.617), (35, 0.033), (53, 0.068), (63, 0.013), (71, 0.05), (76, 0.03), (85, 0.033), (91, 0.049)]
simIndex simValue paperId paperTitle
same-paper 1 0.93822598 157 nips-2003-Plasticity Kernels and Temporal Statistics
Author: Peter Dayan, Michael Häusser, Michael London
Abstract: Computational mysteries surround the kernels relating the magnitude and sign of changes in efficacy as a function of the time difference between pre- and post-synaptic activity at a synapse. One important idea34 is that kernels result from filtering, ie an attempt by synapses to eliminate noise corrupting learning. This idea has hitherto been applied to trace learning rules; we apply it to experimentally-defined kernels, using it to reverse-engineer assumed signal statistics. We also extend it to consider the additional goal for filtering of weighting learning according to statistical surprise, as in the Z-score transform. This provides a fresh view of observed kernels and can lead to different, and more natural, signal statistics.
2 0.8678956 62 nips-2003-Envelope-based Planning in Relational MDPs
Author: Natalia H. Gardiol, Leslie P. Kaelbling
Abstract: A mobile robot acting in the world is faced with a large amount of sensory data and uncertainty in its action outcomes. Indeed, almost all interesting sequential decision-making domains involve large state spaces and large, stochastic action sets. We investigate a way to act intelligently as quickly as possible in domains where finding a complete policy would take a hopelessly long time. This approach, Relational Envelope-based Planning (REBP), tackles large, noisy problems along two axes. First, describing a domain as a relational MDP (instead of as an atomic or propositionally-factored MDP) allows problem structure and dynamics to be captured compactly with a small set of probabilistic, relational rules. Second, an envelope-based approach to planning lets an agent begin acting quickly within a restricted part of the full state space and to judiciously expand its envelope as resources permit. 1
3 0.83355653 71 nips-2003-Fast Embedding of Sparse Similarity Graphs
Author: John C. Platt
Abstract: This paper applies fast sparse multidimensional scaling (MDS) to a large graph of music similarity, with 267K vertices that represent artists, albums, and tracks; and 3.22M edges that represent similarity between those entities. Once vertices are assigned locations in a Euclidean space, the locations can be used to browse music and to generate playlists. MDS on very large sparse graphs can be effectively performed by a family of algorithms called Rectangular Dijkstra (RD) MDS algorithms. These RD algorithms operate on a dense rectangular slice of the distance matrix, created by calling Dijkstra a constant number of times. Two RD algorithms are compared: Landmark MDS, which uses the Nyström approximation to perform MDS; and a new algorithm called Fast Sparse Embedding, which uses FastMap. These algorithms compare favorably to Laplacian Eigenmaps, both in terms of speed and embedding quality. 1
4 0.82171017 144 nips-2003-One Microphone Blind Dereverberation Based on Quasi-periodicity of Speech Signals
Author: Tomohiro Nakatani, Masato Miyoshi, Keisuke Kinoshita
Abstract: Speech dereverberation is desirable with a view to achieving, for example, robust speech recognition in the real world. However, it is still a challenging problem, especially when using a single microphone. Although blind equalization techniques have been exploited, they cannot deal with speech signals appropriately because their assumptions are not satisfied by speech signals. We propose a new dereverberation principle based on an inherent property of speech signals, namely quasi-periodicity. The present methods learn the dereverberation filter from a lot of speech data with no prior knowledge of the data, and can achieve high quality speech dereverberation especially when the reverberation time is long. 1
5 0.43501136 34 nips-2003-Approximate Policy Iteration with a Policy Language Bias
Author: Alan Fern, Sungwook Yoon, Robert Givan
Abstract: We explore approximate policy iteration, replacing the usual costfunction learning step with a learning step in policy space. We give policy-language biases that enable solution of very large relational Markov decision processes (MDPs) that no previous technique can solve. In particular, we induce high-quality domain-specific planners for classical planning domains (both deterministic and stochastic variants) by solving such domains as extremely large MDPs. 1
6 0.43107146 162 nips-2003-Probabilistic Inference of Speech Signals from Phaseless Spectrograms
7 0.35351619 5 nips-2003-A Classification-based Cocktail-party Processor
8 0.33076334 15 nips-2003-A Probabilistic Model of Auditory Space Representation in the Barn Owl
9 0.32801116 159 nips-2003-Predicting Speech Intelligibility from a Population of Neurons
10 0.29665828 20 nips-2003-All learning is Local: Multi-agent Learning in Global Reward Games
11 0.28479975 184 nips-2003-The Diffusion-Limited Biochemical Signal-Relay Channel
12 0.27517432 50 nips-2003-Denoising and Untangling Graphs Using Degree Priors
13 0.27498001 27 nips-2003-Analytical Solution of Spike-timing Dependent Plasticity Based on Synaptic Biophysics
14 0.27121425 128 nips-2003-Minimax Embeddings
15 0.26272035 115 nips-2003-Linear Dependent Dimensionality Reduction
16 0.25711232 119 nips-2003-Local Phase Coherence and the Perception of Blur
17 0.25501186 76 nips-2003-GPPS: A Gaussian Process Positioning System for Cellular Networks
18 0.24948414 13 nips-2003-A Neuromorphic Multi-chip Model of a Disparity Selective Complex Cell
19 0.24790092 10 nips-2003-A Low-Power Analog VLSI Visual Collision Detector
20 0.24775046 123 nips-2003-Markov Models for Automated ECG Interval Analysis