nips nips2006 nips2006-107 knowledge-graph by maker-knowledge-mining

107 nips-2006-Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis


Source: pdf

Author: Amit Gore, Shantanu Chakrabartty

Abstract: A key challenge in designing analog-to-digital converters for cortically implanted prosthesis is to sense and process high-dimensional neural signals recorded by the micro-electrode arrays. In this paper, we describe a novel architecture for analog-to-digital (A/D) conversion that combines Σ∆ conversion with spatial de-correlation within a single module. The architecture called multiple-input multiple-output (MIMO) Σ∆ is based on a min-max gradient descent optimization of a regularized linear cost function that naturally lends to an A/D formulation. Using an online formulation, the architecture can adapt to slow variations in cross-channel correlations, observed due to relative motion of the microelectrodes with respect to the signal sources. Experimental results with real recorded multi-channel neural data demonstrate the effectiveness of the proposed algorithm in alleviating cross-channel redundancy across electrodes and performing data-compression directly at the A/D converter. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: A key challenge in designing analog-to-digital converters for cortically implanted prosthesis is to sense and process high-dimensional neural signals recorded by the micro-electrode arrays. [sent-3, score-0.549]

2 In this paper, we describe a novel architecture for analog-to-digital (A/D) conversion that combines Σ∆ conversion with spatial de-correlation within a single module. [sent-4, score-0.553]

3 The architecture called multiple-input multiple-output (MIMO) Σ∆ is based on a min-max gradient descent optimization of a regularized linear cost function that naturally lends to an A/D formulation. [sent-5, score-0.191]

4 Using an online formulation, the architecture can adapt to slow variations in cross-channel correlations, observed due to relative motion of the microelectrodes with respect to the signal sources. [sent-6, score-0.192]

5 Experimental results with real recorded multi-channel neural data demonstrate the effectiveness of the proposed algorithm in alleviating cross-channel redundancy across electrodes and performing data-compression directly at the A/D converter. [sent-7, score-0.221]

6 1 Introduction: Design of cortically implanted neural prosthetic sensors (CINPS) is an active area of research in the rapidly emerging field of brain-machine interfaces (BMI) [1, 2]. [sent-8, score-0.337]

7 The core technology used in these sensors is micro-electrode arrays (MEAs), which facilitate real-time recording from thousands of neurons simultaneously. [sent-9, score-0.188]

8 These recordings are then actively processed at the sensor (shown in Figure 1) and transmitted to an off-scalp neural processor which controls the movement of a prosthetic limb [1]. [sent-10, score-0.206]

9 A key challenge in designing implanted integrated circuits (IC) for CINPS is to efficiently process high-dimensional signals generated at the interface of micro-electrode arrays [3, 4]. [sent-11, score-0.345]

10 Sensor arrays consisting of more than 1000 recording elements are common [5, 6] which significantly increase the transmission rate at the sensor. [sent-12, score-0.23]

11 A simple strategy of recording, parallel data conversion, and transmission of the recorded neural signals (at a sampling rate of 10 kHz) can easily exceed the power dissipation limit of 80 mW/cm2 determined by local heating of biological tissue [7]. [sent-13, score-0.652]

12 In addition to increased power dissipation, high-transmission rate also adversely affects the real-time control of neural prosthesis [3]. [sent-14, score-0.161]

13 One of the solutions that have been proposed by several researchers is to perform compression of the neural signals directly at the sensor, to reduce its wireless transmission rate and hence its power dissipation [8, 4]. [sent-15, score-0.384]

14 In this paper we present an approach where de-correlation or redundancy elimination is performed directly at the analog-to-digital converter. [sent-16, score-0.121]

15 It has been shown that neural cross-talk and common-mode effects introduce unwanted redundancy at the output of the electrode array [4]. [sent-17, score-0.274]

16 As a result, neural signals typically occupy only a small sub-space within the high-dimensional space spanned by the micro-electrode signals. [sent-18, score-0.152]

17 An optimal strategy for designing a multi-channel analog-to-digital converter is to identify and operate within the sub-space spanned by the neural signals and in the process eliminate cross-channel redundancy. [sent-19, score-0.649]

18 Our approach will be to formalize a cost function consisting of the L1 norm of the internal state vector, whose gradient updates naturally lend to a digital time-series expansion. [sent-21, score-0.261]

19 Within this framework the correlation distance between the channels will be minimized which amounts to searching for signal spaces that are maximally separated from each other. [sent-22, score-0.172]

20 The architecture called multiple-input multiple-output (MIMO) Σ∆ converter is the first reported data conversion technique to embed large margin principles. [sent-23, score-0.798]

21 The approach, however, is generic and can be extended to designing higher-order ADCs. [sent-24, score-0.04]

22 To illustrate the concept of MIMO A/D conversion, the paper is organized as follows: section 2 develops a regularization framework for the proposed MIMO data converter and introduces the min-max gradient descent approach. [sent-25, score-0.572]

23 Section 3 applies the technique to simulated and recorded neural data. [sent-26, score-0.129]

24 For the sake of simplicity we will first assume that the input to the converter is an M-dimensional vector x ∈ RM where each dimension represents a single channel in the multi-electrode array. [sent-29, score-0.606]

25 Also denote a linear transformation matrix A ∈ RM ×M and a regression weight vector w ∈ RM . [sent-32, score-0.065]

26 The cost function in equation 2 consists of two factors: the first is an L1 regularizer which constrains the norm of the vector w, and the second maximizes the correlation between the vector w and an input vector x transformed using a linear projection denoted by the matrix A. [sent-34, score-0.165]

27 The choice of L1 norm and the form of cost function in equation (2) will become clear when we present its corresponding gradient update rule. [sent-35, score-0.177]

28 To ensure that the optimization problem in equation 1 is well defined, the input vector will be assumed bounded, ||x||∞ ≤ 1. [sent-36, score-0.17]

29 Under this bounded condition, the closed form solution to the optimization problem in equation 1 can be found to be w∗ = 0. [sent-37, score-0.091]

30 From the perspective of A/D conversion we will show that the iterative steps leading towards the solution to the optimization problem in equation 1 are more important than the final solution itself. [sent-38, score-0.37]

31 Given an initial estimate of the state vector w[0], the online gradient descent step is applied iteratively. (Figure 2: Architecture of the proposed first-order MIMO Σ∆ converter.) [sent-39, score-0.093]

32 The choice of the L1 norm in the optimization function in equation 1 ensures that for η > 0 the iteration in equation 3 exhibits oscillatory behavior around the solution w∗ . [sent-41, score-0.157]

33 Combining equation (3) with equation (2) the following recursion is obtained: w[n] = w[n − 1] + η(Ax − d[n]) (4) with d[n] = sgn(w[n − 1]) (5), where sgn(u) denotes an element-wise signum operation such that d[n] ∈ {+1, −1}M represents a digital time-series. [sent-42, score-0.314]

34 The iterations in equation (3) represent the recursion step for M first-order Σ∆ converters [9] coupled together by the linear transform A. [sent-43, score-0.158]

35 If we assume that the norm of the matrix is bounded, ||A||∞ ≤ 1, it can be shown that ||w[n]||∞ < 1 + η. [sent-44, score-0.069]

36 Following N update steps the recursion given by equation 4 yields Ax − (1/N) Σn=1..N d[n] = (1/(ηN)) (w[N ] − w[0]) (6), which using the bounded property of w asymptotically leads to (1/N) Σn=1..N d[n] → Ax (7) as N → ∞. [sent-45, score-0.178]

37 Therefore, consistent with the theory of Σ∆ conversion [9], the moving average of the digital sequence d[n] converges to the transformed input vector Ax as the number of update steps N increases. [sent-46, score-0.511]
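The first-order recursion in equations (4) and (5) and the convergence claim of equation (7) can be sketched in a few lines of code; the channel count, step size, and the unit-diagonal lower-triangular mixing matrix below are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

# Minimal first-order MIMO sigma-delta loop (eqs. 4-5) checking the
# pulse-density convergence of eq. (7).
rng = np.random.default_rng(0)
M, N = 4, 4096                                        # channels, update steps (assumed)
eta = 1.0                                             # step size (assumed)

x = rng.uniform(-0.5, 0.5, size=M)                    # bounded input, ||x||_inf <= 1
A = np.eye(M) + np.tril(rng.uniform(-0.2, 0.2, (M, M)), k=-1)  # unit-diagonal lower triangular

w = np.zeros(M)                                       # internal state w[0]
d_sum = np.zeros(M)
for n in range(N):
    d = np.where(w >= 0, 1.0, -1.0)                   # d[n] = sgn(w[n-1]) in {+1, -1}^M
    w = w + eta * (A @ x - d)                         # w[n] = w[n-1] + eta (Ax - d[n])
    d_sum += d

avg = d_sum / N                                       # (1/N) sum_n d[n]
print(np.max(np.abs(avg - A @ x)))                    # shrinks like 1/N, i.e. ~log2(N) bits
```

The residual is bounded by (1 + η)/(ηN), matching the log2(N)-bit accuracy claim below.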

38 It can also be shown that N update steps yield a digital representation which is log2 (N ) bits accurate. [sent-47, score-0.232]

39 1 Online adaptation and compression: The next step is to determine the form of the matrix A, which parameterizes the family of linear transformations spanning the signal space. [sent-49, score-0.211]

40 The aim of optimizing A is to find a multi-channel signal configuration whose components are maximally separated from each other. [sent-50, score-0.127]

41 For this purpose we denote one channel as a reference relative to which all distances/correlations will be measured. [sent-51, score-0.153]

42 This is unlike independent component analysis (ICA) based approaches [12], where the objective is to search for maximally independent signal space including the reference channel. [sent-52, score-0.167]

43 Even though several forms of the matrix A = [aij ] can be chosen, for reasons which will be discussed later in this paper the matrix A is chosen to be lower triangular, such that aij = 0; i < j and aij = 1; i = j. [sent-53, score-0.354]

44 The choice of a lower triangular matrix ensures that the matrix A is always invertible. [sent-54, score-0.094]

45 It also implies that the first channel is unaffected by the proposed transform A and will be the reference channel. [sent-55, score-0.183]

46 The problem of compression or redundancy elimination is therefore to optimize the cross-elements aij , i ≠ j, such that the cross-correlation terms in the optimization function given by equation 1 are minimized. [sent-56, score-0.373]

47 This can be written as a min-max optimization criterion where an inner optimization performs analog-to-digital conversion, whereas the outer loop adapts the linear transform matrix A so as to maximize the margin of separation between the respective signal spaces. [sent-57, score-0.242]

48 The update rule in equation 9 can be made amenable to hardware implementation by considering only the sign of the regression vector w[n] and the input vector x as aij [n] = aij [n − 1] − εdi [n] sign(xj ); ∀i > j. (10) [sent-59, score-0.394]

49 The update rule in equation 10 bears strong resemblance to online update rules used in independent component analysis (ICA) [12, 13]. [sent-60, score-0.176]
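A rough simulation of the sign-sign adaptation in equation (10), wrapped around the inner Σ∆ loop; the channel count, mixing, and adaptation rate are invented for illustration, with channel 0 acting as the reference:

```python
import numpy as np

# Sketch of the sign-sign adaptation (eq. 10) around the inner sigma-delta
# update (eqs. 4-5). Channels 1 and 2 are correlated copies of channel 0,
# so the outer loop should drive the cross-weights to cancel the redundancy.
rng = np.random.default_rng(1)
M, N = 3, 20000
eps = 5e-4                                   # adaptation rate (assumed)

t = np.arange(N) / N
s = np.sin(2 * np.pi * 20 * t)               # common underlying source
X = np.vstack([s,
               0.8 * s + 0.05 * rng.standard_normal(N),
               0.5 * s])                     # M x N observed channels

A = np.eye(M)                                # unit-diagonal lower triangular
w = np.zeros(M)
for n in range(N):
    d = np.where(w >= 0, 1.0, -1.0)          # d[n] = sgn(w[n-1])
    w = w + (A @ X[:, n] - d)                # inner sigma-delta update, eta = 1
    # outer loop: a_ij <- a_ij - eps * d_i * sign(x_j), for i > j only
    A -= np.tril(eps * np.outer(d, np.sign(X[:, n])), k=-1)

print(A[1, 0])   # drifts toward -0.8, cancelling channel 1's copy of the reference
```

A stays unit-diagonal lower triangular throughout, which is what keeps the transform invertible for reconstruction.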

50 The difference in the proposed technique, however, is the integrated data conversion coupled with spatial decorrelation/compression. [sent-61, score-0.275]

51 The output of the MIMO Σ∆ converter is a digital stream whose pulse density is proportional to the transformed input data vector, (1/N) Σn=1..N d[n] → A[n]x (11). By construction the MIMO converter produces a digital stream whose pulse-density contains only non-redundant information. [sent-62, score-1.456]

52 To achieve compression some of the digital channels can be discarded (based on their relative energy criterion) and can also be shut down to conserve power. [sent-63, score-0.243]

53 The original signal can be reconstructed from the compressed digital stream by applying the inverse transformation A−1 as x = A[n]−1 ((1/N) Σn=1..N d[n]). (12) [sent-64, score-0.374]

54 An advantage of using a lower triangular form for the linear transformation matrix A with its diagonal elements as unity is that its inverse is always well-defined. [sent-65, score-0.107]
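Equation (12) amounts to solving a unit-diagonal lower-triangular system, which always has determinant 1; a toy check with made-up values for A and x:

```python
import numpy as np

# Reconstruction sketch (eq. 12): invert the unit-diagonal lower-triangular A
# applied to the pulse-density average of the digital stream. Values hypothetical.
A = np.array([[ 1.0, 0.0, 0.0],
              [-0.8, 1.0, 0.0],
              [-0.5, 0.3, 1.0]])
x = np.array([0.3, -0.1, 0.2])

d_avg = A @ x                       # stands in for (1/N) * sum_n d[n] after convergence
x_hat = np.linalg.solve(A, d_avg)   # x = A^{-1} (mean of d[n]); det(A) = 1, always solvable
print(x_hat)                        # recovers the original input
```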

55 Thus signal reconstruction using the output of the analog-to-digital converter is also always well defined. [sent-66, score-0.658]

56 Since the transformation matrix A is continually being updated, the information related to the linear transform also needs to be periodically transmitted to ensure faithful reconstruction at the external prosthetic controller. [sent-67, score-0.275]

57 However, analogous to many naturally occurring signals, the underlying statistics of a multi-dimensional signal change more slowly than the signal itself. [sent-68, score-0.279]

58 Therefore the transmission of the matrix A needs to be performed at a relatively lower rate than the transmission of the compressed neural signals. [sent-69, score-0.243]

59 Similar to conventional Σ∆ conversion [9], the framework for MIMO Σ∆ can be extended to time-varying input vectors under the assumption of a high oversampling criterion [9]. [sent-70, score-0.367]

60 For a MIMO A/D converter, the oversampling ratio (OSR) is defined as the ratio of the update frequency fs to the maximum Nyquist rate among all elements of the input vector x[n]. [sent-71, score-0.675]

61 The resolution of the MIMO Σ∆ is also determined by the OSR as log2 (OSR) and during the oversampling period the input signal vector can be assumed to be approximately stationary. [sent-72, score-0.217]
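The OSR arithmetic above can be illustrated numerically; the clock and signal-band figures below are assumed examples, chosen to match the 10 kHz sampling rate mentioned earlier in the paper:

```python
import math

# OSR vs. resolution for a first-order loop (resolution ~ log2(OSR)).
fs = 1_280_000            # update frequency in Hz (assumed example)
f_nyquist = 20_000        # max Nyquist rate among channels (10 kHz signal band)
osr = fs // f_nyquist     # oversampling ratio
bits = math.log2(osr)     # equivalent resolution in bits
print(osr, bits)          # 64 and 6.0
```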

62 Figure 3: Functional verification of the MIMO Σ∆ converter on artificially generated multi-channel data: (a) data presented to the MIMO Σ∆ converter; (b) analog representation of the digital output produced by the MIMO converter. [sent-73, score-1.609]

63 For a time-varying input vector x[n] = {xj [n]}, j = 1, . . . , M, the matrix update in equation 10 can be generalized after N steps as (1/N) Σn=1..N di [n] sgn(xj [n]) = (aij [0] − aij [N ])/(εN ); ∀i > j. (13) [sent-75, score-0.341]

64 Thus if the norm of the matrix A is bounded, then asymptotically as N → ∞ equation 13 implies that the cross-channel correlation between the digital output and the sign of the input signal approaches zero. [sent-76, score-0.482]

65 The architecture for the MIMO Σ∆ converter illustrating recursions (4) and (11) is shown in Figure 2. [sent-78, score-0.571]

66 As shown in Figure 2, the regression vectors w[n] within the framework of MIMO Σ∆ represent the output of the Σ∆ integrator. [sent-79, score-0.048]

67 All the adaptation and linear transformation steps can be implemented using analog VLSI, with adaptation steps implemented either using multiplying digital-to-analog converters or floating-gate synapses. [sent-80, score-0.379]

68 Even though any channel can be chosen as a reference channel, our experiments indicate that the channel with maximum cross-correlation and maximum signal power serves as the best choice. [sent-81, score-0.387]

69 Figure 4: Reconstruction performance in terms of mean square error computed using artificial data for different OSR. 3 Results: The functionality of the proposed MIMO sigma-delta converter was verified using artificially generated data and with real multi-channel recorded neural data. [sent-82, score-0.618]

70 The first set of experiments used artificially generated 8-channel data. [sent-83, score-0.113]

71 Figure 3(a) illustrates the multi-channel data where each channel was obtained by random linear mixing of two sinusoids with frequency 20Hz and 40Hz. [sent-84, score-0.157]

72 The multi-channel data was presented to a MIMO sigma-delta converter implemented in software. [sent-85, score-0.457]

73 The equivalent analog representation of the pulse density encoded digital stream was obtained using a moving window averaging technique with window size equal to the oversampling ratio (OSR). [sent-86, score-0.498]
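The moving-window averaging described here is an ordinary boxcar filter of length equal to the OSR; the stream contents and sizes below are placeholders rather than the experiment's actual data:

```python
import numpy as np

# Recover an analog representation from pulse-density streams by averaging
# over a sliding window of length OSR (assumed 64 here).
osr = 64
rng = np.random.default_rng(2)
d = np.sign(rng.standard_normal((4, 1024)))          # stand-in +/-1 digital streams
kernel = np.ones(osr) / osr                          # boxcar window, weights 1/OSR
analog = np.vstack([np.convolve(ch, kernel, mode="same") for ch in d])
print(analog.shape)                                  # one smoothed trace per channel
```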

74 The resultant analog representation of the ADC output is shown in Figure 3(b). [sent-87, score-0.132]

75 It can be seen in the figure that after the initial adaptation steps the output corresponding to the first two channels converges to the fundamental sinusoids, whereas the rest of the digital streams converge to an equivalent zero output. [sent-88, score-0.358]

76 This simple experiment demonstrates the functionality of MIMO sigma-delta in eliminating cross-channel redundancy. [sent-89, score-0.059]

77 The first two digital streams were used to reconstruct the original recording using equation 12. [sent-90, score-0.334]

78 Figure 4 shows the reconstruction error averaged over a time window of 2048 samples showing that the error indeed converges to zero, as the MIMO converter adapts. [sent-91, score-0.543]

79 It can be seen that even though lower reconstruction error can be achieved by using a higher OSR, the adaptation procedure compensates for errors introduced due to low resolution. [sent-93, score-0.108]

80 In fact the reconstruction performance is optimal for intermediate OSR. [sent-94, score-0.06]

81 The data was recorded at a sampling rate of 20 kHz and at a resolution of 16 bits. [sent-96, score-0.072]

82 Figure 5(a) shows a clip of multi-channel recording for duration of 0. [sent-97, score-0.128]

83 This validates the principle of operation of the MIMO conversion, where the multi-channel neural recordings lie on a low-dimensional manifold whose parameters are relatively stationary with respect to the signal statistics. [sent-102, score-0.509]

84 Figure 7: Demonstration of common-mode rejection performed by MIMO Σ∆: (a) Original multichannel signal at the input of converter (b) analog representation of the converter output (c) a magnified clip of the output produced by the converter illustrating preservation of neural information. [sent-103, score-1.896]

85 The last set of experiments demonstrates the ability of the proposed MIMO converter to reject common-mode disturbances across all the channels. [sent-104, score-0.495]

86 Rejection of common-mode signals is one of the most important requirements for processing neural signals, whose amplitudes range from 50 µV to 500 µV, whereas the common-mode interference resulting from EMG or electrical coupling could be as high as 10 mV [14]. [sent-105, score-0.298]

87 Therefore most of the micro-electrode arrays use bio-potential amplifiers for enhancing signal-to-noise ratio and common-mode rejection. [sent-106, score-0.099]

88 For this set of experiments, the recorded neural data obtained from the previous experiment was contaminated by an additive 60 Hz sinusoidal interference of amplitude 1 mV. [sent-107, score-0.181]

89 The results are shown in Figure 7, illustrating that the reference channel absorbs all the common-mode disturbance whereas the neural information is preserved in the other channels. [sent-108, score-0.295]

90 In fact theoretically it can be shown that the common-mode rejection ratio for the proposed MIMO ADC is dependent only on the OSR and is given by 20 log10 OSR. [sent-109, score-0.065]
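The stated relation between OSR and common-mode rejection is straightforward to evaluate; OSR = 64 is an assumed example value, not one reported by the paper:

```python
import math

# Common-mode rejection ratio predicted from the OSR: CMRR = 20 * log10(OSR).
osr = 64
cmrr_db = 20 * math.log10(osr)
print(round(cmrr_db, 1))   # ~36.1 dB of common-mode rejection at OSR = 64
```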

91 4 Conclusion In this paper we presented a novel MIMO analog-to-digital conversion algorithm with application to multi-channel neural prosthesis. [sent-110, score-0.3]

92 The roots of the algorithm lie within the framework of large margin principles, where the data converter maximizes the relative distance between the signal spaces corresponding to different channels. [sent-111, score-0.581]

93 Experimental results with real multi-channel neural data demonstrate the effectiveness of the proposed method in eliminating cross-channel redundancy and hence reducing data throughput and power dissipation requirements of a multi-channel biotelemetry sensor. [sent-112, score-0.292]

94 There are several open questions that need to be addressed as a continuation of this research, including extension of the algorithm to second-order Σ∆ architectures, embedding of kernels into the ADC formulation, and reformulation of the update rule to perform ICA directly on the ADC. [sent-113, score-0.042]

95 Karim Oweiss for providing multi-channel neural data for the MIMO ADC experiments. [sent-116, score-0.057]

96 Nicolelis, Learning to control a brain-machine interface for reaching and grasping by primates, PLoS Biol. [sent-142, score-0.05]

97 Shenoy, High information transmission rates in a neural prosthetic system, in Soc. [sent-155, score-0.209]

98 [6] Maynard EM, Nordhausen CT, Normann RA, The Utah intracortical electrode array: a recording structure for potential brain computer interfaces. [sent-173, score-0.12]

99 Davies, Characterization of tissue morphology, angiogenesis, and temperature in adaptive response of muscle tissue to chronic heating, Lab Investigation, vol. [sent-182, score-0.092]

100 A fully integrated neural recording amplifier with DC input stabilization. [sent-217, score-0.218]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('mimo', 0.614), ('converter', 0.457), ('conversion', 0.243), ('osr', 0.175), ('digital', 0.154), ('aij', 0.117), ('channel', 0.113), ('recording', 0.093), ('signal', 0.093), ('redundancy', 0.092), ('converters', 0.088), ('dissipation', 0.088), ('oversampling', 0.088), ('prosthetic', 0.088), ('analog', 0.084), ('adc', 0.076), ('implanted', 0.076), ('prosthesis', 0.076), ('signals', 0.074), ('arrays', 0.073), ('recorded', 0.072), ('architecture', 0.067), ('cortically', 0.066), ('transmission', 0.064), ('equation', 0.06), ('ax', 0.06), ('reconstruction', 0.06), ('neural', 0.057), ('stream', 0.056), ('interface', 0.05), ('output', 0.048), ('adaptation', 0.048), ('illustrating', 0.047), ('tissue', 0.046), ('channels', 0.045), ('compression', 0.044), ('cinps', 0.044), ('heating', 0.044), ('oversampled', 0.044), ('oweiss', 0.044), ('sinusoids', 0.044), ('norm', 0.043), ('update', 0.042), ('sgn', 0.042), ('triangular', 0.042), ('ica', 0.041), ('reference', 0.04), ('recursion', 0.04), ('designing', 0.04), ('transformation', 0.039), ('rejection', 0.039), ('embs', 0.038), ('multichannel', 0.038), ('pulse', 0.038), ('disturbance', 0.038), ('input', 0.036), ('cially', 0.036), ('steps', 0.036), ('clip', 0.035), ('maximally', 0.034), ('functionality', 0.032), ('transmitted', 0.032), ('lends', 0.032), ('compressed', 0.032), ('gradient', 0.032), ('integrated', 0.032), ('online', 0.032), ('optimization', 0.031), ('margin', 0.031), ('vlsi', 0.031), ('nervous', 0.031), ('modules', 0.031), ('transform', 0.03), ('descent', 0.029), ('sensor', 0.029), ('interference', 0.029), ('elimination', 0.029), ('wireless', 0.029), ('power', 0.028), ('pp', 0.028), ('interfaces', 0.028), ('introduces', 0.027), ('veri', 0.027), ('electrode', 0.027), ('ampli', 0.027), ('streams', 0.027), ('eliminating', 0.027), ('ratio', 0.026), ('matrix', 0.026), ('window', 0.026), ('architectures', 0.025), ('exhibits', 0.023), ('amplitude', 0.023), ('stationary', 0.023), ('array', 0.023), ('rm', 0.022), ('sign', 0.022), ('electrical', 0.022), ('sensors', 0.022), ('spanned', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999994 107 nips-2006-Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis

Author: Amit Gore, Shantanu Chakrabartty

Abstract: A key challenge in designing analog-to-digital converters for cortically implanted prosthesis is to sense and process high-dimensional neural signals recorded by the micro-electrode arrays. In this paper, we describe a novel architecture for analog-to-digital (A/D) conversion that combines Σ∆ conversion with spatial de-correlation within a single module. The architecture called multiple-input multiple-output (MIMO) Σ∆ is based on a min-max gradient descent optimization of a regularized linear cost function that naturally lends to an A/D formulation. Using an online formulation, the architecture can adapt to slow variations in cross-channel correlations, observed due to relative motion of the microelectrodes with respect to the signal sources. Experimental results with real recorded multi-channel neural data demonstrate the effectiveness of the proposed algorithm in alleviating cross-channel redundancy across electrodes and performing data-compression directly at the A/D converter. 1

2 0.071083054 179 nips-2006-Sparse Representation for Signal Classification

Author: Ke Huang, Selin Aviyente

Abstract: In this paper, application of sparse representation (factorization) of signals over an overcomplete basis (dictionary) for signal classification is discussed. Searching for the sparse representation of a signal over an overcomplete dictionary is achieved by optimizing an objective function that includes two terms: one that measures the signal reconstruction error and another that measures the sparsity. This objective function works well in applications where signals need to be reconstructed, like coding and denoising. On the other hand, discriminative methods, such as linear discriminative analysis (LDA), are better suited for classification tasks. However, discriminative methods are usually sensitive to corruption in signals due to lacking crucial properties for signal reconstruction. In this paper, we present a theoretical framework for signal classification with sparse representation. The approach combines the discrimination power of the discriminative methods with the reconstruction property and the sparsity of the sparse representation that enables one to deal with signal corruptions: noise, missing data and outliers. The proposed approach is therefore capable of robust classification with a sparse representation of signals. The theoretical results are demonstrated with signal classification tasks, showing that the proposed approach outperforms the standard discriminative methods and the standard sparse representation in the case of corrupted signals. 1

3 0.064511992 46 nips-2006-Blind source separation for over-determined delayed mixtures

Author: Lars Omlor, Martin Giese

Abstract: Blind source separation, i.e. the extraction of unknown sources from a set of given signals, is relevant for many applications. A special case of this problem is dimension reduction, where the goal is to approximate a given set of signals by superpositions of a minimal number of sources. Since in this case the signals outnumber the sources the problem is over-determined. Most popular approaches for addressing this problem are based on purely linear mixing models. However, many applications like the modeling of acoustic signals, EMG signals, or movement trajectories, require temporal shift-invariance of the extracted components. This case has only rarely been treated in the computational literature, and specifically for the case of dimension reduction almost no algorithms have been proposed. We present a new algorithm for the solution of this problem, which is based on a timefrequency transformation (Wigner-Ville distribution) of the generative model. We show that this algorithm outperforms classical source separation algorithms for linear mixtures, and also a related method for mixtures with delays. In addition, applying the new algorithm to trajectories of human gaits, we demonstrate that it is suitable for the extraction of spatio-temporal components that are easier to interpret than components extracted with other classical algorithms. 1

4 0.061043445 167 nips-2006-Recursive ICA

Author: Honghao Shan, Lingyun Zhang, Garrison W. Cottrell

Abstract: Independent Component Analysis (ICA) is a popular method for extracting independent features from visual data. However, as a fundamentally linear technique, there is always nonlinear residual redundancy that is not captured by ICA. Hence there have been many attempts to try to create a hierarchical version of ICA, but so far none of the approaches have a natural way to apply them more than once. Here we show that there is a relatively simple technique that transforms the absolute values of the outputs of a previous application of ICA into a normal distribution, to which ICA maybe applied again. This results in a recursive ICA algorithm that may be applied any number of times in order to extract higher order structure from previous layers. 1

5 0.058851272 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity

Author: Gregory Shakhnarovich, Sung-phil Kim, Michael J. Black

Abstract: Neural motor prostheses (NMPs) require the accurate decoding of motor cortical population activity for the control of an artificial motor system. Previous work on cortical decoding for NMPs has focused on the recovery of hand kinematics. Human NMPs however may require the control of computer cursors or robotic devices with very different physical and dynamical properties. Here we show that the firing rates of cells in the primary motor cortex of non-human primates can be used to control the parameters of an artificial physical system exhibiting realistic dynamics. The model represents 2D hand motion in terms of a point mass connected to a system of idealized springs. The nonlinear spring coefficients are estimated from the firing rates of neurons in the motor cortex. We evaluate linear and a nonlinear decoding algorithms using neural recordings from two monkeys performing two different tasks. We found that the decoded spring coefficients produced accurate hand trajectories compared with state-of-the-art methods for direct decoding of hand kinematics. Furthermore, using a physically-based system produced decoded movements that were more “natural” in that their frequency spectrum more closely matched that of natural hand movements. 1

6 0.055918287 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons

7 0.055291653 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network

8 0.038469188 16 nips-2006-A Theory of Retinal Population Coding

9 0.037494056 12 nips-2006-A Probabilistic Algorithm Integrating Source Localization and Noise Suppression of MEG and EEG data

10 0.036893006 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons

11 0.03625194 186 nips-2006-Support Vector Machines on a Budget

12 0.036030509 87 nips-2006-Graph Laplacian Regularization for Large-Scale Semidefinite Programming

13 0.03555949 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons

14 0.034665637 126 nips-2006-Logistic Regression for Single Trial EEG Classification

15 0.034097303 200 nips-2006-Unsupervised Regression with Applications to Nonlinear System Identification

16 0.033847842 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall

17 0.033611752 76 nips-2006-Emergence of conjunctive visual features by quadratic independent component analysis

18 0.033198215 129 nips-2006-Map-Reduce for Machine Learning on Multicore

19 0.033073813 141 nips-2006-Multiple timescales and uncertainty in motor adaptation

20 0.032775093 120 nips-2006-Learning to Traverse Image Manifolds


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.118), (1, -0.06), (2, 0.024), (3, 0.008), (4, -0.051), (5, -0.032), (6, 0.003), (7, 0.018), (8, -0.014), (9, 0.046), (10, -0.032), (11, -0.053), (12, -0.059), (13, -0.035), (14, 0.028), (15, -0.052), (16, 0.02), (17, 0.004), (18, 0.056), (19, 0.034), (20, 0.047), (21, -0.009), (22, 0.003), (23, -0.016), (24, -0.019), (25, 0.039), (26, -0.082), (27, 0.036), (28, 0.03), (29, 0.017), (30, 0.047), (31, -0.043), (32, 0.045), (33, -0.047), (34, -0.033), (35, -0.026), (36, -0.029), (37, -0.021), (38, 0.008), (39, -0.071), (40, -0.028), (41, -0.035), (42, -0.052), (43, -0.049), (44, -0.139), (45, 0.096), (46, 0.045), (47, -0.03), (48, -0.039), (49, 0.0)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.909756 107 nips-2006-Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis

Author: Amit Gore, Shantanu Chakrabartty

Abstract: A key challenge in designing analog-to-digital converters for cortically implanted prosthesis is to sense and process high-dimensional neural signals recorded by the micro-electrode arrays. In this paper, we describe a novel architecture for analog-to-digital (A/D) conversion that combines Σ∆ conversion with spatial de-correlation within a single module. The architecture called multiple-input multiple-output (MIMO) Σ∆ is based on a min-max gradient descent optimization of a regularized linear cost function that naturally lends to an A/D formulation. Using an online formulation, the architecture can adapt to slow variations in cross-channel correlations, observed due to relative motion of the microelectrodes with respect to the signal sources. Experimental results with real recorded multi-channel neural data demonstrate the effectiveness of the proposed algorithm in alleviating cross-channel redundancy across electrodes and performing data-compression directly at the A/D converter. 1

2 0.59411007 179 nips-2006-Sparse Representation for Signal Classification

Author: Ke Huang, Selin Aviyente

Abstract: In this paper, we discuss the application of sparse representation (factorization) of signals over an overcomplete basis (dictionary) to signal classification. Searching for the sparse representation of a signal over an overcomplete dictionary is achieved by optimizing an objective function that includes two terms: one that measures the signal reconstruction error and another that measures the sparsity. This objective function works well in applications where signals need to be reconstructed, like coding and denoising. On the other hand, discriminative methods, such as linear discriminant analysis (LDA), are better suited for classification tasks. However, discriminative methods are usually sensitive to corruption in signals because they lack crucial properties needed for signal reconstruction. In this paper, we present a theoretical framework for signal classification with sparse representation. The approach combines the discrimination power of the discriminative methods with the reconstruction property and the sparsity of the sparse representation, which enables one to deal with signal corruptions: noise, missing data and outliers. The proposed approach is therefore capable of robust classification with a sparse representation of signals. The theoretical results are demonstrated with signal classification tasks, showing that the proposed approach outperforms the standard discriminative methods and the standard sparse representation in the case of corrupted signals. 1

3 0.57669783 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity

Author: Gregory Shakhnarovich, Sung-phil Kim, Michael J. Black

Abstract: Neural motor prostheses (NMPs) require the accurate decoding of motor cortical population activity for the control of an artificial motor system. Previous work on cortical decoding for NMPs has focused on the recovery of hand kinematics. Human NMPs however may require the control of computer cursors or robotic devices with very different physical and dynamical properties. Here we show that the firing rates of cells in the primary motor cortex of non-human primates can be used to control the parameters of an artificial physical system exhibiting realistic dynamics. The model represents 2D hand motion in terms of a point mass connected to a system of idealized springs. The nonlinear spring coefficients are estimated from the firing rates of neurons in the motor cortex. We evaluate linear and nonlinear decoding algorithms using neural recordings from two monkeys performing two different tasks. We found that the decoded spring coefficients produced accurate hand trajectories compared with state-of-the-art methods for direct decoding of hand kinematics. Furthermore, using a physically-based system produced decoded movements that were more “natural” in that their frequency spectrum more closely matched that of natural hand movements. 1

4 0.53076434 75 nips-2006-Efficient sparse coding algorithms

Author: Honglak Lee, Alexis Battle, Rajat Raina, Andrew Y. Ng

Abstract: Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1 -regularized least squares problem and an L2 -constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons. 1
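The L1-regularized least squares subproblem mentioned above can also be solved by iterative soft-thresholding (ISTA). The sketch below uses ISTA rather than the algorithms the paper actually proposes, purely to illustrate the objective being minimized; the step size is assumed not to exceed the reciprocal of the largest eigenvalue of DᵀD.

```python
def soft(x, t):
    """Soft-thresholding: the proximal operator of t * |x|."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista(D, y, lam, step, iters=500):
    """Minimize 0.5*||y - D x||^2 + lam*||x||_1 by iterative soft-thresholding."""
    m, n = len(D), len(D[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = D x - y
        r = [sum(D[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient of the smooth term: D^T r
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by the shrinkage step
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# With D = identity, the minimizer is soft-thresholding of y directly:
D = [[1.0, 0.0], [0.0, 1.0]]
print(ista(D, [3.0, 0.1], lam=0.5, step=1.0))  # ≈ [2.5, 0.0]
```

The shrinkage step is what produces exact zeros in the code vector, which is the sparsity the abstract refers to.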

5 0.51119798 16 nips-2006-A Theory of Retinal Population Coding

Author: Eizaburo Doi, Michael S. Lewicki

Abstract: Efficient coding models predict that the optimal code for natural images is a population of oriented Gabor receptive fields. These results match response properties of neurons in primary visual cortex, but not those in the retina. Does the retina use an optimal code, and if so, what is it optimized for? Previous theories of retinal coding have assumed that the goal is to encode the maximal amount of information about the sensory signal. However, the image sampled by retinal photoreceptors is degraded both by the optics of the eye and by the photoreceptor noise. Therefore, de-blurring and de-noising of the retinal signal should be important aspects of retinal coding. Furthermore, the ideal retinal code should be robust to neural noise and make optimal use of all available neurons. Here we present a theoretical framework to derive codes that simultaneously satisfy all of these desiderata. When optimized for natural images, the model yields filters that show strong similarities to retinal ganglion cell (RGC) receptive fields. Importantly, the characteristics of receptive fields vary with retinal eccentricities where the optical blur and the number of RGCs are significantly different. The proposed model provides a unified account of retinal coding, and more generally, it may be viewed as an extension of the Wiener filter with an arbitrary number of noisy units. 1

6 0.45139101 141 nips-2006-Multiple timescales and uncertainty in motor adaptation

7 0.43504184 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons

8 0.42103446 186 nips-2006-Support Vector Machines on a Budget

9 0.41378871 49 nips-2006-Causal inference in sensorimotor integration

10 0.39833379 25 nips-2006-An Application of Reinforcement Learning to Aerobatic Helicopter Flight

11 0.39295241 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model

12 0.3877168 46 nips-2006-Blind source separation for over-determined delayed mixtures

13 0.3778176 106 nips-2006-Large Margin Hidden Markov Models for Automatic Speech Recognition

14 0.37596026 87 nips-2006-Graph Laplacian Regularization for Large-Scale Semidefinite Programming

15 0.37205756 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons

16 0.3685025 149 nips-2006-Nonnegative Sparse PCA

17 0.3650814 24 nips-2006-Aggregating Classification Accuracy across Time: Application to Single Trial EEG

18 0.35935333 174 nips-2006-Similarity by Composition

19 0.34797823 194 nips-2006-Towards a general independent subspace analysis

20 0.34182003 6 nips-2006-A Kernel Subspace Method by Stochastic Realization for Learning Nonlinear Dynamical Systems


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.107), (3, 0.039), (7, 0.056), (9, 0.065), (20, 0.015), (22, 0.042), (24, 0.352), (34, 0.011), (44, 0.051), (47, 0.016), (57, 0.05), (65, 0.054), (69, 0.027), (71, 0.012), (93, 0.011)]
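The LDA weights above are stored sparsely as (topicId, topicWeight) pairs over the model's topic space. The site does not document its comparison pipeline, so the helpers below are assumptions: one expands a sparse vector to a dense one, the other takes a dot product directly in sparse form.

```python
def to_dense(sparse_pairs, n_topics):
    """Expand [(topicId, weight), ...] into a dense topic-weight vector."""
    v = [0.0] * n_topics
    for topic_id, w in sparse_pairs:
        v[topic_id] = w
    return v

def sparse_dot(a, b):
    """Dot product of two sparse (topicId, weight) lists without densifying."""
    lookup = dict(a)
    return sum(w * lookup.get(t, 0.0) for t, w in b)

# A few of this paper's LDA weights (from the list above)
paper = [(1, 0.107), (3, 0.039), (7, 0.056), (24, 0.352)]
dense = to_dense(paper, n_topics=100)
print(dense[24])                 # 0.352
print(sparse_dot(paper, paper))  # squared norm of the sparse vector
```

Keeping the vectors sparse is the natural choice here, since each paper loads only a dozen or so of the model's topics.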

similar papers list:

simIndex simValue paperId paperTitle

1 0.85245115 25 nips-2006-An Application of Reinforcement Learning to Aerobatic Helicopter Flight

Author: Pieter Abbeel, Adam Coates, Morgan Quigley, Andrew Y. Ng

Abstract: Autonomous helicopter flight is widely regarded as a highly challenging control problem. This paper presents the first successful autonomous completion on a real RC helicopter of the following four aerobatic maneuvers: forward flip and sideways roll at low speed, tail-in funnel, and nose-in funnel. Our experimental results significantly extend the state of the art in autonomous helicopter flight. We used the following approach: First we had a pilot fly the helicopter to help us find a helicopter dynamics model and a reward (cost) function. Then we used a reinforcement learning (optimal control) algorithm to find a controller that is optimized for the resulting model and reward function. More specifically, we used differential dynamic programming (DDP), an extension of the linear quadratic regulator (LQR). 1

same-paper 2 0.75256628 107 nips-2006-Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis

Author: Amit Gore, Shantanu Chakrabartty


3 0.60394174 179 nips-2006-Sparse Representation for Signal Classification

Author: Ke Huang, Selin Aviyente


4 0.43380332 83 nips-2006-Generalized Maximum Margin Clustering and Unsupervised Kernel Learning

Author: Hamed Valizadegan, Rong Jin

Abstract: Maximum margin clustering was proposed recently and has shown promising performance in recent studies [1, 2]. It extends the theory of support vector machines to unsupervised learning. Despite its good performance, there are three major problems with maximum margin clustering that question its efficiency for real-world applications. First, it is computationally expensive and difficult to scale to large-scale datasets because the number of parameters in maximum margin clustering is quadratic in the number of examples. Second, it requires data preprocessing to ensure that any clustering boundary will pass through the origin, which makes it unsuitable for clustering unbalanced datasets. Third, it is sensitive to the choice of kernel functions, and requires an external procedure to determine the appropriate values for the parameters of kernel functions. In this paper, we propose a “generalized maximum margin clustering” framework that addresses the above three problems simultaneously. The new framework generalizes the maximum margin clustering algorithm by allowing any clustering boundaries, including those not passing through the origin. It significantly improves the computational efficiency by reducing the number of parameters. Furthermore, the new framework is able to automatically determine the appropriate kernel matrix without any labeled data. Finally, we show a formal connection between maximum margin clustering and spectral clustering. We demonstrate the efficiency of the generalized maximum margin clustering algorithm using both synthetic datasets and real datasets from the UCI repository. 1

5 0.4323138 65 nips-2006-Denoising and Dimension Reduction in Feature Space

Author: Mikio L. Braun, Klaus-Robert Müller, Joachim M. Buhmann

Abstract: We show that the relevant information about a classification problem in feature space is contained up to negligible error in a finite number of leading kernel PCA components if the kernel matches the underlying learning problem. Thus, kernels not only transform data sets such that good generalization can be achieved even by linear discriminant functions, but this transformation is also performed in a manner which makes economic use of feature space dimensions. In the best case, kernels provide efficient implicit representations of the data to perform classification. Practically, we propose an algorithm which enables us to recover the subspace and dimensionality relevant for good classification. Our algorithm can therefore be applied (1) to analyze the interplay of data set and kernel in a geometric fashion, (2) to help in model selection, and to (3) de-noise in feature space in order to yield better classification results. 1

6 0.4251416 167 nips-2006-Recursive ICA

7 0.42358273 184 nips-2006-Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

8 0.42194524 127 nips-2006-MLLE: Modified Locally Linear Embedding Using Multiple Weights

9 0.4204714 32 nips-2006-Analysis of Empirical Bayesian Methods for Neuroelectromagnetic Source Localization

10 0.42024782 87 nips-2006-Graph Laplacian Regularization for Large-Scale Semidefinite Programming

11 0.41946477 117 nips-2006-Learning on Graph with Laplacian Regularization

12 0.419054 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment

13 0.41738361 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation

14 0.41656828 175 nips-2006-Simplifying Mixture Models through Function Approximation

15 0.41626006 84 nips-2006-Generalized Regularized Least-Squares Learning with Predefined Features in a Hilbert Space

16 0.41567618 158 nips-2006-PG-means: learning the number of clusters in data

17 0.41486999 35 nips-2006-Approximate inference using planar graph decomposition

18 0.41425407 20 nips-2006-Active learning for misspecified generalized linear models

19 0.41381523 76 nips-2006-Emergence of conjunctive visual features by quadratic independent component analysis

20 0.41360307 106 nips-2006-Large Margin Hidden Markov Models for Automatic Speech Recognition