nips nips2010 nips2010-57 knowledge-graph by maker-knowledge-mining

57 nips-2010-Decoding Ipsilateral Finger Movements from ECoG Signals in Humans


Source: pdf

Author: Yuzong Liu, Mohit Sharma, Charles Gaona, Jonathan Breshears, Jarod Roland, Zachary Freudenburg, Eric Leuthardt, Kilian Q. Weinberger

Abstract: Several motor related Brain Computer Interfaces (BCIs) have been developed over the years that use activity decoded from the contralateral hemisphere to operate devices. Contralateral primary motor cortex is also the region most severely affected by hemispheric stroke. Recent studies have identified ipsilateral cortical activity in planning of motor movements and its potential implications for a stroke relevant BCI. The most fundamental functional loss after a hemispheric stroke is the loss of fine motor control of the hand. Thus, whether ipsilateral cortex encodes finger movements is critical to the potential feasibility of BCI approaches in the future. This study uses ipsilateral cortical signals from humans (using ECoG) to decode finger movements. We demonstrate, for the first time, successful finger movement detection using machine learning algorithms. Our results show high decoding accuracies in all cases which are always above chance. We also show that significant accuracies can be achieved with the use of only a fraction of all the features recorded and that these core features are consistent with previous physiological findings. The results of this study have substantial implications for advancing neuroprosthetic approaches to stroke populations not currently amenable to existing BCI techniques. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Several motor related Brain Computer Interfaces (BCIs) have been developed over the years that use activity decoded from the contralateral hemisphere to operate devices. [sent-8, score-0.488]

2 Contralateral primary motor cortex is also the region most severely affected by hemispheric stroke. [sent-9, score-0.508]

3 Recent studies have identified ipsilateral cortical activity in planning of motor movements and its potential implications for a stroke relevant BCI. [sent-10, score-1.254]

4 The most fundamental functional loss after a hemispheric stroke is the loss of fine motor control of the hand. [sent-11, score-0.495]

5 Thus, whether ipsilateral cortex encodes finger movements is critical to the potential feasibility of BCI approaches in the future. [sent-12, score-0.794]

6 This study uses ipsilateral cortical signals from humans (using ECoG) to decode finger movements. [sent-13, score-0.665]

7 We demonstrate, for the first time, successful finger movement detection using machine learning algorithms. [sent-14, score-0.167]

8 The evolving understanding of motor function in the brain has led to novel Brain Computer Interface (BCI) platforms that can potentially assist patients with severe motor disabilities. [sent-21, score-0.678]

9 A BCI is a device that can decode human intent from brain activity alone in order to create an alternate communication and control channel for people with severe motor impairments [39]. [sent-22, score-0.438]

10 This brain-derived control is dependent on the emerging understanding of cortical physiology as it pertains to motor function. [sent-23, score-0.435]

11 Examples are seen in the seminal discoveries by Georgopoulos and Schwartz that neurons in motor cortex show directional tuning and, when taken as a population, can predict direction and speed of arm movements in monkey models [12, 19]. [sent-24, score-0.634]

12 His group was one of the first to describe the changes in amplitudes in sensorimotor rhythms associated with motor movement [24]. [sent-27, score-0.426]

13 All these methods are based on a functioning motor cortex capable of controlling the contralateral limb. [sent-29, score-0.489]

14 For a BCI to assist a hemiparetic patient, the implant will likely need to utilize unaffected cortex ipsilateral to the affected limb (opposite the side of the stroke). [sent-32, score-0.652]

15 To do so, an expanded understanding of how and to what degree of complexity motor and motor associated cortex encodes ipsilateral hand movements is essential. [sent-33, score-1.356]

16 Electrocorticography (ECoG), or signal recorded from the surface of the brain, offers an excellent opportunity to further define what level of motor information can be deciphered from human ipsilateral cortex related to movements (e. [sent-34, score-1.085]

17 gross motor movements versus fine motor kinematics of individual finger movements). [sent-36, score-0.784]

18 When analyzed on a functional level, many studies have revealed that different frequency bandwidths carry highly specific and anatomically focal information about cortical processing. [sent-40, score-0.177]

19 Thus far, however, no studies have utilized these ECoG spectral features to definitively analyze and decode cortical processing of the specific kinematics of ipsilateral finger movements. [sent-41, score-0.692]

20 In the past year, the first demonstration of this concept of utilizing ipsilateral motor signals for simple device control has been published, both with ECoG (in healthy subjects) and MEG (in stroke patients) [4, 38]. [sent-42, score-0.919]

21 In this study we set out to further explore the decoding of individual finger movements of the ipsilateral hand that could potentially be utilized for more sophisticated BCIs in the future. [sent-43, score-0.897]

22 Each had electrode arrays placed over the frontal lobe and a portion of sensorimotor cortex for approximately a week. [sent-45, score-0.227]

23 The principal results show that individual ipsilateral finger movements can be decoded with high accuracy. [sent-47, score-0.738]

24 Through machine learning techniques, our group was able to determine the intent to flex and extend individual finger movements of the ipsilateral hand. [sent-48, score-0.708]

25 These results indicate that an ECoG based BCI platform could potentially operate a hand orthotic based on ipsilateral motor signals. [sent-49, score-0.779]

26 This could provide a neuroprosthetic alternative to patients with hemispheric stroke who have otherwise failed non-invasive and medical rehabilitative techniques. [sent-50, score-0.316]

27 2 Data Collection The subjects in this study were three patients (females; 8, 36, 48 years of age) with intractable epilepsy who underwent temporary placement of intracranial electrode arrays to localize seizure foci prior to surgical resection. [sent-51, score-0.238]

28 Subject 1 had a right hemispheric 8×8 grid while subjects 2 and 3 had left hemispheric 8×8 grids. [sent-53, score-0.204]

29 All motor hand kinematics were monitored by the patient wearing a USB linked 5DT Data Glove 5 Ultras (Fifth Dimension, Irvine, CA) on each hand. [sent-73, score-0.337]

30 Average time lags were then used to align the ECoG signal to the finger movement signal. [sent-89, score-0.198]

31 Those features optimized for predicting individual finger movement were then reviewed in light of anatomic location and spectral association in each subject. [sent-90, score-0.194]

32 Electrode Co-Registration Radiographs were used to identify the stereotactic coordinates of each grid electrode [10], and cortical areas were defined using the GetLOC package for ECoG electrode localization [18]. [sent-96, score-0.276]

33 3 Algorithms In this section, we describe the machine learning algorithms used for the finger movement decoding tasks. [sent-99, score-0.29]

34 In the setting of brain decoding, it seems reasonable to assume that there are certain features which are associated with the general cortical processing of finger movements. [sent-137, score-0.225]

35 Functional magnetic resonance imaging (fMRI) studies have shown that although speech is represented in general cortical areas, individual features specific to different kinds of words can be found [16, 23]. [sent-139, score-0.176]

36 4 Results In this section we evaluate our algorithms for ipsilateral decoding on three subjects. [sent-150, score-0.621]

37 First, we approximate the time-lag between ECoG signal and finger movement, then we present decoding results on finger movement detection, discrimination and also joint decoding of all fingers in one hand. [sent-151, score-0.488]

38 Time Lag: We first study the effect of the decoding time lag between the cortical signal and movement using features. [sent-152, score-0.510]

39 Figure 1 shows the decoding accuracy as a function of time-lag for four individual finger movements in Subject 1. [sent-161, score-0.377]

40 Offsets between 0 and 800 ms are tested for all fingers. (Figure 1: Decoding time lag for ipsilateral finger movement in Subject 1.) [sent-162, score-0.537]

41 The δT (ms) between input feature vectors and target labels is on the x-axis, and the y-axis is the area under the ROC curve computed from the L1-regularized logistic regression model; the average time lag for the ipsilateral finger movement for Subject 1 is observed to be [sent-164, score-0.595]

42 similar time lags between cortical activity and actual movements [38]. [sent-167, score-0.412]

43 All further analysis is based on cortical activity (features) shifted relative to movement by the average time-lag reported here. [sent-168, score-0.327]
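
The time-lag analysis above can be reproduced in outline with a few lines of Python. The sketch below is illustrative only: the feature matrix, labels, sampling interval, and lag grid are placeholder assumptions rather than the authors' recorded data, but the structure (pair features with labels shifted by an offset, train an L1-regularized logistic regression, score it by cross-validated AUC, and keep the best offset) follows the procedure described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_all = rng.standard_normal((600, 192))   # placeholder spectral ECoG features
y_finger = rng.integers(0, 2, 600)        # placeholder move / no-move labels
sample_ms = 50                            # assumed feature sampling interval

def auc_for_lag(features, labels, lag_bins):
    """Pair each feature row with the label `lag_bins` samples later, then
    score an L1-regularized logistic regression by cross-validated AUC."""
    X = features[:-lag_bins] if lag_bins > 0 else features
    y = labels[lag_bins:] if lag_bins > 0 else labels
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    scores = cross_val_predict(clf, X, y, cv=5, method="decision_function")
    return roc_auc_score(y, scores)

lags_ms = np.arange(0, 801, 150)          # same grid as Figure 1
aucs = [auc_for_lag(X_all, y_finger, lag // sample_ms) for lag in lags_ms]
print("best lag (ms):", lags_ms[int(np.argmax(aucs))])
```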

44 Figure 2: ROC curve for the ipsilateral finger movement decoder; panel (c) shows Subject 3 (x-axis: False Positive Rate). [sent-193, score-0.647]

45 The elements of the matrix show the percentage of all movements of a particular finger that have been classified as a particular predicted finger. [sent-211, score-0.209]
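
A row-normalized confusion matrix of the kind described above can be computed directly; the label vectors below are hypothetical placeholders standing in for the true and predicted finger indices.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 2, 3, 1, 2, 3, 0, 1])   # placeholder true finger labels
y_pred = np.array([0, 1, 1, 2, 3, 1, 2, 0, 0, 1])   # placeholder predictions

counts = confusion_matrix(y_true, y_pred)
# Row-normalize so entry (i, j) is the percentage of movements of finger i
# classified as finger j.
percent = 100.0 * counts / counts.sum(axis=1, keepdims=True)
print(np.round(percent, 1))
```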

46 Although multitask learning has been employed in the context of brain signal decoding [2], we are, to our knowledge, the first to apply it to decoding ECoG signals in humans. [sent-219, score-0.391]

47 We group all the individual finger movements together, such that each task has similarity with the others. [sent-220, score-0.168]

48 By carefully searching for the parameters that regulate the trade-off between learning commonality among all finger movements and specificity to the exact finger movement, classification performance can be significantly improved. [sent-232, score-0.177]

49 We also compare the ℓ1/ℓ2-regularized logistic regression-based multitask learning with SVM-based multitask learning. [sent-233, score-0.192]
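
The ℓ1/ℓ2-regularized multitask learner can be sketched as a group-sparse logistic regression fit by proximal gradient descent, with one task per finger and a row-wise penalty that encourages a shared set of informative features across fingers. This is a minimal illustration of the regularizer named above, not the authors' implementation; the data, step size, and regularization strength are placeholder assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multitask_logreg(Xs, ys, lam=0.05, lr=0.01, n_iter=500):
    """l1/l2-regularized multitask logistic regression via proximal gradient.
    Xs, ys: per-finger design matrices and {0,1} labels on a shared feature
    space. Returns a (d, K) weight matrix whose rows (features) are shrunk
    toward zero jointly across all K tasks."""
    d, K = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, K))
    for _ in range(n_iter):
        grad = np.zeros_like(W)
        for k in range(K):
            p = sigmoid(Xs[k] @ W[:, k])
            grad[:, k] = Xs[k].T @ (p - ys[k]) / len(ys[k])
        W -= lr * grad
        # Proximal step: group soft-thresholding of each feature row.
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        W *= np.clip(1.0 - lr * lam / np.maximum(norms, 1e-12), 0.0, None)
    return W

rng = np.random.default_rng(0)
Xs = [rng.standard_normal((200, 96)) for _ in range(4)]   # 4 fingers = 4 tasks
ys = [rng.integers(0, 2, 200) for _ in range(4)]
W = multitask_logreg(Xs, ys)
print("features retained across all fingers:", int((np.abs(W).min(axis=1) > 0).sum()))
```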

50 5 Weight Analysis An important part of decoding finger movements from cortical activity is to map the features back to the cortical domain. [sent-268, score-0.689]

51 Physiologically, it is important to understand the features which contribute most to the decoding algorithms. [sent-269, score-0.171]

52 As shown in Table 2 below, the decoding accuracy, indicated by AUC, does not change much as we increase the number of features used for classification. [sent-272, score-0.171]

53 Figure 4 above shows the normalized weights from the features used to classify finger movements from non-movements. [sent-275, score-0.235]
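
In practice, the weight analysis above amounts to ranking features by the magnitude of the trained weights and re-evaluating AUC with only the top-k features retained, along the lines of the sketch below; the data and the choice of k values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 192))   # placeholder channel-by-band features
y = rng.integers(0, 2, 400)           # placeholder move / no-move labels

base = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
order = np.argsort(-np.abs(base.coef_.ravel()))   # most informative first

for k in (10, 25, 50, 192):
    clf = LogisticRegression(penalty="l1", solver="liblinear")
    scores = cross_val_predict(clf, X[:, order[:k]], y, cv=5,
                               method="decision_function")
    print(k, "features -> AUC", round(roc_auc_score(y, scores), 3))
```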

54 This is what we would expect, since these two areas are the ones most involved in the planning of motor movements. [sent-277, score-0.259]

55 As previously reported, the frequency range with the highest weights falls in the lower frequencies for ipsilateral movements [38]. [sent-278, score-0.711]

56 It represents the variability in cortical processing of ipsilateral finger movements. [sent-293, score-0.603]

57 It can also be seen that cortical processing occurs as a network involving dorsolateral prefrontal cortex, pre-motor and motor areas. [sent-294, score-0.386]

58 6 Discussion The notion that motor cortex plays a role in ipsilateral body movements was first asserted by Nyberg-Hansen et al. [sent-298, score-1.053]

59 Originally this was felt to represent more axial motor control. [sent-300, score-0.259]

60 Further studies in single-neuron recordings in monkey models extended this observation to include ipsilateral hand and finger function. [sent-301, score-0.554]

61 demonstrated that a small percentage of primary motor cortical neurons showed increased activity with ipsilateral hand movements [32]. [sent-303, score-1.203]

62 This site was found to be anatomically distinct from contralateral hand sites and, when stimulated, produced ipsilateral hand movements [1]. [sent-304, score-0.942]

63 Additionally, a larger subset of premotor neurons was found to demonstrate more robust activations with cues to initiate movement during both ipsilateral and contralateral movements than with primary motor sites [3, 6]. [sent-305, score-1.406]

64 These findings in animal models support the conclusion that a small percent of motor and a larger percent of premotor cortex participate in control of ipsilateral limb and hand movements. [sent-306, score-1.05]

65 In humans, there appears to be a dichotomy in how motor regions contribute depending on whether the primary or non-primary motor cortex is examined. [sent-307, score-0.66]

66 demonstrated that there was a negative change from baseline in the fMRI BOLD sequence in M1 associated with ipsilateral movements and postulated this to represent increased inhibition [21]. [sent-309, score-0.685]

67 Their group showed that anatomically distinct primary motor sites demonstrated increased activation that became more pronounced during the execution of complex movements [36]. [sent-312, score-0.549]

68 The role that premotor cortex plays appears to be distinct from that of primary motor cortex. [sent-313, score-0.54]

69 In normal subjects, fMRI shows that there is more robust bilateral activation of the dorsal premotor cortex with either contralateral or ipsilateral hand movements [15]. [sent-314, score-1.13]

70 (2004) demonstrated that ipsilateral premotor areas have magnetoencephalography (MEG) dipole peak latencies that significantly precede contralateral M1 sensorimotor cortex in performing unilateral finger movements. [sent-316, score-0.904]

71 Using electroencephalography (EEG), ipsilateral hand movements have been shown to induce alteration in cortical potentials prior to movement; this is referred to as premotor positivity [33, 29]. [sent-317, score-1.048]

72 Taken together, these findings support more of a motor planning role, rather than execution role, in ipsilateral hand actions. [sent-320, score-0.779]

73 Decoding the information present in the ECoG signal with regard to ipsilateral finger movements is important in defining the potential use of BCI methodologies for patients with hemispheric dysfunction due to stroke or trauma. [sent-321, score-0.996]

74 If high resolution motor kinematics can be decoded from the ECoG signal (e. [sent-322, score-0.355]

75 Since up to one-half of hemispheric stroke patients are chronically left with permanent loss of function in their affected hand, this could have substantial clinical impact [20]. [sent-325, score-0.339]

76 Functional imaging has shown these severely affected patients to have increased activity in the premotor regions of their unaffected hemispheres [28, 37]. [sent-326, score-0.305]

77 Thus, incomplete recovery and its association with heightened ipsilateral activation may reflect the up-regulation of motor planning with an inability to execute or actuate the selected motor choice. [sent-329, score-0.994]

78 By decoding the brain signals associated with a given motor intention, the BCI may then convert these signals into commands that could control a robotic assist device that would allow for improved hand function. [sent-331, score-0.631]

79 The BCI would allow the ipsilateral premotor cortex to bypass the physiological bottleneck determined by injured and dysfunctional contralateral primary cortex (due to stroke) and the small and variable percentage of uncrossed motor fibers from ipsilateral M1. [sent-334, score-1.722]

80 7 Conclusion To our knowledge, this work describes the first instance of successful detection of individual finger movements from human ipsilateral ECoG signals. [sent-336, score-0.73]

81 The results presented here suggest that there exists information in the cortex ipsilateral to the moving fingers which can be decoded with high accuracy using machine learning algorithms. [sent-338, score-0.615]

82 For patients suffering from stroke and hemiparesis, decoding finger movements from the unaffected hemisphere can be of tremendous help. [sent-340, score-0.594]

83 Our future goals involve simultaneous decoding of finger and arm movements (using a standard center-out joystick task) from both ipsilateral and contralateral hemispheres. [sent-341, score-0.974]

84 An output zone of the monkey primary motor cortex specialized for bilateral hand movement. [sent-348, score-0.511]

85 Supplementary motor area in the monkey: activity of neurons during performance of a learned motor task. [sent-359, score-0.573]

86 Neural activity in primary motor and dorsal premotor cortex in reaching tasks with the contralateral versus ipsilateral arm. [sent-382, score-1.192]

87 On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. [sent-417, score-0.491]

88 Multimodal imaging of brain reorganization in motor areas of the contralesional hemisphere of well recovered patients after capsular stroke. [sent-430, score-0.42]

89 The role of ipsilateral premotor cortex in hand movement after stroke. [sent-452, score-0.913]

90 Microscale recording from human motor cortex: implications for minimally invasive electrocorticographic brain-computer interfaces. [sent-466, score-0.28]

91 Recovery of upper extremity function in stroke patients: the copenhagen stroke study. [sent-487, score-0.256]

92 fMRI signal decreases in ipsilateral primary motor cortex during unilateral hand movements are related to duration and side of movement. [sent-493, score-1.238]

93 Role of the premotor cortex in recovery from middle cerebral artery infarction. [sent-542, score-0.248]

94 Movement-associated cortical potentials with unilateral and bilateral simultaneous hand movement. [sent-547, score-0.24]

95 Neuronal activity in cortical motor areas related to ipsilateral, contralateral, and bilateral digit movements of the monkey. [sent-565, score-0.682]

96 Cortical topography of premotor and motor potentials preceding self-paced, voluntary movement of dominant and non-dominant hands. [sent-570, score-0.543]

97 Contralateral and ipsilateral emg responses to transcranial magnetic stimulation during recovery of arm and hand function after stroke. [sent-586, score-0.543]

98 Ipsilateral motor cortex activity during unimanual hand movements relates to task complexity. [sent-594, score-0.676]

99 Unique cortical physiology associated with ipsilateral hand movements and neuroprosthetic implications. [sent-611, score-0.919]

100 Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. [sent-624, score-0.199]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('nger', 0.557), ('ipsilateral', 0.476), ('motor', 0.259), ('ecog', 0.233), ('movements', 0.209), ('decoding', 0.145), ('movement', 0.145), ('premotor', 0.139), ('stroke', 0.128), ('cortical', 0.127), ('contralateral', 0.121), ('bci', 0.111), ('cortex', 0.109), ('hemispheric', 0.085), ('eeg', 0.084), ('multitask', 0.08), ('brain', 0.072), ('patients', 0.066), ('yt', 0.064), ('electrode', 0.064), ('lag', 0.061), ('activity', 0.055), ('electroencephalography', 0.053), ('finger', 0.053), ('neurophysiology', 0.052), ('ngers', 0.046), ('mtl', 0.045), ('auc', 0.045), ('hand', 0.044), ('glove', 0.042), ('pfurtscheller', 0.042), ('wk', 0.042), ('subject', 0.041), ('fmri', 0.039), ('clinical', 0.038), ('neuroprosthetic', 0.037), ('unilateral', 0.037), ('roc', 0.036), ('ring', 0.035), ('subjects', 0.034), ('kinematics', 0.034), ('monkey', 0.034), ('signals', 0.033), ('primary', 0.033), ('logistic', 0.032), ('classi', 0.032), ('signal', 0.032), ('multiclass', 0.032), ('arrays', 0.032), ('bilateral', 0.032), ('neurology', 0.032), ('stl', 0.032), ('regulates', 0.032), ('wolpaw', 0.032), ('thumb', 0.03), ('decoded', 0.03), ('decode', 0.029), ('xt', 0.028), ('features', 0.026), ('physiology', 0.026), ('curve', 0.026), ('frequency', 0.026), ('hz', 0.025), ('anatomically', 0.024), ('sites', 0.024), ('individual', 0.023), ('unaffected', 0.023), ('hemisphere', 0.023), ('control', 0.023), ('arm', 0.023), ('detection', 0.022), ('interface', 0.022), ('discriminate', 0.022), ('affected', 0.022), ('assist', 0.022), ('sensorimotor', 0.022), ('bcis', 0.021), ('birbaumer', 0.021), ('corticospinal', 0.021), ('cued', 0.021), ('electrocorticographic', 0.021), ('electrocorticography', 0.021), ('exure', 0.021), ('intracranial', 0.021), ('lags', 0.021), ('llr', 0.021), ('mcfarland', 0.021), ('meg', 0.021), ('moran', 0.021), ('nitively', 0.021), ('scalp', 0.021), ('schalk', 0.021), ('seizure', 0.021), ('stereotactic', 0.021), ('tanji', 0.021), ('thresh', 0.021), ('verstynen', 0.021), ('wisneski', 0.021), ('yz', 0.021), ('discrimination', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000006 57 nips-2010-Decoding Ipsilateral Finger Movements from ECoG Signals in Humans

Author: Yuzong Liu, Mohit Sharma, Charles Gaona, Jonathan Breshears, Jarod Roland, Zachary Freudenburg, Eric Leuthardt, Kilian Q. Weinberger

Abstract: Several motor related Brain Computer Interfaces (BCIs) have been developed over the years that use activity decoded from the contralateral hemisphere to operate devices. Contralateral primary motor cortex is also the region most severely affected by hemispheric stroke. Recent studies have identified ipsilateral cortical activity in planning of motor movements and its potential implications for a stroke relevant BCI. The most fundamental functional loss after a hemispheric stroke is the loss of fine motor control of the hand. Thus, whether ipsilateral cortex encodes finger movements is critical to the potential feasibility of BCI approaches in the future. This study uses ipsilateral cortical signals from humans (using ECoG) to decode finger movements. We demonstrate, for the first time, successful finger movement detection using machine learning algorithms. Our results show high decoding accuracies in all cases which are always above chance. We also show that significant accuracies can be achieved with the use of only a fraction of all the features recorded and that these core features are consistent with previous physiological findings. The results of this study have substantial implications for advancing neuroprosthetic approaches to stroke populations not currently amenable to existing BCI techniques. 1

2 0.16493216 167 nips-2010-Mixture of time-warped trajectory models for movement decoding

Author: Elaine Corbett, Eric Perreault, Konrad Koerding

Abstract: Applications of Brain-Machine-Interfaces typically estimate user intent based on biological signals that are under voluntary control. For example, we might want to estimate how a patient with a paralyzed arm wants to move based on residual muscle activity. To solve such problems it is necessary to integrate obtained information over time. To do so, state of the art approaches typically use a probabilistic model of how the state, e.g. position and velocity of the arm, evolves over time – a so-called trajectory model. We wanted to further develop this approach using two intuitive insights: (1) At any given point of time there may be a small set of likely movement targets, potentially identified by the location of objects in the workspace or by gaze information from the user. (2) The user may want to produce movements at varying speeds. We thus use a generative model with a trajectory model incorporating these insights. Approximate inference on that generative model is implemented using a mixture of extended Kalman filters. We find that the resulting algorithm allows us to decode arm movements dramatically better than when we use a trajectory model with linear dynamics. 1 In trod u cti on When patients have lost a limb or the ability to communicate with the outside world, brain machine interfaces (BMIs) are often used to enable robotic prostheses or restore communication. To achieve this, the user's intended state of the device must be decoded from biological signals. In the context of Bayesian statistics, two aspects are important for the design of an estimator of a temporally evolving state: the observation model, which describes how measured variables relate to the system’s state and the trajectory model which describes how the state changes over time in a probabilistic manner. Following this logic many recent BMI applications have relied on Bayesian estimation for a wide range of problems including the decoding of intended human [1] and animal [2] movements. In the context of BMIs, Bayesian approaches offer a principled way of formalizing the uncertainty about signals and thus often result in improvements over other signal processing techniques [1]-[3]. Most work on state estimation in dynamical systems has assumed linear dynamics and Gaussian noise. Under these circumstances, efficient algorithms result from belief propagation. The most frequent application uses the Kalman filter (KF), which recursively combines noisy state observations with the probabilistic evolution of state defined by the trajectory model to estimate the marginal distribution over states [4]. Such approaches have been used widely for applications including upper [1] and lower [5] extremity prosthetic 1 devices, functional electric stimulation [6] and human computer interactions [7]. As these algorithms are so commonly used, it seems promising to develop extensions to nonlinear trajectory models that may better describe the probabilistic distribution of movements in everyday life. One salient departure from the standard assumptions is that people tend to produce both slow and fast movements, depending on the situation. Models with linear dynamics only allow such deviation through the noise term, which makes these models poor at describing the natural variation of movement speeds during real world tasks. Explicitly incorporating movement speed into the trajectory model should lead to better movement estimates. Knowledge of the target position should also strongly affect trajectory models. 
After all , we tend to accelerate our arm early during movement and slow down later on. Target information can be linearly incorporated into the trajectory model, and this has greatly improved predictions [8]-[12]. Alternatively, if there are a small number of potential targets then a mixture of trajectory models approach [13] can be used. Here we are interested in the case where available data provide a prior over potential t argets but where movement targets may be anywhere. We want to incorporate target uncertainty and allow generalization to novel targets. Prior information about potential targets could come from a number of sources but would generally be noisy. For example, activity in the dorsal premotor cortex provides information about intended target location prior to movement and may be used where such recordings are available [14]. Target information may also be found noninvasively by tracking eye movements. However, such data will generally provide non-zero priors for a number of possible target locations as the subject saccades over the scene. While subjects almost always look at a target before reaching for it [15], there may be a delay of up to a second between looking at the target and the reach – a time interval over which up to 3 saccades are typically made. Each of these fixations could be the target. Hence, a probabilistic distribution of targets is appropriate when using either neural recordings or eye tracking to estimate potential reach targets Here we present an algorithm that uses a mixture of extended Kalman Filters (EKFs) to combine our insights related to the variation of movement speed and the availability of probabilistic target knowledge. Each of the mixture component s allows the speed of the movement to vary continuously over time. We tested how well we could use EMGs and eye movements to decode hand position of humans performing a three -dimensional large workspace reaching task. We find that using a trajectory model that allows for probabilistic target information and variation of speed leads to dramatic improvements in decoding quality. 2 Gen e ral Decod i n g S etti n g We wanted to test how well different decoding algorithms can decode human movement, over a wide range of dynamics. While many recent studies have looked at more restrictive, two-dimensional movements, a system to restore arm function should produce a wide range of 3D trajectories. We recorded arm kinematics and EMGs of healthy subjects during unconstrained 3D reaches to targets over a large workspace. Two healthy subjects were asked to reach at slow, normal and fast speeds, as they would in everyday life. Subjects were seated as they reached towards 16 LEDs in blocks of 150s, which were located on two planes positioned such that all targets were just reachable (Fig 1A). The target LED was lit for one second prior to an auditory go cue, at which time the subject would reach to the target at the appropriate speed. Slow, normal and fast reaches were allotted 3 s, 1.5s and 1s respectively; however, subjects determined the speed. An approximate total of 450 reaches were performed per subject. The subjects provided informed consent, and the protocol was approved by the Northwestern University Institutional Review Board. EMG signals were measured from the pectoralis major, and the three deltoid muscles of the shoulder. 
This represents a small subset of the muscles involved in reaching, and approximates those muscles retaining some voluntary control following mid-level cervical spinal cord injuries. 2 The EMG signals were band-pass filtered between 10 and 1,000 Hz, and subsequently anti aliased filtered. Hand, wrist, shoulder and head positions were tracked using an Optotrak motion analysis system. We simultaneously recorded eye movements with an ASL EYETRAC-6 head mounted eye tracker. Approximately 25% of the reaches were assigned to the test set, and the rest were used for training. Reaches for which either the motion capture data was incomplete, or there was visible motion artifact on the EMG were removed. As the state we used hand positions and joint angles (3 shoulder, 2 elbow, position, velocity and acceleration, 24 dimensions). Joint angles were calculated from the shoulder and wrist marker data using digitized bony landmarks which defined a coordinate system for the upper limb as detailed by Wu et al. [16]. As the motion data were sampled at 60Hz, the mean absolute value o f the EMG in the corresponding 16.7ms windows was used as an observation of the state at each time-step. Algorithm accuracy was quantified by normalizing the root -mean-squared error by the straight line distance between the first and final position of the endpoint for each reach. We compared the algorithms statistically using repeated measures ANOVAs with Tukey post -hoc tests, treating reach and subject as random effects. In the rest of the paper we will ask how well these reaching movements can be decoded from EMG and eye-tracking data. Figure 1: A Experimental setup and B sample kinematics and processed EMGs for one reach 3 Kal man Fi l ters w i th Target i n f ormati on All models that we consider in this paper assume linear observations with Gaussian noise: (1) where x is the state, y is the observation and v is the measurement noise with p(v) ~ N(0,R), and R is the observation covariance matrix. The model fitted the measured EMGs with an average r2 of 0.55. This highlights the need to integrate information over time. The standard approach also assumes linear dynamics and Gaussian process noise: (2) where, x t represents the hand and joint angle positions, w is the process noise with p(w) ~ N(0,Q), and Q is the state covariance matrix. The Kalman filter does optimal inference for this generative model. This model can effectively capture the dynamics of stereotypical reaches to a single target by appropriately tuning its parameters. However, when used to describe reaches to multiple targets, the model cannot describe target dependent aspects of reaching but boils down to a random drift model. Fast velocities are underestimated as they are unlikely under the trajectory model and there is excessive drift close to the target (Fig. 2A). 3 In many decoding applications we may know the subject’s target. A range of recent studies have addressed the issue of incorporating this information into the trajectory model [8, 13], and we might assume the effect of the target on the dynamics to be linear. This naturally suggests adding the target to the state space, which works well in practice [9, 12]. By appending the target to the state vector (KFT), the simple linear format of the KF may be retained: (3) where xTt is the vector of target positions, with dimensionality less than or equal to that of xt. 
This trajectory model thus allows describing both the rapid acceleration that characterizes the beginning of a reach and the stabilization towards its end. We compared the accuracy of the KF and the KFT to the Single Target Model (STM), a KF trained only on reaches to the target being tested (Fig. 2). The STM represents the best possible prediction that could be obtained with a Kalman filter. Assuming the target is perfectly known, we implemented the KFT by correctly initializing the target state xT at the beginning of the reach. We will relax this assumption below. The initial hand and joint angle positions were also assumed to be known. Figure 2: A Sample reach and predictions and B average accuracies with standard errors for KFT, KF and MTM. Consistent with the recent literature, both methods that incorporated target information produced higher prediction accuracy than the standard KF (both p<0.0001). Interestingly, there was no significant difference between the KFT and the STM (p=0.9). It seems that when we have knowledge of the target, we do not lose much by training a single model over the whole workspace rather than modeling the targets individually. This is encouraging, as we desire a BMI system that can generalize to any target within the workspace, not just specifically to those that are available in the training data. Clearly, adding the target to the state space allows the dynamics of typical movements to be modeled effectively, resulting in dramatic increases in decoding performance. 4 Ti me Warp i n g 4.1 I m p l e m e n t i n g a t i m e - w a r p e d t r a j e c t o r y mo d e l While the KFT above can capture the general reach trajectory profile, it does not allow for natural variability in the speed of movements. Depending on our task objectives, which would not directly be observed by a BMI, we might lazily reach toward a target or move a t maximal speed. We aim to change the trajectory model to explicitly incorporate a warping factor by which the average movement speed is scaled, allowing for such variability. As the movement speed will be positive in all practical cases, we model the logarithm of this factor, 4 and append it to the state vector: (4) We create a time-warped trajectory model by noting that if the average rate of a trajectory is to be scaled by a factor S, the position at time t will equal that of the original trajectory at time St. Differentiating, the velocity will be multiplied by S, and the acceleration by S 2. For simplicity, the trajectory noise is assumed to be additive and Gaussian, and the model is assumed to be stationary: (5) where Ip is the p-dimensional identity matrix and is a p p matrix of zeros. Only the terms used to predict the acceleration states need to be estimated to build the state transition matrix, and they are scaled as a nonlinear function of xs. After adding the variable movement speed to the state space the system is no longer linear. Therefore we need a different solution strategy. Instead of the typical KFT we use the Extended Kalman Filter (EKFT) to implement a nonlinear trajectory model by linearizing the dynamics around the best estimate at each time-step [17]. With this approach we add only small computational overhead to the KFT recursions. 4.2 Tr a i n i n g t h e t i m e w a r p i n g mo d e l The filter parameters were trained using a variant of the Expectation Maximization (EM) algorithm [18]. For extended Kalman filter learning the initialization for the variables may matter. 
S was initialized with the ground truth average reach speeds for each movement relative to the average speed across all movements. The state transition parameters were estimated using nonlinear least squares regression, while C, Q and R were estimated linearly for the new system, using the maximum likelihood solution [18] (M-step). For the E-step we used a standard extended Kalman smoother. We thus found the expected values for t he states given the current filter parameters. For this computation, and later when testing the algorithm, xs was initialized to its average value across all reaches while the remaining states were initialized to their true values. The smoothed estimate fo r xs was then used, along with the true values for the other states, to re-estimate the filter parameters in the M-step as before. We alternated between the E and M steps until the log likelihood converged (which it did in all cases). Following the training procedure, the diagonal of the state covariance matrix Q corresponding to xs was set to the variance of the smoothed xs over all reaches, according to how much this state should be allowed to change during prediction. This allowed the estimate of xs to develop over the course of the reach due to the evidence provided by the observations, better capturing the dynamics of reaches at different speeds. 4.3 P e r f o r ma n c e o f t h e t i m e - w a r p e d E K F T Incorporating time warping explicitly into the trajectory model pro duced a noticeable increase in decoding performance over the KFT. As the speed state xs is estimated throughout the course of the reach, based on the evidence provided by the observations, the trajectory model has the flexibility to follow the dynamics of the reach more accurately (Fig. 3). While at the normal self-selected speed the difference between the algorithms is small, for the slow and fast speeds, where the dynamics deviate from average, there i s a clear advantage to the time warping model. 5 Figure 3: Hand positions and predictions of the KFT and EKFT for sample reaches at A slow, B normal and C fast speeds. Note the different time scales between reaches. The models were first trained using data from all speeds (Fig. 4A). The EKFT was 1.8% more accurate on average (p<0.01), and the effect was significant at the slow (1.9%, p<0.05) and the fast (2.8%, p<0.01), but not at the normal (p=0.3) speed. We also trained the models from data using only reaches at the self-selected normal speed, as we wanted to see if there was enough variation to effectively train the EKFT (Fig. 4B). Interestingly, the performance of the EKFT was reduced by only 0.6%, and the KFT by 1.1%. The difference in performance between the EKFT and KFT was even more pronounced on aver age (2.3%, p<0.001), and for the slow and fast speeds (3.6 and 4.1%, both p< 0.0001). At the normal speed, the algorithms again were not statistically different (p=0.6). This result demonstrates that the EKFT is a practical option for a real BMI system, as it is not necessary to greatly vary the speeds while collecting training data for the model to be effective over a wide range of intended speeds. Explicitly incorporating speed information into the trajectory model helps decoding, by modeling the natural variation in volitional speed. Figure 4: Mean and standard error of EKFT and KFT accuracy at the different subjectselected speeds. Models were trained on reaches at A all speeds and B just normal speed reaches. 
Asterisks indicate statistically significant differences between the algorithms. 5 Mi xtu res of Target s So far, we have assumed that the targets of our reaches are perfectly known. In a real-world system, there will be uncertainty about the intended target of the reach. However, in typical applications there are a small number of possible objectives. Here we address this situation. Drawing on the recent literature, we use a mixture model to consider each of the possible targets [11, 13]. We condition the posterior probability for the state on the N possible targets, T: (6) 6 Using Bayes' Rule, this equation becomes: (7) As we are dealing with a mixture model, we perform the Kalman filter recursion for each possible target, xT, and our solution is a weighted sum of the outputs. The weights are proportional to the prior for that target, , and the likelihood of the model given that target . is independent of the target and does not need to be calculated. We tested mixtures of both algorithms, the mKFT and mEKFT, with real uncert ain priors obtained from eye-tracking in the one-second period preceding movement. As the targets were situated on two planes, the three-dimensional location of the eye gaze was found by projecting its direction onto those planes. The first, middle and last eye samples were selected, and all other samples were assigned to a group according to which of the three was closest. The mean and variance of these three groups were used to initialize three Kalman filters in the mixture model. The priors of the three groups were assigned proportional to the number of samples in them. If the subject looks at multiple positions prior to reaching, this method ensures with a high probability that the correct target was accounted for in one of the filters in the mixture. We also compared the MTM approach of Yu et al. [13], where a different KF model was generated for each target, and a mixture is performed over these models. This approach explicitly captures the dynamics of stereotypical reaches to specific targets. Given perfect target information, it would reduce to the STM described above. Priors for the MTM were found by assigning each valid eye sample to its closest two targets, and weighting the models proportional to the number of samples assigned to the corresponding target, divided by its distance from the mean of those samples. We tried other ways of assigning priors and the one presented gave the best results. We calculated the reduction in decoding quality when instead of perfect priors we provide eye-movement based noisy priors (Fig. 5). The accuracies of the mEKFT, the mKFT and the MTM were only degraded by 0.8, 1.9 and 2.1% respectively, compared to the perfect prior situation. The mEKFT was still close to 10% better than the KF. The mixture model framework is effective in accounting for uncertain priors. Figure 5: Mean and standard errors of accuracy for algorithms with perfect priors, and uncertain priors with full and partial training set. The asterisk indicates a statistically significant effects between the two training types, where real priors are used. Here, only reaches at normal speed were used to train the models, as this is a more realistic training set for a BMI application. This accounts for the degraded performance of the MTM with perfect priors relative to the STM from above (Fig. 2). With even more stereotyped training data for each target, the MTM doesn't generalize as well to new speeds. 
7 We also wanted to know if the algorithms could generalize to new targets. In a real application, the available training data will generally not span the entire useable worksp ace. We compared the algorithms where reaches to all targets except the one being tested had been used to train the models. The performance of the MTM was significantly de graded unsurprisingly, as it was designed for reaches to a set of known targets. Performance of the mKFT and mEKFT degraded by about 1%, but not significantly (both p>0.7), demonstrating that the continuous approach to target information is preferable when the target could be anywhere in space, not just at locations for which training data is available. 6 Di scu ssi on and concl u si on s The goal of this work was to design a trajectory model that would improve decoding for BMIs with an application to reaching. We incorporated two features that prominently influence the dynamics of natural reach: the movement speed and the target location. Our approach is appropriate where uncertain target information is available. The model generalizes well to new regions of the workspace for which there is no training data, and across a broad range of reaching dynamics to widely spaced targets in three dimensions. The advantages over linear models in decoding precision we report here could be equally obtained using mixtures over many targets and speeds. While mixture models [11, 13] could allow for slow versus fast movements and any number of potential targets, this strategy will generally require many mixture components. Such an approach would require a lot more training data, as we have shown that it does not generalize well. It would also be run-time intensive which is problematic for prosthetic devices that rely on low power controllers. In contrast, the algorithm introduced here only takes a small amount of additional run-time in comparison to the standard KF approach. The EKF is only marginally slower than the standard KF and the algorithm will not generally need to consider more than 3 mixture components assuming the subject fixates the target within the second pre ceding the reach. In this paper we assumed that subjects always would fixate a reach target – along with other non-targets. While this is close to the way humans usually coordinate eyes and reaches [15], there might be cases where people do not fixate a reach target. Our approach could be easily extended to deal with such situations by adding a dummy mixture component that all ows the description of movements to any target. As an alternative to mixture approaches, a system can explicitly estimate the target position in the state vector [9]. This approach, however, would not straightforwardly allow for the rich target information available; we look at the target but also at other locations, strongly suggesting mixture distributions. A combination of the two approaches could further improve decoding quality. We could both estimate speed and target position for the EKFT in a continuous manner while retaining the mixture over target priors. We believe that the issues that we have addressed here are almost universal. Virtually all types of movements are executed at varying speed. A probabilistic distribution for a small number of action candidates may also be expected in most BMI applications – after all there are usually only a small number of actions that make sense in a given environment. 
While this work is presented in the context of decoding human reaching, it may be applied to a wide range of BMI applications including lower limb prosthetic devices and human computer interactions, as well as different signal sources such as electrode grid recordings and electroencephalograms. The increased user control in conveying their intended movements would significantly improve the functionality of a neuroprosthetic device. A c k n o w l e d g e me n t s T h e a u t h o r s t h a n k T. H a s w e l l , E . K r e p k o v i c h , a n d V. Ravichandran for assistance with experiments. This work was funded by the NSF Program in Cyber-Physical Systems. R e f e re n c e s [1] L.R. Hochberg, M.D. Serruya, G.M. Friehs, J.A. Mukand, M. Saleh, A.H. Caplan, A. Branner, D. 8 [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] Chen, R.D. Penn, and J.P. Donoghue, “Neuronal ensemble control of prosthetic devices by a human with tetraplegia,” Nature, vol. 442, 2006, pp. 164–171. W. Wu, Y. Gao, E. Bienenstock, J.P. Donoghue, and M.J. Black, “Bayesian population decoding of motor cortical activity using a Kalman filter,” Neural Computation, vol. 18, 2006, pp. 80–118. W. Wu, M.J. Black, Y. Gao, E. Bienenstock, M. Serruya, A. Shaikhouni, and J.P. Donoghue, “Neural decoding of cursor motion using a Kalman filter,” Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference, 2003, p. 133. R.E. Kalman, “A new approach to linear filtering and prediction problems,” Journal of basic Engineering, vol. 82, 1960, pp. 35–45. G.G. Scandaroli, G.A. Borges, J.Y. Ishihara, M.H. Terra, A.F.D. Rocha, and F.A.D.O. Nascimento, “Estimation of foot orientation with respect to ground for an above knee robotic prosthesis,” Proceedings of the 2009 IEEE/RSJ international conference on Intelligent robots and systems, St. Louis, MO, USA: IEEE Press, 2009, pp. 1112-1117. I. Cikajlo, Z. Matjačić, and T. Bajd, “Efficient FES triggering applying Kalman filter during sensory supported treadmill walking,” Journal of Medical Engineering & Technology, vol. 32, 2008, pp. 133144. S. Kim, J.D. Simeral, L.R. Hochberg, J.P. Donoghue, and M.J. Black, “Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia,” Journal of Neural Engineering, vol. 5, 2008, pp. 455-476. L. Srinivasan, U.T. Eden, A.S. Willsky, and E.N. Brown, “A state-space analysis for reconstruction of goal-directed movements using neural signals,” Neural computation, vol. 18, 2006, pp. 2465–2494. G.H. Mulliken, S. Musallam, and R.A. Andersen, “Decoding trajectories from posterior parietal cortex ensembles,” Journal of Neuroscience, vol. 28, 2008, p. 12913. W. Wu, J.E. Kulkarni, N.G. Hatsopoulos, and L. Paninski, “Neural Decoding of Hand Motion Using a Linear State-Space Model With Hidden States,” IEEE Transactions on neural systems and rehabilitation engineering, vol. 17, 2009, p. 1. J.E. Kulkarni and L. Paninski, “State-space decoding of goal-directed movements,” IEEE Signal Processing Magazine, vol. 25, 2008, p. 78. C. Kemere and T. Meng, “Optimal estimation of feed-forward-controlled linear systems,” IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005. Proceedings.(ICASSP'05), 2005. B.M. Yu, C. Kemere, G. Santhanam, A. Afshar, S.I. Ryu, T.H. Meng, M. Sahani, and K.V. Shenoy, “Mixture of trajectory models for neural decoding of goal-directed movements,” Journal of neurophysiology, vol. 97, 2007, p. 3763. N. Hatsopoulos, J. 
Joshi, and J.G. O'Leary, “Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles,” Journal of neurophysiology, vol. 92, 2004, p. 1165. R.S. Johansson, G. Westling, A. Backstrom, and J.R. Flanagan, “Eye-hand coordination in object manipulation,” Journal of Neuroscience, vol. 21, 2001, p. 6917. G. Wu, F.C. van der Helm, H.E.J. Veeger, M. Makhsous, P. Van Roy, C. Anglin, J. Nagels, A.R. Karduna, and K. McQuade, “ISB recommendation on definitions of joint coordinate systems of various joints for the reporting of human joint motion–Part II: shoulder, elbow, wrist and hand,” Journal of biomechanics, vol. 38, 2005, pp. 981–992. D. Simon, Optimal state estimation: Kalman, H [infinity] and nonlinear approaches, John Wiley and Sons, 2006. Z. Ghahramani and G.E. Hinton, “Parameter estimation for linear dynamical systems,” University of Toronto technical report CRG-TR-96-2, vol. 6, 1996. 9
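
For readers following the Kalman-filter excerpt from paper 167 above, the sketch below gives a heavily simplified view of the mixture-over-targets idea: one Kalman predict/update per candidate target, with each filter reweighted by its prior times the observation likelihood. In the excerpted paper the target is appended to the state vector and the model parameters are learned by EM; here the target dependence is reduced to different filter initializations and all matrices are toy placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_kf_update(filters, priors, y, A, C, Q, R):
    """One mixture step: Kalman predict/update per candidate target, then
    reweight each filter by its prior times the observation likelihood."""
    updated, logliks = [], []
    for x, P in filters:
        x_pred, P_pred = A @ x, A @ P @ A.T + Q          # predict
        S = C @ P_pred @ C.T + R                         # innovation covariance
        logliks.append(multivariate_normal.logpdf(y, mean=C @ x_pred, cov=S))
        K = P_pred @ C.T @ np.linalg.inv(S)              # Kalman gain
        updated.append((x_pred + K @ (y - C @ x_pred),
                        (np.eye(len(x)) - K @ C) @ P_pred))
    logw = np.log(priors) + np.array(logliks)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                         # posterior target weights
    x_hat = sum(wk * xk for wk, (xk, _) in zip(w, updated))
    return updated, w, x_hat

# Toy usage: position/velocity state, noisy position observations, and three
# candidate targets encoded only through different initial states.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(1)
filters = [(np.array([t, 0.0]), np.eye(2)) for t in (0.0, 1.0, 2.0)]
priors = np.array([0.5, 0.3, 0.2])
for obs in ([0.9], [1.1], [1.0]):
    filters, priors, x_hat = mixture_kf_update(filters, priors, np.array(obs), A, C, Q, R)
print("posterior target weights:", np.round(priors, 3))
```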

3 0.10040824 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

Author: Gael Varoquaux, Alexandre Gramfort, Jean-baptiste Poline, Bertrand Thirion

Abstract: Spontaneous brain activity, as observed in functional neuroimaging, has been shown to display reproducible structure that expresses brain architecture and carries markers of brain pathologies. An important view of modern neuroscience is that such large-scale structure of coherent activity reflects modularity properties of brain connectivity graphs. However, to date, there has been no demonstration that the limited and noisy data available in spontaneous activity observations could be used to learn full-brain probabilistic models that generalize to new data. Learning such models entails two main challenges: i) modeling full brain connectivity is a difficult estimation problem that faces the curse of dimensionality and ii) variability between subjects, coupled with the variability of functional signals between experimental runs, makes the use of multiple datasets challenging. We describe subject-level brain functional connectivity structure as a multivariate Gaussian process and introduce a new strategy to estimate it from group data, by imposing a common structure on the graphical model in the population. We show that individual models learned from functional Magnetic Resonance Imaging (fMRI) data using this population prior generalize better to unseen data than models based on alternative regularization schemes. To our knowledge, this is the first report of a cross-validated model of spontaneous brain activity. Finally, we use the estimated graphical model to explore the large-scale characteristics of functional architecture and show for the first time that known cognitive networks appear as the integrated communities of functional connectivity graph. 1

4 0.096350849 29 nips-2010-An Approximate Inference Approach to Temporal Optimization in Optimal Control

Author: Konrad Rawlik, Marc Toussaint, Sethu Vijayakumar

Abstract: Algorithms based on iterative local approximations present a practical approach to optimal control in robotic systems. However, they generally require the temporal parameters (for e.g. the movement duration or the time point of reaching an intermediate goal) to be specified a priori. Here, we present a methodology that is capable of jointly optimizing the temporal parameters in addition to the control command profiles. The presented approach is based on a Bayesian canonical time formulation of the optimal control problem, with the temporal mapping from canonical to real time parametrised by an additional control variable. An approximate EM algorithm is derived that efficiently optimizes both the movement duration and control commands offering, for the first time, a practical approach to tackling generic via point problems in a systematic way under the optimal control framework. The proposed approach, which is applicable to plants with non-linear dynamics as well as arbitrary state dependent and quadratic control costs, is evaluated on realistic simulations of a redundant robotic plant.

5 0.079565771 128 nips-2010-Infinite Relational Modeling of Functional Connectivity in Resting State fMRI

Author: Morten Mørup, Kristoffer Madsen, Anne-marie Dogonowski, Hartwig Siebner, Lars K. Hansen

Abstract: Functional magnetic resonance imaging (fMRI) can be applied to study the functional connectivity of the neural elements which form complex network at a whole brain level. Most analyses of functional resting state networks (RSN) have been based on the analysis of correlation between the temporal dynamics of various regions of the brain. While these models can identify coherently behaving groups in terms of correlation they give little insight into how these groups interact. In this paper we take a different view on the analysis of functional resting state networks. Starting from the definition of resting state as functional coherent groups we search for functional units of the brain that communicate with other parts of the brain in a coherent manner as measured by mutual information. We use the infinite relational model (IRM) to quantify functional coherent groups of resting state networks and demonstrate how the extracted component interactions can be used to discriminate between functional resting state activity in multiple sclerosis and normal subjects. 1

6 0.076944157 97 nips-2010-Functional Geometry Alignment and Localization of Brain Areas

7 0.074114084 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

8 0.067609243 171 nips-2010-Movement extraction by detecting dynamics switches and repetitions

9 0.065455541 249 nips-2010-Spatial and anatomical regularization of SVM for brain image analysis

10 0.057053741 146 nips-2010-Learning Multiple Tasks using Manifold Regularization

11 0.053802155 147 nips-2010-Learning Multiple Tasks with a Sparse Matrix-Normal Penalty

12 0.051593933 99 nips-2010-Gated Softmax Classification

13 0.05060076 123 nips-2010-Individualized ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles

14 0.050213978 161 nips-2010-Linear readout from a neural population with partial correlation data

15 0.04922317 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

16 0.048346467 96 nips-2010-Fractionally Predictive Spiking Neurons

17 0.047386393 116 nips-2010-Identifying Patients at Risk of Major Adverse Cardiovascular Events Using Symbolic Mismatch

18 0.045509312 127 nips-2010-Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics

19 0.040772911 268 nips-2010-The Neural Costs of Optimal Control

20 0.038525991 177 nips-2010-Multitask Learning without Label Correspondences


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.102), (1, 0.031), (2, -0.104), (3, 0.073), (4, 0.043), (5, -0.013), (6, -0.021), (7, -0.131), (8, -0.054), (9, 0.017), (10, 0.018), (11, 0.032), (12, 0.076), (13, 0.003), (14, 0.008), (15, 0.043), (16, -0.018), (17, 0.042), (18, -0.014), (19, -0.011), (20, -0.094), (21, -0.006), (22, 0.094), (23, 0.078), (24, 0.011), (25, -0.071), (26, -0.141), (27, 0.085), (28, 0.084), (29, -0.095), (30, 0.044), (31, 0.049), (32, 0.046), (33, -0.021), (34, -0.019), (35, 0.066), (36, -0.073), (37, 0.013), (38, -0.038), (39, 0.075), (40, 0.07), (41, 0.08), (42, -0.009), (43, 0.048), (44, 0.024), (45, -0.042), (46, 0.101), (47, -0.127), (48, 0.015), (49, 0.105)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94154459 57 nips-2010-Decoding Ipsilateral Finger Movements from ECoG Signals in Humans

Author: Yuzong Liu, Mohit Sharma, Charles Gaona, Jonathan Breshears, Jarod Roland, Zachary Freudenburg, Eric Leuthardt, Kilian Q. Weinberger

Abstract: Several motor related Brain Computer Interfaces (BCIs) have been developed over the years that use activity decoded from the contralateral hemisphere to operate devices. Contralateral primary motor cortex is also the region most severely affected by hemispheric stroke. Recent studies have identified ipsilateral cortical activity in planning of motor movements and its potential implications for a stroke relevant BCI. The most fundamental functional loss after a hemispheric stroke is the loss of fine motor control of the hand. Thus, whether ipsilateral cortex encodes finger movements is critical to the potential feasibility of BCI approaches in the future. This study uses ipsilateral cortical signals from humans (using ECoG) to decode finger movements. We demonstrate, for the first time, successful finger movement detection using machine learning algorithms. Our results show high decoding accuracies in all cases which are always above chance. We also show that significant accuracies can be achieved with the use of only a fraction of all the features recorded and that these core features are consistent with previous physiological findings. The results of this study have substantial implications for advancing neuroprosthetic approaches to stroke populations not currently amenable to existing BCI techniques. 1

2 0.79103601 167 nips-2010-Mixture of time-warped trajectory models for movement decoding

Author: Elaine Corbett, Eric Perreault, Konrad Koerding

Abstract: Applications of Brain-Machine-Interfaces typically estimate user intent based on biological signals that are under voluntary control. For example, we might want to estimate how a patient with a paralyzed arm wants to move based on residual muscle activity. To solve such problems it is necessary to integrate obtained information over time. To do so, state of the art approaches typically use a probabilistic model of how the state, e.g. position and velocity of the arm, evolves over time – a so-called trajectory model. We wanted to further develop this approach using two intuitive insights: (1) At any given point of time there may be a small set of likely movement targets, potentially identified by the location of objects in the workspace or by gaze information from the user. (2) The user may want to produce movements at varying speeds. We thus use a generative model with a trajectory model incorporating these insights. Approximate inference on that generative model is implemented using a mixture of extended Kalman filters. We find that the resulting algorithm allows us to decode arm movements dramatically better than when we use a trajectory model with linear dynamics.

1 Introduction

When patients have lost a limb or the ability to communicate with the outside world, brain machine interfaces (BMIs) are often used to enable robotic prostheses or restore communication. To achieve this, the user's intended state of the device must be decoded from biological signals. In the context of Bayesian statistics, two aspects are important for the design of an estimator of a temporally evolving state: the observation model, which describes how measured variables relate to the system's state, and the trajectory model, which describes how the state changes over time in a probabilistic manner. Following this logic many recent BMI applications have relied on Bayesian estimation for a wide range of problems including the decoding of intended human [1] and animal [2] movements. In the context of BMIs, Bayesian approaches offer a principled way of formalizing the uncertainty about signals and thus often result in improvements over other signal processing techniques [1]-[3]. Most work on state estimation in dynamical systems has assumed linear dynamics and Gaussian noise. Under these circumstances, efficient algorithms result from belief propagation. The most frequent application uses the Kalman filter (KF), which recursively combines noisy state observations with the probabilistic evolution of state defined by the trajectory model to estimate the marginal distribution over states [4]. Such approaches have been used widely for applications including upper [1] and lower [5] extremity prosthetic devices, functional electric stimulation [6] and human computer interactions [7]. As these algorithms are so commonly used, it seems promising to develop extensions to nonlinear trajectory models that may better describe the probabilistic distribution of movements in everyday life. One salient departure from the standard assumptions is that people tend to produce both slow and fast movements, depending on the situation. Models with linear dynamics only allow such deviation through the noise term, which makes these models poor at describing the natural variation of movement speeds during real world tasks. Explicitly incorporating movement speed into the trajectory model should lead to better movement estimates. Knowledge of the target position should also strongly affect trajectory models.
After all, we tend to accelerate our arm early during movement and slow down later on. Target information can be linearly incorporated into the trajectory model, and this has greatly improved predictions [8]-[12]. Alternatively, if there are a small number of potential targets then a mixture of trajectory models approach [13] can be used. Here we are interested in the case where available data provide a prior over potential targets but where movement targets may be anywhere. We want to incorporate target uncertainty and allow generalization to novel targets. Prior information about potential targets could come from a number of sources but would generally be noisy. For example, activity in the dorsal premotor cortex provides information about intended target location prior to movement and may be used where such recordings are available [14]. Target information may also be found noninvasively by tracking eye movements. However, such data will generally provide non-zero priors for a number of possible target locations as the subject saccades over the scene. While subjects almost always look at a target before reaching for it [15], there may be a delay of up to a second between looking at the target and the reach – a time interval over which up to 3 saccades are typically made. Each of these fixations could be the target. Hence, a probabilistic distribution of targets is appropriate when using either neural recordings or eye tracking to estimate potential reach targets. Here we present an algorithm that uses a mixture of extended Kalman Filters (EKFs) to combine our insights related to the variation of movement speed and the availability of probabilistic target knowledge. Each of the mixture components allows the speed of the movement to vary continuously over time. We tested how well we could use EMGs and eye movements to decode hand position of humans performing a three-dimensional large workspace reaching task. We find that using a trajectory model that allows for probabilistic target information and variation of speed leads to dramatic improvements in decoding quality.

2 General Decoding Setting

We wanted to test how well different decoding algorithms can decode human movement, over a wide range of dynamics. While many recent studies have looked at more restrictive, two-dimensional movements, a system to restore arm function should produce a wide range of 3D trajectories. We recorded arm kinematics and EMGs of healthy subjects during unconstrained 3D reaches to targets over a large workspace. Two healthy subjects were asked to reach at slow, normal and fast speeds, as they would in everyday life. Subjects were seated as they reached towards 16 LEDs in blocks of 150s, which were located on two planes positioned such that all targets were just reachable (Fig 1A). The target LED was lit for one second prior to an auditory go cue, at which time the subject would reach to the target at the appropriate speed. Slow, normal and fast reaches were allotted 3s, 1.5s and 1s respectively; however, subjects determined the speed. An approximate total of 450 reaches were performed per subject. The subjects provided informed consent, and the protocol was approved by the Northwestern University Institutional Review Board. EMG signals were measured from the pectoralis major, and the three deltoid muscles of the shoulder.
This represents a small subset of the muscles involved in reaching, and approximates those muscles retaining some voluntary control following mid-level cervical spinal cord injuries. The EMG signals were band-pass filtered between 10 and 1,000 Hz, and subsequently anti-alias filtered. Hand, wrist, shoulder and head positions were tracked using an Optotrak motion analysis system. We simultaneously recorded eye movements with an ASL EYETRAC-6 head mounted eye tracker. Approximately 25% of the reaches were assigned to the test set, and the rest were used for training. Reaches for which either the motion capture data was incomplete, or there was visible motion artifact on the EMG were removed. As the state we used hand positions and joint angles (3 shoulder, 2 elbow, position, velocity and acceleration, 24 dimensions). Joint angles were calculated from the shoulder and wrist marker data using digitized bony landmarks which defined a coordinate system for the upper limb as detailed by Wu et al. [16]. As the motion data were sampled at 60Hz, the mean absolute value of the EMG in the corresponding 16.7ms windows was used as an observation of the state at each time-step. Algorithm accuracy was quantified by normalizing the root-mean-squared error by the straight line distance between the first and final position of the endpoint for each reach. We compared the algorithms statistically using repeated measures ANOVAs with Tukey post-hoc tests, treating reach and subject as random effects. In the rest of the paper we will ask how well these reaching movements can be decoded from EMG and eye-tracking data.

Figure 1: A Experimental setup and B sample kinematics and processed EMGs for one reach

3 Kalman Filters with Target Information

All models that we consider in this paper assume linear observations with Gaussian noise: (1) where x is the state, y is the observation and v is the measurement noise with p(v) ~ N(0,R), and R is the observation covariance matrix. The model fitted the measured EMGs with an average r2 of 0.55. This highlights the need to integrate information over time. The standard approach also assumes linear dynamics and Gaussian process noise: (2) where xt represents the hand and joint angle positions, w is the process noise with p(w) ~ N(0,Q), and Q is the state covariance matrix. The Kalman filter does optimal inference for this generative model. This model can effectively capture the dynamics of stereotypical reaches to a single target by appropriately tuning its parameters. However, when used to describe reaches to multiple targets, the model cannot describe target dependent aspects of reaching but boils down to a random drift model. Fast velocities are underestimated as they are unlikely under the trajectory model and there is excessive drift close to the target (Fig. 2A). In many decoding applications we may know the subject's target. A range of recent studies have addressed the issue of incorporating this information into the trajectory model [8, 13], and we might assume the effect of the target on the dynamics to be linear. This naturally suggests adding the target to the state space, which works well in practice [9, 12]. By appending the target to the state vector (KFT), the simple linear format of the KF may be retained: (3) where xTt is the vector of target positions, with dimensionality less than or equal to that of xt.
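The target-augmented filter (KFT) described around equation (3) — whose matrices did not survive extraction — can be sketched generically. The block below is not the authors' fitted model: the dynamics matrix, noise levels, and the simple one-dimensional "pull toward the target" structure are assumptions chosen only to show how a static target enters the state vector of a standard Kalman filter.

```python
import numpy as np

dt, pull = 0.05, 2.0          # time step and illustrative pull-to-target gain
# State: [position, velocity, target]; the target row is the identity (static).
A = np.array([[1.0, dt, 0.0],
              [-pull * dt, 1.0, pull * dt],
              [0.0, 0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0]])          # we observe noisy position only
Q = np.diag([1e-4, 1e-2, 1e-6])          # process noise
R = np.array([[1e-2]])                   # observation noise

def kf_step(x, P, y):
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Simulate a noisy reach toward target = 1.0 and filter it.
rng = np.random.default_rng(2)
x_true = np.array([0.0, 0.0, 1.0])
x_est, P = np.array([0.0, 0.0, 1.0]), np.eye(3) * 0.1
for _ in range(100):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(3), Q)
    y = C @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    x_est, P = kf_step(x_est, P, y)
print("final position estimate:", round(x_est[0], 3))
```

The only structural point the sketch is meant to convey is the last row of A: the target component is copied forward unchanged, so the filter treats it as an extra, static state that can be initialized or inferred.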
This trajectory model thus allows describing both the rapid acceleration that characterizes the beginning of a reach and the stabilization towards its end. We compared the accuracy of the KF and the KFT to the Single Target Model (STM), a KF trained only on reaches to the target being tested (Fig. 2). The STM represents the best possible prediction that could be obtained with a Kalman filter. Assuming the target is perfectly known, we implemented the KFT by correctly initializing the target state xT at the beginning of the reach. We will relax this assumption below. The initial hand and joint angle positions were also assumed to be known.

Figure 2: A Sample reach and predictions and B average accuracies with standard errors for KFT, KF and MTM.

Consistent with the recent literature, both methods that incorporated target information produced higher prediction accuracy than the standard KF (both p<0.0001). Interestingly, there was no significant difference between the KFT and the STM (p=0.9). It seems that when we have knowledge of the target, we do not lose much by training a single model over the whole workspace rather than modeling the targets individually. This is encouraging, as we desire a BMI system that can generalize to any target within the workspace, not just specifically to those that are available in the training data. Clearly, adding the target to the state space allows the dynamics of typical movements to be modeled effectively, resulting in dramatic increases in decoding performance.

4 Time Warping

4.1 Implementing a time-warped trajectory model

While the KFT above can capture the general reach trajectory profile, it does not allow for natural variability in the speed of movements. Depending on our task objectives, which would not directly be observed by a BMI, we might lazily reach toward a target or move at maximal speed. We aim to change the trajectory model to explicitly incorporate a warping factor by which the average movement speed is scaled, allowing for such variability. As the movement speed will be positive in all practical cases, we model the logarithm of this factor, and append it to the state vector: (4) We create a time-warped trajectory model by noting that if the average rate of a trajectory is to be scaled by a factor S, the position at time t will equal that of the original trajectory at time St. Differentiating, the velocity will be multiplied by S, and the acceleration by S^2. For simplicity, the trajectory noise is assumed to be additive and Gaussian, and the model is assumed to be stationary: (5) where Ip is the p-dimensional identity matrix and 0p is a p × p matrix of zeros. Only the terms used to predict the acceleration states need to be estimated to build the state transition matrix, and they are scaled as a nonlinear function of xs. After adding the variable movement speed to the state space the system is no longer linear. Therefore we need a different solution strategy. Instead of the typical KFT we use the Extended Kalman Filter (EKFT) to implement a nonlinear trajectory model by linearizing the dynamics around the best estimate at each time-step [17]. With this approach we add only small computational overhead to the KFT recursions.

4.2 Training the time warping model

The filter parameters were trained using a variant of the Expectation Maximization (EM) algorithm [18]. For extended Kalman filter learning the initialization for the variables may matter.
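A minimal sketch of the time-warping idea from Section 4.1: the state carries a log speed-scale, the predicted motion is scaled by the exponential of that state, and an extended Kalman filter linearizes the now-nonlinear dynamics with a Jacobian. Equations (4)–(5) are missing from this extraction, so the concrete model below (1-D constant-velocity kinematics plus a log-scale state) is an assumption, not the paper's parametrisation.

```python
import numpy as np

dt = 0.05
# Process noise on [position, velocity, log-scale]: velocity is trusted,
# so the scale state has to absorb any speed mismatch.
Q = np.diag([1e-5, 1e-6, 1e-3])
R = np.array([[1e-2]])
C = np.array([[1.0, 0.0, 0.0]])   # observe noisy position

def f(x):
    """Nonlinear dynamics: kinematics are sped up by s = exp(log_s)."""
    pos, vel, log_s = x
    s = np.exp(log_s)
    return np.array([pos + s * vel * dt, vel, log_s])

def jacobian(x):
    """Jacobian of f, used to propagate the covariance in the EKF."""
    pos, vel, log_s = x
    s = np.exp(log_s)
    return np.array([[1.0, s * dt, s * vel * dt],
                     [0.0, 1.0,    0.0],
                     [0.0, 0.0,    1.0]])

def ekf_step(x, P, y):
    F = jacobian(x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(3) - K @ C) @ P_pred
    return x_new, P_new

# A reach executed twice as fast as the model's average speed:
rng = np.random.default_rng(3)
x_true = np.array([0.0, 1.0, np.log(2.0)])
x_est, P = np.array([0.0, 1.0, 0.0]), np.eye(3) * 0.1
for _ in range(60):
    x_true = f(x_true)
    y = C @ x_true + rng.normal(0.0, 0.1, size=1)
    x_est, P = ekf_step(x_est, P, y)
print("estimated speed scale:", round(float(np.exp(x_est[2])), 2))
```

Modelling the logarithm of the scale keeps the speed factor positive, which mirrors the motivation given in the text.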
S was initialized with the ground truth average reach speeds for each movement relative to the average speed across all movements. The state transition parameters were estimated using nonlinear least squares regression, while C, Q and R were estimated linearly for the new system, using the maximum likelihood solution [18] (M-step). For the E-step we used a standard extended Kalman smoother. We thus found the expected values for the states given the current filter parameters. For this computation, and later when testing the algorithm, xs was initialized to its average value across all reaches while the remaining states were initialized to their true values. The smoothed estimate for xs was then used, along with the true values for the other states, to re-estimate the filter parameters in the M-step as before. We alternated between the E and M steps until the log likelihood converged (which it did in all cases). Following the training procedure, the diagonal of the state covariance matrix Q corresponding to xs was set to the variance of the smoothed xs over all reaches, according to how much this state should be allowed to change during prediction. This allowed the estimate of xs to develop over the course of the reach due to the evidence provided by the observations, better capturing the dynamics of reaches at different speeds.

4.3 Performance of the time-warped EKFT

Incorporating time warping explicitly into the trajectory model produced a noticeable increase in decoding performance over the KFT. As the speed state xs is estimated throughout the course of the reach, based on the evidence provided by the observations, the trajectory model has the flexibility to follow the dynamics of the reach more accurately (Fig. 3). While at the normal self-selected speed the difference between the algorithms is small, for the slow and fast speeds, where the dynamics deviate from average, there is a clear advantage to the time warping model.

Figure 3: Hand positions and predictions of the KFT and EKFT for sample reaches at A slow, B normal and C fast speeds. Note the different time scales between reaches.

The models were first trained using data from all speeds (Fig. 4A). The EKFT was 1.8% more accurate on average (p<0.01), and the effect was significant at the slow (1.9%, p<0.05) and the fast (2.8%, p<0.01), but not at the normal (p=0.3) speed. We also trained the models from data using only reaches at the self-selected normal speed, as we wanted to see if there was enough variation to effectively train the EKFT (Fig. 4B). Interestingly, the performance of the EKFT was reduced by only 0.6%, and the KFT by 1.1%. The difference in performance between the EKFT and KFT was even more pronounced on average (2.3%, p<0.001), and for the slow and fast speeds (3.6 and 4.1%, both p<0.0001). At the normal speed, the algorithms again were not statistically different (p=0.6). This result demonstrates that the EKFT is a practical option for a real BMI system, as it is not necessary to greatly vary the speeds while collecting training data for the model to be effective over a wide range of intended speeds. Explicitly incorporating speed information into the trajectory model helps decoding, by modeling the natural variation in volitional speed.

Figure 4: Mean and standard error of EKFT and KFT accuracy at the different subject-selected speeds. Models were trained on reaches at A all speeds and B just normal speed reaches.
Asterisks indicate statistically significant differences between the algorithms.

5 Mixtures of Targets

So far, we have assumed that the targets of our reaches are perfectly known. In a real-world system, there will be uncertainty about the intended target of the reach. However, in typical applications there are a small number of possible objectives. Here we address this situation. Drawing on the recent literature, we use a mixture model to consider each of the possible targets [11, 13]. We condition the posterior probability for the state on the N possible targets, T: (6) Using Bayes' Rule, this equation becomes: (7) As we are dealing with a mixture model, we perform the Kalman filter recursion for each possible target, xT, and our solution is a weighted sum of the outputs. The weights are proportional to the prior for that target and to the likelihood of the model given that target; the denominator is independent of the target and does not need to be calculated. We tested mixtures of both algorithms, the mKFT and mEKFT, with real uncertain priors obtained from eye-tracking in the one-second period preceding movement. As the targets were situated on two planes, the three-dimensional location of the eye gaze was found by projecting its direction onto those planes. The first, middle and last eye samples were selected, and all other samples were assigned to a group according to which of the three was closest. The mean and variance of these three groups were used to initialize three Kalman filters in the mixture model. The priors of the three groups were assigned proportional to the number of samples in them. If the subject looks at multiple positions prior to reaching, this method ensures with a high probability that the correct target was accounted for in one of the filters in the mixture. We also compared the MTM approach of Yu et al. [13], where a different KF model was generated for each target, and a mixture is performed over these models. This approach explicitly captures the dynamics of stereotypical reaches to specific targets. Given perfect target information, it would reduce to the STM described above. Priors for the MTM were found by assigning each valid eye sample to its closest two targets, and weighting the models proportional to the number of samples assigned to the corresponding target, divided by its distance from the mean of those samples. We tried other ways of assigning priors and the one presented gave the best results. We calculated the reduction in decoding quality when instead of perfect priors we provide eye-movement based noisy priors (Fig. 5). The accuracies of the mEKFT, the mKFT and the MTM were only degraded by 0.8, 1.9 and 2.1% respectively, compared to the perfect prior situation. The mEKFT was still close to 10% better than the KF. The mixture model framework is effective in accounting for uncertain priors.

Figure 5: Mean and standard errors of accuracy for algorithms with perfect priors, and uncertain priors with full and partial training set. The asterisk indicates a statistically significant effect between the two training types, where real priors are used.

Here, only reaches at normal speed were used to train the models, as this is a more realistic training set for a BMI application. This accounts for the degraded performance of the MTM with perfect priors relative to the STM from above (Fig. 2). With even more stereotyped training data for each target, the MTM doesn't generalize as well to new speeds.
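Equations (6)–(7) did not extract, but the mixture logic they describe — run one filter per candidate target, weight each by its prior times the likelihood it assigns to the observations, and report the weighted sum — can be sketched compactly. The toy below uses independent 1-D Kalman filters and made-up priors in place of the eye-tracking-derived ones; it is meant only to mirror the weighting, not to reproduce the authors' mKFT/mEKFT.

```python
import numpy as np

dt, pull = 0.05, 2.0
A = np.array([[1.0, dt], [-pull * dt, 1.0]])            # state: [position, velocity]
b = lambda tgt: np.array([0.0, pull * dt * tgt])        # constant drive toward target
C = np.array([[1.0, 0.0]])
Q, R = np.diag([1e-4, 1e-2]), np.array([[1e-2]])

def kf_step(x, P, y, tgt):
    """One predict/update step; also returns the log-likelihood of y."""
    x_pred = A @ x + b(tgt)
    P_pred = A @ P @ A.T + Q
    innov = y - C @ x_pred
    S = C @ P_pred @ C.T + R
    loglik = -0.5 * (innov @ np.linalg.solve(S, innov)
                     + np.log(np.linalg.det(2 * np.pi * S)))
    K = P_pred @ C.T @ np.linalg.inv(S)
    return x_pred + K @ innov, (np.eye(2) - K @ C) @ P_pred, float(loglik)

targets = np.array([0.5, 1.0, 1.5])            # candidate targets (e.g. fixated locations)
log_w = np.log(np.array([0.2, 0.5, 0.3]))      # assumed priors standing in for eye tracking
states = [np.zeros(2) for _ in targets]
covs = [np.eye(2) * 0.1 for _ in targets]

rng = np.random.default_rng(4)
x_true = np.zeros(2)
for _ in range(80):                            # the true reach goes to target 1.0
    x_true = A @ x_true + b(1.0) + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x_true + rng.normal(0.0, 0.1, size=1)
    for i, tgt in enumerate(targets):
        states[i], covs[i], ll = kf_step(states[i], covs[i], y, tgt)
        log_w[i] += ll
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    x_mix = sum(wi * si for wi, si in zip(w, states))   # posterior-weighted estimate

print("posterior target weights:", w.round(3))
print("mixture position estimate:", round(x_mix[0], 3))
```

With per-target priors taken from eye tracking instead of the hard-coded `log_w`, this is the generic shape of the weighting the text describes.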
We also wanted to know if the algorithms could generalize to new targets. In a real application, the available training data will generally not span the entire useable workspace. We compared the algorithms where reaches to all targets except the one being tested had been used to train the models. The performance of the MTM was significantly degraded unsurprisingly, as it was designed for reaches to a set of known targets. Performance of the mKFT and mEKFT degraded by about 1%, but not significantly (both p>0.7), demonstrating that the continuous approach to target information is preferable when the target could be anywhere in space, not just at locations for which training data is available.

6 Discussion and conclusions

The goal of this work was to design a trajectory model that would improve decoding for BMIs with an application to reaching. We incorporated two features that prominently influence the dynamics of natural reach: the movement speed and the target location. Our approach is appropriate where uncertain target information is available. The model generalizes well to new regions of the workspace for which there is no training data, and across a broad range of reaching dynamics to widely spaced targets in three dimensions. The advantages over linear models in decoding precision we report here could be equally obtained using mixtures over many targets and speeds. While mixture models [11, 13] could allow for slow versus fast movements and any number of potential targets, this strategy will generally require many mixture components. Such an approach would require a lot more training data, as we have shown that it does not generalize well. It would also be run-time intensive which is problematic for prosthetic devices that rely on low power controllers. In contrast, the algorithm introduced here only takes a small amount of additional run-time in comparison to the standard KF approach. The EKF is only marginally slower than the standard KF and the algorithm will not generally need to consider more than 3 mixture components assuming the subject fixates the target within the second preceding the reach. In this paper we assumed that subjects always would fixate a reach target – along with other non-targets. While this is close to the way humans usually coordinate eyes and reaches [15], there might be cases where people do not fixate a reach target. Our approach could be easily extended to deal with such situations by adding a dummy mixture component that allows the description of movements to any target. As an alternative to mixture approaches, a system can explicitly estimate the target position in the state vector [9]. This approach, however, would not straightforwardly allow for the rich target information available; we look at the target but also at other locations, strongly suggesting mixture distributions. A combination of the two approaches could further improve decoding quality. We could both estimate speed and target position for the EKFT in a continuous manner while retaining the mixture over target priors. We believe that the issues that we have addressed here are almost universal. Virtually all types of movements are executed at varying speed. A probabilistic distribution for a small number of action candidates may also be expected in most BMI applications – after all there are usually only a small number of actions that make sense in a given environment.
While this work is presented in the context of decoding human reaching, it may be applied to a wide range of BMI applications including lower limb prosthetic devices and human computer interactions, as well as different signal sources such as electrode grid recordings and electroencephalograms. The increased user control in conveying their intended movements would significantly improve the functionality of a neuroprosthetic device.

Acknowledgements

The authors thank T. Haswell, E. Krepkovich, and V. Ravichandran for assistance with experiments. This work was funded by the NSF Program in Cyber-Physical Systems.

References

[1] L.R. Hochberg, M.D. Serruya, G.M. Friehs, J.A. Mukand, M. Saleh, A.H. Caplan, A. Branner, D. Chen, R.D. Penn, and J.P. Donoghue, “Neuronal ensemble control of prosthetic devices by a human with tetraplegia,” Nature, vol. 442, 2006, pp. 164–171.
[2] W. Wu, Y. Gao, E. Bienenstock, J.P. Donoghue, and M.J. Black, “Bayesian population decoding of motor cortical activity using a Kalman filter,” Neural Computation, vol. 18, 2006, pp. 80–118.
[3] W. Wu, M.J. Black, Y. Gao, E. Bienenstock, M. Serruya, A. Shaikhouni, and J.P. Donoghue, “Neural decoding of cursor motion using a Kalman filter,” Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference, 2003, p. 133.
[4] R.E. Kalman, “A new approach to linear filtering and prediction problems,” Journal of Basic Engineering, vol. 82, 1960, pp. 35–45.
[5] G.G. Scandaroli, G.A. Borges, J.Y. Ishihara, M.H. Terra, A.F.D. Rocha, and F.A.D.O. Nascimento, “Estimation of foot orientation with respect to ground for an above knee robotic prosthesis,” Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA: IEEE Press, 2009, pp. 1112–1117.
[6] I. Cikajlo, Z. Matjačić, and T. Bajd, “Efficient FES triggering applying Kalman filter during sensory supported treadmill walking,” Journal of Medical Engineering & Technology, vol. 32, 2008, pp. 133–144.
[7] S. Kim, J.D. Simeral, L.R. Hochberg, J.P. Donoghue, and M.J. Black, “Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia,” Journal of Neural Engineering, vol. 5, 2008, pp. 455–476.
[8] L. Srinivasan, U.T. Eden, A.S. Willsky, and E.N. Brown, “A state-space analysis for reconstruction of goal-directed movements using neural signals,” Neural Computation, vol. 18, 2006, pp. 2465–2494.
[9] G.H. Mulliken, S. Musallam, and R.A. Andersen, “Decoding trajectories from posterior parietal cortex ensembles,” Journal of Neuroscience, vol. 28, 2008, p. 12913.
[10] W. Wu, J.E. Kulkarni, N.G. Hatsopoulos, and L. Paninski, “Neural decoding of hand motion using a linear state-space model with hidden states,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 17, 2009, p. 1.
[11] J.E. Kulkarni and L. Paninski, “State-space decoding of goal-directed movements,” IEEE Signal Processing Magazine, vol. 25, 2008, p. 78.
[12] C. Kemere and T. Meng, “Optimal estimation of feed-forward-controlled linear systems,” Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'05), 2005.
[13] B.M. Yu, C. Kemere, G. Santhanam, A. Afshar, S.I. Ryu, T.H. Meng, M. Sahani, and K.V. Shenoy, “Mixture of trajectory models for neural decoding of goal-directed movements,” Journal of Neurophysiology, vol. 97, 2007, p. 3763.
[14] N. Hatsopoulos, J. Joshi, and J.G. O'Leary, “Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles,” Journal of Neurophysiology, vol. 92, 2004, p. 1165.
[15] R.S. Johansson, G. Westling, A. Backstrom, and J.R. Flanagan, “Eye-hand coordination in object manipulation,” Journal of Neuroscience, vol. 21, 2001, p. 6917.
[16] G. Wu, F.C. van der Helm, H.E.J. Veeger, M. Makhsous, P. Van Roy, C. Anglin, J. Nagels, A.R. Karduna, and K. McQuade, “ISB recommendation on definitions of joint coordinate systems of various joints for the reporting of human joint motion–Part II: shoulder, elbow, wrist and hand,” Journal of Biomechanics, vol. 38, 2005, pp. 981–992.
[17] D. Simon, Optimal State Estimation: Kalman, H-infinity and Nonlinear Approaches, John Wiley and Sons, 2006.
[18] Z. Ghahramani and G.E. Hinton, “Parameter estimation for linear dynamical systems,” University of Toronto technical report CRG-TR-96-2, vol. 6, 1996.

3 0.70998752 29 nips-2010-An Approximate Inference Approach to Temporal Optimization in Optimal Control

Author: Konrad Rawlik, Marc Toussaint, Sethu Vijayakumar

Abstract: Algorithms based on iterative local approximations present a practical approach to optimal control in robotic systems. However, they generally require the temporal parameters (e.g., the movement duration or the time point of reaching an intermediate goal) to be specified a priori. Here, we present a methodology that is capable of jointly optimizing the temporal parameters in addition to the control command profiles. The presented approach is based on a Bayesian canonical time formulation of the optimal control problem, with the temporal mapping from canonical to real time parametrised by an additional control variable. An approximate EM algorithm is derived that efficiently optimizes both the movement duration and control commands, offering, for the first time, a practical approach to tackling generic via-point problems in a systematic way under the optimal control framework. The proposed approach, which is applicable to plants with non-linear dynamics as well as arbitrary state dependent and quadratic control costs, is evaluated on realistic simulations of a redundant robotic plant.

4 0.50552785 171 nips-2010-Movement extraction by detecting dynamics switches and repetitions

Author: Silvia Chiappa, Jan R. Peters

Abstract: Many time-series such as human movement data consist of a sequence of basic actions, e.g., forehands and backhands in tennis. Automatically extracting and characterizing such actions is an important problem for a variety of different applications. In this paper, we present a probabilistic segmentation approach in which an observed time-series is modeled as a concatenation of segments corresponding to different basic actions. Each segment is generated through a noisy transformation of one of a few hidden trajectories representing different types of movement, with possible time re-scaling. We analyze three different approximation methods for dealing with model intractability, and demonstrate how the proposed approach can successfully segment table tennis movements recorded using a robot arm as haptic input device. 1

5 0.41972613 19 nips-2010-A rational decision making framework for inhibitory control

Author: Pradeep Shenoy, Angela J. Yu, Rajesh P. Rao

Abstract: Intelligent agents are often faced with the need to choose actions with uncertain consequences, and to modify those actions according to ongoing sensory processing and changing task demands. The requisite ability to dynamically modify or cancel planned actions is known as inhibitory control in psychology. We formalize inhibitory control as a rational decision-making problem, and apply it to the classical stop-signal task. Using Bayesian inference and stochastic control tools, we show that the optimal policy systematically depends on various parameters of the problem, such as the relative costs of different action choices, the noise level of sensory inputs, and the dynamics of changing environmental demands. Our normative model accounts for a range of behavioral data in humans and animals in the stop-signal task, suggesting that the brain implements statistically optimal, dynamically adaptive, and reward-sensitive decision-making in the context of inhibitory control problems. 1

6 0.41418773 97 nips-2010-Functional Geometry Alignment and Localization of Brain Areas

7 0.39307433 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

8 0.38159874 248 nips-2010-Sparse Inverse Covariance Selection via Alternating Linearization Methods

9 0.37383932 123 nips-2010-Individualized ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles

10 0.3724184 34 nips-2010-Attractor Dynamics with Synaptic Depression

11 0.36964819 95 nips-2010-Feature Transitions with Saccadic Search: Size, Color, and Orientation Are Not Alike

12 0.36229885 127 nips-2010-Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics

13 0.34813225 128 nips-2010-Infinite Relational Modeling of Functional Connectivity in Resting State fMRI

14 0.33397612 50 nips-2010-Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories

15 0.32898244 156 nips-2010-Learning to combine foveal glimpses with a third-order Boltzmann machine

16 0.32004809 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

17 0.31181541 249 nips-2010-Spatial and anatomical regularization of SVM for brain image analysis

18 0.29884407 147 nips-2010-Learning Multiple Tasks with a Sparse Matrix-Normal Penalty

19 0.29624152 111 nips-2010-Hallucinations in Charles Bonnet Syndrome Induced by Homeostasis: a Deep Boltzmann Machine Model

20 0.29501837 157 nips-2010-Learning to localise sounds with spiking neural networks


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.04), (17, 0.015), (27, 0.119), (30, 0.032), (35, 0.035), (45, 0.214), (50, 0.03), (52, 0.025), (58, 0.292), (60, 0.02), (77, 0.04), (90, 0.033)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.82672721 57 nips-2010-Decoding Ipsilateral Finger Movements from ECoG Signals in Humans

Author: Yuzong Liu, Mohit Sharma, Charles Gaona, Jonathan Breshears, Jarod Roland, Zachary Freudenburg, Eric Leuthardt, Kilian Q. Weinberger

Abstract: Several motor related Brain Computer Interfaces (BCIs) have been developed over the years that use activity decoded from the contralateral hemisphere to operate devices. Contralateral primary motor cortex is also the region most severely affected by hemispheric stroke. Recent studies have identified ipsilateral cortical activity in planning of motor movements and its potential implications for a stroke relevant BCI. The most fundamental functional loss after a hemispheric stroke is the loss of fine motor control of the hand. Thus, whether ipsilateral cortex encodes finger movements is critical to the potential feasibility of BCI approaches in the future. This study uses ipsilateral cortical signals from humans (using ECoG) to decode finger movements. We demonstrate, for the first time, successful finger movement detection using machine learning algorithms. Our results show high decoding accuracies in all cases which are always above chance. We also show that significant accuracies can be achieved with the use of only a fraction of all the features recorded and that these core features are consistent with previous physiological findings. The results of this study have substantial implications for advancing neuroprosthetic approaches to stroke populations not currently amenable to existing BCI techniques. 1

2 0.75799686 150 nips-2010-Learning concept graphs from text with stick-breaking priors

Author: America Chambers, Padhraic Smyth, Mark Steyvers

Abstract: We present a generative probabilistic model for learning general graph structures, which we term concept graphs, from text. Concept graphs provide a visual summary of the thematic content of a collection of documents—a task that is difficult to accomplish using only keyword search. The proposed model can learn different types of concept graph structures and is capable of utilizing partial prior knowledge about graph structure as well as labeled documents. We describe a generative model that is based on a stick-breaking process for graphs, and a Markov Chain Monte Carlo inference procedure. Experiments on simulated data show that the model can recover known graph structure when learning in both unsupervised and semi-supervised modes. We also show that the proposed model is competitive in terms of empirical log likelihood with existing structure-based topic models (hPAM and hLDA) on real-world text data sets. Finally, we illustrate the application of the model to the problem of updating Wikipedia category graphs. 1
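The "stick-breaking process" named in this abstract has a short generative core: repeatedly break off a Beta-distributed fraction of the remaining stick to obtain a countable set of component weights. The sketch below shows that standard GEM construction, truncated for illustration; it is not the paper's graph-valued extension, and the concentration value is an arbitrary assumption.

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    """Truncated GEM(alpha) sample: weights from repeatedly breaking a unit stick."""
    betas = rng.beta(1.0, alpha, size=truncation)     # fraction broken off each round
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining                          # weight_k = beta_k * prod_{j<k}(1 - beta_j)

rng = np.random.default_rng(5)
weights = stick_breaking_weights(alpha=2.0, truncation=25, rng=rng)
print("first five weights:", weights[:5].round(3))
print("mass captured by truncation:", round(weights.sum(), 3))
```

In the paper the stick-breaking weights are tied to a graph over concepts; the snippet stops at the weight construction itself.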

3 0.66092616 240 nips-2010-Simultaneous Object Detection and Ranking with Weak Supervision

Author: Matthew Blaschko, Andrea Vedaldi, Andrew Zisserman

Abstract: A standard approach to learning object category detectors is to provide strong supervision in the form of a region of interest (ROI) specifying each instance of the object in the training images [17]. In this work our goal is to learn from heterogeneous labels, in which some images are only weakly supervised, specifying only the presence or absence of the object or a weak indication of object location, whilst others are fully annotated. To this end we develop a discriminative learning approach and make two contributions: (i) we propose a structured output formulation for weakly annotated images where full annotations are treated as latent variables; and (ii) we propose to optimize a ranking objective function, allowing our method to more effectively use negatively labeled images to improve detection average precision performance. The method is demonstrated on the benchmark INRIA pedestrian detection dataset of Dalal and Triggs [14] and the PASCAL VOC dataset [17], and it is shown that for a significant proportion of weakly supervised images the performance achieved is very similar to the fully supervised (state of the art) results. 1

4 0.66011268 98 nips-2010-Functional form of motion priors in human motion perception

Author: Hongjing Lu, Tungyou Lin, Alan Lee, Luminita Vese, Alan L. Yuille

Abstract: It has been speculated that the human motion system combines noisy measurements with prior expectations in an optimal, or rational, manner. The basic goal of our work is to discover experimentally which prior distribution is used. More specifically, we seek to infer the functional form of the motion prior from the performance of human subjects on motion estimation tasks. We restricted ourselves to priors which combine three terms for motion slowness, first-order smoothness, and second-order smoothness. We focused on two functional forms for prior distributions: L2-norm and L1-norm regularization corresponding to the Gaussian and Laplace distributions respectively. In our first experimental session we estimate the weights of the three terms for each functional form to maximize the fit to human performance. We then measured human performance for motion tasks and found that we obtained better fit for the L1-norm (Laplace) than for the L2-norm (Gaussian). We note that the L1-norm is also a better fit to the statistics of motion in natural environments. In addition, we found large weights for the second-order smoothness term, indicating the importance of high-order smoothness compared to slowness and lower-order smoothness. To validate our results further, we used the best fit models using the L1-norm to predict human performance in a second session with different experimental setups. Our results showed excellent agreement between human performance and model prediction – ranging from 3% to 8% for five human subjects over ten experimental conditions – and give further support that the human visual system uses an L1-norm (Laplace) prior.
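The three prior terms this abstract compares — slowness, first-order smoothness, and second-order smoothness, each under an L1 (Laplace) or L2 (Gaussian) norm — can be written down directly for a one-dimensional velocity profile. The helper below is only a sketch of those energy terms with made-up weights; it is not the authors' fitted model or their two-dimensional motion representation.

```python
import numpy as np

def motion_prior_energy(v, weights=(1.0, 1.0, 1.0), norm="L1"):
    """Energy of a velocity profile v under slowness + 1st/2nd-order smoothness terms."""
    penalty = np.abs if norm == "L1" else np.square
    slowness = penalty(v).sum()                 # prefer slow motion
    first_order = penalty(np.diff(v)).sum()     # prefer gradual velocity changes
    second_order = penalty(np.diff(v, n=2)).sum()
    w0, w1, w2 = weights
    return w0 * slowness + w1 * first_order + w2 * second_order

v = np.array([0.0, 0.2, 0.5, 0.6, 0.55, 0.5])   # toy velocity samples along a trajectory
print("L1 prior energy:", round(motion_prior_energy(v, norm="L1"), 3))
print("L2 prior energy:", round(motion_prior_energy(v, norm="L2"), 4))
```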

5 0.65820014 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

Author: Ryan Kelly, Matthew Smith, Robert Kass, Tai S. Lee

Abstract: Activity of a neuron, even in the early sensory areas, is not simply a function of its local receptive field or tuning properties, but depends on global context of the stimulus, as well as the neural context. This suggests the activity of the surrounding neurons and global brain states can exert considerable influence on the activity of a neuron. In this paper we implemented an L1 regularized point process model to assess the contribution of multiple factors to the firing rate of many individual units recorded simultaneously from V1 with a 96-electrode “Utah” array. We found that the spikes of surrounding neurons indeed provide strong predictions of a neuron’s response, in addition to the neuron’s receptive field transfer function. We also found that the same spikes could be accounted for with the local field potentials, a surrogate measure of global network states. This work shows that accounting for network fluctuations can improve estimates of single trial firing rate and stimulus-response transfer functions. 1
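An L1-regularized point-process model of the kind this abstract describes is, in discrete time, a Poisson GLM with a lasso penalty, where the covariates can include the recent spiking of neighbouring neurons. The proximal-gradient sketch below is a generic illustration of that fit on synthetic data; the covariate construction, step size, and penalty strength are assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)
n_bins, n_features = 5000, 20
X = rng.standard_normal((n_bins, n_features)) * 0.3   # e.g. recent activity of other cells
w_true = np.zeros(n_features)
w_true[:3] = [0.8, -0.6, 0.5]                          # only a few neurons truly matter
y = rng.poisson(np.exp(X @ w_true - 1.0))              # spike counts per bin (baseline exp(-1))

def fit_l1_poisson(X, y, lam=25.0, lr=1e-4, n_iter=5000):
    """Proximal gradient (ISTA) for Poisson regression with an L1 penalty on the weights."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        rate = np.exp(X @ w + b)
        grad_w = X.T @ (rate - y)
        grad_b = np.sum(rate - y)
        w = w - lr * grad_w
        b = b - lr * grad_b                              # intercept left unpenalized
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)   # soft threshold
    return w, b

w_hat, b_hat = fit_l1_poisson(X, y)
print("nonzero weights:", np.flatnonzero(np.abs(w_hat) > 1e-3))
```

In practice one would build X from lagged spike counts of the other recorded units, the stimulus, and local field potentials, and choose the penalty strength by cross-validation.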

6 0.65782893 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

7 0.65428174 161 nips-2010-Linear readout from a neural population with partial correlation data

8 0.65342581 17 nips-2010-A biologically plausible network for the computation of orientation dominance

9 0.65193617 268 nips-2010-The Neural Costs of Optimal Control

10 0.64908326 6 nips-2010-A Discriminative Latent Model of Image Region and Object Tag Correspondence

11 0.64895886 194 nips-2010-Online Learning for Latent Dirichlet Allocation

12 0.64629292 20 nips-2010-A unified model of short-range and long-range motion perception

13 0.64501685 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

14 0.64329344 131 nips-2010-Joint Analysis of Time-Evolving Binary Matrices and Associated Documents

15 0.64220548 109 nips-2010-Group Sparse Coding with a Laplacian Scale Mixture Prior

16 0.64095074 55 nips-2010-Cross Species Expression Analysis using a Dirichlet Process Mixture Model with Latent Matchings

17 0.64076751 81 nips-2010-Evaluating neuronal codes for inference using Fisher information

18 0.64015496 56 nips-2010-Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication

19 0.64000446 249 nips-2010-Spatial and anatomical regularization of SVM for brain image analysis

20 0.63952792 123 nips-2010-Individualized ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles