nips nips2004 nips2004-56 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Pradeep Shenoy, Rajesh P. Rao
Abstract: We describe an approach to building brain-computer interfaces (BCI) based on graphical models for probabilistic inference and learning. We show how a dynamic Bayesian network (DBN) can be used to infer probability distributions over brain- and body-states during planning and execution of actions. The DBN is learned directly from observed data and allows measured signals such as EEG and EMG to be interpreted in terms of internal states such as intent to move, preparatory activity, and movement execution. Unlike traditional classification-based approaches to BCI, the proposed approach (1) allows continuous tracking and prediction of internal states over time, and (2) generates control signals based on an entire probability distribution over states rather than binary yes/no decisions. We present preliminary results of brain- and body-state estimation using simultaneous EEG and EMG signals recorded during a self-paced left/right hand movement task. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract We describe an approach to building brain-computer interfaces (BCI) based on graphical models for probabilistic inference and learning. [sent-7, score-0.128]
2 We show how a dynamic Bayesian network (DBN) can be used to infer probability distributions over brain- and body-states during planning and execution of actions. [sent-8, score-0.058]
3 The DBN is learned directly from observed data and allows measured signals such as EEG and EMG to be interpreted in terms of internal states such as intent to move, preparatory activity, and movement execution. [sent-9, score-0.523]
4 Unlike traditional classification-based approaches to BCI, the proposed approach (1) allows continuous tracking and prediction of internal states over time, and (2) generates control signals based on an entire probability distribution over states rather than binary yes/no decisions. [sent-10, score-0.315]
5 We present preliminary results of brain- and body-state estimation using simultaneous EEG and EMG signals recorded during a self-paced left/right hand movement task. [sent-11, score-0.372]
6 Several researchers have demonstrated the feasibility of using EEG signals as a non-invasive medium for building human BCIs [1, 2, 3, 4, 5] (see also [6] and articles in the same issue). [sent-13, score-0.084]
7 A central theme in much of this research is the postulation of a discrete brain state that the user maintains while performing one of a set of physical or imagined actions. [sent-14, score-0.319]
8 The goal is to decode the hidden brain state from the observable EEG signal, and to use the decoded state to control a robot or a cursor on a computer screen. [sent-15, score-0.577]
9 Previous approaches (e.g., [1, 2, 4]) have utilized classification methods applied to time slices of EEG data to discriminate between a small set of brain states (e.g., left versus right hand movement). [sent-18, score-0.359]
10 These methods typically involve various forms of preprocessing (such as band-pass filtering or temporal smoothing) as well as feature extraction on time slices known to contain one of the chosen set of brain states. [sent-21, score-0.234]
11 As a result, it is difficult to obtain a continuous estimate of the brain state or to associate an uncertainty with the current estimate. [sent-27, score-0.319]
12 In this paper, we propose a new framework for BCI based on probabilistic graphical models [7] that overcomes some of the limitations of classification-based approaches to BCI. [sent-28, score-0.057]
13 We model the dynamics of hidden brain- and body-states using a Dynamic Bayesian Network (DBN) that is learned directly from EEG and EMG data. [sent-29, score-0.085]
14 We show how a DBN can be used to infer probability distributions over hidden state variables, where the state variables correspond to brain states useful for BCI (such as “Intention to move left hand”, “Left hand in motion”, etc). [sent-30, score-0.75]
15 Using a DBN gives us several advantages in addition to providing a continuous probabilistic estimate of brain state. [sent-31, score-0.171]
16 First, it allows us to explicitly model the hidden causal structure and dependencies between different brain states. [sent-32, score-0.255]
17 Second, it facilitates the integration of information from multiple modalities such as EEG and EMG signals, allowing, for example, EEG-derived estimates to be bootstrapped from EMG-derived estimates. [sent-33, score-0.05]
18 In addition, learning a dynamic graphical model for time-varying data such as EEG allows other useful operations such as prediction, filling in of missing data, and smoothing of state estimates using information from future data points. [sent-34, score-0.261]
19 These capabilities are difficult to obtain while working exclusively in the frequency domain or using whole slices of the data (or its features) for training classifiers. [sent-35, score-0.082]
20 We illustrate our approach in a simple Left versus Right hand movement task and present preliminary results showing supervised learning and Bayesian inference of hidden state for a dataset containing simultaneous EEG and EMG recordings. [sent-36, score-0.522]
21 2 The DBN Framework We study the problem of modeling spontaneous movement of the left/right arm using EEG and EMG signals. [sent-37, score-0.285]
22 It is well known that EEG signals show a slow potential drift prior to spontaneous motor activity. [sent-38, score-0.168]
23 In particular, the Bereitschaftspotential (BP) associated with movement of the left versus the right arm shows a strong lateral asymmetry. [sent-40, score-0.377]
24 This allows one not only to estimate the intent to move prior to actual movement, but also to distinguish between left and right movements. [sent-41, score-0.195]
25 Previous approaches [1, 2] have utilized BP signals in classification-based BCI protocols based on synchronization cues that identify points of movement onset. [sent-42, score-0.324]
26 In our case, the challenge was to model the structure of BPs and related movement signals using the states of the DBN, and to recognize actions without explicit synchronization cues. [sent-43, score-0.449]
27 Figure 1 shows the complete DBN (referred to as Nfull in this paper) used to model the left/right hand movement task. [sent-44, score-0.286]
28 The hidden state Bt in Figure 1(a) tracks the higher-level brain state over time and generates the hidden EEG and EMG states Et and Mt respectively. [sent-45, score-0.721]
29 These hidden states in turn generate the observed EEG and EMG signals. [sent-46, score-0.17]
30 The dashed arrows indicate that the hidden states make transitions over time. [sent-47, score-0.231]
31 As shown in Figure 1(b), the state Bt is intended to model the high-level intention of the subject. [sent-48, score-0.21]
32 The figure shows both the values Bt can take as well as the constraints on the transitions between values. [sent-49, score-0.057]
33 The actual probabilities of the allowed transitions are learned from data. [sent-50, score-0.103]
34 The hidden states Et and Mt are intended to model the temporal structure of the EEG and EMG signals, which are generated using a mixture of Gaussians conditioned on Et and Mt respectively. [sent-51, score-0.191]
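To make this generative structure concrete, the following is a minimal sketch of how one time slice of such a two-level DBN could be sampled. This is not the authors' GMTK implementation: the state-space sizes, the random transition tables, and the single-Gaussian emissions (standing in for the mixtures of Gaussians) are illustrative placeholders only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder state-space sizes: brain states B, hidden EEG states E,
# hidden EMG states M (the paper's actual spaces follow Figures 1 and 2).
nB, nE, nM = 7, 7, 7

# Random row-stochastic transition tables; in the paper these probabilities
# are learned from data and constrained to the allowed-transition graphs.
A_B = rng.dirichlet(np.ones(nB), size=nB)        # P(B_t | B_{t-1})
A_E = rng.dirichlet(np.ones(nE), size=(nB, nE))  # P(E_t | B_t, E_{t-1})
A_M = rng.dirichlet(np.ones(nM), size=(nB, nM))  # P(M_t | B_t, M_{t-1})

# One Gaussian per hidden state (the paper uses mixtures of Gaussians).
mu_E = rng.normal(size=(nE, 8))   # 8 EEG channels
mu_M = rng.normal(size=(nM, 2))   # 2 EMG channels

def sample_slice(b_prev, e_prev, m_prev):
    """Sample one time slice: brain state, hidden EEG/EMG states, signals."""
    b = rng.choice(nB, p=A_B[b_prev])
    e = rng.choice(nE, p=A_E[b, e_prev])
    m = rng.choice(nM, p=A_M[b, m_prev])
    eeg = rng.normal(mu_E[e], 1.0)   # observed EEG vector
    emg = rng.normal(mu_M[m], 1.0)   # observed EMG vector
    return b, e, m, eeg, emg

b = e = m = 0
for t in range(5):
    b, e, m, eeg, emg = sample_slice(b, e, m)
```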
35 Just as the values of Bt are customized for our particular experiment, we would like the state transitions of Et and Mt to reflect their respective constraints. [sent-52, score-0.209]
36 We use the models shown in Figure 2 for the allowed transitions of the states Mt and Et respectively. [sent-55, score-0.188]
37 The dotted arrows represent transitions to a state at the next time step. [sent-57, score-0.209]
38 (b) The transition graph for the brain state Bt . [sent-58, score-0.376]
39 The probability of each allowed transition is learned from input data. [sent-59, score-0.099]
40 The transition graph for Mt (Figure 2(a)) consists of three chains of states (labeled (1), (2), and (3)), representing the rest state, a left-hand action, and a right-hand action respectively. [sent-60, score-0.342]
41 In each chain, the state Mt in each time step either retains its old value with a given probability (self-pointing arrow) or transitions to the next state value in that particular chain. [sent-61, score-0.357]
42 The transition graph of Figure 2(b) places similar constraints on the EEG, except that the left and right action chains are further partitioned into intent, action, and post-action subgroups of states, since each of these components is discernible from the BP in EEG (but not from EMG) signals. [sent-62, score-0.268]
43 (a) The transitions of the EMG state between its values mi are constrained to lie in one of three chains: the chains model (1) rest, (2) left arm movement, and (3) right arm movement. [sent-64, score-0.461]
44 (b) In the EEG state transition graph, the left and right movement chains are further divided into state values encoding intent (LI/RI), movement (LM/RM), and post-movement (LPM/RPM). [sent-65, score-1.239]
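As an illustration of these allowed-transition constraints, here is a hedged sketch of how a chain-structured transition matrix of the kind shown in Figure 2(a) could be encoded: one rest state plus left-to-right action chains, where each state either self-loops or advances and all other entries are zero. The chain lengths and probability values below are placeholders; in the paper the probabilities of the allowed transitions are learned from data.

```python
import numpy as np

def constrained_transitions(chain_lengths, p_stay=0.9, p_start=0.05):
    """Transition matrix for one rest state plus left-to-right action chains.
    State 0 (rest) self-loops or enters the first state of a chain; within
    a chain, each state self-loops with p_stay or advances, and the last
    state of a chain returns to rest. All other entries are zero."""
    n = 1 + sum(chain_lengths)
    A = np.zeros((n, n))
    starts = np.cumsum([1] + chain_lengths[:-1])      # first state per chain
    A[0, 0] = 1.0 - p_start * len(chain_lengths)      # stay at rest
    A[0, starts] = p_start                            # rest -> chain starts
    s = 1
    for length in chain_lengths:
        for i in range(length):
            A[s, s] = p_stay                          # self-loop
            A[s, s + 1 if i < length - 1 else 0] = 1.0 - p_stay
            s += 1
    return A

A_M = constrained_transitions([3, 3])   # left and right action chains
assert np.allclose(A_M.sum(axis=1), 1.0)
```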
45 3 Experiments and Results 3.1 Data Collection and Processing The task: The subject pressed two distinct keys on a keyboard with the left or the right hand, at random, at a self-initiated pace. [sent-67, score-0.18]
46 We recorded 8 EEG channels around the motor area of cortex (C3, Cz, C4, FC1, FC2, CP1, CP2, Pz) using averaged ear electrodes as reference, and 2 differential pairs of EMG (one on each arm). [sent-68, score-0.193]
47 Data was recorded at 2048 Hz for a period of 20 minutes, with the movements separated by approximately 3-4 s. [sent-69, score-0.087]
48 Figure 3: Movement-related potential drift recorded during the hand-movement task: the two plots show the EEG signals averaged over all trials from the motor-related channels C3 and C4 for left (left panel) and right (right panel) hand movement. [sent-74, score-0.596]
49 The EMG channels were converted to RMS values computed over short windows, giving an effective sampling rate of 128 Hz. [sent-78, score-0.09]
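A minimal sketch of this RMS conversion, assuming non-overlapping windows (the exact window shape is not stated in the text); 16-sample windows at 2048 Hz yield the stated effective rate of 128 Hz.

```python
import numpy as np

def emg_rms(x, fs_in=2048, fs_out=128):
    """Root-mean-square of a raw EMG channel over non-overlapping windows.
    With fs_in=2048 and fs_out=128, each window spans 16 samples."""
    w = fs_in // fs_out                      # samples per window (16)
    n = (len(x) // w) * w                    # drop any ragged tail
    return np.sqrt(np.mean(x[:n].reshape(-1, w) ** 2, axis=1))

raw = np.random.randn(2048 * 5)              # 5 s of stand-in EMG at 2048 Hz
rms = emg_rms(raw)                           # 640 values = 128 per second
```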
50 Data Analysis: The recorded data were first analyzed in the traditional manner by averaging across all trials. [sent-79, score-0.061]
51 Figure 3 shows the averages of EEG channels C3 and C4 for the left and right hand movement actions. [sent-80, score-0.468]
52 As can be seen, the averages for both channels are different for the two classes. [sent-81, score-0.09]
53 Furthermore, there is a slow potential drift preceding the action and a return to the baseline potential after the action is performed. [sent-82, score-0.2]
54 Previous researchers [1] have classified EEG data from a window leading up to the instant of action into left or right movement classes with high accuracy (over 90%). [sent-83, score-0.434]
55 Thus, there appears to be a reliable amount of information in the EEG signal, at least for discriminating between left and right movements. [sent-84, score-0.159]
56 Data Evaluation using SVMs: To obtain a baseline and to evaluate the quality of our recorded data, we tested the performance of linear support vector machines (SVMs) on classifying our EEG data into left and right movement classes. [sent-85, score-0.357]
57 5 seconds before each movement were concatenated from all EEG channels and used for classification. [sent-88, score-0.314]
58 We performed hyper-parameter selection using leave-one-out cross-validation on 15 minutes of data and obtained an error of 15% on the remaining 5 minutes of data. [sent-89, score-0.076]
59 Such an error rate is comparable to those obtained in previous studies on similar tasks, suggesting that the recorded data contains sufficient movement-related information to be tested in experiments involving DBNs. [sent-90, score-0.061]
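Such a baseline could be sketched as follows with scikit-learn: linear SVMs with leave-one-out cross-validation for hyper-parameter selection and a chronological train/test split. The slice length, trial count, and synthetic data below are assumptions for illustration only, not the recorded dataset.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV, LeaveOneOut

rng = np.random.default_rng(0)

# Stand-ins for the recordings: 200 movements, 8 EEG channels, and an
# assumed 64 samples per pre-movement slice, concatenated across channels.
n_trials, n_channels, n_samples = 200, 8, 64
X = rng.normal(size=(n_trials, n_channels * n_samples))
y = rng.integers(0, 2, size=n_trials)        # 0 = left, 1 = right

# Chronological split: first 3/4 (the "15 minutes") for selection,
# the rest (the "5 minutes") for testing.
split = (3 * n_trials) // 4
search = GridSearchCV(LinearSVC(dual=False), {"C": [0.01, 0.1, 1, 10]},
                      cv=LeaveOneOut())
search.fit(X[:split], y[:split])
test_error = 1.0 - search.score(X[split:], y[split:])
```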
60 The Graphical Models Toolkit (GMTK) provides support for expressing constraints on state transitions (as described in Section 2). [sent-92, score-0.209]
61 We constructed a supervisory signal from the recorded key-presses as follows: a period of 100 ms around each keystroke was labeled "motor action" for the appropriate hand. [sent-94, score-0.203]
62 This signal was used to train the network Nemg in a supervised manner. [sent-95, score-0.068]
63 To generate a supervisory signal for the network Neeg, or the full combined network Nfull (Figure 1), we added prefixes and postfixes of 150 ms each to each action in this signal, and labeled them "preparatory" and "post-movement" activity respectively. [sent-96, score-0.325]
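This label construction can be sketched as follows; the 128 Hz label rate and the numeric state codes are hypothetical choices made for illustration.

```python
import numpy as np

def supervisory_signal(key_times, key_hands, duration, fs=128):
    """Per-sample labels: 0 = rest; per hand, a 150 ms "preparatory" prefix,
    a 100 ms "motor action" window around the keystroke, and a 150 ms
    "post-movement" postfix. Codes 1-3 = left hand, 4-6 = right hand."""
    labels = np.zeros(int(duration * fs), dtype=int)
    half = int(0.050 * fs)                    # half of the 100 ms window
    wing = int(0.150 * fs)                    # 150 ms prefix/postfix
    for t, hand in zip(key_times, key_hands): # hand: 0 = left, 1 = right
        c = int(t * fs)
        base = 1 if hand == 0 else 4
        labels[max(c - half - wing, 0):c - half] = base      # preparatory
        labels[max(c - half, 0):c + half] = base + 1         # motor action
        labels[c + half:c + half + wing] = base + 2          # post-movement
    return labels

labels = supervisory_signal([1.0, 4.2], [0, 1], duration=6.0)
```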
64 Thus, we can use partial (EEG only) or full evidence in the inference step to obtain probability distributions over brain state. [sent-98, score-0.212]
65 3.2 Learning and Inference with EMG Our first step is to learn the simpler model Nemg, which has only the hidden state Mt and the observed EMG signal. [sent-101, score-0.212]
66 This is to test inference using the EMG signal alone. [sent-102, score-0.087]
67 We used 15 minutes of EMG data to train our simplified model, and then tested it on the remaining 5 minutes of data. [sent-104, score-0.076]
68 In other words, the maximum a posteriori (MAP) sequence of values for hidden states was computed. [sent-106, score-0.17]
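For reference, a generic log-domain Viterbi sketch of this MAP computation; the toy transition matrix and per-frame log-likelihoods below are placeholders for the learned model's quantities.

```python
import numpy as np

def viterbi(log_A, log_pi, log_lik):
    """MAP hidden-state sequence. log_A: (n, n) log transition matrix;
    log_pi: (n,) log initial distribution; log_lik: (T, n) per-frame
    log-likelihoods of the observations under each hidden state."""
    T, n = log_lik.shape
    delta = log_pi + log_lik[0]
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A      # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_lik[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

# Toy example: two strongly self-looping states.
log_A = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_pi = np.log(np.array([0.5, 0.5]))
log_lik = np.log(np.array([[0.8, 0.2]] * 5 + [[0.2, 0.8]] * 5))
path = viterbi(log_A, log_pi, log_lik)       # ~ five 0s then five 1s
```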
69 Figure 4 shows a 100 s slice of data containing 2 channels of EMG, and the predicted hidden EMG state Mt. [sent-107, score-0.324]
70 The states 0, 1 and 2 correspond to “no action”, left, and right actions respectively. [sent-108, score-0.188]
71 In the figure shown, the state Mt successfully captures not only all the obvious arm movements but also the actions that are obscured by noise. [sent-109, score-0.297]
72 3.3 Learning the EEG Model We used the supervisory signal described earlier to learn the corresponding EEG model Neeg. [sent-111, score-0.142]
73 Note that the brain-state can be inferred from the hidden EEG state Et directly, since the state space is appropriately partitioned as shown in Figure 2(b). [sent-112, score-0.379]
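A minimal illustration of reading the brain state off the decoded hidden EEG states under the partition of Figure 2(b); the particular state indices and labels here are hypothetical.

```python
# Hypothetical indices for the hidden EEG states, following the partition
# of Figure 2(b): rest, then left intent/movement/post-movement, then right.
EEG_TO_BRAIN = {0: "REST",
                1: "LEFT-INTENT", 2: "LEFT-MOVEMENT", 3: "LEFT-POST",
                4: "RIGHT-INTENT", 5: "RIGHT-MOVEMENT", 6: "RIGHT-POST"}

def brain_states(e_path):
    """Map a decoded sequence of hidden EEG states to brain-state labels."""
    return [EEG_TO_BRAIN[e] for e in e_path]

states = brain_states([0, 0, 1, 2, 2, 3, 0])
```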
74 Figure 5 shows the result of inference on the learned model Neeg using only the EEG signals as evidence. [sent-113, score-0.125]
75 The figure shows a subset of the EEG channels (C3, Cz, C4), the supervisory signal, and the predicted brain state Bt (the MAP estimate). [sent-114, score-0.527]
76 The figure shows that many of the instances of action (but not all) are correctly identified by the model. [sent-115, score-0.079]
77 Our model gives us at each time instant a MAP-estimated state sequence that best describes the past, and the probability associated with that state sequence. [sent-116, score-0.334]
78 This gives us, at each time instant, a measure of how likely each brain state Bt is relative to the others. [sent-117, score-0.338]
79 For convenience, we can use the probability associated with the REST state (see Figure 1) as reference. [sent-118, score-0.148]
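A sketch of this running log-ratio measure, assuming the per-state (log) probabilities produced during inference are available as a (T, n_states) array; values above 0 indicate a state currently more likely than rest.

```python
import numpy as np

def log_ratio_vs_rest(log_p, rest=0):
    """log P(state) - log P(REST) at each time step; log_p: (T, n_states)."""
    return log_p - log_p[:, [rest]]

# Toy inference output for three brain states (state 0 = REST).
log_p = np.log(np.array([[0.7, 0.2, 0.1],
                         [0.3, 0.6, 0.1],
                         [0.1, 0.8, 0.1]]))
ratios = log_ratio_vs_rest(log_p)
```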
80 Figure 6 shows a graphical illustration of this instantaneous time estimate. [sent-119, score-0.082]
81 The plotted graphs are, in order, the supervisory signal (i.e., the "ground truth value") and the instantaneous measures of likelihood of the intention/movement/post-movement states for the left and right hand respectively. [sent-120, score-0.142] [sent-122, score-0.267]
83 We see that the true hand movements are correctly inferred in a surprisingly large number of cases (the log-likelihood ratio crosses 0). [sent-124, score-0.089]
84 In summary, our graphical models Nemg and Neeg have shown promising results in correctly identifying movement onset from EMG and EEG signals respectively. [sent-126, score-0.345]
85 Ongoing work is focused on improving accuracy by using features extracted from EEG, and on performing inference using both EEG and EMG in Nfull (the full model). [sent-127, score-0.079]
86 The states 0, 1, and 2 correspond to "no action", left, and right actions respectively. [sent-129, score-0.188]
87 Our model correctly identifies the obscured spikes in the noisy right EMG channel. 4 Discussion and Conclusion We have shown that dynamic Bayesian networks (DBNs) can be used to model the transitions between brain- and muscle-states as a subject performs a motor task. [sent-130, score-0.208]
88 In particular, a two-level hierarchical network was proposed for simultaneously estimating higher-level brain state and lower-level EEG and EMG states in a left/right hand movement task. [sent-131, score-0.695]
89 The results demonstrate that for a self-paced movement task, hidden brain states useful for BCI such as intention to move the left or right hand can be decoded from a DBN learned directly from EEG and EMG data. [sent-132, score-0.787]
90 Previous work on BCIs can be grouped into two broad classes: self-regulatory BCIs and BCIs based on detecting brain state. [sent-133, score-0.171]
91 Self-regulatory BCIs rely on training the user to regulate certain features of the EEG, such as cortical positivity [10], or oscillatory activity (the µ rhythm, see [5]), in order to control, for example, a cursor on a display. [sent-134, score-0.045]
92 The approach presented in this paper falls in the second class of BCIs, those based on detecting brain states [1, 2, 3, 4]. [sent-135, score-0.277]
93 However, rather than employing classification methods, we use probabilistic graphical models for inferring brain state and learning the transition probabilities between brain states. [sent-136, score-0.604]
94 Successfully learning a dynamic graphical model as suggested in this paper offers several advantages over traditional classification-based schemes for BCI. [sent-137, score-0.093]
95 It allows one to explicitly model the hidden causal structure and dependencies between different brain states. [sent-138, score-0.255]
96 State 0 is the rest state, states 1 through 3 represent left hand movement, and 4 through 6 represent right hand movement (see Figure 1(b)). [sent-140, score-0.528]
97 Our current efforts are focused on investigating methods for learning dynamic graphical models for motor tasks of varying complexity and using these models to build robust, probabilistic BCI systems. [sent-143, score-0.135]
98 Improving transfer rates in brain computer interfacing: a case study. [sent-172, score-0.171]
99 The measure shown is the log ratio of the instantaneous MAP estimate for the relevant state and the estimate for the rest state. [sent-184, score-0.211]
100 The graphical models toolkit: An open source software system for speech and time-series processing. [sent-203, score-0.057]
wordName wordTfidf (topN-words)
[('eeg', 0.572), ('emg', 0.552), ('dbn', 0.205), ('movement', 0.204), ('brain', 0.171), ('state', 0.148), ('mt', 0.13), ('bci', 0.125), ('states', 0.106), ('supervisory', 0.096), ('movt', 0.095), ('channels', 0.09), ('bt', 0.088), ('intent', 0.082), ('neeg', 0.079), ('action', 0.079), ('bcis', 0.075), ('hidden', 0.064), ('signals', 0.063), ('nemg', 0.063), ('slices', 0.063), ('transitions', 0.061), ('recorded', 0.061), ('arm', 0.06), ('post', 0.06), ('transition', 0.057), ('graphical', 0.057), ('left', 0.048), ('cz', 0.047), ('signal', 0.046), ('right', 0.044), ('hand', 0.044), ('motor', 0.042), ('drift', 0.042), ('intention', 0.041), ('inference', 0.041), ('chains', 0.04), ('chain', 0.039), ('instant', 0.038), ('rest', 0.038), ('actions', 0.038), ('minutes', 0.038), ('nf', 0.038), ('synchronization', 0.038), ('ull', 0.038), ('dynamic', 0.036), ('bp', 0.034), ('bereitschaftspotential', 0.032), ('eegt', 0.032), ('emgt', 0.032), ('engg', 0.032), ('gmtk', 0.032), ('mq', 0.032), ('rehab', 0.032), ('interfaces', 0.03), ('bayesian', 0.029), ('gure', 0.028), ('pre', 0.028), ('preparatory', 0.027), ('blankertz', 0.027), ('curio', 0.027), ('wolpaw', 0.027), ('et', 0.027), ('movements', 0.026), ('instantaneous', 0.025), ('toolkit', 0.025), ('obscured', 0.025), ('bootstrapped', 0.025), ('modalities', 0.025), ('mp', 0.023), ('cursor', 0.023), ('decoded', 0.023), ('predicted', 0.022), ('classi', 0.022), ('activity', 0.022), ('xes', 0.022), ('seattle', 0.022), ('network', 0.022), ('learned', 0.021), ('researchers', 0.021), ('versus', 0.021), ('allowed', 0.021), ('intended', 0.021), ('trans', 0.021), ('onset', 0.021), ('lling', 0.021), ('spontaneous', 0.021), ('move', 0.021), ('generates', 0.02), ('seconds', 0.02), ('causal', 0.02), ('rao', 0.02), ('smoothing', 0.02), ('internal', 0.02), ('svms', 0.019), ('inferred', 0.019), ('reference', 0.019), ('utilized', 0.019), ('wa', 0.019), ('exclusively', 0.019)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999988 56 nips-2004-Dynamic Bayesian Networks for Brain-Computer Interfaces
Author: Pradeep Shenoy, Rajesh P. Rao
Abstract: We describe an approach to building brain-computer interfaces (BCI) based on graphical models for probabilistic inference and learning. We show how a dynamic Bayesian network (DBN) can be used to infer probability distributions over brain- and body-states during planning and execution of actions. The DBN is learned directly from observed data and allows measured signals such as EEG and EMG to be interpreted in terms of internal states such as intent to move, preparatory activity, and movement execution. Unlike traditional classification-based approaches to BCI, the proposed approach (1) allows continuous tracking and prediction of internal states over time, and (2) generates control signals based on an entire probability distribution over states rather than binary yes/no decisions. We present preliminary results of brain- and body-state estimation using simultaneous EEG and EMG signals recorded during a self-paced left/right hand movement task. 1
2 0.24459934 20 nips-2004-An Auditory Paradigm for Brain-Computer Interfaces
Author: N. J. Hill, Thomas N. Lal, Karin Bierig, Niels Birbaumer, Bernhard Schölkopf
Abstract: Motivated by the particular problems involved in communicating with “locked-in” paralysed patients, we aim to develop a braincomputer interface that uses auditory stimuli. We describe a paradigm that allows a user to make a binary decision by focusing attention on one of two concurrent auditory stimulus sequences. Using Support Vector Machine classification and Recursive Channel Elimination on the independent components of averaged eventrelated potentials, we show that an untrained user’s EEG data can be classified with an encouragingly high level of accuracy. This suggests that it is possible for users to modulate EEG signals in a single trial by the conscious direction of attention, well enough to be useful in BCI. 1
3 0.22400904 117 nips-2004-Methods Towards Invasive Human Brain Computer Interfaces
Author: Thomas N. Lal, Thilo Hinterberger, Guido Widman, Michael Schröder, N. J. Hill, Wolfgang Rosenstiel, Christian E. Elger, Niels Birbaumer, Bernhard Schölkopf
Abstract: During the last ten years there has been growing interest in the development of Brain Computer Interfaces (BCIs). The field has mainly been driven by the needs of completely paralyzed patients to communicate. With a few exceptions, most human BCIs are based on extracranial electroencephalography (EEG). However, reported bit rates are still low. One reason for this is the low signal-to-noise ratio of the EEG [16]. We are currently investigating if BCIs based on electrocorticography (ECoG) are a viable alternative. In this paper we present the method and examples of intracranial EEG recordings of three epilepsy patients with electrode grids placed on the motor cortex. The patients were asked to repeatedly imagine movements of two kinds, e.g., tongue or finger movements. We analyze the classifiability of the data using Support Vector Machines (SVMs) [18, 21] and Recursive Channel Elimination (RCE) [11]. 1
4 0.12812091 12 nips-2004-A Temporal Kernel-Based Model for Tracking Hand Movements from Neural Activities
Author: Lavi Shpigelman, Koby Crammer, Rony Paz, Eilon Vaadia, Yoram Singer
Abstract: We devise and experiment with a dynamical kernel-based system for tracking hand movements from neural activity. The state of the system corresponds to the hand location, velocity, and acceleration, while the system’s input are the instantaneous spike rates. The system’s state dynamics is defined as a combination of a linear mapping from the previous estimated state and a kernel-based mapping tailored for modeling neural activities. In contrast to generative models, the activity-to-state mapping is learned using discriminative methods by minimizing a noise-robust loss function. We use this approach to predict hand trajectories on the basis of neural activity in motor cortex of behaving monkeys and find that the proposed approach is more accurate than both a static approach based on support vector regression and the Kalman filter. 1
5 0.091048352 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons
Author: Rajesh P. Rao
Abstract: There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons is used to implement belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrate the model using two examples: (1) a motion detection network in which the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in a preferred direction, and (2) a two-level hierarchical network that produces attentional effects similar to those observed in visual cortical areas V2 and V4. The hierarchical model offers a new Bayesian interpretation of attentional modulation in V2 and V4. 1
6 0.07773561 155 nips-2004-Responding to Modalities with Different Latencies
7 0.075335532 124 nips-2004-Multiple Alignment of Continuous Time Series
8 0.062253166 64 nips-2004-Experts in a Markov Decision Process
9 0.051956609 28 nips-2004-Bayesian inference in spiking neurons
10 0.049086947 47 nips-2004-Contextual Models for Object Detection Using Boosted Random Fields
11 0.047806226 174 nips-2004-Spike Sorting: Bayesian Clustering of Non-Stationary Data
12 0.046984777 13 nips-2004-A Three Tiered Approach for Articulated Object Action Modeling and Recognition
13 0.046467744 26 nips-2004-At the Edge of Chaos: Real-time Computations and Self-Organized Criticality in Recurrent Neural Networks
14 0.045244116 147 nips-2004-Planning for Markov Decision Processes with Sparse Stochasticity
15 0.043589141 206 nips-2004-Worst-Case Analysis of Selective Sampling for Linear-Threshold Algorithms
16 0.040223319 33 nips-2004-Brain Inspired Reinforcement Learning
17 0.039639626 6 nips-2004-A Hidden Markov Model for de Novo Peptide Sequencing
18 0.039550364 139 nips-2004-Optimal Aggregation of Classifiers and Boosting Maps in Functional Magnetic Resonance Imaging
19 0.038142592 203 nips-2004-Validity Estimates for Loopy Belief Propagation on Binary Real-world Networks
20 0.037960254 102 nips-2004-Learning first-order Markov models for control
topicId topicWeight
[(0, -0.127), (1, -0.087), (2, 0.066), (3, -0.089), (4, -0.066), (5, 0.035), (6, 0.192), (7, 0.01), (8, 0.025), (9, -0.125), (10, -0.048), (11, 0.033), (12, -0.042), (13, -0.107), (14, 0.31), (15, -0.036), (16, 0.092), (17, -0.12), (18, -0.164), (19, 0.095), (20, -0.022), (21, -0.106), (22, -0.078), (23, 0.189), (24, -0.007), (25, 0.171), (26, 0.057), (27, 0.153), (28, -0.141), (29, 0.025), (30, -0.013), (31, -0.005), (32, 0.016), (33, -0.046), (34, -0.079), (35, -0.051), (36, -0.036), (37, 0.007), (38, -0.18), (39, 0.05), (40, -0.069), (41, 0.13), (42, 0.007), (43, 0.026), (44, -0.032), (45, 0.082), (46, -0.054), (47, -0.101), (48, 0.026), (49, 0.045)]
simIndex simValue paperId paperTitle
same-paper 1 0.95040751 56 nips-2004-Dynamic Bayesian Networks for Brain-Computer Interfaces
Author: Pradeep Shenoy, Rajesh P. Rao
Abstract: We describe an approach to building brain-computer interfaces (BCI) based on graphical models for probabilistic inference and learning. We show how a dynamic Bayesian network (DBN) can be used to infer probability distributions over brain- and body-states during planning and execution of actions. The DBN is learned directly from observed data and allows measured signals such as EEG and EMG to be interpreted in terms of internal states such as intent to move, preparatory activity, and movement execution. Unlike traditional classification-based approaches to BCI, the proposed approach (1) allows continuous tracking and prediction of internal states over time, and (2) generates control signals based on an entire probability distribution over states rather than binary yes/no decisions. We present preliminary results of brain- and body-state estimation using simultaneous EEG and EMG signals recorded during a self-paced left/right hand movement task. 1
2 0.81573576 117 nips-2004-Methods Towards Invasive Human Brain Computer Interfaces
Author: Thomas N. Lal, Thilo Hinterberger, Guido Widman, Michael Schröder, N. J. Hill, Wolfgang Rosenstiel, Christian E. Elger, Niels Birbaumer, Bernhard Schölkopf
Abstract: During the last ten years there has been growing interest in the development of Brain Computer Interfaces (BCIs). The field has mainly been driven by the needs of completely paralyzed patients to communicate. With a few exceptions, most human BCIs are based on extracranial electroencephalography (EEG). However, reported bit rates are still low. One reason for this is the low signal-to-noise ratio of the EEG [16]. We are currently investigating if BCIs based on electrocorticography (ECoG) are a viable alternative. In this paper we present the method and examples of intracranial EEG recordings of three epilepsy patients with electrode grids placed on the motor cortex. The patients were asked to repeatedly imagine movements of two kinds, e.g., tongue or finger movements. We analyze the classifiability of the data using Support Vector Machines (SVMs) [18, 21] and Recursive Channel Elimination (RCE) [11]. 1
3 0.75753915 20 nips-2004-An Auditory Paradigm for Brain-Computer Interfaces
Author: N. J. Hill, Thomas N. Lal, Karin Bierig, Niels Birbaumer, Bernhard Schölkopf
Abstract: Motivated by the particular problems involved in communicating with “locked-in” paralysed patients, we aim to develop a braincomputer interface that uses auditory stimuli. We describe a paradigm that allows a user to make a binary decision by focusing attention on one of two concurrent auditory stimulus sequences. Using Support Vector Machine classification and Recursive Channel Elimination on the independent components of averaged eventrelated potentials, we show that an untrained user’s EEG data can be classified with an encouragingly high level of accuracy. This suggests that it is possible for users to modulate EEG signals in a single trial by the conscious direction of attention, well enough to be useful in BCI. 1
4 0.45809305 12 nips-2004-A Temporal Kernel-Based Model for Tracking Hand Movements from Neural Activities
Author: Lavi Shpigelman, Koby Crammer, Rony Paz, Eilon Vaadia, Yoram Singer
Abstract: We devise and experiment with a dynamical kernel-based system for tracking hand movements from neural activity. The state of the system corresponds to the hand location, velocity, and acceleration, while the system’s input are the instantaneous spike rates. The system’s state dynamics is defined as a combination of a linear mapping from the previous estimated state and a kernel-based mapping tailored for modeling neural activities. In contrast to generative models, the activity-to-state mapping is learned using discriminative methods by minimizing a noise-robust loss function. We use this approach to predict hand trajectories on the basis of neural activity in motor cortex of behaving monkeys and find that the proposed approach is more accurate than both a static approach based on support vector regression and the Kalman filter. 1
5 0.34807226 155 nips-2004-Responding to Modalities with Different Latencies
Author: Fredrik Bissmarck, Hiroyuki Nakahara, Kenji Doya, Okihide Hikosaka
Abstract: Motor control depends on sensory feedback in multiple modalities with different latencies. In this paper we consider within the framework of reinforcement learning how different sensory modalities can be combined and selected for real-time, optimal movement control. We propose an actor-critic architecture with multiple modules, whose output are combined using a softmax function. We tested our architecture in a simulation of a sequential reaching task. Reaching was initially guided by visual feedback with a long latency. Our learning scheme allowed the agent to utilize the somatosensory feedback with shorter latency when the hand is near the experienced trajectory. In simulations with different latencies for visual and somatosensory feedback, we found that the agent depended more on feedback with shorter latency. 1
6 0.28241223 29 nips-2004-Beat Tracking the Graphical Model Way
7 0.27388528 147 nips-2004-Planning for Markov Decision Processes with Sparse Stochasticity
8 0.26787415 159 nips-2004-Schema Learning: Experience-Based Construction of Predictive Action Models
9 0.25750026 74 nips-2004-Harmonising Chorales by Probabilistic Inference
10 0.25294471 6 nips-2004-A Hidden Markov Model for de Novo Peptide Sequencing
11 0.24700189 120 nips-2004-Modeling Conversational Dynamics as a Mixed-Memory Markov Process
12 0.23983236 124 nips-2004-Multiple Alignment of Continuous Time Series
13 0.21652012 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons
14 0.20953526 190 nips-2004-The Rescorla-Wagner Algorithm and Maximum Likelihood Estimation of Causal Parameters
15 0.20047864 86 nips-2004-Instance-Specific Bayesian Model Averaging for Classification
16 0.19633114 109 nips-2004-Mass Meta-analysis in Talairach Space
17 0.1933247 95 nips-2004-Large-Scale Prediction of Disulphide Bond Connectivity
18 0.18545708 139 nips-2004-Optimal Aggregation of Classifiers and Boosting Maps in Functional Magnetic Resonance Imaging
19 0.18415184 102 nips-2004-Learning first-order Markov models for control
20 0.18302038 38 nips-2004-Co-Validation: Using Model Disagreement on Unlabeled Data to Validate Classification Algorithms
topicId topicWeight
[(13, 0.079), (15, 0.097), (18, 0.045), (26, 0.035), (31, 0.02), (33, 0.195), (35, 0.018), (39, 0.013), (50, 0.029), (81, 0.011), (82, 0.039), (96, 0.296)]
simIndex simValue paperId paperTitle
same-paper 1 0.7821629 56 nips-2004-Dynamic Bayesian Networks for Brain-Computer Interfaces
Author: Pradeep Shenoy, Rajesh P. Rao
Abstract: We describe an approach to building brain-computer interfaces (BCI) based on graphical models for probabilistic inference and learning. We show how a dynamic Bayesian network (DBN) can be used to infer probability distributions over brain- and body-states during planning and execution of actions. The DBN is learned directly from observed data and allows measured signals such as EEG and EMG to be interpreted in terms of internal states such as intent to move, preparatory activity, and movement execution. Unlike traditional classification-based approaches to BCI, the proposed approach (1) allows continuous tracking and prediction of internal states over time, and (2) generates control signals based on an entire probability distribution over states rather than binary yes/no decisions. We present preliminary results of brain- and body-state estimation using simultaneous EEG and EMG signals recorded during a self-paced left/right hand movement task. 1
2 0.76792181 15 nips-2004-Active Learning for Anomaly and Rare-Category Detection
Author: Dan Pelleg, Andrew W. Moore
Abstract: We introduce a novel active-learning scenario in which a user wants to work with a learning algorithm to identify useful anomalies. These are distinguished from the traditional statistical definition of anomalies as outliers or merely ill-modeled points. Our distinction is that the usefulness of anomalies is categorized subjectively by the user. We make two additional assumptions. First, there exist extremely few useful anomalies to be hunted down within a massive dataset. Second, both useful and useless anomalies may sometimes exist within tiny classes of similar anomalies. The challenge is thus to identify “rare category” records in an unlabeled noisy set with help (in the form of class labels) from a human expert who has a small budget of datapoints that they are prepared to categorize. We propose a technique to meet this challenge, which assumes a mixture model fit to the data, but otherwise makes no assumptions on the particular form of the mixture components. This property promises wide applicability in real-life scenarios and for various statistical models. We give an overview of several alternative methods, highlighting their strengths and weaknesses, and conclude with a detailed empirical analysis. We show that our method can quickly zoom in on an anomaly set containing a few tens of points in a dataset of hundreds of thousands. 1
3 0.61300373 131 nips-2004-Non-Local Manifold Tangent Learning
Author: Yoshua Bengio, Martin Monperrus
Abstract: We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation suggests to explore non-local manifold learning algorithms which attempt to discover shared structure in the tangent planes at different positions. A criterion for such an algorithm is proposed and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where a local non-parametric method fails. 1
4 0.6100387 102 nips-2004-Learning first-order Markov models for control
Author: Pieter Abbeel, Andrew Y. Ng
Abstract: First-order Markov models have been successfully applied to many problems, for example in modeling sequential data using Markov chains, and modeling control problems using the Markov decision processes (MDP) formalism. If a first-order Markov model’s parameters are estimated from data, the standard maximum likelihood estimator considers only the first-order (single-step) transitions. But for many problems, the firstorder conditional independence assumptions are not satisfied, and as a result the higher order transition probabilities may be poorly approximated. Motivated by the problem of learning an MDP’s parameters for control, we propose an algorithm for learning a first-order Markov model that explicitly takes into account higher order interactions during training. Our algorithm uses an optimization criterion different from maximum likelihood, and allows us to learn models that capture longer range effects, but without giving up the benefits of using first-order Markov models. Our experimental results also show the new algorithm outperforming conventional maximum likelihood estimation in a number of control problems where the MDP’s parameters are estimated from data. 1
5 0.60975772 204 nips-2004-Variational Minimax Estimation of Discrete Distributions under KL Loss
Author: Liam Paninski
Abstract: We develop a family of upper and lower bounds on the worst-case expected KL loss for estimating a discrete distribution on a finite number m of points, given N i.i.d. samples. Our upper bounds are approximationtheoretic, similar to recent bounds for estimating discrete entropy; the lower bounds are Bayesian, based on averages of the KL loss under Dirichlet distributions. The upper bounds are convex in their parameters and thus can be minimized by descent methods to provide estimators with low worst-case error; the lower bounds are indexed by a one-dimensional parameter and are thus easily maximized. Asymptotic analysis of the bounds demonstrates the uniform KL-consistency of a wide class of estimators as c = N/m → ∞ (no matter how slowly), and shows that no estimator is consistent for c bounded (in contrast to entropy estimation). Moreover, the bounds are asymptotically tight as c → 0 or ∞, and are shown numerically to be tight within a factor of two for all c. Finally, in the sparse-data limit c → 0, we find that the Dirichlet-Bayes (add-constant) estimator with parameter scaling like −c log(c) optimizes both the upper and lower bounds, suggesting an optimal choice of the “add-constant” parameter in this regime.
6 0.60939783 207 nips-2004-ℓ₀-norm Minimization for Basis Selection
7 0.60915619 3 nips-2004-A Feature Selection Algorithm Based on the Global Minimization of a Generalization Error Bound
8 0.60901356 44 nips-2004-Conditional Random Fields for Object Recognition
9 0.60811275 31 nips-2004-Blind One-microphone Speech Separation: A Spectral Learning Approach
10 0.6069544 77 nips-2004-Hierarchical Clustering of a Mixture Model
11 0.60599029 174 nips-2004-Spike Sorting: Bayesian Clustering of Non-Stationary Data
12 0.60529071 86 nips-2004-Instance-Specific Bayesian Model Averaging for Classification
13 0.60502911 127 nips-2004-Neighbourhood Components Analysis
14 0.60438848 2 nips-2004-A Direct Formulation for Sparse PCA Using Semidefinite Programming
15 0.6043185 99 nips-2004-Learning Hyper-Features for Visual Identification
16 0.6041218 11 nips-2004-A Second Order Cone programming Formulation for Classifying Missing Data
17 0.6038487 161 nips-2004-Self-Tuning Spectral Clustering
18 0.6034472 64 nips-2004-Experts in a Markov Decision Process
19 0.60312819 124 nips-2004-Multiple Alignment of Continuous Time Series
20 0.60308993 45 nips-2004-Confidence Intervals for the Area Under the ROC Curve