nips nips2002 nips2002-108 knowledge-graph by maker-knowledge-mining

108 nips-2002-Improving Transfer Rates in Brain Computer Interfacing: A Case Study


Source: pdf

Author: Peter Meinicke, Matthias Kaper, Florian Hoppe, Manfred Heumann, Helge Ritter

Abstract: In this paper we present results of a study on brain computer interfacing. We adopted an approach of Farwell & Donchin [4], which we tried to improve in several aspects. The main objective was to improve the transfer rates based on offline analysis of EEG-data but within a more realistic setup closer to an online realization than in the original studies. The objective was achieved along two different tracks: on the one hand we used state-of-the-art machine learning techniques for signal classification and on the other hand we augmented the data space by using more electrodes for the interface. For the classification task we utilized SVMs and, as motivated by recent findings on the learning of discriminative densities, we accumulated the values of the classification function in order to combine several classifications, which finally led to significantly improved rates as compared with techniques applied in the original work. In combination with the data space augmentation, we achieved competitive transfer rates at an average of 50.5 bits/min and with a maximum of 84.7 bits/min.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The main objective was to improve the transfer rates based on offline analysis of EEG-data but within a more realistic setup closer to an online realization than in the original studies. [sent-5, score-0.498]

2 The objective was achieved along two different tracks: on the one hand we used state-of-the-art machine learning techniques for signal classification and on the other hand we augmented the data space by using more electrodes for the interface. [sent-6, score-0.272]

3 In combination with the data space augmentation, we achieved competitive transfer rates at an average of 50.5 bits/min and with a maximum of 84.7 bits/min. [sent-8, score-0.464]

4 Besides the clinical application, developing such a brain-computer interface (BCI) is in itself an exciting goal as indicated by a growing research interest in this field. [sent-14, score-0.096]

5 Several EEG-based techniques have been proposed for realization of BCIs (see [6, 12], for an overview). [sent-15, score-0.083]

6 In the first approach, participants are trained to control their EEG frequency pattern for binary decisions. [sent-17, score-0.114]

7 Imaginations of movements, resulting in the “Bereitschaftspotential” over sensorimotor cortex areas, are used to transmit information in the device of Pfurtscheller. Figure 1: Stimulus matrix with one column highlighted. [sent-22, score-0.06]

8 [2] applied sophisticated methods for data-analysis to this approach and reached fast transfer rates of 23 bits/min when classifying brain signals preceding overt muscle activity. [sent-26, score-0.552]

9 It is rather slow (<6 bits/min) and requires intensively trained participants but is in practical use. [sent-33, score-0.146]

10 Farwell & Donchin [4, 3, 10] developed a BCI-System by utilizing specific positive deflections (P300) in EEG-signals accompanying rare events (as discussed in detail below). [sent-35, score-0.085]

11 For BCIs, it is very desirable to have fast transfer rates. [sent-37, score-0.276]

12 In our own studies, we therefore tried to accelerate the fourth approach by using state-of-the-art machine learning techniques and fusing data from different electrodes for data-analysis. [sent-38, score-0.308]

13 For that purpose we utilized the basic setup of Farwell & Donchin (referred to as F&D) [4], who used the well-studied P300 component to create a BCI-system. [sent-39, score-0.109]

14 People were instructed to focus on one symbol in the matrix, and mentally count its highlightings. [sent-42, score-0.267]

15 From EEG-research it is known that counting a rare specific event (oddball stimulus) in a series of background stimuli evokes a P300 for the oddball stimulus. [sent-43, score-0.061]

16 Hence, highlighting the attended symbol in the 6×6 matrix should result in a P300, a characteristic positive deflection with a latency of around 300ms in the EEG-signal. [sent-44, score-0.213]

17 It is therefore possible to infer the selected symbol by detecting the P300 in EEG-signals. [sent-45, score-0.196]

18 For identification of the right column and row associated with a P300, Farwell & Donchin used the model-based techniques Area and Peak picking (both described in section 2) to detect the P300. [sent-48, score-0.162]

19 Using SWDA in a later study [3] resulted in transfer rates between 4. [sent-50, score-0.464]

20 8 symbols per minute at an accuracy of 80% with a temporal distance of 125ms between two highlightings. [sent-52, score-0.075]

21 In our work reported here we could improve several aspects of the F&D approach by utilizing very recent machine learning techniques and a larger number of EEG-electrodes. [sent-53, score-0.102]

22 First of all, we could increase the transfer rate by using Support Vector Machines (SVM) [11] for classification. [sent-54, score-0.248]

23 Inspired by a recent approach to learning of discriminative densities [7] we utilized the values of the SVM classification function as a measure of confidence which we accumulate over certain classifications in order to speed up the transfer rate. [sent-55, score-0.396]

24 In addition, we enhanced classification rates by augmenting the data-space. [sent-56, score-0.184]

25 While Farwell & Donchin employed only data from a single electrode for classification, we used the data from 10 electrodes simultaneously. [sent-57, score-0.323]

26 2 Methods In the following we describe the techniques used for acquisition, preprocessing and analysis of the EEG-data. [sent-58, score-0.05]

27 The experimental setup was the following: participants were seated in front of a computer screen presenting the matrix (see Fig. [sent-61, score-0.147]

28 EEG-data were recorded with 10 Ag/AgCl electrodes at positions of the extended international 10-20 system (Fz, Cz, Pz, C3, C4, P3, P4, Oz, OL, OR), sampled at 200Hz and low-pass filtered at 30Hz. [sent-63, score-0.222]

29 The participants had to perform a certain number of trials. [sent-64, score-0.114]

30 For the duration of a trial, they were instructed to focus their attention on a target symbol specified by the program, to mentally count the highlightings of the target symbol, and to avoid any body movement (especially eye movements and blinks). [sent-65, score-0.439]

31 Each trial is subdivided into a certain number of subtrials. [sent-66, score-0.054]

32 For different BCI-setups, the time between stimulus onsets, the interstimulus interval (ISI), was either 150, 300 or 500ms, while a highlighting always lasts 150ms. [sent-70, score-0.117]

33 To each stimulus corresponds an epoch, a time frame of 600ms after stimulus onset. During this interval a P300 should be evoked if the stimulus contains the target symbol. [sent-71, score-0.296]
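The 600 ms epoch window can be sketched as a simple slice over one channel at the 200 Hz sampling rate described above; the function name and array shapes below are illustrative assumptions, not the authors' code:

```python
import numpy as np

FS = 200                        # sampling rate in Hz, as in the recording setup
EPOCH_SAMPLES = int(0.6 * FS)   # 600 ms epoch after stimulus onset -> 120 samples

def extract_epoch(channel, onset):
    """Cut the 600 ms window following a stimulus onset out of one channel."""
    return channel[onset : onset + EPOCH_SAMPLES]

# toy channel whose sample values equal their indices, for easy checking
channel = np.arange(2000, dtype=float)
epoch = extract_epoch(channel, onset=400)
```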

34 There is no pause between subtrials, but between trials. [sent-72, score-0.047]

35 During the pause, the participants had time to focus on the next target symbol, before they initiated the next trial. [sent-73, score-0.2]

36 The target symbol was chosen randomly from the available set of symbols and was presented by the program in order to create a data set of labelled EEG-signals for the subsequent offline analysis. [sent-74, score-0.327]

37 To compensate for slow drifts of the DC potential, in a first step the linear trend of the raw data in each electrode over the duration of a trial was eliminated. [sent-76, score-0.219]
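A minimal sketch of that detrending step, assuming one 1-D array per electrode and trial (`np.polyfit` stands in for whatever least-squares fit was actually used):

```python
import numpy as np

def remove_linear_trend(x):
    """Subtract the least-squares line a*t + b from a signal to
    compensate for slow drift of the DC potential over a trial."""
    t = np.arange(len(x))
    a, b = np.polyfit(t, x, deg=1)
    return x - (a * t + b)

# toy trial: a pure linear drift plus an offset should vanish completely
drift = 0.01 * np.arange(1000) + 5.0
residual = remove_linear_trend(drift)
```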

38 This was separately done for each electrode taking the data of all trials into account. [sent-78, score-0.182]

39 Test and training sets were created by choosing the data according to one symbol as the test set, and the data of the other symbols as the training set, in a crossvalidation scheme. [sent-80, score-0.321]

40 The task of classifying a subtrial for the identification of a target symbol has to be distinguished from the classification of a single epoch for detection of a signal, correlated with oddball-stimuli, which we briefly refer to as a “P300 component” in a simplified manner in the following. [sent-81, score-0.77]

41 In case of using a subtrial to select a symbol, two P300 components have to be detected within epochs: one corresponding to a row-, another to a column-stimulus. [sent-82, score-0.299]

42 The detection algorithm works on the data of an epoch and has to compute a score which reflects the presence of a P300 within that epoch. [sent-83, score-0.22]

43 Therefore, 12 epochs have to be evaluated for the selection of one target symbol. [sent-84, score-0.258]

44 For the P300-detection, we utilized two model-based methods which had been proposed by F&D;, and one completely data-driven method based on Support Vector Machines (SVMs) [11]. [sent-85, score-0.076]

45 For training of the classifiers, we built up sets of epochs containing an equal number of positive and negative examples, i.e. [sent-86, score-0.172]

46 Figure 2: Trials, subtrials and epochs in the course of time (left). [sent-91, score-1.761]

47 Area calculates the surface in the P300-window; Peak picking calculates differences between peaks. [sent-93, score-0.158]

48 The first model-based method uses the area in the P300-window as its score, as shown in Fig. [sent-94, score-0.048]

49 Hyperparameters of the model-based methods were the boundaries of the P300-window. [sent-96, score-0.082]

50 They were selected regarding the average of epochs containing the P300 by taking the boundaries of the largest area. [sent-97, score-0.202]

51 When using SVMs, it is not clear what measure to take as the score of an epoch. [sent-101, score-0.048]

52 However, a recent approach to learning of discriminative densities [7] suggests an interpretation of the usual discrimination function for SVMs with positive kernels in terms of scaled density differences. [sent-103, score-0.072]

53 This finding provides us with a well-motivated score of an epoch: with x as the data vector of an epoch and y as the corresponding class label, which is positive/negative for epochs with/without target stimulus, the SVM-score is given by the discriminant function s(x) = Σ_i α_i y_i k(x_i, x) + b. [sent-104, score-0.548]
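Under that interpretation, the per-epoch score is simply the signed value of the SVM decision function; a sketch with scikit-learn standing in for the implementation (toy Gaussian data, not the actual EEG features):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# toy "epochs": positive class (with P300) shifted against negative class
X = np.vstack([rng.normal(1.0, 1.0, size=(50, 10)),
               rng.normal(-1.0, 1.0, size=(50, 10))])
y = np.array([1] * 50 + [-1] * 50)

clf = SVC(kernel="rbf").fit(X, y)
# one signed score per epoch; the sign gives the class, the magnitude a confidence
scores = clf.decision_function(X)
```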

54 Because EEG-data possess a very poor signal-to-noise ratio (SNR), identification of the target symbol from a single subtrial is usually not reliable enough to achieve a reasonable classification rate. [sent-108, score-0.551]

55 Therefore, several subtrials have to be combined for classification, slowing down the transfer rate. [sent-109, score-0.63]

56 Thus, an important goal is to decrease the amount of subtrials which have to be combined for a satisfactory classification rate. [sent-110, score-0.382]

57 Therefore, we tested a method for certain -combinations of subtrials in the following way: different series of successive subtrials were taken out of a test set and the corresponding single classifications were combined as explained below. [sent-112, score-0.763]

58 Thereby, the test series contained only subtrials belonging to identical symbols and these were combined in their original temporal order. [sent-113, score-0.485]

59 In contrast, Farwell & Donchin randomly chose samples from a test set, built from subtrials taken from different trials and belonging to different symbols. [sent-114, score-0.402]

60 Based on the data of subtrials, one has to choose a row and a column in order to identify the target symbol, i.e. [sent-118, score-0.116]

61 Therefore, in a first step, the single scores of the epochs corresponding to the stimuli associated with a given row were summed up over the subtrials to a total score for that row. [sent-121, score-0.619]
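Sketched concretely (the array shapes and function name are assumptions), accumulating epoch scores over subtrials and taking the best-scoring row and column:

```python
import numpy as np

def select_symbol(row_scores, col_scores):
    """Choose (row, column) from per-subtrial epoch scores.

    row_scores, col_scores: shape (n_subtrials, 6), one SVM score per
    row/column stimulus and subtrial.  Scores are summed over the
    subtrials and the largest total wins."""
    row = int(np.argmax(row_scores.sum(axis=0)))
    col = int(np.argmax(col_scores.sum(axis=0)))
    return row, col

# two noisy subtrials that agree on row 2 and column 4
rows = np.array([[0.0, 0.0, 1.2, 0.0, 0.0, 0.0],
                 [0.0, 0.3, 0.9, 0.0, 0.0, 0.0]])
cols = np.array([[0.0, 0.0, 0.0, 0.0, 1.1, 0.0],
                 [0.0, 0.0, 0.0, 0.2, 0.8, 0.0]])
choice = select_symbol(rows, cols)
```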

62 Equivalent steps were performed to choose the target column. [sent-123, score-0.086]

63 Based on these decisions the target symbol was finally selected in accordance with the presented matrix. [sent-124, score-0.282]

64 Second, further single electrodes were taken as input sources. [sent-128, score-0.19]

65 This revealed information about interesting scalp positions for recording a P300 and indicated which channels may contain a useful signal. [sent-129, score-0.143]

66 Third, the SVM classification rate with respect to epochs was improved by increasing the data-space. [sent-130, score-0.172]

67 Therefore, the input vector for the classifier was extended by combining data from the same epoch but from different electrodes. [sent-131, score-0.172]
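The augmentation amounts to flattening one epoch's channels into a single feature vector; the shapes below (10 electrodes × 120 samples, i.e. 600 ms at 200 Hz) follow the setup described above, while the epoch values themselves are toy data:

```python
import numpy as np

# hypothetical epoch: 10 electrodes x 120 samples (600 ms at 200 Hz)
epoch = np.arange(10 * 120, dtype=float).reshape(10, 120)

# data-space augmentation: concatenate all channels into one classifier input
feature_vector = epoch.reshape(-1)
```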

68 These tests indicated that the best classification rates could be achieved using as detection method an SVM with all ten electrodes as input sources. [sent-132, score-0.52]

69 Since the results of the first three steps were established based on the data of one initial experiment with only one participant, we evaluated the generality of these techniques by testing different subjects and BCI parameters. [sent-133, score-0.05]

70 Finally, the BCI performance in terms of attainable communication rates is estimated from these analyses. [sent-134, score-0.184]
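The scraped text does not show which rate formula was used to convert accuracies into bits/min; the Wolpaw bit rate, standard in BCI work, is one plausible candidate and is sketched below as an assumption, not a quotation from the paper:

```python
import math

def bits_per_selection(n, p):
    """Wolpaw bit rate: information per selection among n symbols at
    classification accuracy p, with errors spread uniformly over the
    remaining n-1 symbols."""
    if p >= 1.0:
        return math.log2(n)
    if p <= 0.0:
        return 0.0
    return (math.log2(n) + p * math.log2(p)
            + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))

def bits_per_minute(n, p, seconds_per_selection):
    """Scale the per-selection rate by the selections made per minute."""
    return bits_per_selection(n, p) * 60.0 / seconds_per_selection
```

For the 6×6 matrix, n = 36, so a perfect selection carries log2(36) ≈ 5.17 bits.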

71 Method comparison using the Pz electrode as input source. [sent-135, score-0.133]

72 All four methods were applied to the data of one initial experiment with an ISI of 500ms and 3 subtrials per trial. [sent-136, score-0.353]

73 Figure 3 presents the classification rates of up to 10 subtrials. [sent-137, score-0.184]

74 The SVM method achieved best performance; its epoch classification rate was 76. [sent-138, score-0.204]

75 0) in a 10-fold crossvalidation with about 380 subtrial samples in the training sets, and about 40 in the test sets. [sent-140, score-0.433]

76 Of each subtrial in the training set, 4 epochs (2 with, 2 without a P300) were taken as training samples, whereas all 12 epochs of the subtrials of the test set were classified. [sent-141, score-0.996]

77 For each training set, hyperparameters were selected by another 3-fold crossvalidation on this set. [sent-142, score-0.137]

78 3 For a higher number of subtrial combinations, subtrials from different trials had to be combined. [sent-143, score-0.701]

79 However, real-world application of this BCI does not require such combinations, given the finally achieved transfer rates reported in section 3. [sent-144, score-0.464]

80 Figure 3: (left) Method comparison on the Pz electrode: The three techniques were applied to the data of the initial experiment. [sent-146, score-0.05]

81 The results of the Peak picking and SVM method are shown in Figure 3. [sent-151, score-0.082]

82 The SVM is able to extract useful information from all ten electrodes, whereas the Peak picking performance varies for different scalp positions. [sent-152, score-0.18]

83 Especially, the electrodes over the visual cortex areas OZ, OR and OL are useless for the model-based techniques, as the same characteristics are revealed by tests with the Area method. [sent-153, score-0.255]

84 While Farwell & Donchin used only one electrode for data-analysis, we extended the data-space by using larger numbers of electrodes. [sent-155, score-0.133]

85 We calculated classification rates for Pz alone, three, seven, and ten electrodes. [sent-156, score-0.235]

86 A signal correlated with oddball-stimuli was classified at rates of 76. [sent-157, score-0.184]

87 These rates were calculated with 850 positive and 850 negative epoch samples and a 3-fold crossvalidation. [sent-162, score-0.356]

88 Applying data-space augmentation for classification to infer symbols in the matrix results in the classification rates depicted in Figure 3 (right) for an ISI of 500ms. [sent-164, score-0.306]

89 Using ten electrodes simultaneously, combined in one data vector, outperforms lower-dimensional data-spaces. [sent-165, score-0.27]

90 Figure 5: Mean-classification rates (left) and transfer rates (right) for different ISIs. [sent-166, score-0.616]

91 Note that a subtrial takes a specific amount of time. [sent-168, score-0.299]

92 Therefore, the time-dependent transfer rates decrease with the number of subtrials. [sent-169, score-0.432]

93 Means, best and worst classification rates are presented in Figure 5, as well as average and best transfer rates. [sent-174, score-0.432]

94 Using an ISI of 300ms results in slower transfer rates than using an ISI of 150ms. [sent-176, score-0.432]

95 The latter ISI results on average in classifying a symbol after 5. [sent-177, score-0.213]

96 The poorest performer needs 9s to reach this criterion, the best performer achieves an accuracy of 95. [sent-179, score-0.108]

97 4 Conclusion With an application of the data-driven SVM-method to classification of single-channel EEG-signals, we could improve transfer rates as compared with model-based techniques. [sent-185, score-0.432]

98 Furthermore, by increasing the number of EEG-channels, even higher classification and transfer rates could be achieved. [sent-186, score-0.432]

99 This resulted in high transfer rates with a maximum of 84.7 bits/min. [sent-188, score-0.464]

100 Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. [sent-227, score-0.194]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('subtrials', 0.353), ('subtrial', 0.299), ('transfer', 0.248), ('farwell', 0.245), ('classi', 0.201), ('electrodes', 0.19), ('donchin', 0.189), ('rates', 0.184), ('isi', 0.181), ('epoch', 0.172), ('epochs', 0.172), ('symbol', 0.166), ('pz', 0.142), ('electrode', 0.133), ('participants', 0.114), ('bci', 0.108), ('cation', 0.105), ('svm', 0.089), ('target', 0.086), ('picking', 0.082), ('oz', 0.082), ('swda', 0.082), ('ol', 0.08), ('crossvalidation', 0.08), ('utilized', 0.076), ('symbols', 0.075), ('birbaumer', 0.071), ('bler', 0.071), ('kotchoubey', 0.071), ('pfurtscheller', 0.071), ('bielefeld', 0.071), ('stimulus', 0.07), ('clinical', 0.065), ('peak', 0.065), ('device', 0.06), ('svms', 0.057), ('fz', 0.054), ('mentally', 0.054), ('performer', 0.054), ('prosthesis', 0.054), ('rehabilitation', 0.054), ('trial', 0.054), ('utilizing', 0.052), ('ten', 0.051), ('ine', 0.05), ('techniques', 0.05), ('trials', 0.049), ('score', 0.048), ('bcis', 0.047), ('augmentation', 0.047), ('blankertz', 0.047), ('ghanayim', 0.047), ('hinterberger', 0.047), ('instructed', 0.047), ('perelmouter', 0.047), ('scalp', 0.047), ('taub', 0.047), ('wolpaw', 0.047), ('helge', 0.047), ('meinicke', 0.047), ('twellmann', 0.047), ('cz', 0.047), ('highlighting', 0.047), ('participant', 0.047), ('pause', 0.047), ('classifying', 0.047), ('brain', 0.045), ('cations', 0.045), ('onsets', 0.043), ('mental', 0.043), ('people', 0.041), ('eeg', 0.04), ('penalties', 0.04), ('accelerate', 0.04), ('area', 0.039), ('calculates', 0.038), ('discriminative', 0.037), ('nally', 0.036), ('dence', 0.036), ('densities', 0.035), ('realization', 0.033), ('revealed', 0.033), ('highlighted', 0.033), ('rare', 0.033), ('setup', 0.033), ('tests', 0.032), ('positions', 0.032), ('resulted', 0.032), ('achieved', 0.032), ('slow', 0.032), ('indicated', 0.031), ('neurophysiology', 0.031), ('selected', 0.03), ('row', 0.03), ('combined', 0.029), ('realize', 0.029), ('tried', 0.028), ('series', 0.028), ('fast', 0.028), 
('hyperparameters', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 108 nips-2002-Improving Transfer Rates in Brain Computer Interfacing: A Case Study

Author: Peter Meinicke, Matthias Kaper, Florian Hoppe, Manfred Heumann, Helge Ritter

Abstract: In this paper we present results of a study on brain computer interfacing. We adopted an approach of Farwell & Donchin [4], which we tried to improve in several aspects. The main objective was to improve the transfer rates based on offline analysis of EEG-data but within a more realistic setup closer to an online realization than in the original studies. The objective was achieved along two different tracks: on the one hand we used state-of-the-art machine learning techniques for signal classification and on the other hand we augmented the data space by using more electrodes for the interface. For the classification task we utilized SVMs and, as motivated by recent findings on the learning of discriminative densities, we accumulated the values of the classification function in order to combine several classifications, which finally led to significantly improved rates as compared with techniques applied in the original work. In combination with the data space augmentation, we achieved competitive transfer rates at an average of 50.5 bits/min and with a maximum of 84.7 bits/min.

2 0.18494698 55 nips-2002-Combining Features for BCI

Author: Guido Dornhege, Benjamin Blankertz, Gabriel Curio, Klaus-Robert Müller

Abstract: Recently, interest is growing to develop an effective communication interface connecting the human brain to a computer, the ’Brain-Computer Interface’ (BCI). One motivation of BCI research is to provide a new communication channel substituting normal motor output in patients with severe neuromuscular disabilities. In the last decade, various neurophysiological cortical processes, such as slow potential shifts, movement related potentials (MRPs) or event-related desynchronization (ERD) of spontaneous EEG rhythms, were shown to be suitable for BCI, and, consequently, different independent approaches of extracting BCI-relevant EEG-features for single-trial analysis are under investigation. Here, we present and systematically compare several concepts for combining such EEG-features to improve the single-trial classification. Feature combinations are evaluated on movement imagination experiments with 3 subjects where EEG-features are based on either MRPs or ERD, or both. Those combination methods that incorporate the assumption that the single EEG-features are physiologically mutually independent outperform the plain method of ’adding’ evidence where the single-feature vectors are simply concatenated. These results strengthen the hypothesis that MRP and ERD reflect at least partially independent aspects of cortical processes and open a new perspective to boost BCI effectiveness.

3 0.14211817 68 nips-2002-Discriminative Densities from Maximum Contrast Estimation

Author: Peter Meinicke, Thorsten Twellmann, Helge Ritter

Abstract: We propose a framework for classifier design based on discriminative densities for representation of the differences of the class-conditional distributions in a way that is optimal for classification. The densities are selected from a parametrized set by constrained maximization of some objective function which measures the average (bounded) difference, i.e. the contrast between discriminative densities. We show that maximization of the contrast is equivalent to minimization of an approximation of the Bayes risk. Therefore using suitable classes of probability density functions, the resulting maximum contrast classifiers (MCCs) can approximate the Bayes rule for the general multiclass case. In particular for a certain parametrization of the density functions we obtain MCCs which have the same functional form as the well-known Support Vector Machines (SVMs). We show that MCC-training in general requires some nonlinear optimization but under certain conditions the problem is concave and can be tackled by a single linear program. We indicate the close relation between SVM- and MCC-training and in particular we show that Linear Programming Machines can be viewed as an approximate realization of MCCs. In the experiments on benchmark data sets, the MCC shows a competitive classification performance.

4 0.12010867 88 nips-2002-Feature Selection and Classification on Matrix Data: From Large Margins to Small Covering Numbers

Author: Sepp Hochreiter, Klaus Obermayer

Abstract: We investigate the problem of learning a classification task for datasets which are described by matrices. Rows and columns of these matrices correspond to objects, where row and column objects may belong to different sets, and the entries in the matrix express the relationships between them. We interpret the matrix elements as being produced by an unknown kernel which operates on object pairs and we show that - under mild assumptions - these kernels correspond to dot products in some (unknown) feature space. Minimizing a bound for the generalization error of a linear classifier which has been obtained using covering numbers we derive an objective function for model selection according to the principle of structural risk minimization. The new objective function has the advantage that it allows the analysis of matrices which are not positive definite, and not even symmetric or square. We then consider the case that row objects are interpreted as features. We suggest an additional constraint, which imposes sparseness on the row objects and show, that the method can then be used for feature selection. Finally, we apply this method to data obtained from DNA microarrays, where “column” objects correspond to samples, “row” objects correspond to genes and matrix elements correspond to expression levels. Benchmarks are conducted using standard one-gene classification and support vector machines and K-nearest neighbors after standard feature selection. Our new method extracts a sparse set of genes and provides superior classification results. 1

5 0.1179772 59 nips-2002-Constraint Classification for Multiclass Classification and Ranking

Author: Sariel Har-Peled, Dan Roth, Dav Zimak

Abstract: The constraint classification framework captures many flavors of multiclass classification including winner-take-all multiclass classification, multilabel classification and ranking. We present a meta-algorithm for learning in this framework that learns via a single linear classifier in high dimension. We discuss distribution independent as well as margin-based generalization bounds and present empirical and theoretical evidence showing that constraint classification benefits over existing methods of multiclass classification.

6 0.11606421 24 nips-2002-Adaptive Scaling for Feature Selection in SVMs

7 0.10026047 86 nips-2002-Fast Sparse Gaussian Process Methods: The Informative Vector Machine

8 0.098140053 196 nips-2002-The RA Scanner: Prediction of Rheumatoid Joint Inflammation Based on Laser Imaging

9 0.09132643 52 nips-2002-Cluster Kernels for Semi-Supervised Learning

10 0.091211818 92 nips-2002-FloatBoost Learning for Classification

11 0.087295197 45 nips-2002-Boosted Dyadic Kernel Discriminants

12 0.084304564 187 nips-2002-Spikernels: Embedding Spiking Neurons in Inner-Product Spaces

13 0.078745782 19 nips-2002-Adapting Codes and Embeddings for Polychotomies

14 0.078416735 128 nips-2002-Learning a Forward Model of a Reflex

15 0.075437225 72 nips-2002-Dyadic Classification Trees via Structural Risk Minimization

16 0.073225446 62 nips-2002-Coulomb Classifiers: Generalizing Support Vector Machines via an Analogy to Electrostatic Systems

17 0.073218293 21 nips-2002-Adaptive Classification by Variational Kalman Filtering

18 0.069718018 145 nips-2002-Mismatch String Kernels for SVM Protein Classification

19 0.067137122 120 nips-2002-Kernel Design Using Boosting

20 0.063914888 28 nips-2002-An Information Theoretic Approach to the Functional Classification of Neurons


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.202), (1, -0.031), (2, 0.096), (3, -0.053), (4, 0.158), (5, -0.052), (6, -0.034), (7, -0.092), (8, 0.097), (9, 0.037), (10, -0.1), (11, 0.132), (12, 0.048), (13, 0.046), (14, 0.107), (15, -0.124), (16, -0.011), (17, 0.061), (18, -0.022), (19, -0.037), (20, 0.023), (21, -0.027), (22, 0.051), (23, -0.032), (24, 0.006), (25, 0.001), (26, -0.01), (27, -0.107), (28, -0.053), (29, -0.03), (30, -0.047), (31, -0.029), (32, 0.014), (33, -0.054), (34, 0.067), (35, 0.031), (36, 0.016), (37, -0.007), (38, -0.037), (39, 0.008), (40, -0.075), (41, -0.001), (42, 0.061), (43, 0.048), (44, 0.033), (45, 0.056), (46, 0.105), (47, -0.008), (48, -0.062), (49, -0.2)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93294358 108 nips-2002-Improving Transfer Rates in Brain Computer Interfacing: A Case Study

Author: Peter Meinicke, Matthias Kaper, Florian Hoppe, Manfred Heumann, Helge Ritter

Abstract: In this paper we present results of a study on brain computer interfacing. We adopted an approach of Farwell & Donchin [4], which we tried to improve in several aspects. The main objective was to improve the transfer rates based on offline analysis of EEG-data but within a more realistic setup closer to an online realization than in the original studies. The objective was achieved along two different tracks: on the one hand we used state-of-the-art machine learning techniques for signal classification and on the other hand we augmented the data space by using more electrodes for the interface. For the classification task we utilized SVMs and, as motivated by recent findings on the learning of discriminative densities, we accumulated the values of the classification function in order to combine several classifications, which finally led to significantly improved rates as compared with techniques applied in the original work. In combination with the data space augmentation, we achieved competitive transfer rates at an average of 50.5 bits/min and with a maximum of 84.7 bits/min.

2 0.72725862 55 nips-2002-Combining Features for BCI

Author: Guido Dornhege, Benjamin Blankertz, Gabriel Curio, Klaus-Robert Müller

Abstract: Recently, interest is growing to develop an effective communication interface connecting the human brain to a computer, the ’Brain-Computer Interface’ (BCI). One motivation of BCI research is to provide a new communication channel substituting normal motor output in patients with severe neuromuscular disabilities. In the last decade, various neurophysiological cortical processes, such as slow potential shifts, movement related potentials (MRPs) or event-related desynchronization (ERD) of spontaneous EEG rhythms, were shown to be suitable for BCI, and, consequently, different independent approaches of extracting BCI-relevant EEG-features for single-trial analysis are under investigation. Here, we present and systematically compare several concepts for combining such EEG-features to improve the single-trial classification. Feature combinations are evaluated on movement imagination experiments with 3 subjects where EEG-features are based on either MRPs or ERD, or both. Those combination methods that incorporate the assumption that the single EEG-features are physiologically mutually independent outperform the plain method of ’adding’ evidence where the single-feature vectors are simply concatenated. These results strengthen the hypothesis that MRP and ERD reflect at least partially independent aspects of cortical processes and open a new perspective to boost BCI effectiveness.

3 0.71664965 68 nips-2002-Discriminative Densities from Maximum Contrast Estimation

Author: Peter Meinicke, Thorsten Twellmann, Helge Ritter

Abstract: We propose a framework for classifier design based on discriminative densities for representation of the differences of the class-conditional distributions in a way that is optimal for classification. The densities are selected from a parametrized set by constrained maximization of some objective function which measures the average (bounded) difference, i.e. the contrast between discriminative densities. We show that maximization of the contrast is equivalent to minimization of an approximation of the Bayes risk. Therefore using suitable classes of probability density functions, the resulting maximum contrast classifiers (MCCs) can approximate the Bayes rule for the general multiclass case. In particular for a certain parametrization of the density functions we obtain MCCs which have the same functional form as the well-known Support Vector Machines (SVMs). We show that MCC-training in general requires some nonlinear optimization but under certain conditions the problem is concave and can be tackled by a single linear program. We indicate the close relation between SVM- and MCC-training and in particular we show that Linear Programming Machines can be viewed as an approximate realization of MCCs. In the experiments on benchmark data sets, the MCC shows a competitive classification performance.

4 0.64429861 196 nips-2002-The RA Scanner: Prediction of Rheumatoid Joint Inflammation Based on Laser Imaging

Author: Anton Schwaighofer, Volker Tresp, Peter Mayer, Alexander K. Scheel, Gerhard A. Müller

Abstract: We describe the RA scanner, a novel system for the examination of patients suffering from rheumatoid arthritis. The RA scanner is based on a novel laser-based imaging technique which is sensitive to the optical characteristics of finger joint tissue. Based on the laser images, finger joints are classified according to whether the inflammatory status has improved or worsened. To perform the classification task, various linear and kernel-based systems were implemented and their performances were compared. Special emphasis was put on measures to reliably perform parameter tuning and evaluation, since only a very small data set was available. Based on the results presented in this paper, it was concluded that the RA scanner permits a reliable classification of pathological finger joints, thus paving the way for a further development from prototype to product stage.
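The abstract stresses reliable parameter tuning and evaluation on a very small data set; nested cross-validation is one standard safeguard for exactly that situation. A minimal sketch, not the authors' exact protocol: the classifier (`fit`/`predict`), the fold counts, and the parameter grid are all placeholders.

```python
import numpy as np

def nested_cv_accuracy(X, y, fit, predict, params, n_outer=5, n_inner=4, seed=0):
    """Nested cross-validation: an inner loop selects hyperparameters,
    an outer loop estimates generalisation accuracy, so no test fold
    ever influences the parameter choice."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    outer = np.array_split(idx, n_outer)
    accs = []
    for k, test in enumerate(outer):
        train = np.concatenate([f for j, f in enumerate(outer) if j != k])
        best, best_score = None, -1.0
        for p in params:                      # inner CV over settings
            inner = np.array_split(train, n_inner)
            scores = []
            for m, val in enumerate(inner):
                tr = np.concatenate([f for j, f in enumerate(inner) if j != m])
                model = fit(X[tr], y[tr], p)
                scores.append(np.mean(predict(model, X[val]) == y[val]))
            if np.mean(scores) > best_score:
                best, best_score = p, float(np.mean(scores))
        model = fit(X[train], y[train], best)  # refit with chosen setting
        accs.append(np.mean(predict(model, X[test]) == y[test]))
    return float(np.mean(accs))
```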

5 0.63759291 62 nips-2002-Coulomb Classifiers: Generalizing Support Vector Machines via an Analogy to Electrostatic Systems

Author: Sepp Hochreiter, Michael C. Mozer, Klaus Obermayer

Abstract: We introduce a family of classifiers based on a physical analogy to an electrostatic system of charged conductors. The family, called Coulomb classifiers, includes the two best-known support-vector machines (SVMs), the ν–SVM and the C–SVM. In the electrostatics analogy, a training example corresponds to a charged conductor at a given location in space, the classification function corresponds to the electrostatic potential function, and the training objective function corresponds to the Coulomb energy. The electrostatic framework provides not only a novel interpretation of existing algorithms and their interrelationships, but it suggests a variety of new methods for SVMs, including kernels that bridge the gap between polynomial and radial-basis functions, objective functions that do not require positive-definite kernels, and regularization techniques that allow for the construction of an optimal classifier in Minkowski space. Based on the framework, we propose novel SVMs and perform simulation studies to show that they are comparable or superior to standard SVMs. The experiments include classification tasks on data which are represented in terms of their pairwise proximities, where a Coulomb Classifier outperformed standard SVMs.
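The analogy can be made concrete in a few lines: put charge q_i = y_i·α_i on each training point, let the kernel play the role of the interaction law, and the decision function becomes the potential while the quadratic term of the familiar SVM dual becomes the Coulomb energy. A sketch under those standard SVM identifications; the α values below are illustrative, not trained.

```python
import numpy as np

def potential(x, X, y, alpha, kernel):
    """Electrostatic potential at x: each training example is a
    conductor carrying charge y_i * alpha_i, interacting through the
    kernel. sign(potential) gives the predicted class."""
    return sum(a * yi * kernel(xi, x) for xi, yi, a in zip(X, y, alpha))

def coulomb_energy(X, y, alpha, kernel):
    """Total Coulomb energy of the charge configuration: the
    quadratic term 0.5 * q^T K q of the SVM dual objective."""
    q = y * alpha
    K = np.array([[kernel(a, b) for b in X] for a in X])
    return float(0.5 * q @ K @ q)

# Gaussian interaction law (one of many admissible kernels)
rbf = lambda a, b, gamma=1.0: np.exp(-gamma * np.sum((a - b) ** 2))
```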

6 0.62008893 59 nips-2002-Constraint Classification for Multiclass Classification and Ranking

7 0.5376094 92 nips-2002-FloatBoost Learning for Classification

8 0.53703082 45 nips-2002-Boosted Dyadic Kernel Discriminants

9 0.52401537 109 nips-2002-Improving a Page Classifier with Anchor Extraction and Link Analysis

10 0.51700205 24 nips-2002-Adaptive Scaling for Feature Selection in SVMs

11 0.50551331 88 nips-2002-Feature Selection and Classification on Matrix Data: From Large Margins to Small Covering Numbers

12 0.50000888 72 nips-2002-Dyadic Classification Trees via Structural Risk Minimization

13 0.4963595 86 nips-2002-Fast Sparse Gaussian Process Methods: The Informative Vector Machine

14 0.41289562 67 nips-2002-Discriminative Binaural Sound Localization

15 0.40058693 128 nips-2002-Learning a Forward Model of a Reflex

16 0.38716647 162 nips-2002-Parametric Mixture Models for Multi-Labeled Text

17 0.38193184 21 nips-2002-Adaptive Classification by Variational Kalman Filtering

18 0.37677777 149 nips-2002-Multiclass Learning by Probabilistic Embeddings

19 0.37294698 16 nips-2002-A Prototype for Automatic Recognition of Spontaneous Facial Actions

20 0.35392779 145 nips-2002-Mismatch String Kernels for SVM Protein Classification


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(6, 0.02), (11, 0.011), (23, 0.504), (42, 0.047), (54, 0.086), (55, 0.037), (68, 0.011), (74, 0.06), (87, 0.011), (92, 0.025), (98, 0.104)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.87522757 26 nips-2002-An Estimation-Theoretic Framework for the Presentation of Multiple Stimuli

Author: Christian W. Eurich

Abstract: A framework is introduced for assessing the encoding accuracy and the discriminational ability of a population of neurons upon simultaneous presentation of multiple stimuli. Minimal square estimation errors are obtained from a Fisher information analysis in an abstract compound space comprising the features of all stimuli. Even for the simplest case of linear superposition of responses and Gaussian tuning, the symmetries in the compound space are very different from those in the case of a single stimulus. The analysis allows for a quantitative description of attentional effects and can be extended to include neural nonlinearities such as nonclassical receptive fields.
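The Fisher information analysis can be sketched for the simplest setting the abstract mentions: Poisson spiking, Gaussian tuning, and linear superposition of the responses to the individual stimuli, so the compound space has one dimension per stimulus. All numerical parameters below are illustrative; the inverse of J lower-bounds the squared estimation error (Cramér–Rao).

```python
import numpy as np

def fisher_info(stim, centers, sigma=1.0, rate=10.0):
    """Fisher information matrix in the compound stimulus space for a
    population of Poisson neurons with Gaussian tuning curves whose
    mean rate is the linear superposition of the responses to the
    components of `stim`. For Poisson noise,
    J_kl = sum_neurons (df/ds_k)(df/ds_l) / f."""
    stim = np.atleast_1d(np.asarray(stim, dtype=float))
    d = len(stim)
    J = np.zeros((d, d))
    for c in centers:                      # one tuning curve per neuron
        tuning = rate * np.exp(-(stim - c) ** 2 / (2 * sigma ** 2))
        f = tuning.sum()                   # linear superposition of responses
        grads = tuning * (c - stim) / sigma ** 2   # df/ds_k for each stimulus
        J += np.outer(grads, grads) / f
    return J
```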

same-paper 2 0.85983109 108 nips-2002-Improving Transfer Rates in Brain Computer Interfacing: A Case Study

Author: Peter Meinicke, Matthias Kaper, Florian Hoppe, Manfred Heumann, Helge Ritter

Abstract: In this paper we present results of a study on brain computer interfacing. We adopted an approach of Farwell & Donchin [4], which we tried to improve in several aspects. The main objective was to improve the transfer rates based on offline analysis of EEG-data but within a more realistic setup closer to an online realization than in the original studies. The objective was achieved along two different tracks: on the one hand we used state-of-the-art machine learning techniques for signal classification and on the other hand we augmented the data space by using more electrodes for the interface. For the classification task we utilized SVMs and, as motivated by recent findings on the learning of discriminative densities, we accumulated the values of the classification function in order to combine several classifications, which finally led to significantly improved rates as compared with techniques applied in the original work. In combination with the data space augmentation, we achieved competitive transfer rates at an average of 50.5 bits/min and with a maximum of 84.7 bits/min.
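The accumulation scheme the abstract credits for the improved rates can be sketched generically: sum the raw classification-function values (e.g. SVM decision values) for each candidate symbol across repeated presentations, then pick the maximum. The bit-rate helper uses the Wolpaw formula, a common definition of information transfer rate in the BCI literature; whether it is exactly the rate computation used in this study is an assumption.

```python
import numpy as np

def accumulate_decisions(scores):
    """scores: (n_repetitions, n_candidates) raw classification-function
    values for the same unknown symbol. Summing before thresholding
    keeps the confidence information of each single classification."""
    return int(np.argmax(scores.sum(axis=0)))

def bits_per_selection(n, p):
    """Wolpaw-style information transfer per selection for n classes at
    accuracy p; multiply by selections/min to obtain bits/min."""
    if p >= 1.0:
        return float(np.log2(n))
    term = p * np.log2(p) if p > 0 else 0.0
    return float(np.log2(n) + term + (1 - p) * np.log2((1 - p) / (n - 1)))
```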

3 0.85355592 153 nips-2002-Neural Decoding of Cursor Motion Using a Kalman Filter

Author: W. Wu, M. J. Black, Y. Gao, M. Serruya, A. Shaikhouni, J. P. Donoghue, Elie Bienenstock

Abstract: The direct neural control of external devices such as computer displays or prosthetic limbs requires the accurate decoding of neural activity representing continuous movement. We develop a real-time control system using the spiking activity of approximately 40 neurons recorded with an electrode array implanted in the arm area of primary motor cortex. In contrast to previous work, we develop a control-theoretic approach that explicitly models the motion of the hand and the probabilistic relationship between this motion and the mean firing rates of the cells in 70 ms bins. We focus on a realistic cursor control task in which the subject must move a cursor to “hit” randomly placed targets on a computer monitor. Encoding and decoding of the neural data is achieved with a Kalman filter which has a number of advantages over previous linear filtering techniques. In particular, the Kalman filter reconstructions of hand trajectories in off-line experiments are more accurate than previously reported results and the model provides insights into the nature of the neural coding of movement.
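The decode step itself is the textbook Kalman recursion: predict the hand state with a linear motion model, then correct it with the observed firing rates. A minimal sketch; the system matrices here are placeholders (in the paper they are learned from training data) and the dimensions are toy-sized.

```python
import numpy as np

def kalman_step(x, P, z, A, W, H, Q):
    """One decode step: motion model x_t = A x_{t-1} + w (noise cov W),
    observation model z_t = H x_t + q (noise cov Q)."""
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # update with the new observation
    S = H @ P_pred @ H.T + Q
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```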

4 0.67422551 33 nips-2002-Approximate Linear Programming for Average-Cost Dynamic Programming

Author: Benjamin Van Roy, Daniela Pucci de Farias

Abstract: This paper extends our earlier analysis on approximate linear programming as an approach to approximating the cost-to-go function in a discounted-cost dynamic program [6]. In this paper, we consider the average-cost criterion and a version of approximate linear programming that generates approximations to the optimal average cost and differential cost function. We demonstrate that a naive version of approximate linear programming prioritizes approximation of the optimal average cost and that this may not be well-aligned with the objective of deriving a policy with low average cost. For that, the algorithm should aim at producing a good approximation of the differential cost function. We propose a twophase variant of approximate linear programming that allows for external control of the relative accuracy of the approximation of the differential cost function over different portions of the state space via state-relevance weights. Performance bounds suggest that the new algorithm is compatible with the objective of optimizing performance and provide guidance on appropriate choices for state-relevance weights.
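The LP underlying this approach can be written down exactly for a tiny MDP: maximize the average cost lam subject to the Bellman inequalities lam + h(x) ≤ g(x,a) + Σ P h. The paper's approximate version restricts the differential cost h to a span of basis functions and weights the constraints by state-relevance weights; the sketch below omits both and solves the exact problem, so it is illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

def average_cost_lp(P, g):
    """Exact average-cost LP on a small MDP. P is a list of per-action
    transition matrices, g[x, a] the one-step cost. Variables are
    [lam, h_0, ..., h_{n-1}]; maximising lam yields the optimal
    average cost and a differential cost function h."""
    n_actions, n = len(P), P[0].shape[0]
    c = np.zeros(n + 1); c[0] = -1.0            # minimise -lam
    A_ub, b_ub = [], []
    for a in range(n_actions):
        for x in range(n):
            row = np.zeros(n + 1)
            row[0] = 1.0                        # lam
            row[1 + x] += 1.0                   # + h(x)
            row[1:] -= P[a][x]                  # - sum_x' P(x'|x,a) h(x')
            A_ub.append(row); b_ub.append(g[x, a])
    # h is only defined up to an additive constant: pin h_0 = 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=[np.eye(n + 1)[1]], b_eq=[0.0],
                  bounds=[(None, None)] * (n + 1))
    return res.x[0], res.x[1:]
```

For a two-state chain with uniform transitions and costs 1 and 3, the LP recovers the average cost 2 and the differential costs (0, 2).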

5 0.49625188 55 nips-2002-Combining Features for BCI

Author: Guido Dornhege, Benjamin Blankertz, Gabriel Curio, Klaus-Robert Müller

Abstract: Recently, interest has been growing in developing an effective communication interface connecting the human brain to a computer, the ’Brain-Computer Interface’ (BCI). One motivation of BCI research is to provide a new communication channel substituting normal motor output in patients with severe neuromuscular disabilities. In the last decade, various neurophysiological cortical processes, such as slow potential shifts, movement related potentials (MRPs) or event-related desynchronization (ERD) of spontaneous EEG rhythms, were shown to be suitable for BCI, and, consequently, different independent approaches to extracting BCI-relevant EEG-features for single-trial analysis are under investigation. Here, we present and systematically compare several concepts for combining such EEG-features to improve the single-trial classification. Feature combinations are evaluated on movement imagination experiments with 3 subjects where EEG-features are based on either MRPs or ERD, or both. Those combination methods that incorporate the assumption that the single EEG-features are physiologically mutually independent outperform the plain method of ’adding’ evidence where the single-feature vectors are simply concatenated. These results strengthen the hypothesis that MRP and ERD reflect at least partially independent aspects of cortical processes and open a new perspective to boost BCI effectiveness.
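One concrete instance of an independence-based combination is the product rule: if the MRP- and ERD-based features are conditionally independent given the class, per-feature posteriors multiply (equivalently, log odds add), in contrast to concatenating the feature vectors and training a single classifier. A two-class, equal-priors sketch; the paper compares several such schemes, and this is the generic one, not necessarily theirs.

```python
import numpy as np

def combine_posteriors(pa, pb):
    """Product rule for two conditionally independent features:
    pa, pb are per-feature posteriors for class +1 (equal priors);
    returns the combined posterior for class +1."""
    num = pa * pb
    return num / (num + (1.0 - pa) * (1.0 - pb))

def combine_log_odds(la, lb):
    """Equivalent formulation: under conditional independence the
    joint log odds is the sum of the per-feature log odds."""
    return la + lb
```

Agreeing features reinforce each other (two posteriors of 0.8 combine to about 0.94), while conflicting ones cancel; the log-odds form makes clear this is additive evidence, not feature concatenation.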

6 0.48232734 148 nips-2002-Morton-Style Factorial Coding of Color in Primary Visual Cortex

7 0.46209237 44 nips-2002-Binary Tuning is Optimal for Neural Rate Coding with High Temporal Resolution

8 0.45419288 5 nips-2002-A Digital Antennal Lobe for Pattern Equalization: Analysis and Design

9 0.44857937 123 nips-2002-Learning Attractor Landscapes for Learning Motor Primitives

10 0.44630063 184 nips-2002-Spectro-Temporal Receptive Fields of Subthreshold Responses in Auditory Cortex

11 0.44140658 3 nips-2002-A Convergent Form of Approximate Policy Iteration

12 0.43862343 155 nips-2002-Nonparametric Representation of Policies and Value Functions: A Trajectory-Based Approach

13 0.43086684 199 nips-2002-Timing and Partial Observability in the Dopamine System

14 0.429748 180 nips-2002-Selectivity and Metaplasticity in a Unified Calcium-Dependent Model

15 0.42524371 141 nips-2002-Maximally Informative Dimensions: Analyzing Neural Responses to Natural Signals

16 0.42351452 43 nips-2002-Binary Coding in Auditory Cortex

17 0.42303547 82 nips-2002-Exponential Family PCA for Belief Compression in POMDPs

18 0.40846288 28 nips-2002-An Information Theoretic Approach to the Functional Classification of Neurons

19 0.4064185 187 nips-2002-Spikernels: Embedding Spiking Neurons in Inner-Product Spaces

20 0.40576431 134 nips-2002-Learning to Take Concurrent Actions