nips nips2006 nips2006-141 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Konrad P. Körding, Joshua B. Tenenbaum, Reza Shadmehr
Abstract: Our motor system changes due to causes that span multiple timescales. For example, muscle response can change because of fatigue, a condition where the disturbance has a fast timescale, or because of disease, where the disturbance is much slower. Here we hypothesize that the nervous system adapts in a way that reflects the temporal properties of such potential disturbances. According to a Bayesian formulation of this idea, movement error results in a credit assignment problem: what timescale is responsible for this disturbance? The adaptation schedule influences the behavior of the optimal learner, changing estimates at different timescales as well as the uncertainty. A system that adapts in this way predicts many properties observed in saccadic gain adaptation. It accurately predicts the timecourses of motor adaptation in cases of partial sensory deprivation and reversals of the adaptation direction.
Reference: text
sentIndex sentText sentNum sentScore
1 Multiple timescales and uncertainty in motor adaptation Konrad P. [sent-1, score-0.925]
2 Abstract: Our motor system changes due to causes that span multiple timescales. [sent-8, score-0.342]
3 For example, muscle response can change because of fatigue, a condition where the disturbance has a fast timescale, or because of disease, where the disturbance is much slower. [sent-9, score-0.792]
4 According to a Bayesian formulation of this idea, movement error results in a credit assignment problem: what timescale is responsible for this disturbance? [sent-11, score-0.3]
5 The adaptation schedule influences the behavior of the optimal learner, changing estimates at different timescales as well as the uncertainty. [sent-12, score-0.67]
6 A system that adapts in this way predicts many properties observed in saccadic gain adaptation. [sent-13, score-0.642]
7 It accurately predicts the timecourses of motor adaptation in cases of partial sensory deprivation and reversals of the adaptation direction. [sent-14, score-0.896]
8 For that reason, without adaptation any changes in the properties of the oculomotor plant would lead to inaccurate saccades [2]. [sent-17, score-0.768]
9 Motor gain is the ratio of actual to desired movement distance. [sent-18, score-0.29]
10 If the motor gain decreases below one, then the nervous system must send a stronger command to produce a movement of the same size. [sent-19, score-0.671]
11 Indeed, it has been observed that if saccades overshoot the target, the gain tends to decrease and if they undershoot, the gain tends to increase. [sent-20, score-0.689]
12 The saccadic jump paradigm [3] is often used to probe such adaptation [4]: while the subject moves its eyes towards a target, the target is moved. [sent-21, score-0.769]
13 To the subject, this is indistinguishable from a change in the properties of the oculomotor plant [5]. [sent-22, score-0.314]
14 Using this paradigm it is possible to probe the mechanism that is normally used to adapt to ongoing changes of the oculomotor plant. [sent-23, score-0.28]
15 1 Disturbances to the motor plant. Properties of the oculomotor plant may change due to a variety of disturbances, such as various kinds of fatigue and disease. [sent-25, score-0.725]
16 Here we model each disturbance as a random walk with a characteristic timescale (Figures 1A and B) over which the disturbance is expected to go away. [sent-27, score-0.635]
17 It seems plausible that disturbances with a short timescale tend to be more variable than those with a long timescale, and we choose σ_τ = c/τ, where c is one of the two free parameters of our model. [sent-34, score-0.461]
18 0.05, which we estimated from the spread of saccade gains over typical periods of 200 saccades, and c = 0.3. [sent-39, score-0.532]
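The generative model described above can be sketched as a small simulation: each disturbance is a mean-reverting random walk that decays toward zero with its characteristic timescale τ, with innovation noise σ_τ = c/τ. The two timescales chosen below and the value c = 0.3 are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

c = 0.3                                  # free noise-scaling parameter (assumed value)
timescales = np.array([10.0, 1000.0])    # fast and slow, in time steps (assumed values)

def simulate_disturbances(n_steps, timescales, c, rng):
    """Simulate one trajectory of all disturbances; returns an (n_steps, K) array."""
    decay = 1.0 - 1.0 / timescales       # per-step decay toward zero
    sigma = c / timescales               # timescale-dependent innovation s.d.
    d = np.zeros((n_steps, len(timescales)))
    for t in range(1, n_steps):
        d[t] = decay * d[t - 1] + sigma * rng.standard_normal(len(timescales))
    return d

d = simulate_disturbances(4000, timescales, c, rng)
gain_disturbance = d.sum(axis=1)         # the observable gain error is the sum
```

The fast component drifts much more per step than the slow one, which is what creates the credit-assignment problem when only the sum is observed.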
19 Inference. Given this explicit model, Bayesian statistics allows us to derive an optimal adaptation strategy. [sent-43, score-0.268]
20 Because the contribution of each timescale can never be known precisely, the Bayesian learner represents what it knows as a probability distribution. [sent-49, score-0.386]
21 The learner thus represents what it knows about the contribution of each timescale as a best estimate, but also keeps a measure of uncertainty around this estimate (Fig 1C). [sent-51, score-0.479]
22 Any point along the +0% gain line is a point where the fast and slow timescale cancel each other. [sent-52, score-0.708]
23 At every timestep the system starts with the belief carried over from the previous timestep (sketched in yellow) and combines it with information from the current saccade (sketched in blue) to form a new estimate (sketched in red). [sent-56, score-0.489]
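This predict-then-combine cycle can be sketched as one Kalman-filter step over the per-timescale disturbance estimates: a single saccade reports only the summed disturbance, so one noisy observation must be shared across timescales. All numeric values below (timescales, noise levels, initial uncertainty) are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

taus = np.array([10.0, 1000.0])
A = np.diag(1.0 - 1.0 / taus)          # each disturbance decays toward zero
Q = np.diag((0.3 / taus) ** 2)         # faster timescales drift more (sigma = c/tau)
H = np.ones((1, len(taus)))            # a saccade's error reports the summed disturbance
R = np.array([[0.05 ** 2]])            # motor/sensory noise on a single saccade (assumed)

def kalman_step(x, P, y=None):
    """One timestep: predict; then, if a saccade error y was observed, update."""
    x, P = A @ x, A @ P @ A.T + Q                 # time passes: estimates shrink,
    if y is not None:                             # uncertainty grows
        S = H @ P @ H.T + R                       # innovation variance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x = x + (K @ (y - H @ x)).ravel()         # combine prior belief with evidence
        P = P - K @ H @ P
    return x, P

x, P = np.zeros(2), np.eye(2) * 1e-4
x, P = kalman_step(x, P, y=np.array([0.30]))      # a saccade with a +30% error
```

After one large error, most of the credit lands on the fast timescale (`x[0]`), because its prior uncertainty grows faster between observations.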
24 (1) When time passes, disturbances can be expected to get smaller but at the same time our uncertainty about them increases. [sent-58, score-0.272]
25 Normally the adaptation mechanism responds to the small drifts of the oculomotor plant, and the estimate from the saccade largely overlaps with both the prior belief and the new belief. [sent-61, score-0.911]
26 When the light is turned off, the estimate of each of the disturbances slowly creeps towards zero. [sent-62, score-0.71]
27 [Figure 1 panel residue: axes for fast disturbance [%] vs. slow disturbance [%]; error and gain traces; prior belief and evidence from saccade over time [saccades]; conditions: normal saccades, in the dark, saccadic jump +30%, washout, reversal.]
28 Figure 1: A generative model for changes in the motor plant and the corresponding optimal inference. [sent-72, score-1.277]
29 B) An example of a system with two timescales (fast and slow), and the resulting gain. [sent-76, score-0.418]
30 The system observes a saccade with an error of +30%. [sent-80, score-0.41]
31 Here we simulated a saccade on every 10th time step of the model. [sent-87, score-0.323]
32 For the darkness case, only saccades 1, 3, and 50 are shown. [sent-89, score-0.387]
33 In a gain increase paradigm, initially most of the error is associated with the fast perturbations. [sent-91, score-0.363]
34 After 30 saccades in the gain increase paradigm, most of the error is associated with slow perturbations. [sent-92, score-0.545]
35 Washout trials that follow gain increase do not return the system to a naive state. [sent-93, score-0.359]
36 Gain decrease following gain increase training will mostly affect the fast system. [sent-95, score-0.363]
37 06 800 400 0 Day 1 Day 2 F Day 5 Bayesian adaptation 1200 0. [sent-98, score-0.268]
38 Starting at saccade number 0, the target jumps to land 30% short of its original position, giving the impression of muscles that are too strong. [sent-108, score-0.397]
39 The gain then decreases until the manipulation is ended at saccade number 1400. [sent-109, score-0.574]
40 Normal saccadic gain change paradigm as in Figure 2; however, now the monkey spends its nights without vision and the paradigm is continued for many days. [sent-113, score-0.791]
41 E) Comparison of the saccadic gain change timecourses obtained by fitting an exponential. [sent-115, score-0.637]
42 F) The same figure as in E) for the Bayesian learner. [sent-116, score-0.363]
43 In the saccadic jump paradigm, the error is much larger than it would be during normal life; the learner first interprets it as a fast change and, as it persists, progressively interprets it as a slow change. [sent-118, score-0.85]
44 When the saccadic jump ends, the fast-timescale estimate quickly goes negative while the slow-timescale estimate slowly approaches zero. [sent-119, score-1.129]
45 In a reversal setting, the fast timescale becomes very negative and the slow timescale goes towards zero. [sent-120, score-0.759]
46 Already with two timescales the optimal learner can thus exhibit a large number of interesting properties. [sent-121, score-0.479]
47 1 Saccadic gain adaptation. In an impressive range of experiments started by McLaughlin [3], investigators have examined how monkeys adapt their saccadic gain. [sent-123, score-0.917]
48 Figure 2A shows how the gain changes over time so that saccades progressively become more precise. [sent-124, score-0.549]
49 The rate of adaptation typically starts fast and then progressively gets slower. [sent-125, score-0.421]
50 This is a classic pattern that is reflected in numerous motor adaptation paradigms [8, 9]. [sent-126, score-0.499]
51 Fast timescale disturbances are assumed to increase and decrease faster than slow timescale disturbances. [sent-128, score-0.858]
52 Between trials, the estimates of the fast disturbances decay quickly, but this decay is smaller at the slower timescales. [sent-132, score-0.402]
53 If the gain change is maintained, the relative contribution of the fast timescales diminishes in comparison to the slow timescales (Fig. [sent-133, score-1.18]
54 Because fast timescales adapt quickly but also decay quickly, while slow timescales adapt and decay slowly, the gain change comes to be driven by progressively slower timescales, producing the transition from initially fast to progressively slower adaptation. [sent-135, score-2.143]
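This credit shift can be sketched by running the optimal learner against a sustained +30% disturbance: the filter first loads the error onto the fast timescale, and with continued training the slow timescale absorbs it, so washout does not start from a naive state. The parameter values here are illustrative assumptions.

```python
import numpy as np

taus = np.array([10.0, 1000.0])
A = np.diag(1.0 - 1.0 / taus)          # per-trial decay of each disturbance estimate
Q = np.diag((0.3 / taus) ** 2)         # faster timescales drift more
H = np.ones((1, 2))                    # error reports the summed disturbance
R = np.array([[0.05 ** 2]])            # observation noise (assumed)

def run(n_trials, y):
    """Track a constant observed disturbance y; return the per-trial estimates."""
    x, P = np.zeros(2), np.eye(2) * 1e-4
    hist = []
    for _ in range(n_trials):
        x, P = A @ x, A @ P @ A.T + Q  # predict: estimates shrink, uncertainty grows
        S = H @ P @ H.T + R
        K = (P @ H.T) / S              # Kalman gain (scalar observation)
        x = x + (K * (y - H @ x)).ravel()
        P = P - K @ H @ P
        hist.append(x.copy())
    return np.array(hist)

hist = run(400, 0.30)
early_slow_share = hist[5, 1] / hist[5].sum()    # after a few saccades
late_slow_share = hist[-1, 1] / hist[-1].sum()   # after prolonged training
```

Early in training the fast component dominates the compensation; late in training the slow component carries most of it, mirroring the transition described above.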
55 2 Saccadic gain adaptation after sensory deprivation. The effects of a wide range of timescales and uncertainty about the causes of changes of the oculomotor plant will largely be hidden if experiments are of relatively short duration and no uncertainty is produced. [sent-137, score-1.431]
56 However, in a recent experiment Robinson et al. analyzed saccadic gain adaptation [7] in a way that allowed insight into many timescales as well as into how the nervous system deals with uncertainty. [sent-138, score-1.413]
57 The monkey adapted for about 1500 saccades every day for 21 consecutive days. [sent-140, score-0.404]
58 Because of the long duration many different timescales are involved in this process. [sent-141, score-0.354]
59 (1) There are several timescales during adaptation: there is a fast (100 saccades) and a slow (10 days) timescale. [sent-147, score-0.527]
60 During the breaks that are paired with darkness, the system decays back to a gain of zero, as predicted by the model. [sent-150, score-0.515]
61 The finding that the Bayesian learner seems to change faster than the monkey may be related to the context being somewhat different from that in the Hopp and Fuchs experiment. [sent-154, score-0.277]
62 The system seems to represent uncertainty and clearly represents the way the motor plant is expected to change in the absence of feedback. [sent-155, score-0.561]
63 It has been proposed that the nervous system may use a set of integrators where one is learning fast and the other is learning slowly [10, 11]. [sent-156, score-0.277]
64 3 Gain adaptation with reversals. Kojima et al. [12] reported a host of surprising behavioral results during saccade adaptation. [sent-160, score-0.72]
65 In these experiments the adaptation direction was changed 3 times. [sent-161, score-0.268]
66 The saccadic gain was initially increased, then decreased until it reached unity, and finally increased again (Figure 3A). [sent-162, score-0.584]
67 The saccadic gain increased faster during the second gain-up session than during the first (Figure 3B). [sent-163, score-0.66]
68 The Bayesian learner shows a similar phenomenon and provides a rationale: At the end of the first gain-up session for the Bayesian learner, most of the gain change is associated with a slow timescale (Figure 3C). [sent-165, score-0.845]
69 In the subsequent gain-down session, errors produce rapid changes in the fast timescales so that by the time the gain estimate reaches unity, the fast and slow timescales have opposite estimates. [sent-166, score-1.313]
70 In the subsequent gain-up session, the rate of re-adaptation is faster than initial adaptation because the fast timescales decay upwards in between trials (Figure 3D). [sent-168, score-0.796]
71 After about 100 saccades, the speed advantage from the low frequencies is over and turns into a slowed increase due to the decreased error term. [sent-169, score-0.461]
72 In a second experiment, Kojima et al. [12] found that saccade gains could change even though the animal was provided with no feedback to guide its performance. [sent-170, score-0.502]
73 When they came out of the dark, their gain had spontaneously increased (Figure 3E). [sent-173, score-0.322]
74 However, the estimates are still affected by their timescales of change: the estimate moves up fast along the fast timescales but slowly along the slow timescales. [sent-176, score-1.075]
75 At the start of the darkness period there is a positive upward and a negative downward disturbance inferred by the system (Figure 1C, reversal). [sent-177, score-0.451]
76 Consequently, by the end of the dark period, the estimate has become gain-up, the gain learned in the initial session. [sent-178, score-0.317]
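The drift in the dark can be sketched with prediction-only steps (no feedback means no measurement update). Starting from a hypothetical post-reversal state with a negative fast estimate and a positive slow estimate, the fast estimate decays quickly while the slow one persists, so the summed gain estimate first rises toward gain-up before slowly decaying. The starting values are illustrative assumptions.

```python
import numpy as np

taus = np.array([10.0, 1000.0])
decay = 1.0 - 1.0 / taus      # per-step decay of each estimate toward zero

# Hypothetical state at the end of gain-down training: the fast and slow
# estimates roughly cancel, leaving the net gain estimate near zero.
x = np.array([-0.20, 0.20])   # fast negative, slow positive (assumed values)

net_gain = []
for _ in range(50):           # time steps in the dark: prediction only
    x = decay * x             # both estimates creep toward zero
    net_gain.append(x.sum())  # but the fast one gets there much sooner
```

Because the negative fast estimate vanishes first, `net_gain` rises well above zero before the slow estimate's decay brings it back down, reproducing spontaneous recovery without any feedback.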
77 Updating without feedback leads the system to infer unobserved dynamics of the oculomotor plant and these dynamics lead to the observed changes. [sent-180, score-0.356]
78 A) The gain is first adapted up until it reaches about 1. [sent-192, score-0.3]
79 Once the gain reaches 1 again, it is adapted up with a positive target jump. [sent-195, score-0.415]
80 B) The speed of adaptation is compared between the first adaptation and the second positive adaptation. [sent-197, score-0.536]
81 The gain used by the monkey is changing during this interval. [sent-201, score-0.353]
82 3 Discussion. Traditional models of adaptation simply change motor commands to reduce prediction errors [13]. [sent-203, score-0.57]
83 (1) The system represents its knowledge of the properties of the motor system at different timescales and explicitly models how these disturbances evolve over time. [sent-205, score-0.913]
84 (3) It formulates the computational aim of adaptation in terms of optimally predicting ongoing changes in the properties of the motor plant. [sent-207, score-0.567]
85 Two timescales had been proposed in the context of connectionist learning theory [11]. [sent-210, score-0.354]
86 [10] proposed a model where the motor system responds to error with two systems: one that is highly sensitive to error but rapidly forgets and another that has poor sensitivity to error but has strong retention. [sent-212, score-0.295]
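A two-rate learner in the spirit of [10] can be sketched as follows: one process that is highly error-sensitive but forgets quickly, and one that learns weakly but retains well. The retention (A) and learning-rate (B) values are illustrative assumptions, not the published fits.

```python
import numpy as np

A_fast, B_fast = 0.92, 0.20    # forgets quickly, learns strongly (assumed values)
A_slow, B_slow = 0.996, 0.02   # retains well, learns weakly (assumed values)

def two_rate(perturbation):
    """Run the two-rate model on a perturbation sequence; return (fast, slow) states."""
    xf = xs = 0.0
    out = []
    for p in perturbation:
        e = p - (xf + xs)                 # error experienced on this trial
        xf = A_fast * xf + B_fast * e     # error-sensitive, fast-forgetting process
        xs = A_slow * xs + B_slow * e     # insensitive, strongly retaining process
        out.append((xf, xs))
    return np.array(out)

states = two_rate(np.full(300, 0.30))     # sustained +30% disturbance
```

As with the Bayesian learner, the fast process carries the adaptation at first and the slow process takes over with continued training; the difference is that here the rates are fixed rather than derived from a generative model of disturbances.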
87 Even the earliest studies of oculomotor adaptation realized that the objective of adaptation is to allow precise movement with a relentlessly changing motor plant [3]. [sent-214, score-1.099]
88 Multi-timescale adaptation and learning is a near-universal phenomenon [14, 8, 16, 17]. [sent-216, score-0.558]
89 Multiscale learning in cognitive systems may be a result of a system that has originally evolved to deal with ever changing motor problems. [sent-220, score-0.322]
90 Multiscale adaptation can also be seen in the way visual neurons adapt to changing visual stimuli [16]. [sent-221, score-0.373]
91 The phenomenon of spontaneous recovery in classical conditioning [19, 20] is largely equivalent to the findings of Kojima et al [12] and can also be explained within the Bayesian multiscale learner framework. [sent-222, score-0.355]
92 The presented model obviously does not explain all known effects in motor or even saccadic gain adaptation. [sent-223, score-0.819]
93 Moreover, it seems that the adaptation speed of monkeys can be very different from one day to the next and from one experimental setting to another (e. [sent-225, score-0.442]
94 An important question for further enquiry is how the nervous system solves problems that require multiple timescale adaptation. [sent-231, score-0.411]
95 The characteristics and neuronal substrate of saccadic eye movement plasticity. [sent-246, score-0.388]
96 Illusory shifts in visual direction accompany adaptation of saccadic eye movements. [sent-263, score-0.639]
97 Distinct short-term and long-term adaptation to reduce saccade size in monkey. [sent-276, score-0.591]
98 Interacting adaptive processes with different timescales underlie short-term motor learning. [sent-296, score-0.607]
99 Memory of learning facilitates saccadic adaptation in the monkey. [sent-307, score-0.575]
100 Learning to learn- optimal adjustment of the rate at which the motor system adapts. [sent-381, score-0.315]
wordName wordTfidf (topN-words)
[('timescales', 0.354), ('saccade', 0.323), ('saccadic', 0.307), ('adaptation', 0.268), ('timescale', 0.261), ('gain', 0.251), ('motor', 0.231), ('disturbances', 0.2), ('darkness', 0.2), ('disturbance', 0.187), ('saccades', 0.187), ('plant', 0.146), ('learner', 0.125), ('oculomotor', 0.12), ('day', 0.117), ('kojima', 0.108), ('hopp', 0.092), ('fast', 0.089), ('nervous', 0.086), ('slow', 0.084), ('jump', 0.078), ('fuchs', 0.077), ('monkey', 0.075), ('uncertainty', 0.072), ('reversal', 0.064), ('progressively', 0.064), ('system', 0.064), ('al', 0.059), ('multiscale', 0.058), ('monkeys', 0.057), ('paradigm', 0.055), ('bayesian', 0.055), ('psychol', 0.053), ('robinson', 0.049), ('change', 0.048), ('session', 0.047), ('changes', 0.047), ('bahrick', 0.046), ('timecourse', 0.046), ('washout', 0.046), ('dark', 0.045), ('sketched', 0.043), ('eye', 0.042), ('deprivation', 0.04), ('movement', 0.039), ('slowly', 0.038), ('target', 0.037), ('reprinted', 0.037), ('decay', 0.035), ('fatigue', 0.034), ('adapt', 0.034), ('belief', 0.033), ('yellow', 0.032), ('adapting', 0.032), ('sensory', 0.031), ('spontaneous', 0.031), ('behav', 0.031), ('sacca', 0.031), ('timecourses', 0.031), ('effects', 0.03), ('ga', 0.029), ('recovery', 0.029), ('faster', 0.029), ('phenomenon', 0.029), ('smith', 0.028), ('reversals', 0.027), ('reza', 0.027), ('changing', 0.027), ('increased', 0.026), ('feedback', 0.026), ('affected', 0.025), ('kalman', 0.025), ('adapted', 0.025), ('movements', 0.025), ('erlbaum', 0.024), ('spacing', 0.024), ('probe', 0.024), ('timestep', 0.024), ('konrad', 0.024), ('neurophysiol', 0.024), ('reaches', 0.024), ('et', 0.024), ('increase', 0.023), ('des', 0.023), ('commands', 0.023), ('cancel', 0.023), ('observes', 0.023), ('adaptive', 0.022), ('visual', 0.022), ('slower', 0.022), ('gains', 0.022), ('neurosci', 0.021), ('estimates', 0.021), ('estimate', 0.021), ('optimally', 0.021), ('trials', 0.021), ('adapts', 0.02), ('adjustment', 0.02), ('disease', 0.02), ('behavioral', 
0.019)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999976 141 nips-2006-Multiple timescales and uncertainty in motor adaptation
Author: Konrad P. Körding, Joshua B. Tenenbaum, Reza Shadmehr
Abstract: Our motor system changes due to causes that span multiple timescales. For example, muscle response can change because of fatigue, a condition where the disturbance has a fast timescale, or because of disease, where the disturbance is much slower. Here we hypothesize that the nervous system adapts in a way that reflects the temporal properties of such potential disturbances. According to a Bayesian formulation of this idea, movement error results in a credit assignment problem: what timescale is responsible for this disturbance? The adaptation schedule influences the behavior of the optimal learner, changing estimates at different timescales as well as the uncertainty. A system that adapts in this way predicts many properties observed in saccadic gain adaptation. It accurately predicts the timecourses of motor adaptation in cases of partial sensory deprivation and reversals of the adaptation direction.
2 0.11578614 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity
Author: Gregory Shakhnarovich, Sung-phil Kim, Michael J. Black
Abstract: Neural motor prostheses (NMPs) require the accurate decoding of motor cortical population activity for the control of an artificial motor system. Previous work on cortical decoding for NMPs has focused on the recovery of hand kinematics. Human NMPs however may require the control of computer cursors or robotic devices with very different physical and dynamical properties. Here we show that the firing rates of cells in the primary motor cortex of non-human primates can be used to control the parameters of an artificial physical system exhibiting realistic dynamics. The model represents 2D hand motion in terms of a point mass connected to a system of idealized springs. The nonlinear spring coefficients are estimated from the firing rates of neurons in the motor cortex. We evaluate linear and a nonlinear decoding algorithms using neural recordings from two monkeys performing two different tasks. We found that the decoded spring coefficients produced accurate hand trajectories compared with state-of-the-art methods for direct decoding of hand kinematics. Furthermore, using a physically-based system produced decoded movements that were more “natural” in that their frequency spectrum more closely matched that of natural hand movements. 1
3 0.098806024 22 nips-2006-Adaptive Spatial Filters with predefined Region of Interest for EEG based Brain-Computer-Interfaces
Author: Moritz Grosse-wentrup, Klaus Gramann, Martin Buss
Abstract: The performance of EEG-based Brain-Computer-Interfaces (BCIs) critically depends on the extraction of features from the EEG carrying information relevant for the classification of different mental states. For BCIs employing imaginary movements of different limbs, the method of Common Spatial Patterns (CSP) has been shown to achieve excellent classification results. The CSP-algorithm however suffers from a lack of robustness, requiring training data without artifacts for good performance. To overcome this lack of robustness, we propose an adaptive spatial filter that replaces the training data in the CSP approach by a-priori information. More specifically, we design an adaptive spatial filter that maximizes the ratio of the variance of the electric field originating in a predefined region of interest (ROI) and the overall variance of the measured EEG. Since it is known that the component of the EEG used for discriminating imaginary movements originates in the motor cortex, we design two adaptive spatial filters with the ROIs centered in the hand areas of the left and right motor cortex. We then use these to classify EEG data recorded during imaginary movements of the right and left hand of three subjects, and show that the adaptive spatial filters outperform the CSP-algorithm, enabling classification rates of up to 94.7 % without artifact rejection. 1
4 0.098750532 49 nips-2006-Causal inference in sensorimotor integration
Author: Konrad P. Körding, Joshua B. Tenenbaum
Abstract: Many recent studies analyze how data from different modalities can be combined. Often this is modeled as a system that optimally combines several sources of information about the same variable. However, it has long been realized that this information combining depends on the interpretation of the data. Two cues that are perceived by different modalities can have different causal relationships: (1) They can both have the same cause, in this case we should fully integrate both cues into a joint estimate. (2) They can have distinct causes, in which case information should be processed independently. In many cases we will not know if there is one joint cause or two independent causes that are responsible for the cues. Here we model this situation as a Bayesian estimation problem. We are thus able to explain some experiments on visual auditory cue combination as well as some experiments on visual proprioceptive cue integration. Our analysis shows that the problem solved by people when they combine cues to produce a movement is much more complicated than is usually assumed, because they need to infer the causal structure that is underlying their sensory experience.
5 0.069807142 33 nips-2006-Analysis of Representations for Domain Adaptation
Author: Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira
Abstract: Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. In many situations, though, we have labeled training data for a source domain, and we wish to learn a classifier which performs well on a target domain with a different distribution. Under what conditions can we adapt a classifier trained on the source domain for use in the target domain? Intuitively, a good feature representation is a crucial factor in the success of domain adaptation. We formalize this intuition theoretically with a generalization bound for domain adaption. Our theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model. It also points toward a promising new model for domain adaptation: one which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set. 1
6 0.06763909 79 nips-2006-Fast Iterative Kernel PCA
7 0.06449911 114 nips-2006-Learning Time-Intensity Profiles of Human Activity using Non-Parametric Bayesian Models
8 0.061435282 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency
9 0.057640396 74 nips-2006-Efficient Structure Learning of Markov Networks using $L 1$-Regularization
10 0.053543113 192 nips-2006-Theory and Dynamics of Perceptual Bistability
11 0.051794656 154 nips-2006-Optimal Change-Detection and Spiking Neurons
12 0.05085193 155 nips-2006-Optimal Single-Class Classification Strategies
13 0.050518848 24 nips-2006-Aggregating Classification Accuracy across Time: Application to Single Trial EEG
14 0.050405171 71 nips-2006-Effects of Stress and Genotype on Meta-parameter Dynamics in Reinforcement Learning
15 0.050175328 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons
16 0.044116627 168 nips-2006-Reducing Calibration Time For Brain-Computer Interfaces: A Clustering Approach
17 0.043035854 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments
18 0.038414214 1 nips-2006-A Bayesian Approach to Diffusion Models of Decision-Making and Response Time
19 0.037089452 126 nips-2006-Logistic Regression for Single Trial EEG Classification
20 0.03708427 41 nips-2006-Bayesian Ensemble Learning
topicId topicWeight
[(0, -0.108), (1, -0.066), (2, 0.026), (3, -0.051), (4, -0.109), (5, -0.062), (6, 0.03), (7, 0.026), (8, -0.001), (9, -0.047), (10, 0.014), (11, 0.015), (12, -0.035), (13, -0.008), (14, -0.039), (15, -0.046), (16, -0.007), (17, -0.005), (18, -0.087), (19, -0.008), (20, -0.006), (21, 0.014), (22, -0.05), (23, -0.005), (24, -0.096), (25, 0.097), (26, -0.051), (27, 0.013), (28, -0.054), (29, -0.009), (30, -0.095), (31, -0.22), (32, 0.187), (33, -0.052), (34, -0.235), (35, 0.015), (36, -0.021), (37, -0.187), (38, 0.074), (39, -0.017), (40, -0.053), (41, -0.055), (42, -0.109), (43, -0.1), (44, -0.009), (45, 0.055), (46, -0.034), (47, -0.005), (48, -0.013), (49, 0.067)]
simIndex simValue paperId paperTitle
same-paper 1 0.97137141 141 nips-2006-Multiple timescales and uncertainty in motor adaptation
Author: Konrad P. Körding, Joshua B. Tenenbaum, Reza Shadmehr
Abstract: Our motor system changes due to causes that span multiple timescales. For example, muscle response can change because of fatigue, a condition where the disturbance has a fast timescale, or because of disease, where the disturbance is much slower. Here we hypothesize that the nervous system adapts in a way that reflects the temporal properties of such potential disturbances. According to a Bayesian formulation of this idea, movement error results in a credit assignment problem: what timescale is responsible for this disturbance? The adaptation schedule influences the behavior of the optimal learner, changing estimates at different timescales as well as the uncertainty. A system that adapts in this way predicts many properties observed in saccadic gain adaptation. It accurately predicts the timecourses of motor adaptation in cases of partial sensory deprivation and reversals of the adaptation direction.
2 0.75067902 49 nips-2006-Causal inference in sensorimotor integration
Author: Konrad P. Körding, Joshua B. Tenenbaum
Abstract: Many recent studies analyze how data from different modalities can be combined. Often this is modeled as a system that optimally combines several sources of information about the same variable. However, it has long been realized that this information combining depends on the interpretation of the data. Two cues that are perceived by different modalities can have different causal relationships: (1) They can both have the same cause, in this case we should fully integrate both cues into a joint estimate. (2) They can have distinct causes, in which case information should be processed independently. In many cases we will not know if there is one joint cause or two independent causes that are responsible for the cues. Here we model this situation as a Bayesian estimation problem. We are thus able to explain some experiments on visual auditory cue combination as well as some experiments on visual proprioceptive cue integration. Our analysis shows that the problem solved by people when they combine cues to produce a movement is much more complicated than is usually assumed, because they need to infer the causal structure that is underlying their sensory experience.
3 0.67244971 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity
Author: Gregory Shakhnarovich, Sung-phil Kim, Michael J. Black
Abstract: Neural motor prostheses (NMPs) require the accurate decoding of motor cortical population activity for the control of an artificial motor system. Previous work on cortical decoding for NMPs has focused on the recovery of hand kinematics. Human NMPs however may require the control of computer cursors or robotic devices with very different physical and dynamical properties. Here we show that the firing rates of cells in the primary motor cortex of non-human primates can be used to control the parameters of an artificial physical system exhibiting realistic dynamics. The model represents 2D hand motion in terms of a point mass connected to a system of idealized springs. The nonlinear spring coefficients are estimated from the firing rates of neurons in the motor cortex. We evaluate linear and a nonlinear decoding algorithms using neural recordings from two monkeys performing two different tasks. We found that the decoded spring coefficients produced accurate hand trajectories compared with state-of-the-art methods for direct decoding of hand kinematics. Furthermore, using a physically-based system produced decoded movements that were more “natural” in that their frequency spectrum more closely matched that of natural hand movements. 1
4 0.41656867 71 nips-2006-Effects of Stress and Genotype on Meta-parameter Dynamics in Reinforcement Learning
Author: Gediminas Lukšys, Jérémie Knüsel, Denis Sheynikhovich, Carmen Sandi, Wulfram Gerstner
Abstract: Stress and genetic background regulate different aspects of behavioral learning through the action of stress hormones and neuromodulators. In reinforcement learning (RL) models, meta-parameters such as learning rate, future reward discount factor, and exploitation-exploration factor, control learning dynamics and performance. They are hypothesized to be related to neuromodulatory levels in the brain. We found that many aspects of animal learning and performance can be described by simple RL models using dynamic control of the meta-parameters. To study the effects of stress and genotype, we carried out 5-hole-box light conditioning and Morris water maze experiments with C57BL/6 and DBA/2 mouse strains. The animals were exposed to different kinds of stress to evaluate its effects on immediate performance as well as on long-term memory. Then, we used RL models to simulate their behavior. For each experimental session, we estimated a set of model meta-parameters that produced the best fit between the model and the animal performance. The dynamics of several estimated meta-parameters were qualitatively similar for the two simulated experiments, and with statistically significant differences between different genetic strains and stress conditions. 1
5 0.39836881 155 nips-2006-Optimal Single-Class Classification Strategies
Author: Ran El-Yaniv, Mordechai Nisenson
Abstract: We consider single-class classification (SCC) as a two-person game between the learner and an adversary. In this game the target distribution is completely known to the learner and the learner’s goal is to construct a classifier capable of guaranteeing a given tolerance for the false-positive error while minimizing the false negative error. We identify both “hard” and “soft” optimal classification strategies for different types of games and demonstrate that soft classification can provide a significant advantage. Our optimal strategies and bounds provide worst-case lower bounds for standard, finite-sample SCC and also motivate new approaches to solving SCC.
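A toy discrete version of the hard-versus-soft distinction can make the abstract's claim concrete. This is not the paper's construction, only an illustration of why randomized acceptance can help against a worst-case adversary; the pmf and budget in the example are arbitrary:

```python
def scc_worst_case(p, alpha):
    """Known target pmf p over a finite domain; the learner may reject at
    most alpha of the target's mass (false-positive budget).  The adversary
    then places all of its mass on the single point the learner accepts
    with the highest probability.
    Hard strategy: fully reject the lowest-probability points within budget;
    any still-accepted point is then accepted with probability 1.
    Soft strategy: accept every point with probability 1 - alpha, so the
    adversary is accepted with probability only 1 - alpha.
    Returns (hard_worst_false_negative, soft_worst_false_negative)."""
    accept = [1.0] * len(p)
    budget = alpha
    for i in sorted(range(len(p)), key=lambda j: p[j]):
        if p[i] <= budget + 1e-12:     # reject this point outright
            accept[i] = 0.0
            budget -= p[i]
    hard = max(accept)                 # adversary sits on the most-accepted point
    soft = 1.0 - alpha                 # uniform randomized acceptance
    return hard, soft
```

The soft strategy spends its false-positive budget uniformly, which caps the adversary's worst-case acceptance probability at 1 - alpha instead of 1.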
6 0.39791867 22 nips-2006-Adaptive Spatial Filters with predefined Region of Interest for EEG based Brain-Computer-Interfaces
7 0.38262224 107 nips-2006-Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis
8 0.38188934 114 nips-2006-Learning Time-Intensity Profiles of Human Activity using Non-Parametric Bayesian Models
9 0.36894503 24 nips-2006-Aggregating Classification Accuracy across Time: Application to Single Trial EEG
10 0.33696181 192 nips-2006-Theory and Dynamics of Perceptual Bistability
11 0.29628605 53 nips-2006-Combining causal and similarity-based reasoning
12 0.29226527 33 nips-2006-Analysis of Representations for Domain Adaptation
13 0.2668193 202 nips-2006-iLSTD: Eligibility Traces and Convergence Analysis
14 0.25981498 25 nips-2006-An Application of Reinforcement Learning to Aerobatic Helicopter Flight
15 0.2576803 79 nips-2006-Fast Iterative Kernel PCA
16 0.25162446 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex
17 0.25136423 58 nips-2006-Context Effects in Category Learning: An Investigation of Four Probabilistic Models
18 0.23902388 1 nips-2006-A Bayesian Approach to Diffusion Models of Decision-Making and Response Time
19 0.23760071 40 nips-2006-Bayesian Detection of Infrequent Differences in Sets of Time Series with Shared Structure
20 0.23705442 41 nips-2006-Bayesian Ensemble Learning
topicId topicWeight
[(1, 0.075), (3, 0.025), (7, 0.051), (9, 0.039), (20, 0.027), (22, 0.028), (25, 0.01), (44, 0.058), (47, 0.41), (57, 0.074), (65, 0.028), (69, 0.026), (71, 0.033), (82, 0.017), (90, 0.016)]
simIndex simValue paperId paperTitle
same-paper 1 0.82005793 141 nips-2006-Multiple timescales and uncertainty in motor adaptation
Author: Konrad P. Körding, Joshua B. Tenenbaum, Reza Shadmehr
Abstract: Our motor system changes due to causes that span multiple timescales. For example, muscle response can change because of fatigue, where the disturbance has a fast timescale, or because of disease, where the disturbance is much slower. Here we hypothesize that the nervous system adapts in a way that reflects the temporal properties of such potential disturbances. According to a Bayesian formulation of this idea, movement error results in a credit-assignment problem: which timescale is responsible for this disturbance? The adaptation schedule influences the behavior of the optimal learner, changing the estimates at different timescales as well as their uncertainty. A system that adapts in this way predicts many properties observed in saccadic gain adaptation, and it accurately predicts the time courses of motor adaptation under partial sensory deprivation and reversals of the adaptation direction.
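The credit-assignment idea in this abstract can be sketched as a Kalman filter over a two-timescale disturbance. The retention rates, noise variances, and two-state choice below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def multiscale_kalman(observations, a=(0.6, 0.995), q=(0.2, 0.001), r=0.5):
    """Track a disturbance modeled as the sum of a fast and a slow hidden
    state.  State i decays with retention a[i] and accumulates process
    noise q[i]; the observed movement error reflects their sum plus
    sensory noise r.  Returns per-trial posterior means (T, 2): column 0
    is the fast estimate, column 1 the slow estimate."""
    A = np.diag(a)                       # retention per timescale
    Q = np.diag(q)                       # process-noise covariance
    H = np.array([[1.0, 1.0]])           # error = fast + slow (+ noise)
    x = np.zeros(2)                      # posterior mean
    # stationary prior variance of each AR(1) state
    P = np.diag([qi / (1 - ai ** 2) for ai, qi in zip(a, q)])
    means = []
    for y in observations:
        x = A @ x                        # predict
        P = A @ P @ A.T + Q
        S = float(H @ P @ H.T) + r       # innovation variance (scalar)
        K = (P @ H.T / S).ravel()        # Kalman gain: the credit assignment
        x = x + K * (y - float(H @ x))   # update both timescale estimates
        P = (np.eye(2) - np.outer(K, H.ravel())) @ P
        means.append(x.copy())
    return np.array(means)
```

Under a sustained disturbance, credit initially goes to the fast state and then shifts toward the slow one — the kind of schedule-dependent behavior the abstract describes.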
2 0.74993932 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity
Author: Gregory Shakhnarovich, Sung-phil Kim, Michael J. Black
Abstract: Neural motor prostheses (NMPs) require the accurate decoding of motor cortical population activity for the control of an artificial motor system. Previous work on cortical decoding for NMPs has focused on the recovery of hand kinematics. Human NMPs, however, may require the control of computer cursors or robotic devices with very different physical and dynamical properties. Here we show that the firing rates of cells in the primary motor cortex of non-human primates can be used to control the parameters of an artificial physical system exhibiting realistic dynamics. The model represents 2D hand motion in terms of a point mass connected to a system of idealized springs. The nonlinear spring coefficients are estimated from the firing rates of neurons in the motor cortex. We evaluate linear and nonlinear decoding algorithms using neural recordings from two monkeys performing two different tasks. We found that the decoded spring coefficients produced accurate hand trajectories compared with state-of-the-art methods for direct decoding of hand kinematics. Furthermore, using a physically-based system produced decoded movements that were more “natural” in that their frequency spectrum more closely matched that of natural hand movements.
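A toy version of the physically-based idea: a point mass connected to idealized springs whose time-varying coefficients drive the hand. The anchor positions, damping, and coefficient values below are hypothetical stand-ins for the quantities that would actually be decoded from firing rates:

```python
import numpy as np

def simulate_spring_hand(coeffs, anchors, mass=1.0, damping=4.0, dt=0.01):
    """Semi-implicit Euler integration of a 2D point mass pulled by
    springs anchored at fixed points:
        m * x'' = sum_i k_i(t) * (anchor_i - x) - damping * x'
    coeffs: (T, n_springs) time-varying stiffnesses;
    anchors: (n_springs, 2) fixed 2D anchor points.
    Returns the (T, 2) hand trajectory."""
    anchors = np.asarray(anchors, dtype=float)
    pos = np.zeros(2)
    vel = np.zeros(2)
    traj = []
    for k in np.asarray(coeffs, dtype=float):
        force = (k[:, None] * (anchors - pos)).sum(axis=0) - damping * vel
        vel = vel + dt * force / mass
        pos = pos + dt * vel             # velocity updated first: stable for springs
        traj.append(pos.copy())
    return np.array(traj)
```

Because the mass-spring-damper acts as a low-pass filter, even noisy decoded coefficients yield smooth trajectories — one intuition for the "more natural" frequency spectrum the abstract reports.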
3 0.34157097 49 nips-2006-Causal inference in sensorimotor integration
Author: Konrad P. Körding, Joshua B. Tenenbaum
Abstract: Many recent studies analyze how data from different modalities can be combined. Often this is modeled as a system that optimally combines several sources of information about the same variable. However, it has long been realized that how information is combined depends on the interpretation of the data. Two cues perceived by different modalities can stand in different causal relationships: (1) they can share the same cause, in which case we should fully integrate both cues into a joint estimate; or (2) they can have distinct causes, in which case the information should be processed independently. In many cases we will not know whether one joint cause or two independent causes are responsible for the cues. Here we model this situation as a Bayesian estimation problem. We are thus able to explain experiments on visual-auditory cue combination as well as experiments on visual-proprioceptive cue integration. Our analysis shows that the problem people solve when they combine cues to produce a movement is much more complicated than is usually assumed, because they must infer the causal structure underlying their sensory experience.
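The two causal structures can be compared with a small numerical model: score "one shared cause" against "two independent causes" by marginalizing the source location(s), then model-average the estimate. The cue variances, prior width, and prior probability of a common cause below are illustrative, not the paper's fitted parameters:

```python
import numpy as np

def norm_pdf(x, mu, var):
    """Gaussian density, vectorized over any of its arguments."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def causal_inference(x_v, x_a, var_v=1.0, var_a=4.0, var_p=100.0, p_common=0.5):
    """Given a visual cue x_v and an auditory cue x_a, return
    (P(common cause | cues), model-averaged estimate of the visual source).
    Source locations are marginalized out on a dense grid."""
    s = np.linspace(-60.0, 60.0, 24001)      # candidate source locations
    ds = s[1] - s[0]
    prior = norm_pdf(s, 0.0, var_p)
    # evidence for one shared source generating both cues
    l_one = np.sum(norm_pdf(x_v, s, var_v) * norm_pdf(x_a, s, var_a) * prior) * ds
    # evidence for two independent sources, one per cue
    l_two = (np.sum(norm_pdf(x_v, s, var_v) * prior) * ds) * \
            (np.sum(norm_pdf(x_a, s, var_a) * prior) * ds)
    post = l_one * p_common / (l_one * p_common + l_two * (1 - p_common))
    # precision-weighted estimates under each structure, then average them
    fused = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
    alone = (x_v / var_v) / (1 / var_v + 1 / var_p)
    return post, post * fused + (1 - post) * alone
```

Nearby cues favor full integration; widely discrepant cues favor independent processing, so the estimate smoothly falls back to the single-cue value — the qualitative pattern the abstract invokes.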
4 0.3361671 154 nips-2006-Optimal Change-Detection and Spiking Neurons
Author: Angela J. Yu
Abstract: Survival in a non-stationary, potentially adversarial environment requires animals to detect sensory changes rapidly yet accurately, two oft-competing desiderata. Neurons subserving such detections face the corresponding challenge of discerning “real” changes in their inputs as quickly as possible while ignoring noisy fluctuations. Mathematically, this is an example of a change-detection problem that is actively researched in the controlled-stochastic-processes community. In this paper, we utilize sophisticated tools developed in that community to formalize an instantiation of the problem faced by the nervous system, and we characterize the Bayes-optimal decision policy under certain assumptions. From this optimal strategy we derive an information-accumulation and decision process that remarkably resembles the dynamics of a leaky integrate-and-fire neuron. This correspondence suggests that neurons are optimized for tracking input changes, and it sheds new light on the computational import of intracellular properties such as resting membrane potential, voltage-dependent conductance, and post-spike reset voltage. We also explore the influence that factors such as timing, uncertainty, neuromodulation, and reward should and do have on neuronal dynamics and sensitivity, as the optimal decision strategy depends critically on these factors.
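A minimal version of the Bayes-optimal policy for one simple case: unit-variance Gaussian observations whose mean may jump from 0 to a known value, with a constant hazard rate. The jump size, hazard, and threshold below are illustrative choices, and the reset-after-threshold step is the loose analogue of a post-spike reset, not the paper's full derivation:

```python
import math

def detect_changes(obs, mu=1.0, hazard=0.01, threshold=0.95):
    """Shiryaev-style recursive change detection: maintain the posterior
    probability p that the mean of unit-variance Gaussian observations
    has already jumped from 0 to mu, 'spike' when p crosses threshold,
    then reset p to zero.  Returns the indices at which the detector
    fired."""
    def lik(y, m):                          # Gaussian likelihood; constants cancel
        return math.exp(-0.5 * (y - m) ** 2)
    p = 0.0
    spikes = []
    for t, y in enumerate(obs):
        p = p + (1.0 - p) * hazard          # the change may have just occurred
        num = p * lik(y, mu)
        p = num / (num + (1.0 - p) * lik(y, 0.0))   # Bayes update with the sample
        if p > threshold:
            spikes.append(t)                # report the change
            p = 0.0                         # reset, like a post-spike voltage
    return spikes
```

The recursion leaks toward a low baseline under pre-change input and integrates evidence toward threshold after the jump, which is the integrate-and-fire-like behavior the abstract highlights.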
5 0.33261073 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons
Author: Thomas Voegtlin
Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the phase response curve. This has implications for temporal coding: an action potential that arrives at a synapse carries an implicit meaning that depends on the position of the postsynaptic neuron in its firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing-dependent learning rule from an error criterion. We demonstrate how to train an autoencoder neural network using this rule.
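The phase-dependent effect of an input can be seen in a few lines of simulation of the theta neuron itself; the baseline current, pulse size, and pulse timings below are arbitrary illustrative values, not taken from the paper:

```python
import math

def theta_spike_time(i0=0.1, pulse_t=None, pulse_amp=0.5, pulse_dur=0.1,
                     dt=0.001, t_max=50.0):
    """Euler-integrate a theta neuron,
        dtheta/dt = (1 - cos theta) + (1 + cos theta) * I(t),
    from theta = -pi, and return the time of the first spike (theta
    reaching pi).  An optional current pulse advances the spike by an
    amount that depends on the phase at which it arrives -- the neuron's
    phase response."""
    theta, t = -math.pi, 0.0
    while t < t_max:
        i = i0
        if pulse_t is not None and pulse_t <= t < pulse_t + pulse_dur:
            i += pulse_amp                 # brief synaptic-like input pulse
        theta += dt * ((1.0 - math.cos(theta)) + (1.0 + math.cos(theta)) * i)
        t += dt
        if theta >= math.pi:
            return t
    return None                            # no spike within t_max
```

For constant input I the period is pi / sqrt(I), about 9.93 here. A pulse arriving mid-cycle (near theta = 0, where the sensitivity factor 1 + cos theta is largest) advances the spike far more than the same pulse arriving just after the previous spike — the phase-dependent "implicit meaning" of an incoming action potential.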
6 0.33239496 8 nips-2006-A Nonparametric Approach to Bottom-Up Visual Saliency
7 0.3297497 167 nips-2006-Recursive ICA
8 0.32917523 32 nips-2006-Analysis of Empirical Bayesian Methods for Neuroelectromagnetic Source Localization
9 0.32559776 34 nips-2006-Approximate Correspondences in High Dimensions
10 0.32527134 112 nips-2006-Learning Nonparametric Models for Probabilistic Imitation
11 0.32483104 118 nips-2006-Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields
12 0.32427287 175 nips-2006-Simplifying Mixture Models through Function Approximation
13 0.32379726 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment
14 0.32379356 65 nips-2006-Denoising and Dimension Reduction in Feature Space
15 0.32297409 42 nips-2006-Bayesian Image Super-resolution, Continued
16 0.32217565 51 nips-2006-Clustering Under Prior Knowledge with Application to Image Segmentation
17 0.32125703 158 nips-2006-PG-means: learning the number of clusters in data
18 0.32117999 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model
19 0.31995761 76 nips-2006-Emergence of conjunctive visual features by quadratic independent component analysis
20 0.31971699 160 nips-2006-Part-based Probabilistic Point Matching using Equivalence Constraints