nips nips2007 nips2007-133 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Ben Williams, Marc Toussaint, Amos J. Storkey
Abstract: Biological movement is built up of sub-blocks or motion primitives. Such primitives provide a compact representation of movement which is also desirable in robotic control applications. We analyse handwriting data to gain a better understanding of primitives and their timings in biological movements. Inference of the shape and the timing of primitives can be done using a factorial HMM based model, allowing the handwriting to be represented in primitive timing space. This representation provides a distribution of spikes corresponding to the primitive activations, which can also be modelled using HMM architectures. We show how the coupling of the low level primitive model, and the higher level timing model during inference can produce good reconstructions of handwriting, with shared primitives for all characters modelled. This coupled model also captures the variance profile of the dataset which is accounted for by spike timing jitter. The timing code provides a compact representation of the movement while generating a movement without an explicit timing model produces a scribbling style of output. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Modelling motion primitives and their timing in biologically executed movements Ben H Williams School of Informatics University of Edinburgh 5 Forrest Hill, EH1 2QL, UK ben. [sent-1, score-0.922]
2 uk Abstract Biological movement is built up of sub-blocks or motion primitives. [sent-10, score-0.238]
3 Such primitives provide a compact representation of movement which is also desirable in robotic control applications. [sent-11, score-0.86]
4 We analyse handwriting data to gain a better understanding of primitives and their timings in biological movements. [sent-12, score-0.899]
5 Inference of the shape and the timing of primitives can be done using a factorial HMM based model, allowing the handwriting to be represented in primitive timing space. [sent-13, score-1.76]
6 This representation provides a distribution of spikes corresponding to the primitive activations, which can also be modelled using HMM architectures. [sent-14, score-0.517]
7 We show how the coupling of the low level primitive model, and the higher level timing model during inference can produce good reconstructions of handwriting, with shared primitives for all characters modelled. [sent-15, score-1.475]
8 This coupled model also captures the variance profile of the dataset which is accounted for by spike timing jitter. [sent-16, score-0.429]
9 The timing code provides a compact representation of the movement while generating a movement without an explicit timing model produces a scribbling style of output. [sent-17, score-1.0]
10 There is much evidence to suggest that biological movement generation is based upon motor primitives, with discrete muscle synergies found in frog spinal cords (Bizzi et al. [sent-21, score-0.462]
11 , 2002), evidence of primitives being locally fixed (Kargo & Giszter, 2000), and modularity in human motor learning and adaptation (Wolpert et al. [sent-24, score-0.702]
12 Compact forms of representation for any biologically produced data should therefore also be based upon primitive sub-blocks. [sent-26, score-0.445]
13 Figure 1: (A) A factorial HMM of a handwriting trajectory Y_t. [sent-27, score-0.271]
14 The parameters λ̄^m_t indicate the probability of triggering a primitive in the mth factor at time t and are learnt for one specific character. [sent-28, score-0.589]
15 (B) A hierarchical generative model of handwriting where the random variable c indicates the currently written character and defines a distribution over random variables λ̄^m_t via a Markov model over G^m. [sent-29, score-0.530]
16 There are several approaches that use this idea of motion primitives for more efficient robotic movement control. [sent-30, score-0.877]
17 , 2004) use non-linear attractor dynamics as a motion primitive and train them to generate motion that solves a specific task. [sent-33, score-0.626]
18 (Amit & Matarić, 2002) use a single attractor system and generate non-linear motion by modulating the attractor point. [sent-34, score-0.211]
19 These approaches define a primitive as a segment of movement rather than understanding movement as a superposition of concurrent primitives. [sent-35, score-0.837]
20 The goal of analysing and better understanding biological data is to extract a generative model of complex movement based on concurrent primitives which may serve as an efficient representation for robotic movement control. [sent-36, score-1.182]
21 This is in contrast to previous studies of handwriting which usually focus on the problem of character classification rather than generation (Singer & Tishby, 1994; Hinton & Nair, 2005). [sent-37, score-0.394]
22 We investigate handwriting data and analyse whether it can be modelled as a superposition of sparsely activated motion primitives. [sent-38, score-0.376]
23 Just as piano music can (approximately) be modelled as a superposition of the sounds emitted by each key, we follow the idea that biological movement is a superposition of pre-learnt motion primitives. [sent-41, score-0.518]
24 This implies that the whole movement can be compactly represented by the timing of each primitive in analogy to a score of music. [sent-42, score-0.846]
25 On the lower level a factorial Hidden Markov Model (fHMM, Ghahramani & Jordan, 1997) is used to model the output as a combination of signals emitted from independent primitives (each primitive corresponds to a factor in the fHMM). [sent-44, score-1.297]
26 On the higher level we formulate a model for the primitive timing dependent upon character class. [sent-45, score-0.966]
27 The same motion primitives are shared across characters; only their timings differ. [sent-46, score-0.71]
28 We train this model on handwriting data using an EM-algorithm and thereby infer the primitives and the primitive timings inherent in this data. [sent-47, score-1.252]
29 We find that the inferred timing posterior for a specific character is indeed a compact representation of that character, allowing a good reproduction of it using the learnt primitives. [sent-48, score-1.17]
30 Further, using the timing model learnt on the higher level we can generate new movement – new samples of characters (in the same writing style as the data), and also scribblings that exhibit local similarity to written characters when the higher level timing control is omitted. [sent-49, score-1.181]
31 Finally in section 4 we present results on handwriting data recorded with a digitisation tablet, show the primitives and timing code we extract, and demonstrate how the learnt model can be used to generate new samples of characters. [sent-52, score-1.3]
32 2 Model Our analysis of primitives and primitive timings in handwriting is based on formulating a corresponding probabilistic generative model. [sent-53, score-1.295]
33 On the lower level (Figure 1(A)) we consider a factorial Hidden Markov Model (fHMM) where each factor produces the signal of a single primitive and the linear combination of factors generates the observed movement Yt . [sent-55, score-0.642]
34 It allows the learning and identification of primitives in the data but does not include a model of their timing. [sent-59, score-0.606]
35 In this paper we introduce the full generative model (Figure 1(B)) which includes a generative model for the primitive timing conditioned on the current character. [sent-60, score-0.92]
36 2.1 Modelling primitives in data Let M be the number of primitives we allow for. [sent-62, score-1.15]
37 We describe a primitive as a strongly constrained Markov process which remains in a zero state most of the time but with some probability λ̄ ∈ [0, 1] enters the 1 state and then rigorously runs through all states 2, . . . , K_m. [sent-63, score-0.403]
38 P(S^m_t = b | S^m_{t−1} = a, λ̄^m_t) = λ̄^m_t if a = 0 and b = 1; 1 − λ̄^m_t if a = 0 and b = 0; 1 if a ≠ 0 and b = (a + 1) mod K_m; 0 otherwise. (1) This process is parameterised by the onset probability λ̄^m_t of the mth primitive at time t. [sent-71, score-0.535]
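To make Equation (1) concrete, here is a minimal sketch (Python/NumPy, not the authors' code) of one primitive's state chain; states are indexed 0..K−1 for simplicity, and a constant onset probability lam stands in for the time-varying λ̄^m_t:
```python
# Sketch of the primitive state process in Equation (1). State 0 is
# "inactive"; with onset probability lam the chain enters state 1, then runs
# deterministically through states 2..K-1 and wraps back to 0.
import numpy as np

def transition_matrix(K, lam):
    """T[a, b] = P(S_t = b | S_{t-1} = a) for one primitive of length K."""
    T = np.zeros((K, K))
    T[0, 0] = 1.0 - lam            # remain inactive
    T[0, 1] = lam                  # onset: enter state 1
    for a in range(1, K):
        T[a, (a + 1) % K] = 1.0    # deterministic run-through, then reset
    return T

def sample_states(K, lam, T_steps, rng):
    T, s, out = transition_matrix(K, lam), 0, []
    for _ in range(T_steps):
        s = rng.choice(K, p=T[s])
        out.append(s)
    return out

print(sample_states(K=5, lam=0.1, T_steps=40, rng=np.random.default_rng(0)))
```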
39 The emitted signal sequence W^m = (W^m_1, . . . , W^m_{K_m}) is what we call a primitive and – to stay in the analogy – can be compared to the sound of a piano key. [sent-76, score-0.462]
40 We will describe below how we learn the primitives W^m_s and also adapt the primitive lengths K_m using an EM-algorithm. [sent-78, score-0.978]
41 2.2 A timing model Considering the λ̄'s to be fixed parameters is not a suitable model of biological movement. [sent-80, score-0.378]
42 The usage and timing of primitives depend on the character that is written, and the timing varies from character to character. [sent-81, score-1.557]
43 Our model takes a different approach to parameterise the primitive activations. [sent-83, score-0.463]
44 For instance, if a primitive is activated twice in the course of the movement we assume that there have been two signals (“spikes”) emitted from a higher level process which encode the activation times. [sent-84, score-0.682]
45 More formally, let c be a discrete random variable indicating the character to be written, see Figure 1(B). [sent-85, score-0.215]
46 We assume that for each primitive we have another Markovian process which generates a length-L sequence of states G^m_l ∈ {1, . . . }. [sent-86, score-0.403]
47 P(G^m_{1:L} | c) = P(G^m_1 | c) ∏_{l=2}^{L} P(G^m_l | G^m_{l−1}, c). (3) The states G^m_l encode which primitives are activated and how they are timed, as seen in Figure 2(b). [sent-89, score-0.601]
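A hedged sketch of this timing process: a character-conditioned Markov chain over states G^m_1..G^m_L, where each visited spiking state r emits one spike at a Gaussian time with mean µ^m_r and deviation σ^m_r. The numerical parameters below are toy values, and the dedicated "no spike" state 0 is an assumption of the sketch, not taken from the paper:
```python
# Toy sketch of the timing model of Equation (3): sample G_1..G_L from a
# Markov chain (conditioned on the character via its parameters), then draw
# a Gaussian-distributed spike time for each visited spiking state.
import numpy as np

def sample_spikes(P_init, P_trans, mu, sigma, L, rng):
    spikes, g = [], rng.choice(len(P_init), p=P_init)
    for l in range(L):
        if g != 0:                         # state 0 = "emit no spike" (assumed)
            spikes.append(rng.normal(mu[g], sigma[g]))
        if l < L - 1:
            g = rng.choice(len(P_init), p=P_trans[g])
    return sorted(spikes)

P_init = np.array([0.2, 0.5, 0.3])          # toy character-specific parameters
P_trans = np.array([[1.0, 0.0, 0.0],
                    [0.1, 0.2, 0.7],
                    [0.3, 0.3, 0.4]])
mu    = np.array([0.0, 40.0, 120.0])        # spike time means (ms); state 0 unused
sigma = np.array([1.0,  5.0,   8.0])
print(sample_spikes(P_init, P_trans, mu, sigma, L=4, rng=np.random.default_rng(1)))
```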
48 (b) Scatter plot of the MAP onsets of a single primitive for different samples of the same character ‘p’. [sent-99, score-0.681]
49 of a primitive at time t, which we call a “spike”. [sent-101, score-0.403]
50 We can observe at most L spikes in one primitive; the spike times between different primitives are dependent, but we have a Markovian dependency between the presence and timing of spikes within a primitive. [sent-113, score-1.115]
51 The whole process is parameterised by the initial state distribution P(G^m_1 | c), the transition probabilities P(G^m_l | G^m_{l−1}, c), the spike means µ^m_r and the variances σ^m_r. [sent-114, score-0.182]
52 3 Inference and learning In the experiments we will compare both the fHMM without the timing model (Figure 1(A)) and the full model including the timing model (Figure 1(B)). [sent-121, score-0.676]
53 Figure 3: (a) Reconstruction of a character from a training dataset, using a subset of the primitives. [sent-168, score-0.215]
54 The posterior probability of primitive onset is shown on the left, highlighting why a spike timing representation is appropriate. [sent-170, score-0.882]
55 (c) Generative samples using a flat primitive onset prior, showing the scribbling behaviour of the uncoupled model. [sent-172, score-0.528]
56 In the full model, inference alternates between the timing model and the fHMM. [sent-174, score-0.416]
57 We couple this iteration to inference in the timing model in both directions: in each iteration, the posterior over S^m_t defines observation likelihoods for inference in the Markov models G^m. [sent-176, score-0.359]
58 Standard M-steps are then used to train all parameters of the fHMM and the timing model. [sent-178, score-0.276]
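One direction of this coupling can be sketched as follows: the timing model's Gaussian spike components induce a time-varying prior over the fHMM onset probabilities λ̄_t (the "thick black lines" of Figure 6). Equation (4) itself is not reproduced in this excerpt, so the Gaussian-bump form below is an assumption of the sketch:
```python
# Hypothetical form of the onset-probability prior induced by the timing
# model: a sum of weighted Gaussian bumps centred on the learnt spike means,
# clipped to [0, 1] so it can serve as a probability.
import numpy as np

def lambda_prior(T, means, stds, weights):
    t = np.arange(T)[:, None]                    # (T, 1) time grid
    bumps = weights * np.exp(-0.5 * ((t - means) / stds) ** 2)
    return np.clip(bumps.sum(axis=1), 0.0, 1.0)  # (T,) prior on onsets

prior = lambda_prior(T=200, means=np.array([40.0, 120.0]),
                     stds=np.array([5.0, 8.0]), weights=np.array([0.6, 0.4]))
print(prior.argmax(), prior.max())
```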
59 In addition, we use heuristics to adapt the length K_m of each primitive: we increase or decrease K_m depending on whether the learnt primitive is significantly different from zero in its last time steps. [sent-179, score-0.554]
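A possible reading of this heuristic as code; the threshold and one-step growth are assumptions of the sketch, not values from the paper:
```python
# Grow K_m by one step if the primitive's final sample is still clearly
# non-zero; shrink by one if the tail is negligible; otherwise keep it.
import numpy as np

def adapt_length(W_m, thresh=1e-3):
    if np.abs(W_m[-1]).max() > thresh:
        return np.vstack([W_m, np.zeros((1, W_m.shape[1]))])   # grow
    if len(W_m) > 1 and np.abs(W_m[-2:]).max() <= thresh:
        return W_m[:-1]                                        # shrink
    return W_m

W_m = np.vstack([np.random.randn(18, 3), np.zeros((2, 3))])
print(len(adapt_length(W_m)))   # 19: one trailing zero row gets trimmed
```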
60 By this we mean that we take a trained model, use inference to compute the MAP spikes λ for a specific data sample, then we use these λ’s and the definition of our generative model (including the learnt primitives W ) to generate a trajectory which can be compared to the original data sample. [sent-182, score-0.991]
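A minimal reconstruction sketch under these definitions: given learnt primitive shapes W[m] (each a K_m × D array of velocity samples) and MAP spike times, the trajectory is the additive superposition of each primitive's signal started at its spikes. The shapes and spike times here are placeholders:
```python
# Superpose primitives at their (MAP) onset times to reconstruct a movement.
import numpy as np

def reconstruct(W, spikes, T):
    D = W[0].shape[1]
    Y = np.zeros((T, D))
    for m, onsets in enumerate(spikes):
        for t0 in onsets:
            end = min(t0 + len(W[m]), T)
            Y[t0:end] += W[m][: end - t0]    # additive superposition
    return Y

W = [np.random.randn(20, 3) for _ in range(2)]   # two toy primitives, 3-D pen signal
Y = reconstruct(W, spikes=[[5, 60], [30]], T=100)
print(Y.shape)
```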
61 4 Results 4.1 Primitive and timing analysis using the fHMM-only We first consider a data set of 300 handwritten ‘p’s recorded using an INTUOS 3 WACOM digitisation tablet http://www. [sent-185, score-0.334]
62 Our choice of parameters was M = 10 primitives; we initialised all K_m = 20 and constrained them to remain smaller than 100 throughout learning. [sent-192, score-0.575]
63 This clean posterior is the motivation for introducing a model of the spike timings as a compact representation. [Figure: panel (a), axes in Distance /mm; x position, y position and pressure traces.] [sent-195, score-0.294]
64 (b) Histogram of the reconstruction error, measured in the 3-dimensional pen movement velocity space. [sent-204, score-0.303]
65 Equally, the reconstruction (using the Viterbi-aligned MAP spikes) shows the sufficiency of the spike code to generate the character. [sent-208, score-0.216]
66 Figure 3(b) shows the primitives W^m (translated back into pen-space) that were learnt and implicitly used for the reconstruction of the ‘p’. [sent-209, score-0.798]
67 These primitives can be seen to represent typical parts of the ‘p’ character; the arrows in the reconstruction indicate when they are activated. [sent-210, score-0.647]
68 To show the importance of this spike timing information, we can demonstrate the effects of removing it. [sent-212, score-0.398]
69 When using the fHMM-only model as a generative model with the learnt stationary spike probabilities λ̄^m, the result is a form of primitive babbling, as can be seen in Figure 3(c). [sent-213, score-0.834]
70 Since these scribblings are generated by random expression of the learnt primitives they locally resemble parts of the ‘p’ character. [sent-214, score-0.755]
71 The primitives generalise to other characters if the training dataset contains sufficient variation. [sent-215, score-0.65]
72 Further investigation has shown that 20 primitives learnt from 12 character types are sufficiently general to represent all remaining novel character types without further learning, by using a single E-step to fit the pre-learnt parameters to a novel dataset. [sent-216, score-1.156]
73 4.2 Generating new characters using the full generative model Next we trained the full model on the same ‘p’ dataset. [sent-218, score-0.273]
74 To the right we see the reconstruction errors in velocity space, showing that at many time points a perfect reconstruction was attained. [sent-220, score-0.185]
75 Since the full model includes a timing model it can also be run autonomously as a generative model for new character samples. [sent-221, score-0.727]
76 Figure 4(c) displays such new samples of the character ‘p’ generated by the learnt model. [sent-222, score-0.403]
77 As a more challenging problem we collected a data set of over 450 character samples of the letters a, b and c. [sent-223, score-0.252]
78 The full model includes the written character class as a random variable and can thus be trained on multi-character data sets. [sent-224, score-0.277]
79 Note that we restrict the total number of primitives to M = 10, which will require sharing of primitives across characters. [sent-225, score-1.15]
80 Coupling the timing and the primitive model during learning has the effect of biasing the primitives towards parts of the data that usually occur in the same place. [sent-229, score-1.285]
81 Thus, using the full model, the inferred spikes are more compactly clustered at the Gaussian components due to the prior imposed by the timing model (the thick black lines correspond to Equation (4)). [sent-230, score-0.44]
82 (b) Reconstruction of dataset using 10 primitives learnt from the dataset in (a). [sent-232, score-0.726]
83 Figure 6: (a) Scatter plot of primitive onset spikes (time axis in ms) for a single character type across all samples and primitives, showing the clustering of certain primitives in particular parts of a character. [sent-250, score-1.36]
84 The thick black lines display the prior over λ̄’s imposed by the timing model via Equation (4). [sent-253, score-0.307]
85 Finally, we run the full model autonomously to generate new character samples, see Figure 5(c). [sent-254, score-0.337]
86 Here the character class c is first sampled uniformly at random, and then all learnt parameters are used to eventually sample a trajectory Y_t. [sent-255, score-0.407]
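Putting the pieces together, the generation procedure can be sketched end to end; every parameter below is a toy placeholder rather than a learnt value:
```python
# Sample a character class uniformly, draw each primitive's spike times from
# that class's Gaussian timing components, and superpose the primitives.
import numpy as np

rng = np.random.default_rng(2)
T, D, M = 150, 3, 2
W = [rng.standard_normal((20, D)) for _ in range(M)]   # toy "learnt" primitives
classes = [                                            # per-class (mu, sigma) lists
    {0: [(30.0, 3.0), (90.0, 5.0)], 1: [(60.0, 4.0)]},
    {0: [(50.0, 3.0)], 1: [(20.0, 4.0), (100.0, 5.0)]},
]

c = rng.integers(len(classes))                         # uniform character class
Y = np.zeros((T, D))
for m in range(M):
    for mu, sigma in classes[c][m]:
        t0 = int(np.clip(rng.normal(mu, sigma), 0, T - 1))
        end = min(t0 + len(W[m]), T)
        Y[t0:end] += W[m][: end - t0]
print(c, Y.shape)
```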
87 5 Conclusions In this paper we have shown that it is possible to represent handwriting using a primitive-based model. [sent-257, score-0.582]
88 The timing of activations is crucial to the accurate reproduction of the character. [sent-260, score-0.32]
89 With a small amount of timing variation, a distorted version of the original character is reproduced, whilst large (and coordinated) differences in the timing pattern produce different character types. [sent-261, score-0.982]
90 The spike code provides a compact representation of movement, unlike those previously explored in the domain of robotic control. [sent-262, score-0.24]
91 We have proposed to use Markov processes conditioned on the character as a model for these spike emissions. [sent-263, score-0.368]
92 7 An assumption made in this work is that the primitives are learnt velocity profiles. [sent-267, score-0.767]
93 We have not included any feedback control systems in the primitive production; however, the presence of low-level feedback, such as in a spring system (Hinton & Nair, 2005) or dynamic motor primitives (Ijspeert et al. [sent-268, score-1.156]
94 We make no assumptions about how the primitives are learnt in biology. [sent-271, score-0.726]
95 It would be interesting to study the evolution of the primitives during human learning of a new character set. [sent-272, score-0.79]
96 This could be related to a more accurate and efficient use of primitives already available. [sent-274, score-0.575]
97 However, it might also be the case that new primitives are learnt, or old ones adapted. [sent-275, score-0.575]
98 Shared and specific muscle synergies in natural motor behaviors. [sent-311, score-0.179]
99 Combinations of muscle synergies in the construction of a natural motor behavior. [sent-317, score-0.179]
100 A primitive based generative model to infer timing information in unpartitioned handwriting data. [sent-374, score-0.963]
wordName wordTfidf (topN-words)
[('primitives', 0.575), ('primitive', 0.403), ('timing', 0.276), ('gm', 0.234), ('character', 0.215), ('handwriting', 0.179), ('fhmm', 0.176), ('movement', 0.167), ('learnt', 0.151), ('spike', 0.122), ('bizzi', 0.103), ('motor', 0.1), ('erent', 0.078), ('characters', 0.075), ('di', 0.074), ('generative', 0.074), ('avella', 0.073), ('reconstruction', 0.072), ('motion', 0.071), ('spikes', 0.071), ('km', 0.069), ('robotic', 0.064), ('timings', 0.064), ('attractor', 0.059), ('piano', 0.059), ('onset', 0.059), ('superposition', 0.058), ('factorial', 0.051), ('schaal', 0.051), ('ijspeert', 0.051), ('wolpert', 0.051), ('reconstructions', 0.047), ('giszter', 0.044), ('reproduction', 0.044), ('saltiel', 0.044), ('synergies', 0.044), ('emitted', 0.044), ('distance', 0.042), ('velocity', 0.041), ('trajectory', 0.041), ('williams', 0.041), ('biological', 0.04), ('parameterised', 0.038), ('storkey', 0.038), ('autonomously', 0.038), ('nair', 0.038), ('samples', 0.037), ('yt', 0.036), ('muscle', 0.035), ('mth', 0.035), ('st', 0.033), ('compact', 0.032), ('model', 0.031), ('full', 0.031), ('cemgil', 0.029), ('digitisation', 0.029), ('frog', 0.029), ('kargo', 0.029), ('kawato', 0.029), ('matari', 0.029), ('nakanishi', 0.029), ('parameterise', 0.029), ('scribbling', 0.029), ('scribblings', 0.029), ('spring', 0.029), ('tablet', 0.029), ('markovian', 0.028), ('scatter', 0.028), ('et', 0.027), ('inference', 0.026), ('activated', 0.026), ('toussaint', 0.026), ('spinal', 0.026), ('onsets', 0.026), ('forrest', 0.026), ('ghahramani', 0.025), ('su', 0.024), ('markov', 0.023), ('amit', 0.023), ('emits', 0.023), ('pen', 0.023), ('pressure', 0.023), ('emit', 0.023), ('map', 0.022), ('probabilities', 0.022), ('generate', 0.022), ('hmm', 0.022), ('hinton', 0.022), ('feedback', 0.022), ('concurrent', 0.022), ('marc', 0.022), ('representation', 0.022), ('activation', 0.021), ('modelled', 0.021), ('level', 0.021), ('centred', 0.021), ('analyse', 0.021), ('edinburgh', 0.021), ('upon', 0.02), ('understanding', 0.02)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000001 133 nips-2007-Modelling motion primitives and their timing in biologically executed movements
Author: Ben Williams, Marc Toussaint, Amos J. Storkey
Abstract: Biological movement is built up of sub-blocks or motion primitives. Such primitives provide a compact representation of movement which is also desirable in robotic control applications. We analyse handwriting data to gain a better understanding of primitives and their timings in biological movements. Inference of the shape and the timing of primitives can be done using a factorial HMM based model, allowing the handwriting to be represented in primitive timing space. This representation provides a distribution of spikes corresponding to the primitive activations, which can also be modelled using HMM architectures. We show how the coupling of the low level primitive model, and the higher level timing model during inference can produce good reconstructions of handwriting, with shared primitives for all characters modelled. This coupled model also captures the variance profile of the dataset which is accounted for by spike timing jitter. The timing code provides a compact representation of the movement while generating a movement without an explicit timing model produces a scribbling style of output. 1
2 0.21768571 145 nips-2007-On Sparsity and Overcompleteness in Image Models
Author: Pietro Berkes, Richard Turner, Maneesh Sahani
Abstract: Computational models of visual cortex, and in particular those based on sparse coding, have enjoyed much recent attention. Despite this currency, the question of how sparse or how over-complete a sparse representation should be, has gone without principled answer. Here, we use Bayesian model-selection methods to address these questions for a sparse-coding model based on a Student-t prior. Having validated our methods on toy data, we find that natural images are indeed best modelled by extremely sparse distributions; although for the Student-t prior, the associated optimal basis size is only modestly over-complete. 1
3 0.16073851 103 nips-2007-Inferring Elapsed Time from Stochastic Neural Processes
Author: Misha Ahrens, Maneesh Sahani
Abstract: Many perceptual processes and neural computations, such as speech recognition, motor control and learning, depend on the ability to measure and mark the passage of time. However, the processes that make such temporal judgements possible are unknown. A number of different hypothetical mechanisms have been advanced, all of which depend on the known, temporally predictable evolution of a neural or psychological state, possibly through oscillations or the gradual decay of a memory trace. Alternatively, judgements of elapsed time might be based on observations of temporally structured, but stochastic processes. Such processes need not be specific to the sense of time; typical neural and sensory processes contain at least some statistical structure across a range of time scales. Here, we investigate the statistical properties of an estimator of elapsed time which is based on a simple family of stochastic process. 1
4 0.096388385 210 nips-2007-Unconstrained On-line Handwriting Recognition with Recurrent Neural Networks
Author: Alex Graves, Marcus Liwicki, Horst Bunke, Jürgen Schmidhuber, Santiago Fernández
Abstract: In online handwriting recognition the trajectory of the pen is recorded during writing. Although the trajectory provides a compact and complete representation of the written output, it is hard to transcribe directly, because each letter is spread over many pen locations. Most recognition systems therefore employ sophisticated preprocessing techniques to put the inputs into a more localised form. However these techniques require considerable human effort, and are specific to particular languages and alphabets. This paper describes a system capable of directly transcribing raw online handwriting data. The system consists of an advanced recurrent neural network with an output layer designed for sequence labelling, combined with a probabilistic language model. In experiments on an unconstrained online database, we record excellent results using either raw or preprocessed data, well outperforming a state-of-the-art HMM based system in both cases. 1
5 0.086447924 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models
Author: Tatyana Sharpee
Abstract: This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of the family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, Rényi divergence of order 2, is equivalent to least-square fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors in relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with Rényi divergence of order 1, also known as Kullback-Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. We numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (small number of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least square fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least square fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [3], one of the examples where information-theoretic measures are no more data limited than those derived from least squares. 1
6 0.075996891 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes
7 0.072535478 35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms
8 0.070181124 140 nips-2007-Neural characterization in partially observed populations of spiking neurons
9 0.063453466 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding
10 0.057700213 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons
11 0.053672526 18 nips-2007-A probabilistic model for generating realistic lip movements from speech
12 0.049823612 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior
13 0.047895283 197 nips-2007-The Infinite Markov Model
14 0.042324923 3 nips-2007-A Bayesian Model of Conditioned Perception
15 0.041135177 153 nips-2007-People Tracking with the Laplacian Eigenmaps Latent Variable Model
16 0.041113302 163 nips-2007-Receding Horizon Differential Dynamic Programming
17 0.040601194 74 nips-2007-EEG-Based Brain-Computer Interaction: Improved Accuracy by Automatic Single-Trial Error Detection
18 0.040582187 164 nips-2007-Receptive Fields without Spike-Triggering
19 0.039805949 173 nips-2007-Second Order Bilinear Discriminant Analysis for single trial EEG analysis
20 0.037664175 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity
topicId topicWeight
[(0, -0.126), (1, 0.049), (2, 0.122), (3, -0.039), (4, -0.013), (5, 0.014), (6, -0.026), (7, 0.003), (8, -0.04), (9, -0.044), (10, 0.011), (11, 0.006), (12, -0.006), (13, 0.019), (14, 0.031), (15, 0.001), (16, -0.042), (17, 0.099), (18, 0.028), (19, 0.009), (20, -0.046), (21, -0.001), (22, 0.133), (23, 0.061), (24, 0.003), (25, -0.083), (26, 0.041), (27, 0.049), (28, -0.031), (29, -0.179), (30, 0.227), (31, -0.176), (32, 0.149), (33, 0.118), (34, -0.053), (35, -0.048), (36, -0.154), (37, -0.205), (38, -0.013), (39, -0.105), (40, 0.193), (41, 0.097), (42, -0.047), (43, 0.148), (44, -0.148), (45, -0.152), (46, 0.111), (47, 0.058), (48, -0.117), (49, 0.124)]
simIndex simValue paperId paperTitle
same-paper 1 0.95524216 133 nips-2007-Modelling motion primitives and their timing in biologically executed movements
Author: Ben Williams, Marc Toussaint, Amos J. Storkey
Abstract: Biological movement is built up of sub-blocks or motion primitives. Such primitives provide a compact representation of movement which is also desirable in robotic control applications. We analyse handwriting data to gain a better understanding of primitives and their timings in biological movements. Inference of the shape and the timing of primitives can be done using a factorial HMM based model, allowing the handwriting to be represented in primitive timing space. This representation provides a distribution of spikes corresponding to the primitive activations, which can also be modelled using HMM architectures. We show how the coupling of the low level primitive model, and the higher level timing model during inference can produce good reconstructions of handwriting, with shared primitives for all characters modelled. This coupled model also captures the variance profile of the dataset which is accounted for by spike timing jitter. The timing code provides a compact representation of the movement while generating a movement without an explicit timing model produces a scribbling style of output. 1
2 0.5144161 103 nips-2007-Inferring Elapsed Time from Stochastic Neural Processes
Author: Misha Ahrens, Maneesh Sahani
Abstract: Many perceptual processes and neural computations, such as speech recognition, motor control and learning, depend on the ability to measure and mark the passage of time. However, the processes that make such temporal judgements possible are unknown. A number of different hypothetical mechanisms have been advanced, all of which depend on the known, temporally predictable evolution of a neural or psychological state, possibly through oscillations or the gradual decay of a memory trace. Alternatively, judgements of elapsed time might be based on observations of temporally structured, but stochastic processes. Such processes need not be specific to the sense of time; typical neural and sensory processes contain at least some statistical structure across a range of time scales. Here, we investigate the statistical properties of an estimator of elapsed time which is based on a simple family of stochastic process. 1
3 0.50265473 145 nips-2007-On Sparsity and Overcompleteness in Image Models
Author: Pietro Berkes, Richard Turner, Maneesh Sahani
Abstract: Computational models of visual cortex, and in particular those based on sparse coding, have enjoyed much recent attention. Despite this currency, the question of how sparse or how over-complete a sparse representation should be, has gone without principled answer. Here, we use Bayesian model-selection methods to address these questions for a sparse-coding model based on a Student-t prior. Having validated our methods on toy data, we find that natural images are indeed best modelled by extremely sparse distributions; although for the Student-t prior, the associated optimal basis size is only modestly over-complete. 1
4 0.44954839 210 nips-2007-Unconstrained On-line Handwriting Recognition with Recurrent Neural Networks
Author: Alex Graves, Marcus Liwicki, Horst Bunke, Jürgen Schmidhuber, Santiago Fernández
Abstract: In online handwriting recognition the trajectory of the pen is recorded during writing. Although the trajectory provides a compact and complete representation of the written output, it is hard to transcribe directly, because each letter is spread over many pen locations. Most recognition systems therefore employ sophisticated preprocessing techniques to put the inputs into a more localised form. However these techniques require considerable human effort, and are specific to particular languages and alphabets. This paper describes a system capable of directly transcribing raw online handwriting data. The system consists of an advanced recurrent neural network with an output layer designed for sequence labelling, combined with a probabilistic language model. In experiments on an unconstrained online database, we record excellent results using either raw or preprocessed data, well outperforming a state-of-the-art HMM based system in both cases. 1
5 0.4174608 35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms
Author: Dominik Endres, Mike Oram, Johannes Schindelin, Peter Foldiak
Abstract: The peristimulus time histogram (PSTH) and its more continuous cousin, the spike density function (SDF) are staples in the analytic toolkit of neurophysiologists. The former is usually obtained by binning spike trains, whereas the standard method for the latter is smoothing with a Gaussian kernel. Selection of a bin width or a kernel size is often done in a relatively arbitrary fashion, even though there have been recent attempts to remedy this situation [1, 2]. We develop an exact Bayesian, generative model approach to estimating PSTHs and demonstrate its superiority to competing methods. Further advantages of our scheme include automatic complexity control and error bars on its predictions. 1
6 0.35596868 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models
7 0.33103928 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes
8 0.30096045 3 nips-2007-A Bayesian Model of Conditioned Perception
9 0.27288011 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding
10 0.27006984 28 nips-2007-Augmented Functional Time Series Representation and Forecasting with Gaussian Processes
11 0.26446733 25 nips-2007-An in-silico Neural Model of Dynamic Routing through Neuronal Coherence
12 0.26433444 127 nips-2007-Measuring Neural Synchrony by Message Passing
13 0.25148579 9 nips-2007-A Probabilistic Approach to Language Change
14 0.24963984 130 nips-2007-Modeling Natural Sounds with Modulation Cascade Processes
15 0.24948606 163 nips-2007-Receding Horizon Differential Dynamic Programming
16 0.24583964 197 nips-2007-The Infinite Markov Model
17 0.2249773 18 nips-2007-A probabilistic model for generating realistic lip movements from speech
18 0.21467575 162 nips-2007-Random Sampling of States in Dynamic Programming
19 0.20225102 153 nips-2007-People Tracking with the Laplacian Eigenmaps Latent Variable Model
20 0.20121504 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons
topicId topicWeight
[(5, 0.034), (13, 0.043), (16, 0.042), (18, 0.017), (19, 0.018), (21, 0.07), (31, 0.021), (34, 0.013), (35, 0.02), (47, 0.073), (83, 0.07), (85, 0.017), (87, 0.033), (90, 0.04), (97, 0.391)]
simIndex simValue paperId paperTitle
same-paper 1 0.7540251 133 nips-2007-Modelling motion primitives and their timing in biologically executed movements
Author: Ben Williams, Marc Toussaint, Amos J. Storkey
Abstract: Biological movement is built up of sub-blocks or motion primitives. Such primitives provide a compact representation of movement which is also desirable in robotic control applications. We analyse handwriting data to gain a better understanding of primitives and their timings in biological movements. Inference of the shape and the timing of primitives can be done using a factorial HMM based model, allowing the handwriting to be represented in primitive timing space. This representation provides a distribution of spikes corresponding to the primitive activations, which can also be modelled using HMM architectures. We show how the coupling of the low level primitive model, and the higher level timing model during inference can produce good reconstructions of handwriting, with shared primitives for all characters modelled. This coupled model also captures the variance profile of the dataset which is accounted for by spike timing jitter. The timing code provides a compact representation of the movement while generating a movement without an explicit timing model produces a scribbling style of output. 1
2 0.7361933 35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms
Author: Dominik Endres, Mike Oram, Johannes Schindelin, Peter Foldiak
Abstract: The peristimulus time histogram (PSTH) and its more continuous cousin, the spike density function (SDF) are staples in the analytic toolkit of neurophysiologists. The former is usually obtained by binning spike trains, whereas the standard method for the latter is smoothing with a Gaussian kernel. Selection of a bin width or a kernel size is often done in a relatively arbitrary fashion, even though there have been recent attempts to remedy this situation [1, 2]. We develop an exact Bayesian, generative model approach to estimating PSTHs and demonstrate its superiority to competing methods. Further advantages of our scheme include automatic complexity control and error bars on its predictions. 1
3 0.65788686 202 nips-2007-The discriminant center-surround hypothesis for bottom-up saliency
Author: Dashan Gao, Vijay Mahadevan, Nuno Vasconcelos
Abstract: The classical hypothesis, that bottom-up saliency is a center-surround process, is combined with a more recent hypothesis that all saliency decisions are optimal in a decision-theoretic sense. The combined hypothesis is denoted as discriminant center-surround saliency, and the corresponding optimal saliency architecture is derived. This architecture equates the saliency of each image location to the discriminant power of a set of features with respect to the classification problem that opposes stimuli at center and surround, at that location. It is shown that the resulting saliency detector makes accurate quantitative predictions for various aspects of the psychophysics of human saliency, including non-linear properties beyond the reach of previous saliency models. Furthermore, it is shown that discriminant center-surround saliency can be easily generalized to various stimulus modalities (such as color, orientation and motion), and provides optimal solutions for many other saliency problems of interest for computer vision. Optimal solutions, under this hypothesis, are derived for a number of the former (including static natural images, dense motion fields, and even dynamic textures), and applied to a number of the latter (the prediction of human eye fixations, motion-based saliency in the presence of ego-motion, and motion-based saliency in the presence of highly dynamic backgrounds). In result, discriminant saliency is shown to predict eye fixations better than previous models, and produces background subtraction algorithms that outperform the state-of-the-art in computer vision. 1
4 0.34131441 93 nips-2007-GRIFT: A graphical model for inferring visual classification features from human data
Author: Michael Ross, Andrew Cohen
Abstract: This paper describes a new model for human visual classification that enables the recovery of image features that explain human subjects’ performance on different visual classification tasks. Unlike previous methods, this algorithm does not model their performance with a single linear classifier operating on raw image pixels. Instead, it represents classification as the combination of multiple feature detectors. This approach extracts more information about human visual classification than previous methods and provides a foundation for further exploration. 1
5 0.33588749 140 nips-2007-Neural characterization in partially observed populations of spiking neurons
Author: Jonathan W. Pillow, Peter E. Latham
Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons.
6 0.33568275 138 nips-2007-Near-Maximum Entropy Models for Binary Neural Representations of Natural Images
7 0.33507681 18 nips-2007-A probabilistic model for generating realistic lip movements from speech
8 0.33454973 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes
9 0.3344734 100 nips-2007-Hippocampal Contributions to Control: The Third Way
10 0.33413801 195 nips-2007-The Generalized FITC Approximation
11 0.33362994 164 nips-2007-Receptive Fields without Spike-Triggering
12 0.33340287 153 nips-2007-People Tracking with the Laplacian Eigenmaps Latent Variable Model
13 0.33306581 86 nips-2007-Exponential Family Predictive Representations of State
14 0.3318921 172 nips-2007-Scene Segmentation with CRFs Learned from Partially Labeled Images
15 0.33173779 94 nips-2007-Gaussian Process Models for Link Analysis and Transfer Learning
16 0.33159387 73 nips-2007-Distributed Inference for Latent Dirichlet Allocation
17 0.33016536 34 nips-2007-Bayesian Policy Learning with Trans-Dimensional MCMC
18 0.32985672 79 nips-2007-Efficient multiple hyperparameter learning for log-linear models
20 0.32965398 122 nips-2007-Locality and low-dimensions in the prediction of natural experience from fMRI