nips nips2008 nips2008-109 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Dominik Endres, Peter Foldiak
Abstract: We propose a novel application of Formal Concept Analysis (FCA) to neural decoding: instead of just trying to figure out which stimulus was presented, we demonstrate how to explore the semantic relationships in the neural representation of large sets of stimuli. FCA provides a way of displaying and interpreting such relationships via concept lattices. We explore the effects of neural code sparsity on the lattice. We then analyze neurophysiological data from high-level visual cortical area STSa, using an exact Bayesian approach to construct the formal context needed by FCA. Prominent features of the resulting concept lattices are discussed, including hierarchical face representation and indications for a product-of-experts code in real neurons. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract We propose a novel application of Formal Concept Analysis (FCA) to neural decoding: instead of just trying to figure out which stimulus was presented, we demonstrate how to explore the semantic relationships in the neural representation of large sets of stimuli. [sent-4, score-0.415]
2 FCA provides a way of displaying and interpreting such relationships via concept lattices. [sent-5, score-0.491]
3 We explore the effects of neural code sparsity on the lattice. [sent-6, score-0.242]
4 We then analyze neurophysiological data from high-level visual cortical area STSa, using an exact Bayesian approach to construct the formal context needed by FCA. [sent-7, score-0.254]
5 Prominent features of the resulting concept lattices are discussed, including hierarchical face representation and indications for a product-of-experts code in real neurons. [sent-8, score-0.792]
6 From an information-theoretic perspective, the patterns of activation of these neurons can be understood as the codewords comprising the neural code. [sent-10, score-0.318]
7 The neural code describes which pattern of activity corresponds to what information item. [sent-11, score-0.257]
8 We are interested in the (high-level) visual system, where such items may indicate the presence of a stimulus object or the value of some stimulus attribute, assuming that each time this item is represented the neural activity pattern will be the same or at least similar. [sent-12, score-0.642]
9 Neural decoding is the attempt to reconstruct the stimulus from the observed pattern of activation in a given population of neurons [1, 2, 3, 4]. [sent-13, score-0.535]
10 Popular decoding quality measures, such as Fisher’s linear discriminant [5] or mutual information [6], capture how accurately a stimulus can be determined from a neural activity pattern. [sent-14, score-0.397]
11 To explore the relationship between the representations of related items, Földiák [7] demonstrated that a sparse neural code can be interpreted as a graph (a kind of ”semantic net”). [sent-21, score-0.286]
12 The codewords can now be partially ordered under set inclusion: codeword A ≤ codeword B iff the set of active neurons of A is a subset of the active neurons of B. [sent-24, score-0.654]
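As an illustrative sketch (Python; the codewords below are hypothetical and not taken from the paper), this partial order can be expressed directly with set inclusion on the sets of active neurons:

# Sketch: codewords as sets of active (hypothetical) neurons,
# partially ordered by set inclusion as described above.
A = frozenset({1})       # codeword A: only neuron 1 active
B = frozenset({1, 2})    # codeword B: neurons 1 and 2 active
C = frozenset({3})       # codeword C: only neuron 3 active

def leq(x, y):
    """x <= y iff every neuron active in x is also active in y."""
    return x <= y

print(leq(A, B))               # True: A lies below B in the partial order
print(leq(C, B), leq(B, C))    # False False: B and C are incomparable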
13 In FCA, data from a binary relation (or formal context) is represented as a concept lattice. [sent-31, score-0.645]
14 Each concept has a set of formal objects as an extent and a set of formal attributes as an intent. [sent-32, score-0.856]
15 In our application, the stimuli are the formal objects, and the neurons are the formal attributes. [sent-33, score-0.686]
16 The FCA approach exploits the duality of extensional and intensional descriptions and allows the data to be explored visually in lattice diagrams. [sent-34, score-0.262]
17 We give a short introduction to FCA in section 2 and demonstrate how the sparseness (or denseness) of the neural code affects the structure of the concept lattice in section 3. [sent-36, score-0.938]
18 Section 4 describes the generative classifier model which we use to build the formal context from the responses of neurons in the high-level visual cortex of monkeys. [sent-37, score-0.529]
19 Finally, we discuss the concept lattices so obtained in section 5. [sent-38, score-0.573]
20 2 Formal Concept Analysis Central to FCA[9] is the notion of the formal context K := (G, M, I), which is comprised of a set of formal objects G, a set of formal attributes M and a binary relation I ⊆ G×M between members of G and M . [sent-39, score-0.699]
21 Table 1: Left: a simple example context, represented as a cross-table:

              n1   n2   n3
 monkeyFace    ×    ×
 monkeyHand         ×
 humanFace     ×
 spider                  ×

Right: all concepts of this context:

 concept   extent (stimuli)           intent (neurons)
 0         ALL                        NONE
 1         spider                     n3
 2         humanFace, monkeyFace      n1
 3         monkeyFace, monkeyHand     n2
 4         monkeyFace                 n1, n2
 5         NONE                       ALL
[sent-45, score-0.688]
22 The objects (rows) are 4 visual stimuli, the attributes (columns) are 3 (hypothetical) neurons n1,n2,n3. [sent-46, score-0.358]
23 An ”x” in a cell indicates that a stimulus elicited a response from the corresponding neuron. [sent-47, score-0.346]
24 Definition 1 [9]: A formal concept of the context K is a pair (A, B) with A ⊆ G, B ⊆ M such that A′ = B and B′ = A, where A′ denotes the set of attributes shared by all objects in A and B′ the set of objects that have all attributes in B. [sent-59, score-0.254]
25 A is called the extent and B is the intent of the concept (A, B). [sent-60, score-0.539]
26 B(K) denotes the set of all concepts of the context K. [sent-61, score-0.258]
27 In other words, given the relation I, (A, B) is a concept if A determines B and vice versa. [sent-62, score-0.441]
28 Table 1, right, lists all concepts of the context in table 1, left. [sent-64, score-0.283]
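The concept-forming step can be made concrete with a minimal sketch (Python; not the authors' code, and the incidence relation is the hypothetical one from table 1). Since every formal concept arises as (B′, B′′) for some attribute set B, a naive enumeration over attribute subsets recovers exactly the six concepts listed above:

from itertools import chain, combinations

G = ["monkeyFace", "monkeyHand", "humanFace", "spider"]   # formal objects (stimuli)
M = ["n1", "n2", "n3"]                                    # formal attributes (neurons)
I = {("monkeyFace", "n1"), ("monkeyFace", "n2"),          # incidence relation of table 1
     ("monkeyHand", "n2"), ("humanFace", "n1"), ("spider", "n3")}

def attrs_of(A):   # A': attributes shared by all objects in A
    return frozenset(m for m in M if all((g, m) in I for g in A))

def objs_of(B):    # B': objects having all attributes in B
    return frozenset(g for g in G if all((g, m) in I for m in B))

concepts = set()
for B in chain.from_iterable(combinations(M, r) for r in range(len(M) + 1)):
    extent = objs_of(frozenset(B))                 # B'
    concepts.add((extent, attrs_of(extent)))       # (B', B'') is always a concept

for extent, intent in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(extent), sorted(intent))

The exponential enumeration is only meant for toy contexts; the real lattices of section 5 would be computed with a dedicated FCA algorithm.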
29 One can visualize the defining property of a concept as follows: if (A, B) is a concept, reorder the rows and columns of the cross table such that all objects in A are in adjacent rows and all attributes in B are in adjacent columns; the cells at their intersection then form a maximal rectangle completely filled with ×’s. [sent-65, score-0.542]
30 It can be shown [8, 9] that B(K) and the concept order form a complete lattice. [sent-72, score-0.403]
31 The concept lattice of the context in table 1, with full and reduced labeling, is shown in fig. [sent-73, score-0.747]
32 Full labeling means that a concept node is depicted with its full extent and intent. [sent-75, score-0.511]
33 A reduced labeled concept lattice shows an object only in the smallest (w.r.t. the concept order of def. [sent-76, score-0.711]
34 2) concept of whose extent the object is a member. [sent-80, score-0.494]
35 This concept is called the object concept, or the concept that introduces the object. [sent-81, score-0.84]
36 Likewise, an attribute is shown only in the largest concept of whose intent the attribute is a member, the attribute concept, which introduces the attribute. [sent-82, score-0.695]
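The reduced labeling can be read off directly from the derivation operators: the object concept of g is (g′′, g′) and the attribute concept of m is (m′, m′′). A minimal sketch on the table 1 context (again hypothetical data, not the authors' code):

G = ["monkeyFace", "monkeyHand", "humanFace", "spider"]
M = ["n1", "n2", "n3"]
I = {("monkeyFace", "n1"), ("monkeyFace", "n2"),
     ("monkeyHand", "n2"), ("humanFace", "n1"), ("spider", "n3")}

attrs_of = lambda A: frozenset(m for m in M if all((g, m) in I for g in A))
objs_of  = lambda B: frozenset(g for g in G if all((g, m) in I for m in B))

for g in G:                       # the concept that introduces each stimulus
    intent = attrs_of({g})
    print("object concept of", g, ":", sorted(objs_of(intent)), sorted(intent))

for m in M:                       # the concept that introduces each neuron
    extent = objs_of({m})
    print("attribute concept of", m, ":", sorted(extent), sorted(attrs_of(extent)))

For example, monkeyFace is introduced by concept 4 (extent {monkeyFace}, intent {n1, n2}), which is where it appears under reduced labeling.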
37 The closedness of extents and intents has an important consequence for neuroscientific applications. [sent-83, score-0.252]
38 However, the original concepts will be embedded as a substructure in the larger lattice, with their ordering relationships preserved. [sent-87, score-0.267]
39 Figure 1: Concept lattice computed from the context in table 1. [sent-88, score-0.306]
40 The number in the leftmost compartment is the concept number. [sent-93, score-0.471]
41 With full labeling, all members of extents and intents are listed in each concept node. [sent-97, score-0.704]
42 With reduced labeling, an object/attribute is only listed in the extent/intent of the smallest/largest concept that contains it. [sent-99, score-0.403]
43 Reduced labeling is very useful for drawing large concept lattices. [sent-100, score-0.454]
44 The lattice diagrams make the ordering relationship between the concepts graphically explicit: concept 3 contains all ”monkey-related” stimuli, concept 2 encompasses all ”faces”. [sent-101, score-1.281]
45 They have a common child, concept 4, which is the ”monkeyFace” concept. [sent-102, score-0.403]
46 The ”spider” concept (concept 1) is incomparable to any other concept except the top and the bottom of the lattice. [sent-103, score-0.806]
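These ordering statements can be checked mechanically: in the concept order, (A1, B1) ≤ (A2, B2) holds iff A1 ⊆ A2 (equivalently, B2 ⊆ B1). A small sketch over the table 1 concepts (illustrative only):

concepts = {            # concept number -> (extent, intent), from table 1
    1: (frozenset({"spider"}),                   frozenset({"n3"})),
    2: (frozenset({"humanFace", "monkeyFace"}),  frozenset({"n1"})),
    3: (frozenset({"monkeyFace", "monkeyHand"}), frozenset({"n2"})),
    4: (frozenset({"monkeyFace"}),               frozenset({"n1", "n2"})),
}

def leq(i, j):          # subconcept order via extent inclusion
    return concepts[i][0] <= concepts[j][0]

print(leq(4, 2), leq(4, 3))      # True True: concept 4 lies below both 2 and 3
print(leq(1, 2) or leq(2, 1))    # False: the spider concept is incomparable to concept 2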
47 We will show (section 5) that the response patterns of real neurons can lead to similarly interpretable structures. [sent-105, score-0.241]
48 From a decoding perspective, a fully labeled concept shows those stimuli that have activated at least the set of neurons in the intent. [sent-106, score-0.927]
49 In contrast, the stimuli associated with a concept in reduced labeling will activate the set of neurons in the intent, but no others. [sent-107, score-0.896]
50 The fully labeled concepts thus show which stimuli are encoded by the activity of the concept’s neurons alone, without knowledge of the firing state of the other neurons. [sent-108, score-1.098]
51 Reduced labels, on the other hand, show those stimuli that elicited a response only from the neurons in the intent. [sent-109, score-0.503]
52 3 Concept lattices of local, sparse and dense codes One feature of neural codes which has attracted a considerable amount of interest is their sparseness. [sent-110, score-0.446]
53 In the case of a binary neural code, the sparseness of a codeword is inversely related to the fraction of active neurons. [sent-111, score-0.292]
54 The average sparseness across all codewords is the sparseness of the code [12, 13]. [sent-112, score-0.383]
55 Each of 10 stimuli was associated with randomly drawn responses of 10 neurons, subject to the constraints that the code be perfectly decodable and that the sparseness of each codeword equal the sparseness of the code. [sent-121, score-0.737]
56 Fig. 2 shows the contexts (represented as cross-tables) and the concept lattices of a local code (activity ratio 0.1). [sent-123, score-0.76]
57 Each context was built out of the responses of 10 (hypothetical) neurons to 10 stimuli. [sent-128, score-0.32]
58 In a local code, the response patterns to different stimuli have no overlapping activations, hence the lattice representing this code is an antichain with top and bottom element added. [sent-130, score-0.766]
59 Each concept in the antichain introduces (at least) one stimulus and (at least) one neuron. [sent-131, score-0.621]
60 In contrast, a dense code results in many concepts which introduce neither a stimulus nor a neuron. [sent-132, score-0.596]
61 The lattice of the dense code is also substantially longer than that of the sparse and local codes. [sent-133, score-0.484]
62 A dense code, even for a small number of stimuli, will give rise to many concepts, because the neuron sets representing the stimuli are very likely to have non-empty intersections. [sent-135, score-0.309]
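The dependence of lattice size on the activity ratio can be reproduced with a small simulation sketch (Python). The specific ratios below and the reading of "perfectly decodable" as "all codewords distinct" are our assumptions for illustration, not a transcript of the paper's procedure:

import random
from itertools import chain, combinations

def random_code(n_stimuli=10, n_neurons=10, k_active=3, seed=0):
    """Each stimulus activates exactly k_active of n_neurons; codewords are kept distinct."""
    rng = random.Random(seed)
    code = set()
    while len(code) < n_stimuli:
        code.add(frozenset(rng.sample(range(n_neurons), k_active)))
    return list(code)

def count_concepts(code, n_neurons):
    def objs_of(B):
        return frozenset(s for s, active in enumerate(code) if B <= active)
    def attrs_of(A):
        return frozenset(m for m in range(n_neurons) if all(m in code[s] for s in A))
    found = set()
    for B in chain.from_iterable(combinations(range(n_neurons), r)
                                 for r in range(n_neurons + 1)):
        A = objs_of(frozenset(B))
        found.add((A, attrs_of(A)))
    return len(found)

for k in (1, 3, 5):               # activity ratios 0.1 (local), 0.3 (sparse), 0.5 (dense)
    print("activity ratio", k / 10, ":", count_concepts(random_code(k_active=k), 10), "concepts")

For the local code the count collapses to the ten stimulus concepts plus top and bottom, while the dense code produces many additional concepts that introduce neither a stimulus nor a neuron, as described above.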
63 These intersections are potentially the intents of concepts which are larger than those concepts that introduce the stimuli. [sent-136, score-0.542]
64 Determining these intents thus requires the observation of a large number of neurons, which is unappealing from a decoding perspective. [sent-139, score-0.236]
65 The local code does not have this drawback, but is hampered by a small encoding capacity (maximal number of concepts with non-empty extents): the concept lattice in fig. [sent-140, score-1.045]
66 4 Building a formal context from responses of high-level visual neurons To explore whether FCA is a suitable tool for interpreting real neural codes, we constructed formal contexts from the responses of high-level visual cortical cells in area STSa (part of the temporal lobe) of monkeys. [sent-143, score-1.135]
67 Briefly, spike trains were obtained from neurons within the upper and lower banks of the superior temporal sulcus (STSa) via standard extracellular recording techniques [22] from an awake and behaving monkey (Macaca mulatta) performing a fixation task. [sent-149, score-0.27]
68 The recorded firing patterns were turned into distinct samples, each of which contained the spikes from −300 ms before to 600 ms after the stimulus onset with a temporal resolution of 1 ms. [sent-151, score-0.244]
69 The stimulus set consisted of 1704 images, containing color and black and white views of human and monkey head and body, animals, fruits, natural outdoor scenes, abstract drawings and cartoons. [sent-152, score-0.247]
70 A given cell was tested on a subset of 600 or 1200 of these stimuli; each stimulus was presented between 1 and 15 times. [sent-155, score-0.247]
71 In the RSVP condition, it is usually impossible to extract more than 1 bit of stimulus identity-related information from a spiketrain per stimulus [24]. [sent-161, score-0.358]
72 We do not suggest that real neurons have a binary activation function. [sent-162, score-0.233]
73 Furthermore, since not all cells were tested on all stimuli, we also had to select pairs of subsets of cells and stimuli such that every cell in a pair was tested on every stimulus in that pair. [sent-172, score-0.609]
74 Incidentally, this selection can also be accomplished with FCA, by determining the concepts of a context with gJm =”stimulus g was tested on cell m” and selecting those with a large number of stimuli × number of cells. [sent-173, score-0.554]
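A sketch of this selection step (Python, with a made-up "tested on" relation standing in for the real recording data): the concepts of the relation gJm are exactly the maximal complete stimulus-by-cell blocks, which can then be ranked by |extent| × |intent|:

from itertools import chain, combinations

stimuli = [f"s{i}" for i in range(6)]
cells   = [f"c{j}" for j in range(4)]
# Hypothetical relation J: which stimuli were shown while each cell was recorded.
tested = {(s, c) for s in stimuli[:5] for c in cells[:3]} | \
         {(s, c) for s in stimuli for c in cells[2:]}

def stims_of(B):   # stimuli tested on every cell in B
    return frozenset(s for s in stimuli if all((s, c) in tested for c in B))

def cells_of(A):   # cells on which every stimulus in A was tested
    return frozenset(c for c in cells if all((s, c) in tested for s in A))

blocks = set()
for B in chain.from_iterable(combinations(cells, r) for r in range(len(cells) + 1)):
    A = stims_of(frozenset(B))
    blocks.add((A, cells_of(A)))

best = max(blocks, key=lambda ab: len(ab[0]) * len(ab[1]))
print(len(best[0]), "stimuli x", len(best[1]), "cells")   # the largest complete block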
75 Two of these cell and stimulus subset pairs (”A”, containing 364 stimuli and 13 cells, and ”B”, containing 600 stimuli and 12 cells) were selected for further analysis. [sent-174, score-0.475]
76 The complete concept lattices were too large to display on a page. [sent-177, score-0.573]
77 Graphs of lattices A and B with reduced labeling on the stimuli are included in the supplementary material. [sent-178, score-0.487]
78 Figure 3: A: a subgraph of lattice A with reduced labeling on the stimuli. [sent-184, score-0.394]
79 All cells forming this part of the concept lattice were responsive to faces. [sent-190, score-0.795]
80 The concepts on the right side are not exclusively ”face” concepts, but most members of their extents have something ”roundish” about them. [sent-192, score-0.457]
81 In these graphs, the top of the frame around each concept image contains the concept number and the list of cells in the intent. [sent-196, score-0.933]
82 Fig. 3, A shows a subgraph from lattice A, which exclusively contained ”face” concepts. [sent-198, score-0.331]
83 Their extents consist of general ”face” images, while their intents are small (3 cells). [sent-203, score-0.252]
84 In contrast, the lower concepts introduce mostly single monkey faces, with the bottom concepts having an intent of 7 cells. [sent-204, score-0.573]
85 We may interpret this as an indication that the neural code has a higher ”resolution” for faces of conspecifics than for faces in general. [sent-205, score-0.306]
86 Fig. 3, B shows a subgraph from lattice B with full labeling. [sent-210, score-0.305]
87 The concepts in the left half of the graph are face concepts, whereas the extents of the concepts in the right half also contain a number of non-face stimuli. [sent-211, score-0.622]
88 The bottom concept, being subordinate to both the ”round” and the ”face” concepts, encompasses stimuli with both characteristics, which points towards a product-of-experts encoding [25]. [sent-213, score-0.288]
89 We also performed a control analysis (randomly assigning stimuli to the recorded responses) to determine whether the found concepts are indeed meaningful. [sent-219, score-0.441]
90 This procedure leaves the lattice structure intact, but mixes up the extents. [sent-220, score-0.236]
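A sketch of this shuffling control (Python, toy data rather than the recorded responses): permuting which stimulus label is attached to which binarized response vector preserves the lattice structure, here checked via the multiset of extent sizes, while scrambling the extents themselves:

import random
from itertools import chain, combinations

stimuli = ["faceA", "faceB", "handA", "spider"]
neurons = ["n1", "n2", "n3"]
responses = {"faceA": {"n1", "n2"}, "faceB": {"n1"},      # hypothetical binarized responses
             "handA": {"n2"}, "spider": {"n3"}}

def concepts(resp):
    def objs_of(B):
        return frozenset(s for s in stimuli if B <= resp[s])
    def attrs_of(A):
        return frozenset(n for n in neurons if all(n in resp[s] for s in A))
    out = set()
    for B in chain.from_iterable(combinations(neurons, r) for r in range(len(neurons) + 1)):
        A = objs_of(set(B))
        out.add((A, attrs_of(A)))
    return out

original = concepts(responses)

rng = random.Random(1)
labels = stimuli[:]
rng.shuffle(labels)                        # randomly reassign stimuli to response vectors
control = concepts({new: responses[old] for old, new in zip(stimuli, labels)})

print(sorted(len(e) for e, _ in original) == sorted(len(e) for e, _ in control))  # True: same shape
print(original == control)                 # typically False: the extents are mixed up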
91 Evidence of concept stability was obtained by trying different binarization thresholds: as stated in appendix A, we used a threshold probability of 0. [sent-223, score-0.442]
92 This technique is feasible even for high-level visual codes, where linear decoding methods [19, 20] fail, and it provides qualitative information about the structure of the code which goes beyond stimulus label decoding [4]. [sent-229, score-0.646]
93 Restructuring lattice theory: an approach based on hierarchies of concepts. [sent-278, score-0.236]
94 Exact Bayesian bin classification: a fast alternative to Bayesian classification and its application to neural response analysis. [sent-389, score-0.258]
95 A Method of Bayesian thresholding A standard way of obtaining binary responses from neurons is thresholding the spike count within a certain time window. [sent-393, score-0.403]
96 This is a relatively straightforward task if the stimuli are presented well separated in time and many trials per stimulus are available. [sent-394, score-0.407]
97 However, under RSVP conditions with few trials per stimulus, response separation becomes more tricky, as the responses to subsequent stimuli will tend to follow each other without an intermediate return to baseline activity. [sent-396, score-0.392]
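For comparison, the simple count-and-threshold binarization mentioned at the start of this appendix could look like the following sketch (window length and spike threshold are arbitrary illustrative choices, not values from the paper):

def binarize(spike_times_ms, onset_ms, window_ms=200.0, min_spikes=2):
    """Return 1 if at least min_spikes spikes fall in [onset, onset + window), else 0."""
    count = sum(onset_ms <= t < onset_ms + window_ms for t in spike_times_ms)
    return int(count >= min_spikes)

# One hypothetical trial: spike times in ms, stimulus onset at 0 ms.
print(binarize([-120.0, 35.0, 60.0, 410.0], onset_ms=0.0))   # 1: two spikes fall in the window

Under RSVP the choice of window and threshold becomes critical, which is what motivates marginalizing over such parameters in the Bayesian procedure described next.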
98 BBCa was designed for the purpose of inferring stimulus labels g from a continuous-valued, scalar measure z of a neural response. [sent-399, score-0.236]
99 The bin membership of a given neural response can then serve as the binary attribute required for FCA, since BBCa weighs bin configurations by their classification performance. [sent-404, score-0.461]
100 We proceed in a straight Bayesian fashion: since the bin membership is the only variable we are interested in, all other parameters (counting window size and position, class membership probabilities, bin boundaries) are marginalized. [sent-407, score-0.268]
wordName wordTfidf (topN-words)
[('concept', 0.403), ('fca', 0.388), ('lattice', 0.236), ('stimuli', 0.228), ('concepts', 0.213), ('stimulus', 0.179), ('neurons', 0.176), ('lattices', 0.17), ('ldi', 0.17), ('code', 0.159), ('formal', 0.141), ('extents', 0.136), ('cells', 0.127), ('decoding', 0.12), ('intents', 0.116), ('bin', 0.104), ('responses', 0.099), ('bbca', 0.097), ('monkeyface', 0.097), ('codeword', 0.085), ('sparseness', 0.083), ('intent', 0.079), ('rsvp', 0.078), ('xx', 0.077), ('attributes', 0.076), ('attribute', 0.071), ('subgraph', 0.069), ('cell', 0.068), ('monkey', 0.068), ('compartment', 0.068), ('visual', 0.068), ('response', 0.065), ('codes', 0.065), ('face', 0.06), ('endres', 0.058), ('spider', 0.058), ('stsa', 0.058), ('superconcept', 0.058), ('codewords', 0.058), ('neural', 0.057), ('extent', 0.057), ('relationships', 0.054), ('labeling', 0.051), ('xxx', 0.051), ('items', 0.051), ('members', 0.049), ('thresholding', 0.049), ('neurophysiology', 0.047), ('dense', 0.045), ('faces', 0.045), ('context', 0.045), ('conceptual', 0.044), ('sparse', 0.044), ('hypothetical', 0.044), ('semantic', 0.042), ('activity', 0.041), ('antichain', 0.039), ('binarization', 0.039), ('conspeci', 0.039), ('ganter', 0.039), ('gim', 0.039), ('humanface', 0.039), ('monkeyhand', 0.039), ('oram', 0.039), ('patters', 0.039), ('resposes', 0.039), ('roundish', 0.039), ('wille', 0.039), ('xxxx', 0.039), ('objects', 0.038), ('relation', 0.038), ('reduced', 0.038), ('active', 0.037), ('neuron', 0.036), ('encoding', 0.034), ('interpreting', 0.034), ('elicited', 0.034), ('andrews', 0.034), ('homunculus', 0.034), ('mammalian', 0.034), ('object', 0.034), ('population', 0.033), ('something', 0.033), ('represented', 0.033), ('bayesian', 0.032), ('icann', 0.031), ('invariances', 0.031), ('membership', 0.03), ('binary', 0.03), ('serial', 0.029), ('responsive', 0.029), ('http', 0.028), ('contexts', 0.028), ('counting', 0.028), ('activation', 0.027), ('encompasses', 0.026), ('exclusively', 0.026), ('temporal', 0.026), ('explore', 0.026), ('table', 0.025)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999893 109 nips-2008-Interpreting the neural code with Formal Concept Analysis
Author: Dominik Endres, Peter Foldiak
Abstract: We propose a novel application of Formal Concept Analysis (FCA) to neural decoding: instead of just trying to figure out which stimulus was presented, we demonstrate how to explore the semantic relationships in the neural representation of large sets of stimuli. FCA provides a way of displaying and interpreting such relationships via concept lattices. We explore the effects of neural code sparsity on the lattice. We then analyze neurophysiological data from high-level visual cortical area STSa, using an exact Bayesian approach to construct the formal context needed by FCA. Prominent features of the resulting concept lattices are discussed, including hierarchical face representation and indications for a product-of-experts code in real neurons. 1
2 0.14324574 67 nips-2008-Effects of Stimulus Type and of Error-Correcting Code Design on BCI Speller Performance
Author: Jeremy Hill, Jason Farquhar, Suzanna Martens, Felix Biessmann, Bernhard Schölkopf
Abstract: From an information-theoretic perspective, a noisy transmission system such as a visual Brain-Computer Interface (BCI) speller could benefit from the use of errorcorrecting codes. However, optimizing the code solely according to the maximal minimum-Hamming-distance criterion tends to lead to an overall increase in target frequency of target stimuli, and hence a significantly reduced average target-to-target interval (TTI), leading to difficulties in classifying the individual event-related potentials (ERPs) due to overlap and refractory effects. Clearly any change to the stimulus setup must also respect the possible psychophysiological consequences. Here we report new EEG data from experiments in which we explore stimulus types and codebooks in a within-subject design, finding an interaction between the two factors. Our data demonstrate that the traditional, rowcolumn code has particular spatial properties that lead to better performance than one would expect from its TTIs and Hamming-distances alone, but nonetheless error-correcting codes can improve performance provided the right stimulus type is used. 1
3 0.13091417 231 nips-2008-Temporal Dynamics of Cognitive Control
Author: Jeremy Reynolds, Michael C. Mozer
Abstract: Cognitive control refers to the flexible deployment of memory and attention in response to task demands and current goals. Control is often studied experimentally by presenting sequences of stimuli, some demanding a response, and others modulating the stimulus-response mapping. In these tasks, participants must maintain information about the current stimulus-response mapping in working memory. Prominent theories of cognitive control use recurrent neural nets to implement working memory, and optimize memory utilization via reinforcement learning. We present a novel perspective on cognitive control in which working memory representations are intrinsically probabilistic, and control operations that maintain and update working memory are dynamically determined via probabilistic inference. We show that our model provides a parsimonious account of behavioral and neuroimaging data, and suggest that it offers an elegant conceptualization of control in which behavior can be cast as optimal, subject to limitations on learning and the rate of information processing. Moreover, our model provides insight into how task instructions can be directly translated into appropriate behavior and then efficiently refined with subsequent task experience. 1
4 0.12281749 8 nips-2008-A general framework for investigating how far the decoding process in the brain can be simplified
Author: Masafumi Oizumi, Toshiyuki Ishii, Kazuya Ishibashi, Toshihiko Hosoya, Masato Okada
Abstract: “How is information decoded in the brain?” is one of the most difficult and important questions in neuroscience. Whether neural correlation is important or not in decoding neural activities is of special interest. We have developed a general framework for investigating how far the decoding process in the brain can be simplified. First, we hierarchically construct simplified probabilistic models of neural responses that ignore more than Kth-order correlations by using a maximum entropy principle. Then, we compute how much information is lost when information is decoded using the simplified models, i.e., “mismatched decoders”. We introduce an information theoretically correct quantity for evaluating the information obtained by mismatched decoders. We applied our proposed framework to spike data for vertebrate retina. We used 100-ms natural movies as stimuli and computed the information contained in neural activities about these movies. We found that the information loss is negligibly small in population activities of ganglion cells even if all orders of correlation are ignored in decoding. We also found that if we assume stationarity for long durations in the information analysis of dynamically changing stimuli like natural movies, pseudo correlations seem to carry a large portion of the information. 1
5 0.11487374 36 nips-2008-Beyond Novelty Detection: Incongruent Events, when General and Specific Classifiers Disagree
Author: Daphna Weinshall, Hynek Hermansky, Alon Zweig, Jie Luo, Holly Jimison, Frank Ohl, Misha Pavel
Abstract: Unexpected stimuli are a challenge to any machine learning algorithm. Here we identify distinct types of unexpected events, focusing on ’incongruent events’ when ’general level’ and ’specific level’ classifiers give conflicting predictions. We define a formal framework for the representation and processing of incongruent events: starting from the notion of label hierarchy, we show how partial order on labels can be deduced from such hierarchies. For each event, we compute its probability in different ways, based on adjacent levels (according to the partial order) in the label hierarchy. An incongruent event is an event where the probability computed based on some more specific level (in accordance with the partial order) is much smaller than the probability computed based on some more general level, leading to conflicting predictions. We derive algorithms to detect incongruent events from different types of hierarchies, corresponding to class membership or part membership. Respectively, we show promising results with real data on two specific problems: Out Of Vocabulary words in speech recognition, and the identification of a new sub-class (e.g., the face of a new individual) in audio-visual facial object recognition.
6 0.11139405 43 nips-2008-Cell Assemblies in Large Sparse Inhibitory Networks of Biologically Realistic Spiking Neurons
7 0.084082551 206 nips-2008-Sequential effects: Superstition or rational behavior?
8 0.082911663 137 nips-2008-Modeling Short-term Noise Dependence of Spike Counts in Macaque Prefrontal Cortex
9 0.082877003 60 nips-2008-Designing neurophysiology experiments to optimally constrain receptive field models along parametric submanifolds
10 0.078401446 240 nips-2008-Tracking Changing Stimuli in Continuous Attractor Neural Networks
11 0.072393134 100 nips-2008-How memory biases affect information transmission: A rational analysis of serial reproduction
12 0.068731919 156 nips-2008-Nonparametric sparse hierarchical models describe V1 fMRI responses to natural images
13 0.066876821 219 nips-2008-Spectral Hashing
14 0.066021919 172 nips-2008-Optimal Response Initiation: Why Recent Experience Matters
15 0.065316498 58 nips-2008-Dependence of Orientation Tuning on Recurrent Excitation and Inhibition in a Network Model of V1
16 0.062244669 26 nips-2008-Analyzing human feature learning as nonparametric Bayesian inference
17 0.060224202 66 nips-2008-Dynamic visual attention: searching for coding length increments
18 0.059213858 118 nips-2008-Learning Transformational Invariants from Natural Movies
19 0.058572397 136 nips-2008-Model selection and velocity estimation using novel priors for motion patterns
20 0.058517277 81 nips-2008-Extracting State Transition Dynamics from Multiple Spike Trains with Correlated Poisson HMM
topicId topicWeight
[(0, -0.162), (1, 0.029), (2, 0.196), (3, 0.057), (4, -0.06), (5, 0.006), (6, -0.043), (7, -0.035), (8, 0.09), (9, 0.06), (10, -0.035), (11, 0.07), (12, -0.154), (13, 0.012), (14, -0.042), (15, 0.114), (16, 0.071), (17, -0.029), (18, 0.092), (19, -0.121), (20, -0.117), (21, 0.008), (22, 0.112), (23, -0.064), (24, -0.074), (25, -0.064), (26, 0.025), (27, 0.003), (28, 0.001), (29, -0.062), (30, -0.034), (31, -0.074), (32, 0.074), (33, -0.011), (34, 0.124), (35, -0.025), (36, -0.069), (37, 0.13), (38, -0.114), (39, 0.028), (40, 0.006), (41, 0.047), (42, -0.029), (43, -0.031), (44, -0.013), (45, -0.007), (46, -0.181), (47, 0.02), (48, -0.035), (49, 0.166)]
simIndex simValue paperId paperTitle
same-paper 1 0.97928464 109 nips-2008-Interpreting the neural code with Formal Concept Analysis
Author: Dominik Endres, Peter Foldiak
Abstract: We propose a novel application of Formal Concept Analysis (FCA) to neural decoding: instead of just trying to figure out which stimulus was presented, we demonstrate how to explore the semantic relationships in the neural representation of large sets of stimuli. FCA provides a way of displaying and interpreting such relationships via concept lattices. We explore the effects of neural code sparsity on the lattice. We then analyze neurophysiological data from high-level visual cortical area STSa, using an exact Bayesian approach to construct the formal context needed by FCA. Prominent features of the resulting concept lattices are discussed, including hierarchical face representation and indications for a product-of-experts code in real neurons. 1
Author: Masafumi Oizumi, Toshiyuki Ishii, Kazuya Ishibashi, Toshihiko Hosoya, Masato Okada
Abstract: “How is information decoded in the brain?” is one of the most difficult and important questions in neuroscience. Whether neural correlation is important or not in decoding neural activities is of special interest. We have developed a general framework for investigating how far the decoding process in the brain can be simplified. First, we hierarchically construct simplified probabilistic models of neural responses that ignore more than Kth-order correlations by using a maximum entropy principle. Then, we compute how much information is lost when information is decoded using the simplified models, i.e., “mismatched decoders”. We introduce an information theoretically correct quantity for evaluating the information obtained by mismatched decoders. We applied our proposed framework to spike data for vertebrate retina. We used 100-ms natural movies as stimuli and computed the information contained in neural activities about these movies. We found that the information loss is negligibly small in population activities of ganglion cells even if all orders of correlation are ignored in decoding. We also found that if we assume stationarity for long durations in the information analysis of dynamically changing stimuli like natural movies, pseudo correlations seem to carry a large portion of the information. 1
3 0.66790026 67 nips-2008-Effects of Stimulus Type and of Error-Correcting Code Design on BCI Speller Performance
Author: Jeremy Hill, Jason Farquhar, Suzanna Martens, Felix Biessmann, Bernhard Schölkopf
Abstract: From an information-theoretic perspective, a noisy transmission system such as a visual Brain-Computer Interface (BCI) speller could benefit from the use of errorcorrecting codes. However, optimizing the code solely according to the maximal minimum-Hamming-distance criterion tends to lead to an overall increase in target frequency of target stimuli, and hence a significantly reduced average target-to-target interval (TTI), leading to difficulties in classifying the individual event-related potentials (ERPs) due to overlap and refractory effects. Clearly any change to the stimulus setup must also respect the possible psychophysiological consequences. Here we report new EEG data from experiments in which we explore stimulus types and codebooks in a within-subject design, finding an interaction between the two factors. Our data demonstrate that the traditional, rowcolumn code has particular spatial properties that lead to better performance than one would expect from its TTIs and Hamming-distances alone, but nonetheless error-correcting codes can improve performance provided the right stimulus type is used. 1
4 0.59067464 231 nips-2008-Temporal Dynamics of Cognitive Control
Author: Jeremy Reynolds, Michael C. Mozer
Abstract: Cognitive control refers to the flexible deployment of memory and attention in response to task demands and current goals. Control is often studied experimentally by presenting sequences of stimuli, some demanding a response, and others modulating the stimulus-response mapping. In these tasks, participants must maintain information about the current stimulus-response mapping in working memory. Prominent theories of cognitive control use recurrent neural nets to implement working memory, and optimize memory utilization via reinforcement learning. We present a novel perspective on cognitive control in which working memory representations are intrinsically probabilistic, and control operations that maintain and update working memory are dynamically determined via probabilistic inference. We show that our model provides a parsimonious account of behavioral and neuroimaging data, and suggest that it offers an elegant conceptualization of control in which behavior can be cast as optimal, subject to limitations on learning and the rate of information processing. Moreover, our model provides insight into how task instructions can be directly translated into appropriate behavior and then efficiently refined with subsequent task experience. 1
5 0.58948874 240 nips-2008-Tracking Changing Stimuli in Continuous Attractor Neural Networks
Author: K. Wong, Si Wu, Chi Fung
Abstract: Continuous attractor neural networks (CANNs) are emerging as promising models for describing the encoding of continuous stimuli in neural systems. Due to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of neutrally stable states. In this study, we systematically explore how neutral stability of a CANN facilitates its tracking performance, a capacity believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network stationary states in the state space. We quantify the distortions of the bump shape during tracking, and study their effects on the tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable, and the reaction time to catch up an abrupt change in stimulus. 1
6 0.56594425 43 nips-2008-Cell Assemblies in Large Sparse Inhibitory Networks of Biologically Realistic Spiking Neurons
7 0.52892011 7 nips-2008-A computational model of hippocampal function in trace conditioning
8 0.52260226 36 nips-2008-Beyond Novelty Detection: Incongruent Events, when General and Specific Classifiers Disagree
9 0.4758096 172 nips-2008-Optimal Response Initiation: Why Recent Experience Matters
10 0.45729145 100 nips-2008-How memory biases affect information transmission: A rational analysis of serial reproduction
11 0.44029003 156 nips-2008-Nonparametric sparse hierarchical models describe V1 fMRI responses to natural images
12 0.41618761 66 nips-2008-Dynamic visual attention: searching for coding length increments
13 0.41571042 90 nips-2008-Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity
14 0.41402191 60 nips-2008-Designing neurophysiology experiments to optimally constrain receptive field models along parametric submanifolds
15 0.41257009 27 nips-2008-Artificial Olfactory Brain for Mixture Identification
16 0.39679542 58 nips-2008-Dependence of Orientation Tuning on Recurrent Excitation and Inhibition in a Network Model of V1
17 0.39165857 23 nips-2008-An ideal observer model of infant object perception
18 0.37010315 124 nips-2008-Load and Attentional Bayes
19 0.31082961 26 nips-2008-Analyzing human feature learning as nonparametric Bayesian inference
20 0.30981207 42 nips-2008-Cascaded Classification Models: Combining Models for Holistic Scene Understanding
topicId topicWeight
[(6, 0.061), (7, 0.553), (12, 0.019), (28, 0.119), (57, 0.048), (59, 0.02), (77, 0.031), (78, 0.014), (83, 0.031), (94, 0.01)]
simIndex simValue paperId paperTitle
same-paper 1 0.94970322 109 nips-2008-Interpreting the neural code with Formal Concept Analysis
Author: Dominik Endres, Peter Foldiak
Abstract: We propose a novel application of Formal Concept Analysis (FCA) to neural decoding: instead of just trying to figure out which stimulus was presented, we demonstrate how to explore the semantic relationships in the neural representation of large sets of stimuli. FCA provides a way of displaying and interpreting such relationships via concept lattices. We explore the effects of neural code sparsity on the lattice. We then analyze neurophysiological data from high-level visual cortical area STSa, using an exact Bayesian approach to construct the formal context needed by FCA. Prominent features of the resulting concept lattices are discussed, including hierarchical face representation and indications for a product-of-experts code in real neurons. 1
2 0.94083101 12 nips-2008-Accelerating Bayesian Inference over Nonlinear Differential Equations with Gaussian Processes
Author: Ben Calderhead, Mark Girolami, Neil D. Lawrence
Abstract: Identification and comparison of nonlinear dynamical system models using noisy and sparse experimental data is a vital task in many fields, however current methods are computationally expensive and prone to error due in part to the nonlinear nature of the likelihood surfaces induced. We present an accelerated sampling procedure which enables Bayesian inference of parameters in nonlinear ordinary and delay differential equations via the novel use of Gaussian processes (GP). Our method involves GP regression over time-series data, and the resulting derivative and time delay estimates make parameter inference possible without solving the dynamical system explicitly, resulting in dramatic savings of computational time. We demonstrate the speed and statistical accuracy of our approach using examples of both ordinary and delay differential equations, and provide a comprehensive comparison with current state of the art methods. 1
3 0.92530483 56 nips-2008-Deep Learning with Kernel Regularization for Visual Recognition
Author: Kai Yu, Wei Xu, Yihong Gong
Abstract: In this paper we aim to train deep neural networks for rapid visual recognition. The task is highly challenging, largely due to the lack of a meaningful regularizer on the functions realized by the networks. We propose a novel regularization method that takes advantage of kernel methods, where an oracle kernel function represents prior knowledge about the recognition task of interest. We derive an efficient algorithm using stochastic gradient descent, and demonstrate encouraging results on a wide range of recognition tasks, in terms of both accuracy and speed. 1
4 0.89897847 51 nips-2008-Convergence and Rate of Convergence of a Manifold-Based Dimension Reduction Algorithm
Author: Andrew Smith, Hongyuan Zha, Xiao-ming Wu
Abstract: We study the convergence and the rate of convergence of a local manifold learning algorithm: LTSA [13]. The main technical tool is the perturbation analysis on the linear invariant subspace that corresponds to the solution of LTSA. We derive a worst-case upper bound of errors for LTSA which naturally leads to a convergence result. We then derive the rate of convergence for LTSA in a special case. 1
5 0.86542463 45 nips-2008-Characterizing neural dependencies with copula models
Author: Pietro Berkes, Frank Wood, Jonathan W. Pillow
Abstract: The coding of information by neural populations depends critically on the statistical dependencies between neuronal responses. However, there is no simple model that can simultaneously account for (1) marginal distributions over single-neuron spike counts that are discrete and non-negative; and (2) joint distributions over the responses of multiple neurons that are often strongly dependent. Here, we show that both marginal and joint properties of neural responses can be captured using copula models. Copulas are joint distributions that allow random variables with arbitrary marginals to be combined while incorporating arbitrary dependencies between them. Different copulas capture different kinds of dependencies, allowing for a richer and more detailed description of dependencies than traditional summary statistics, such as correlation coefficients. We explore a variety of copula models for joint neural response distributions, and derive an efficient maximum likelihood procedure for estimating them. We apply these models to neuronal data collected in macaque pre-motor cortex, and quantify the improvement in coding accuracy afforded by incorporating the dependency structure between pairs of neurons. We find that more than one third of neuron pairs shows dependency concentrated in the lower or upper tails for their firing rate distribution. 1
6 0.66400349 71 nips-2008-Efficient Sampling for Gaussian Process Inference using Control Variables
7 0.65438789 137 nips-2008-Modeling Short-term Noise Dependence of Spike Counts in Macaque Prefrontal Cortex
8 0.63125139 188 nips-2008-QUIC-SVD: Fast SVD Using Cosine Trees
9 0.62511665 221 nips-2008-Stochastic Relational Models for Large-scale Dyadic Data using MCMC
10 0.62026852 213 nips-2008-Sparse Convolved Gaussian Processes for Multi-output Regression
11 0.61716491 60 nips-2008-Designing neurophysiology experiments to optimally constrain receptive field models along parametric submanifolds
12 0.60857254 54 nips-2008-Covariance Estimation for High Dimensional Data Vectors Using the Sparse Matrix Transform
13 0.59913731 99 nips-2008-High-dimensional support union recovery in multivariate regression
14 0.59080589 8 nips-2008-A general framework for investigating how far the decoding process in the brain can be simplified
15 0.58609343 146 nips-2008-Multi-task Gaussian Process Learning of Robot Inverse Dynamics
16 0.58499861 83 nips-2008-Fast High-dimensional Kernel Summations Using the Monte Carlo Multipole Method
17 0.58451569 192 nips-2008-Reducing statistical dependencies in natural signals using radial Gaussianization
18 0.57941091 66 nips-2008-Dynamic visual attention: searching for coding length increments
19 0.57900572 64 nips-2008-DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification
20 0.57896113 63 nips-2008-Dimensionality Reduction for Data in Multiple Feature Representations