nips nips2002 nips2002-176 knowledge-graph by maker-knowledge-mining

176 nips-2002-Replay, Repair and Consolidation


Source: pdf

Author: Szabolcs Káli, Peter Dayan

Abstract: A standard view of memory consolidation is that episodes are stored temporarily in the hippocampus, and are transferred to the neocortex through replay. Various recent experimental challenges to the idea of transfer, particularly for human memory, are forcing its re-evaluation. However, although there is independent neurophysiological evidence for replay, short of transfer, there are few theoretical ideas for what it might be doing. We suggest and demonstrate two important computational roles associated with neocortical indices.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: A standard view of memory consolidation is that episodes are stored temporarily in the hippocampus, and are transferred to the neocortex through replay. [sent-7, score-0.747]

2 We suggest and demonstrate two important computational roles associated with neocortical indices. [sent-10, score-0.31]

3 3, 4 Second, the same patients suffer from anterograde amnesia (that is, they cannot lay down new memories), even though many neocortical areas are palpably functioning, and procedural storage (including aversive conditioning and skill learning) works (more) normally. [sent-15, score-0.428]

4 15 The first and third of these evidentiary foundations are currently under active debate, especially for episodic memories (i.e., autobiographical memories for happenings). [sent-19, score-0.556]

5 This catastrophic interference can be avoided by re-storing old patterns (or something equivalent10, 19) at the same time as storing new information. [sent-23, score-0.226]

6 Thus, according to these schemes, patterns are stored wholesale in the hippocampus when they first appear, and are continually read back to cortex to cause plasticity along with the new information. [sent-24, score-0.809]
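The interleaving scheme described in this sentence — store episodes fast in the hippocampus, then replay them to cortex alongside new material — can be sketched in a few lines. Everything below is illustrative: `train_step`, the flat list standing in for the hippocampal store, and the replay ratio are our assumptions, not the paper's actual connectionist machinery.

```python
import random

def consolidate(train_step, hippocampal_store, new_patterns, replays_per_item=1):
    """Interleave each new pattern with replayed stored episodes, so the
    cortical learner never trains on new material in isolation (the classic
    remedy for catastrophic interference)."""
    for p in new_patterns:
        train_step(p)                              # store the new information
        for _ in range(replays_per_item):          # off-line replay of old traces
            train_step(random.choice(hippocampal_store))
```

On this reading, the ratio of replayed to new presentations is a free parameter; the schemes cited here differ mainly in how that ratio and the replay schedule are set.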

7 They regard the hippocampus as the final point of storage for all episodic memory, and permanently required for its recall. [sent-29, score-0.68]

8 If the hippocampus stores patterns permanently, what could the point be of replay? [sent-31, score-0.482]

9 In this scheme, the role of replay is building an index to the memory, effectively a form of recognition model. [sent-35, score-0.401]

10 Section 3 treats the repair of hippocampal indexing in the light of the vicissitudes of semantic change. [sent-37, score-0.633]

11 Section 4 sketches our account of the semantic elaboration of the index. [sent-38, score-0.283]

12 2 Semantic and Episodic Memory Figure 1 shows our existing account of the interaction between the neocortex and the hippocampus in semantic and episodic memory. [sent-39, score-0.957]

13 The conventional interpretation for this is as a model of semantic memory – the generic facts of the world, stripped of information about the time and place and other circumstances under which they were learnt. [sent-42, score-0.387]

14 However, the individual patterns on which the semantic learning is based are treated as episodic patterns, which should be recalled wholesale. [sent-43, score-0.864]

15 One main contribution of that work was to put episodic and semantic information into such particular correspondence. [sent-44, score-0.577]

16 All units in neocortical areas A, B, and C are connected to all units in area E/P through bidirectional, symmetric weights, but connections between units in the input layer are restricted to the same cortical area. [sent-53, score-0.59]

17 The hippocampus (HC) is not directly implemented, but it can influence and store the patterns in EP. [sent-55, score-0.51]

18 Recall performance on specific (episodic) patterns as a function of time between the initial presentation of the episodic pattern and testing (or, equivalently, time between training and lesion in hippocampally lesioned subjects) in the simulations. [sent-58, score-0.515]

19 In this previous model, the hippocampus acts as a fast-learning repository for the EP representation of patterns that have been (relatively recently) experienced, and plays two roles: aiding recall and training the neocortex. [sent-60, score-0.63]

20 The hippocampus improves recall by performing pattern completion on the EP representations induced by partial or noisy inputs, thus finding the nearest matching stored pattern. [sent-61, score-0.783]
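In its simplest reading, pattern completion of this kind is nearest-neighbor lookup on the observed entries of a cue. The sketch below assumes binary EP vectors and a boolean mask marking which entries of the cue were actually observed; both are assumptions, not details from the paper.

```python
import numpy as np

def complete(cue, stored, observed):
    """Hippocampal pattern completion, minimal sketch: return the stored
    EP pattern closest to the cue on its observed entries only."""
    dists = [np.sum((p[observed] - cue[observed]) ** 2) for p in stored]
    return stored[int(np.argmin(dists))]
```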

21 In turn, this, through neocortical semantic knowledge, engenders recall of an appropriate pattern. [sent-62, score-0.684]

22 The hippocampus trains the neocortex in an off-line (sleep) mode, reporting the patterns that it has stored to the neocortex to give the latter’s incremental plasticity the opportunity to absorb the new information. [sent-63, score-0.954]

23 The upper (thin) curve shows how well on average the full model can recall whole items from a partial cue as a function of time since the item was stored; the lower (thick) curve shows the same in the case that the hippocampal contribution is eliminated immediately before testing. [sent-66, score-0.418]

24 Both curves show how the neocortical network forgets particular episodic patterns as a function of continued semantic training. [sent-69, score-1.111]

25 Thus, first, the hippocampus is mandatorily required if memories are to be preserved – the forgetting curve for the normals in figure 1B is actually dominated by hippocampal forgetting. [sent-73, score-0.715]

26 Second, the inverted U-shaped curve in figure 1B arises because testing happens immediately after hippocampal removal. [sent-74, score-0.278]

27 Memories might turn out to be stabilized in the face of hippocampal damage in other ways. [sent-76, score-0.288]

28 21 For instance, cortical plasticity might be suppressed, if the hippocampus reports unfamiliarity as a plasticizing signal. [sent-77, score-0.43]

29 We are thus forced to start from the possibility that the hippocampus might indeed be a permanent repository, and reconsider the issue of replay and consolidation in the resulting light. [sent-82, score-0.837]

30 In this new scheme, there is still a critical role for replay, but one that is focused on the indexing relationship between neocortical and hippocampal representations rather than on writing into cortex the contents of the hippocampus. [sent-83, score-0.712]

31 3 Maintaining Access to Episodes Consider the fate of an episode that is stored in the hippocampus. [sent-84, score-0.418]

32 In a hierarchical network where the hippocampus is directly connected only to the topmost areas, successful recall of such an episode depends on the correspondence between low- and high-level cortical areas embodied by the neocortical network. [sent-85, score-1.075]

33 The neocortical network is the substrate of neocortical learning, reflecting, for instance, refinement of the existing semantic representation, changes in input statistics, or acquisition of a new semantic domain. [sent-89, score-1.238]

34 Such plasticity may disrupt the recall of stored episodic patterns by changing the correspondence between the input areas and EP. [sent-90, score-1.005]

35 The first of these possibilities may restrict the learning abilities of the neocortical network. [sent-92, score-0.287]

36 However, replay can be used to allow the connections into and out of the hippocampus to track the changing neocortical representational code. [sent-93, score-0.976]

37 In order to assess the effect of neocortical learning on the recall of previously stored episodes, either in the presence or absence of replay, the following paradigm was employed. [sent-94, score-0.626]

38 We started training the neocortical network by presenting to the input areas random combinations of valid patterns (20 independently generated random binary patterns for each area). [sent-95, score-0.771]

39 After a moderate amount of such general training (10,000 pattern presentations total), the EP representations of particular input patterns were associated with corresponding stored hippocampal traces, forming a set of stored episodes. [sent-96, score-1.078]

40 The quality of recall for these episodes was then monitored while general training continued. [sent-97, score-0.305]

41 Figure 2A shows the percentage of correct recall for the episodes stored after 10,000 presentations, as a function of the length of general semantic training. [sent-98, score-0.813]

42 Clearly, neocortical learning comes to erase the route to recall, even though the episode remains perfectly stored in the hippocampus throughout. [sent-100, score-1.036]

43 The larger graphs are averages over all stored episodes, while the smaller graphs are for individual episodes. [sent-102, score-0.225]

44 (B) and (C) analyze the reasons why episodic recall breaks down in (A). [sent-105, score-0.431]

45 (B) shows how the EP representation of stored episodes drifts away from the original stored patterns. [sent-106, score-0.675]

46 (C) shows how well recall works if it starts from the stored EP representation of the episode. [sent-107, score-0.373]

47 Figure 2B shows that semantic learning after the storage of the episode causes the EP representation of the episode to move away from the version with which the stored hippocampal trace is associated. [sent-109, score-1.22]

48 The magnitude of this change is such that, eventually, even the full original episode may fail to activate the corresponding hippocampal memory trace. [sent-110, score-0.555]

49 The effect of representational change on hippocampally directed recall in the input areas is milder in our case, as seen in Figure 2C; provided that the correct hippocampal trace does get activated, the full episode can be successfully recalled most of the time. [sent-111, score-0.862]

50 However, this component accounts for the relatively slower initial rise of episodic recall in Figure 2A (compare with Figure 2D), as well as some of the variability between patterns in Figure 2A (data not shown). [sent-112, score-0.586]

51 Within these epochs, the memory traces stored in the hippocampus get activated at random, which leads to the reactivation of the associated EP pattern, which in turn reactivates the input areas according to the existing semantic mapping. [sent-114, score-1.058]

52 The resulting pattern may be different from the one that initially gave rise to the stored episode, due to subsequent changes in the neocortical connections. [sent-115, score-0.555]

53 Indeed, maintaining this representational proximity exactly sets the requirement for the frequency of replay of the episodes. [sent-117, score-0.401]

54 As in our previous model, we assume that the local connections within each neocortical area implement a local attractor structure, which, in the absence of feedforward activation, restricts activity patterns within that area to those that correspond to valid input patterns. [sent-118, score-0.606]

55 Such an off-line reconstruction of the low-level representation of stored episodes may then support a wide variety of memory processes (including the previous model’s focus on gradually incorporating the information carried by that episode into the neocortical knowledge base 11, 21 ). [sent-120, score-1.034]

56 Here we focus on its use for maintenance of the episodic index. [sent-121, score-0.294]

57 To this end, starting from the reconstructed episode, the semantic correspondence between the different levels is employed in the feedforward direction in order to determine the up-to-date EP representation of the episode. [sent-122, score-0.396]

58 This EP pattern is then associated with the stored hippocampal episode which initiated the replay, so that the hippocampal and input level representations of the episode are again in register. [sent-123, score-1.274]
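Under the stated assumptions, the repair loop described here has a simple shape: replay an episode top-down, re-encode the reconstruction bottom-up, and overwrite the index entry. In the sketch, `top_down` and `bottom_up` are hypothetical stand-ins for the two directions of the neocortical semantic mapping.

```python
def repair_index(hc_traces, top_down, bottom_up):
    """Replay-driven index maintenance (sketch): bring each stored
    hippocampal trace back into register with the current EP code."""
    index = {}
    for episode_id, old_ep in hc_traces.items():
        reconstruction = top_down(old_ep)              # off-line reinstatement in input areas
        index[episode_id] = bottom_up(reconstruction)  # up-to-date EP representation
    return index
```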

59 Figure 2B demonstrates the efficacy of replay: the hippocampally stored episode now remains tied to the (shifting) EP representation of the episode, and episodic recall stays at high levels despite substantial changes in the neocortical network. [sent-124, score-1.487]

60 4 Index Extension Another important potential role for replay is extending the semantic aspects of the indexing scheme. [sent-125, score-0.73]

61 It should be possible to retrieve episodic memories on the basis of all input patterns to which they are closely related through the network of cortical semantic knowledge. [sent-126, score-1.008]

62 At present, this can happen only if the cortex produces similar EP codes for all those input patterns that are semantically related. [sent-127, score-0.295]

63 However, requiring that all semantic proximity be coded by syntactic proximity in essentially one single layer is far too stringent a requirement. [sent-128, score-0.393]

64 Rather, we should expect that the bulk of semantic information lives in synapses that are invisible to this layer, ie connections within and between lower layers, and this information must also influence indexing. [sent-129, score-0.342]

65 One way to extend semantic indexing involves on-line sampling. [sent-130, score-0.351]

66 That is, probabilistic updating in the cortical semantic network starting from a given input pattern is the canonical way of exploring the semantic neighborhood of an input. [sent-131, score-0.756]
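One alternating Gibbs step in a binary RBM looks as follows; successive samples drift away from the starting pattern, which is what lets the chain explore an input's semantic neighborhood. The weights and biases here are placeholders, not anything fit to the paper's stimuli.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_chain(v, W, b_h, b_v, steps, rng):
    """Alternating Gibbs sampling in a binary RBM: each step samples the
    hidden layer given the visibles, then the visibles given the hiddens."""
    samples = []
    for _ in range(steps):
        h = (rng.random(b_h.shape) < sigmoid(v @ W + b_h)).astype(float)
        v = (rng.random(b_v.shape) < sigmoid(h @ W.T + b_v)).astype(float)
        samples.append(v.copy())
    return samples
```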

67 Over sampling, the cortical pattern and its EP code change together, providing the opportunity for a match to be made between the EP activity and the contents of episodic memory. [sent-133, score-0.45]

68 These sampling dynamics would allow the recall of semantically relevant episodes, even if their explicit code is rather distant. [sent-134, score-0.245]

69 The role for replay in this process is to allow the semantic index to be extended through off-line rather than on-line sampling starting from the episodic patterns stored in the hippocampus. [sent-135, score-1.444]

70 It is thus analogous to Sutton’s24 use of replay in his DYNA architecture, in which an internal model of a Markov decision process is used to erase inconsistencies in a learned value function, and also to the wake-sleep algorithm’s 22 use of sleep sampling to learn a recognition model. [sent-136, score-0.464]

71 The main requirement is for a further plastic layer between EP and CA3 (presumably the perforant path) so that when replay based on an episode leads to a semantically, but not syntactically, related pattern, then the EP code for that pattern can induce hippocampal recall of the episode. [sent-138, score-1.032]

72 Figure 3 illustrates this use of replay in a highly simplified case (subject to the limitations of the RBM). [sent-139, score-0.358]

73 Here, there are 3 modules of units, each with possible patterns, and a semantic structure such that (with wrap-around, so, eg, ) and independent of the choice in and . [sent-140, score-0.283]

74 Figure 3A shows the covariance matrix of the activities of the EP units to the possible input patterns (arranged lexicographically). [sent-141, score-0.236]

75 The relatedness of the EP representation of related patterns is clear in the rich structure of this matrix – this shows the extent of the explicit code learnt by the RBM. [sent-142, score-0.25]

76 Imagine that and have been stored as episodic patterns. [sent-144, score-0.519]

77 That is, their EP representations are stored in the hippocampus and are available for recall and replay. [sent-145, score-0.692]

78 We may expect to retrieve from its semantic relation . [sent-146, score-0.283]

79 Figure 3B shows the explicit proximity (inverse square distance, see caption) of the EP representations of the input patterns to the EP representation of . [sent-147, score-0.337]
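The proximity measure here, as described, is just the inverse squared distance between EP vectors; a one-line version (the small epsilon guarding against division by zero is our addition):

```python
import numpy as np

def proximity(a, b, eps=1e-9):
    """Explicit proximity between two EP representations: inverse squared
    Euclidean distance, as described for Figure 3B (eps is our addition)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 / (np.sum((a - b) ** 2) + eps)
```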

80 Although is close, so are many other patterns that are not nearly so closely semantically related. [sent-148, score-0.233]

81 For this simulation, to keep simulation time manageable, the input patterns were chosen to be orthogonal; the hidden unit representations were nevertheless highly non-orthogonal; iterations of Gibbs sampling were used during RBM learning. [sent-164, score-0.321]

82 The banding shows the semantic structure (see text), but, as seen in (B), only weakly. [sent-167, score-0.283]

83 B) The proximities ( ) of the EP representations ( ) for all the patterns to that for (the entry for is blank; see boxed ). [sent-168, score-0.278]

84 Despite the covariance structure in (A), the syntactic representation of semantic closeness is weak: is , for instance. [sent-170, score-0.341]

85 C) Three stages of (unclamped) Gibbs sampling, starting ( times each) from the hippocampally replayed EP representations of (left column) and (right column). [sent-173, score-0.292]

86 After only a few iterations, and still dominate; after more, the semantically close patterns and dominate for and and for . [sent-175, score-0.233]

87 D) Logarithmically scaled proximities following delta-rule learning for the mapping from EP representations of the patterns in (C) to and respectively. [sent-176, score-0.278]
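The delta-rule remapping in (D) can be sketched as a linear map trained to send the EP codes of a stored episode's semantic neighbors onto that episode's own representation. Dimensions, the learning rate, and the zero initialization below are all assumptions.

```python
import numpy as np

def delta_rule(X, targets, lr=0.1, epochs=200):
    """Delta-rule sketch for the plastic layer between EP and the
    hippocampus: rows of X are EP codes of semantically related patterns,
    rows of targets are the episode representations they should index."""
    W = np.zeros((X.shape[1], targets.shape[1]))
    for _ in range(epochs):
        error = targets - X @ W          # prediction error on every pattern
        W += lr * X.T @ error / len(X)   # batch delta-rule update
    return W
```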

88 Now, the remapped EP representations of semantically relevant inputs are vastly closer to their associated episodic memories. [sent-177, score-0.424]

89 The two columns show histograms of the patterns retrieved in the visible layer after rounds of Gibbs sampling starting ( times) from the hippocampal representation of (left) and (right). [sent-182, score-0.607]

90 The network has learnt much about the semantic relationships, although it is far from perfect (over-training seems to make it worse, for reasons we do not understand), and equally likely patterns are not generated exactly equally often. [sent-183, score-0.52]

91 21 The columns of these histograms show how many sampled visible patterns are not close to one of the valid inputs; this happens only rarely. [sent-184, score-0.224]

92 During replay, the EP representation of these semantically-related patterns is then available so that a model mapping EP to an appropriate input to the hippocampal pattern matching process can be learnt. [sent-185, score-0.568]

93 In this paper, we have considered two particular aspects of the consolidation of the indexing relationship between semantic memory (in the neocortex) and episodic memory (in the hippocampus). [sent-190, score-1.004]

94 We showed how replay could be used to maintain the index in the face of on-going neocortical plasticity, and to broaden it in the light of neocortical semantic knowledge that is not directly accessible through the explicit code in the upper layers of cortex. [sent-191, score-1.275]

95 Unlike memory consolidation, neither of these involves neocortical plasticity during replay. [sent-192, score-0.464]

96 Despite some theoretical suggestions,25 it is not clear how the semantic model specifies these distances. [sent-196, score-0.283]

97 Our pragmatic solution was to replay the episodes and rely on the transience of the Markov chain induced by Gibbs sampling to produce semantic cousins with which it should be related. [sent-197, score-0.87]

98 Our model involves interaction between a hippocampal store for episodes and a neocortical store for semantics. [sent-199, score-0.792]

99 However, the computational issues about indexing apply with the same force if the episodes are actually stored separately elsewhere, such as in more frontal structures (McClelland, personal communication). [sent-200, score-0.484]

100 What now seems unlikely, despite our best earlier efforts, is that the problems of indexing can be circumvented by storing the episodes wholly within the semantic network. [sent-202, score-0.542]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('replay', 0.358), ('hippocampus', 0.304), ('episodic', 0.294), ('neocortical', 0.287), ('semantic', 0.283), ('hippocampal', 0.258), ('ep', 0.249), ('stored', 0.225), ('episode', 0.193), ('episodes', 0.191), ('patterns', 0.178), ('consolidation', 0.151), ('memories', 0.131), ('recall', 0.114), ('recalled', 0.109), ('memory', 0.104), ('hippocampally', 0.096), ('neocortex', 0.076), ('plasticity', 0.073), ('indexing', 0.068), ('presentations', 0.067), ('areas', 0.059), ('semantically', 0.055), ('cortical', 0.053), ('proximities', 0.051), ('representations', 0.049), ('catastrophic', 0.048), ('permanently', 0.048), ('cue', 0.046), ('mcclelland', 0.044), ('percent', 0.044), ('pattern', 0.043), ('proximity', 0.043), ('consolidated', 0.041), ('replayed', 0.041), ('sleep', 0.041), ('squire', 0.041), ('thousand', 0.04), ('sampling', 0.038), ('code', 0.038), ('gibbs', 0.038), ('mcnaughton', 0.036), ('rbm', 0.036), ('network', 0.036), ('representation', 0.034), ('storage', 0.034), ('input', 0.033), ('sci', 0.033), ('continued', 0.033), ('ie', 0.032), ('damage', 0.03), ('correspondence', 0.029), ('dayan', 0.029), ('cortex', 0.029), ('substrate', 0.029), ('edited', 0.029), ('store', 0.028), ('layer', 0.028), ('area', 0.028), ('erase', 0.027), ('lond', 0.027), ('moscovitch', 0.027), ('nadel', 0.027), ('philos', 0.027), ('psychol', 0.027), ('reactivation', 0.027), ('relatives', 0.027), ('wilson', 0.027), ('connections', 0.027), ('inputs', 0.026), ('units', 0.025), ('feedforward', 0.025), ('starting', 0.025), ('syntactic', 0.024), ('hc', 0.024), ('repair', 0.024), ('arguing', 0.024), ('permanent', 0.024), ('integrity', 0.024), ('soc', 0.024), ('amnesia', 0.024), ('anterograde', 0.024), ('hungarian', 0.024), ('activated', 0.023), ('roles', 0.023), ('reasons', 0.023), ('visible', 0.023), ('histograms', 0.023), ('matching', 0.022), ('older', 0.022), ('academy', 0.022), ('forgetting', 0.022), ('opportunity', 0.022), ('biol', 0.022), ('initiated', 0.022), ('index', 0.022), ('role', 0.021), ('evidence', 0.02), ('compelling', 0.02), ('inverted', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999976 176 nips-2002-Replay, Repair and Consolidation

Author: Szabolcs Káli, Peter Dayan

Abstract: A standard view of memory consolidation is that episodes are stored temporarily in the hippocampus, and are transferred to the neocortex through replay. Various recent experimental challenges to the idea of transfer, particularly for human memory, are forcing its re-evaluation. However, although there is independent neurophysiological evidence for replay, short of transfer, there are few theoretical ideas for what it might be doing. We suggest and demonstrate two important computational roles associated with neocortical indices.

2 0.16854346 125 nips-2002-Learning Semantic Similarity

Author: Jaz Kandola, Nello Cristianini, John S. Shawe-taylor

Abstract: The standard representation of text documents as bags of words suffers from well known limitations, mostly due to its inability to exploit semantic similarity between terms. Attempts to incorporate some notion of term similarity include latent semantic indexing [8], the use of semantic networks [9], and probabilistic methods [5]. In this paper we propose two methods for inferring such similarity from a corpus. The first one defines word-similarity based on document-similarity and viceversa, giving rise to a system of equations whose equilibrium point we use to obtain a semantic similarity measure. The second method models semantic relations by means of a diffusion process on a graph defined by lexicon and co-occurrence information. Both approaches produce valid kernel functions parametrised by a real number. The paper shows how the alignment measure can be used to successfully perform model selection over this parameter. Combined with the use of support vector machines we obtain positive results. 1

3 0.16096632 146 nips-2002-Modeling Midazolam's Effect on the Hippocampus and Recognition Memory

Author: Kenneth J. Malmberg, René Zeelenberg, Richard M. Shiffrin

Abstract: The benzodiazepine Midazolam causes dense, but temporary, anterograde amnesia, similar to that produced by hippocampal damage. Does the action of Midazolam on the hippocampus cause less storage, or less accurate storage, of information in episodic long-term memory? We used a simple variant of the REM model [18] to fit data collected by Hirshman, Fisher, Henthorn, Arndt, and Passannante [9] on the effects of Midazolam, study time, and normative word frequency on both yes-no and remember-know recognition memory. That a simple strength model fit well was contrary to the expectations of Hirshman et al. More important, within the Bayesian-based REM modeling framework, the data were consistent with the view that Midazolam causes less accurate storage, rather than less storage, of information in episodic memory.

4 0.10369771 112 nips-2002-Inferring a Semantic Representation of Text via Cross-Language Correlation Analysis

Author: Alexei Vinokourov, Nello Cristianini, John Shawe-Taylor

Abstract: The problem of learning a semantic representation of a text document from data is addressed, in the situation where a corpus of unlabeled paired documents is available, each pair being formed by a short English document and its French translation. This representation can then be used for any retrieval, categorization or clustering task, both in a standard and in a cross-lingual setting. By using kernel functions, in this case simple bag-of-words inner products, each part of the corpus is mapped to a high-dimensional space. The correlations between the two spaces are then learnt by using kernel Canonical Correlation Analysis. A set of directions is found in the first and in the second space that are maximally correlated. Since we assume the two representations are completely independent apart from the semantic content, any correlation between them should reflect some semantic similarity. Certain patterns of English words that relate to a specific meaning should correlate with certain patterns of French words corresponding to the same meaning, across the corpus. Using the semantic representation obtained in this way we first demonstrate that the correlations detected between the two versions of the corpus are significantly higher than random, and hence that a representation based on such features does capture statistical patterns that should reflect semantic information. Then we use such representation both in cross-language and in single-language retrieval tasks, observing performance that is consistently and significantly superior to LSI on the same data.

5 0.10250415 163 nips-2002-Prediction and Semantic Association

Author: Thomas L. Griffiths, Mark Steyvers

Abstract: We explore the consequences of viewing semantic association as the result of attempting to predict the concepts likely to arise in a particular context. We argue that the success of existing accounts of semantic representation comes as a result of indirectly addressing this problem, and show that a closer correspondence to human data can be obtained by taking a probabilistic approach that explicitly models the generative structure of language. 1

6 0.10131622 102 nips-2002-Hidden Markov Model of Cortical Synaptic Plasticity: Derivation of the Learning Rule

7 0.060108729 187 nips-2002-Spikernels: Embedding Spiking Neurons in Inner-Product Spaces

8 0.05949584 180 nips-2002-Selectivity and Metaplasticity in a Unified Calcium-Dependent Model

9 0.058336057 5 nips-2002-A Digital Antennal Lobe for Pattern Equalization: Analysis and Design

10 0.057895672 116 nips-2002-Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior

11 0.056995414 73 nips-2002-Dynamic Bayesian Networks with Deterministic Latent Tables

12 0.050107766 35 nips-2002-Automatic Acquisition and Efficient Representation of Syntactic Structures

13 0.049262337 18 nips-2002-Adaptation and Unsupervised Learning

14 0.048495427 81 nips-2002-Expected and Unexpected Uncertainty: ACh and NE in the Neocortex

15 0.04636528 177 nips-2002-Retinal Processing Emulation in a Programmable 2-Layer Analog Array Processor CMOS Chip

16 0.045251377 19 nips-2002-Adapting Codes and Embeddings for Polychotomies

17 0.044647917 157 nips-2002-On the Dirichlet Prior and Bayesian Regularization

18 0.043610312 186 nips-2002-Spike Timing-Dependent Plasticity in the Address Domain

19 0.039324626 115 nips-2002-Informed Projections

20 0.039178438 11 nips-2002-A Model for Real-Time Computation in Generic Neural Microcircuits


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.128), (1, 0.044), (2, 0.007), (3, -0.032), (4, -0.109), (5, 0.097), (6, 0.037), (7, -0.062), (8, 0.061), (9, -0.1), (10, -0.134), (11, -0.054), (12, 0.078), (13, -0.04), (14, 0.007), (15, 0.065), (16, -0.025), (17, 0.036), (18, 0.014), (19, -0.042), (20, -0.046), (21, -0.016), (22, 0.082), (23, -0.01), (24, -0.068), (25, -0.023), (26, 0.158), (27, 0.052), (28, 0.073), (29, 0.091), (30, 0.04), (31, -0.041), (32, 0.056), (33, -0.116), (34, 0.082), (35, -0.143), (36, 0.215), (37, -0.169), (38, -0.09), (39, -0.141), (40, -0.031), (41, -0.069), (42, 0.061), (43, -0.007), (44, 0.138), (45, 0.317), (46, -0.004), (47, 0.005), (48, -0.063), (49, -0.031)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97311395 176 nips-2002-Replay, Repair and Consolidation

Author: Szabolcs Káli, Peter Dayan

Abstract: A standard view of memory consolidation is that episodes are stored temporarily in the hippocampus, and are transferred to the neocortex through replay. Various recent experimental challenges to the idea of transfer, particularly for human memory, are forcing its re-evaluation. However, although there is independent neurophysiological evidence for replay, short of transfer, there are few theoretical ideas for what it might be doing. We suggest and demonstrate two important computational roles associated with neocortical indices.

2 0.74863231 146 nips-2002-Modeling Midazolam's Effect on the Hippocampus and Recognition Memory

Author: Kenneth J. Malmberg, René Zeelenberg, Richard M. Shiffrin

Abstract: The benzodiazepine Midazolam causes dense, but temporary, anterograde amnesia, similar to that produced by hippocampal damage. Does the action of Midazolam on the hippocampus cause less storage, or less accurate storage, of information in episodic long-term memory? We used a simple variant of the REM model [18] to fit data collected by Hirshman, Fisher, Henthorn, Arndt, and Passannante [9] on the effects of Midazolam, study time, and normative word frequency on both yes-no and remember-know recognition memory. That a simple strength model fit well was contrary to the expectations of Hirshman et al. More important, within the Bayesian-based REM modeling framework, the data were consistent with the view that Midazolam causes less accurate storage, rather than less storage, of information in episodic memory.

3 0.54411978 163 nips-2002-Prediction and Semantic Association

Author: Thomas L. Griffiths, Mark Steyvers

Abstract: We explore the consequences of viewing semantic association as the result of attempting to predict the concepts likely to arise in a particular context. We argue that the success of existing accounts of semantic representation comes as a result of indirectly addressing this problem, and show that a closer correspondence to human data can be obtained by taking a probabilistic approach that explicitly models the generative structure of language. 1

4 0.42497349 125 nips-2002-Learning Semantic Similarity

Author: Jaz Kandola, Nello Cristianini, John S. Shawe-taylor

Abstract: The standard representation of text documents as bags of words suffers from well known limitations, mostly due to its inability to exploit semantic similarity between terms. Attempts to incorporate some notion of term similarity include latent semantic indexing [8], the use of semantic networks [9], and probabilistic methods [5]. In this paper we propose two methods for inferring such similarity from a corpus. The first one defines word-similarity based on document-similarity and viceversa, giving rise to a system of equations whose equilibrium point we use to obtain a semantic similarity measure. The second method models semantic relations by means of a diffusion process on a graph defined by lexicon and co-occurrence information. Both approaches produce valid kernel functions parametrised by a real number. The paper shows how the alignment measure can be used to successfully perform model selection over this parameter. Combined with the use of support vector machines we obtain positive results. 1

5 0.41274256 112 nips-2002-Inferring a Semantic Representation of Text via Cross-Language Correlation Analysis

Author: Alexei Vinokourov, Nello Cristianini, John Shawe-Taylor

Abstract: The problem of learning a semantic representation of a text document from data is addressed, in the situation where a corpus of unlabeled paired documents is available, each pair being formed by a short English document and its French translation. This representation can then be used for any retrieval, categorization or clustering task, both in a standard and in a cross-lingual setting. By using kernel functions, in this case simple bag-of-words inner products, each part of the corpus is mapped to a high-dimensional space. The correlations between the two spaces are then learnt by using kernel Canonical Correlation Analysis. A set of directions is found in the first and in the second space that are maximally correlated. Since we assume the two representations are completely independent apart from the semantic content, any correlation between them should reflect some semantic similarity. Certain patterns of English words that relate to a specific meaning should correlate with certain patterns of French words corresponding to the same meaning, across the corpus. Using the semantic representation obtained in this way we first demonstrate that the correlations detected between the two versions of the corpus are significantly higher than random, and hence that a representation based on such features does capture statistical patterns that should reflect semantic information. Then we use such representation both in cross-language and in single-language retrieval tasks, observing performance that is consistently and significantly superior to LSI on the same data.
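The core computation in the abstract above, finding maximally correlated directions across two views, can be illustrated with classical linear CCA (the paper's kernel version reduces to this when the kernels are linear bag-of-words inner products). The sketch below uses synthetic paired data sharing a one-dimensional latent signal in place of an English/French corpus; the regularisation constant is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic paired "documents": two views that share a 1-d latent signal,
# standing in for aligned English and French bag-of-words vectors.
n = 200
z = rng.standard_normal(n)
X = np.outer(z, [1.0, -0.5, 0.3]) + 0.1 * rng.standard_normal((n, 3))
Y = np.outer(z, [0.8, 0.2]) + 0.1 * rng.standard_normal((n, 2))

def first_canonical_correlation(X, Y, reg=1e-3):
    """Leading canonical correlation via whitening + SVD (linear CCA)."""
    m = X.shape[0]
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Cxx = X.T @ X / m + reg * np.eye(X.shape[1])   # regularised covariances
    Cyy = Y.T @ Y / m + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / m
    # Whiten each view; the singular values of the whitened cross-covariance
    # are the canonical correlations.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
    return s[0]

rho = first_canonical_correlation(X, Y)
```

Because the two views here genuinely share a latent signal, the leading canonical correlation comes out close to 1, mirroring the paper's observation that cross-language correlations are far higher than chance when semantic content is shared.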

6 0.39205113 18 nips-2002-Adaptation and Unsupervised Learning

7 0.38593829 81 nips-2002-Expected and Unexpected Uncertainty: ACh and NE in the Neocortex

8 0.36656103 15 nips-2002-A Probabilistic Model for Learning Concatenative Morphology

9 0.28337565 35 nips-2002-Automatic Acquisition and Efficient Representation of Syntactic Structures

10 0.26537752 42 nips-2002-Bias-Optimal Incremental Problem Solving

11 0.24717759 102 nips-2002-Hidden Markov Model of Cortical Synaptic Plasticity: Derivation of the Learning Rule

12 0.23664348 116 nips-2002-Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior

13 0.23072363 180 nips-2002-Selectivity and Metaplasticity in a Unified Calcium-Dependent Model

14 0.22431125 177 nips-2002-Retinal Processing Emulation in a Programmable 2-Layer Analog Array Processor CMOS Chip

15 0.22203435 22 nips-2002-Adaptive Nonlinear System Identification with Echo State Networks

16 0.22186589 192 nips-2002-Support Vector Machines for Multiple-Instance Learning

17 0.2093396 7 nips-2002-A Hierarchical Bayesian Markovian Model for Motifs in Biopolymer Sequences

18 0.20574906 186 nips-2002-Spike Timing-Dependent Plasticity in the Address Domain

19 0.19734426 73 nips-2002-Dynamic Bayesian Networks with Deterministic Latent Tables

20 0.19536522 188 nips-2002-Stability-Based Model Selection


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(11, 0.048), (23, 0.034), (36, 0.329), (42, 0.06), (54, 0.075), (55, 0.034), (64, 0.013), (67, 0.048), (68, 0.049), (74, 0.047), (79, 0.02), (92, 0.019), (95, 0.01), (98, 0.094)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.82952332 176 nips-2002-Replay, Repair and Consolidation

Author: Szabolcs Káli, Peter Dayan

Abstract: A standard view of memory consolidation is that episodes are stored temporarily in the hippocampus, and are transferred to the neocortex through replay. Various recent experimental challenges to the idea of transfer, particularly for human memory, are forcing its re-evaluation. However, although there is independent neurophysiological evidence for replay, short of transfer, there are few theoretical ideas for what it might be doing. We suggest and demonstrate two important computational roles associated with neocortical indices.

2 0.42994469 41 nips-2002-Bayesian Monte Carlo

Author: Zoubin Ghahramani, Carl E. Rasmussen

Abstract: We investigate Bayesian alternatives to classical Monte Carlo methods for evaluating integrals. Bayesian Monte Carlo (BMC) allows the incorporation of prior knowledge, such as smoothness of the integrand, into the estimation. In a simple problem we show that this outperforms any classical importance sampling method. We also attempt more challenging multidimensional integrals involved in computing marginal likelihoods of statistical models (a.k.a. partition functions and model evidences). We find that Bayesian Monte Carlo outperformed Annealed Importance Sampling, although for very high dimensional problems or problems with massive multimodality BMC may be less adequate. One advantage of the Bayesian approach to Monte Carlo is that samples can be drawn from any distribution. This allows for the possibility of active design of sample points so as to maximise information gain.
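The key move in the abstract above is that, with a Gaussian-process prior on the integrand and a Gaussian input density, the posterior mean of the integral has a closed form. A minimal one-dimensional sketch follows; the squared-exponential lengthscale, sample design, and test integrand are illustrative choices, not taken from the paper.

```python
import numpy as np

# Bayesian quadrature sketch: put a GP prior on the integrand f, observe f at
# sample points, and integrate the posterior mean against p(x) in closed form.
# Here p(x) = N(0, sigma2) and k is a squared-exponential kernel, so the
# kernel-mean weights z_i have an analytic Gaussian-convolution expression.
w2, sigma2 = 0.5 ** 2, 1.0

def k(a, b):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * w2))

x = np.linspace(-3.0, 3.0, 25)           # sample locations (any design works)
f = np.exp(-x ** 2)                       # integrand evaluated at the samples

# z_i = int k(x, x_i) N(x; 0, sigma2) dx
z = np.sqrt(w2 / (w2 + sigma2)) * np.exp(-x ** 2 / (2 * (w2 + sigma2)))

K = k(x, x) + 1e-6 * np.eye(len(x))       # jitter for numerical stability
Z_hat = z @ np.linalg.solve(K, f)         # posterior mean of the integral
```

For this integrand the exact answer is int exp(-x^2) N(x; 0, 1) dx = 1/sqrt(3), and the estimate lands close to it. Because the samples enter only through the GP posterior, nothing requires them to be drawn from p(x), which is the freedom the abstract exploits for active design of sample points.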

3 0.4249509 102 nips-2002-Hidden Markov Model of Cortical Synaptic Plasticity: Derivation of the Learning Rule

Author: Michael Eisele, Kenneth D. Miller

Abstract: Cortical synaptic plasticity depends on the relative timing of pre- and postsynaptic spikes and also on the temporal pattern of presynaptic spikes and of postsynaptic spikes. We study the hypothesis that cortical synaptic plasticity does not associate individual spikes, but rather whole firing episodes, and depends only on when these episodes start and how long they last, but as little as possible on the timing of individual spikes. Here we present the mathematical background for such a study. Standard methods from hidden Markov models are used to define what “firing episodes” are. Estimating the probability of being in such an episode requires not only the knowledge of past spikes, but also of future spikes. We show how to construct a causal learning rule, which depends only on past spikes, but associates pre- and postsynaptic firing episodes as if it also knew future spikes. We also show that this learning rule agrees with some features of synaptic plasticity in superficial layers of rat visual cortex (Froemke and Dan, Nature 416:433, 2002).

4 0.42328703 127 nips-2002-Learning Sparse Topographic Representations with Products of Student-t Distributions

Author: Max Welling, Simon Osindero, Geoffrey E. Hinton

Abstract: We propose a model for natural images in which the probability of an image is proportional to the product of the probabilities of some filter outputs. We encourage the system to find sparse features by using a Student-t distribution to model each filter output. If the t-distribution is used to model the combined outputs of sets of neurally adjacent filters, the system learns a topographic map in which the orientation, spatial frequency and location of the filters change smoothly across the map. Even though maximum likelihood learning is intractable in our model, the product form allows a relatively efficient learning procedure that works well even for highly overcomplete sets of filters. Once the model has been learned it can be used as a prior to derive the “iterated Wiener filter” for the purpose of denoising images.
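The product form in the abstract above corresponds to an energy E(x) = sum_i alpha_i * log(1 + (w_i^T x)^2 / 2), with p(x) proportional to exp(-E(x)). A minimal sketch of that energy is below; the filter bank W is random rather than learned from natural images, and the shared shape parameter alpha is an illustrative value.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical filter bank (rows are filters) and Student-t shape parameters;
# in the paper these would be learned from natural image patches.
n_filters, dim = 8, 16
W = rng.standard_normal((n_filters, dim)) / np.sqrt(dim)
alpha = np.full(n_filters, 1.5)

def pot_energy(x):
    """Energy of the products-of-Student-t model: p(x) ∝ exp(-E(x))."""
    y = W @ x                                   # filter outputs
    return np.sum(alpha * np.log(1.0 + 0.5 * y ** 2))

x_small = np.zeros(dim)                         # all filter outputs zero
x_large = 10.0 * rng.standard_normal(dim)       # large filter outputs
```

The logarithm grows slowly in the filter outputs, so large responses are penalised far less than under a Gaussian model; this heavy-tailed penalty is what makes the learned filter outputs sparse.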

5 0.42193347 11 nips-2002-A Model for Real-Time Computation in Generic Neural Microcircuits

Author: Wolfgang Maass, Thomas Natschläger, Henry Markram

Abstract: A key challenge for neural modeling is to explain how a continuous stream of multi-modal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real-time. We propose a new computational model that is based on principles of high dimensional dynamical systems in combination with statistical learning theory. It can be implemented on generic evolved or found recurrent circuitry.

6 0.42032069 76 nips-2002-Dynamical Constraints on Computing with Spike Timing in the Cortex

7 0.41369349 199 nips-2002-Timing and Partial Observability in the Dopamine System

8 0.41343158 18 nips-2002-Adaptation and Unsupervised Learning

9 0.41331887 50 nips-2002-Circuit Model of Short-Term Synaptic Dynamics

10 0.4131172 5 nips-2002-A Digital Antennal Lobe for Pattern Equalization: Analysis and Design

11 0.41252312 116 nips-2002-Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior

12 0.41163659 10 nips-2002-A Model for Learning Variance Components of Natural Images

13 0.41143733 81 nips-2002-Expected and Unexpected Uncertainty: ACh and NE in the Neocortex

14 0.4108718 46 nips-2002-Boosting Density Estimation

15 0.41063187 163 nips-2002-Prediction and Semantic Association

16 0.40982318 3 nips-2002-A Convergent Form of Approximate Policy Iteration

17 0.4091 24 nips-2002-Adaptive Scaling for Feature Selection in SVMs

18 0.40832233 44 nips-2002-Binary Tuning is Optimal for Neural Rate Coding with High Temporal Resolution

19 0.40830362 141 nips-2002-Maximally Informative Dimensions: Analyzing Neural Responses to Natural Signals

20 0.40830198 62 nips-2002-Coulomb Classifiers: Generalizing Support Vector Machines via an Analogy to Electrostatic Systems