nips nips2001 nips2001-169 knowledge-graph by maker-knowledge-mining

169 nips-2001-Small-World Phenomena and the Dynamics of Information


Source: pdf

Author: Jon M. Kleinberg

Abstract: unknown-abstract

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 In recent work, we have been investigating the problem of decentralized search in large information networks [14, 15]. [sent-3, score-0.699]

2 Our initial motivation was an experiment that dealt directly with the search problem in a decidedly pre-Internet context: Stanley Milgram’s famous study of the small-world phenomenon [16, 17]. [sent-4, score-0.166]

3 Milgram was seeking to determine whether most pairs of people in society were linked by short chains of acquaintances, and for this purpose he recruited individuals to try forwarding a letter to a designated “target” through people they knew on a first-name basis. [sent-5, score-0.32]

4 This latter point is concerned precisely with a type of decentralized navigation in a social network, consisting of people as nodes and links joining pairs of people who know each other. [sent-9, score-1.209]

5 This is clearly what was taking place in Milgram’s experiments: participants, using the information available to them, were estimating which of their acquaintances would lead to the shortest path through the full social network. [sent-15, score-0.151]

6 One starts with a p-dimensional lattice, in which nodes are joined only to their nearest neighbors. [sent-18, score-0.172]

7 One then adds k directed long-range links out of each node v, for a constant k; the endpoint of each link is chosen uniformly at random. [sent-19, score-0.603]

8 Results from the theory of random graphs can be used to show that with high probability, there will be short paths connecting all pairs of nodes. [sent-20, score-0.31]

9 This network model, a superposition of a lattice and a set of long-range links, is a natural one in which to study the behavior of a decentralized search algorithm. [sent-25, score-0.934]

10 The algorithm knows the structure of the lattice; it starts from a node s, and is told the coordinates of a target node t. [sent-26, score-0.422]

11 It successively traverses links of the network so as to reach the target as quickly as possible; but, crucially, it does not know the long-range links out of any node that it has not yet visited. [sent-27, score-0.789]

12 In addition to moving forward across directed links, the algorithm may travel in reverse across any link that it has already followed in the forward direction; this allows it to back up when it does not want to continue exploring from its current node. [sent-28, score-0.22]

13 We say that the algorithm has search time Y (n) if, on a randomly generated n-node network with s and t chosen uniformly at random, it reaches the target t in at most Y (n) steps with probability at least 1 − ε(n), for a function ε(·) going to 0 with n. [sent-30, score-0.307]

14 The first result in [15] is that no decentralized algorithm can achieve a polylogarithmic search time in this network model — even though, with high probability, there are paths of polylogarithmic length joining all pairs of nodes. [sent-31, score-1.71]

15 Specifically, when we construct a long-range link (v, w) out of v, we do not choose w uniformly at random; rather, we choose it with probability proportional to d^{−α}, where d is the lattice distance from v to w. [sent-33, score-0.395]

16 In this way, the long-range links become correlated to the geometry of the lattice. [sent-34, score-0.251]

17 We show in [15] that when α is equal to p, the dimension of the underlying lattice, a decentralized greedy algorithm achieves search time proportional to log² n; and for any other value of α, there is no decentralized algorithm with a polylogarithmic search time. [sent-35, score-1.757]
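To make the lattice construction concrete, the sketch below simulates it on a two-dimensional n x n torus (so p = 2) with k = 1 long-range link per node and greedy routing toward the target. This is a hedged illustration, not the paper's code: the torus, the grid size n = 30, and the names torus_distance, add_long_range_links, and greedy_route are all assumptions made for the example. At the optimal exponent α = p = 2, the average number of greedy steps should grow roughly like log² n.

    import random

    def torus_distance(a, b, n):
        """Lattice (Manhattan) distance between points a and b on an n x n torus."""
        (ax, ay), (bx, by) = a, b
        dx = min(abs(ax - bx), n - abs(ax - bx))
        dy = min(abs(ay - by), n - abs(ay - by))
        return dx + dy

    def add_long_range_links(n, alpha):
        """One long-range link per node (k = 1), with endpoint w chosen with
        probability proportional to d(v, w)**(-alpha)."""
        nodes = [(x, y) for x in range(n) for y in range(n)]
        links = {}
        for v in nodes:
            candidates = [w for w in nodes if w != v]
            weights = [torus_distance(v, w, n) ** (-alpha) for w in candidates]
            links[v] = random.choices(candidates, weights=weights, k=1)[0]
        return links

    def greedy_route(s, t, links, n):
        """Decentralized greedy routing: move to whichever neighbor (grid or
        long-range) is closest to the target in lattice distance."""
        v, steps = s, 0
        while v != t:
            x, y = v
            neighbors = [((x + 1) % n, y), ((x - 1) % n, y),
                         (x, (y + 1) % n), (x, (y - 1) % n), links[v]]
            v = min(neighbors, key=lambda w: torus_distance(w, t, n))
            steps += 1
        return steps

    # Rough check: average greedy search time at the optimal exponent alpha = 2.
    n = 30
    links = add_long_range_links(n, alpha=2.0)
    pairs = [((random.randrange(n), random.randrange(n)),
              (random.randrange(n), random.randrange(n))) for _ in range(100)]
    print(sum(greedy_route(s, t, links, n) for s, t in pairs) / 100)

Because a grid neighbor always reduces the lattice distance to the target, the loop in greedy_route is guaranteed to terminate; the long-range links only ever shorten the route.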

18 Recent work by Zhang, Goel, and Govindan [26] has shown how the distribution of links associated with the optimal value of α may lead to improved performance for decentralized search in the Freenet peer-to-peer system. [sent-36, score-0.913]

19 In joint work with Kempe and Demers [12], we have studied how distributions that are inverse-polynomial in the distance between nodes can be used in the design of gossip protocols for spreading information in a network of communicating agents. [sent-39, score-0.402]

20 The goal of the present paper is to consider more generally the problem of decentralized search in networks with partial information about the underlying structure. [sent-40, score-0.728]

21 While a lattice makes for a natural network backbone, we would like to understand the extent to which the principles underlying efficient decentralized search can be abstracted away from a lattice-like structure. [sent-41, score-0.963]

22 We begin by considering networks that are generated from a hierarchical structure, and show that qualitatively similar results can be obtained; we then discuss a general model of group structures, which can be viewed as a simultaneous generalization of lattices and hierarchies. [sent-42, score-0.389]

23 Thus we describe this case first, and then move on to the case in which each node has only a constant number of out-links. [sent-45, score-0.182]

24 2 Hierarchical Network Models. In a number of settings, nodes represent objects that can be classified according to a hierarchy or taxonomy; and nodes are more likely to form links if they belong to the same small sub-tree in the hierarchy, indicating they are more closely related. [sent-46, score-0.685]

25 Let V denote the set of leaves of T; we use n to denote the size of V, and for two leaves v and w, we use h(v, w) to denote the height of the least common ancestor of v and w in T. [sent-48, score-0.206]

26 We are also given a monotone non-increasing function f(·) that will determine link probabilities. [sent-49, score-0.149]

27 For each node v ∈ V, we create a random link to w with probability proportional to f(h(v, w)); in other words, the probability of choosing w is equal to f(h(v, w)) / Σ_{x≠v} f(h(v, x)). [sent-50, score-0.297]

28 We create k links out of each node v this way, choosing the endpoint w each time independently and with repetition allowed. [sent-51, score-0.474]
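As a concrete illustration of this generation process (a sketch under assumptions, not the paper's code), suppose T is a complete b-ary tree whose n leaves are labeled 0, ..., n−1 from left to right; then h(v, w) can be computed by repeated integer division, and f(h) = b^{−αh} with α = 1 matches the optimal exponent discussed below. All names are illustrative.

    import random

    def lca_height(v, w, b):
        """h(v, w): height of the least common ancestor of leaves v and w in a
        complete b-ary tree whose leaves are labeled 0..n-1 left to right."""
        h = 0
        while v != w:
            v, w = v // b, w // b
            h += 1
        return h

    def hierarchical_links(n, b, k, alpha=1.0):
        """k out-links per leaf, each endpoint chosen independently (repetition
        allowed) with probability proportional to b**(-alpha * h(v, w))."""
        links = {}
        for v in range(n):
            others = [w for w in range(n) if w != v]
            weights = [b ** (-alpha * lca_height(v, w, b)) for w in others]
            links[v] = random.choices(others, weights=weights, k=k)
        return links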

29 It is important to note that the tree T is used only in the generation process of G; neither the edges nor the non-leaf nodes of T appear in G. [sent-54, score-0.172]

30 (By way of contrast, the lattice model in [15] included both the long-range links and the nearest-neighbor links of the lattice itself.) [sent-55, score-0.936]

31 When we use the term “node” without any qualification, we are referring to nodes of G, and hence to leaves of T; we will use “internal node” to refer to non-leaf nodes of T. [sent-56, score-0.424]

32 We refer to the process producing G as a hierarchical model with exponent α if the function f(h) grows asymptotically like b^{−αh}: lim_{h→∞} f(h)/b^{−α′h} = 0 for all α′ < α, and lim_{h→∞} b^{−α′h}/f(h) = 0 for all α′ > α. [sent-57, score-0.391]

33 There are several natural interpretations for a hierarchical network model. [sent-58, score-0.25]

34 The linkage probabilities then have a simple meaning — they are based on the distance between the topics of the pages, as measured by the height of their least common ancestor in the topic hierarchy. [sent-64, score-0.212]

35 A page on Science/Computer Science/Algorithms may thus be more likely to link to one on Science/Computer Science/Machine Learning than to one on Arts/Music/Opera. [sent-65, score-0.194]

36 It is worth noting that a number of recent models for the link structure of the Web, as well as other relational structures, have looked at different ways in which similarities in content can affect the probability of linkage. [sent-67, score-0.166]

37 Studies performed by Killworth and Bernard [13] showed that in choosing a recipient for the letter, participants were overwhelmingly guided by one of two criteria: similarity to the target in geography, or similarity to the target in occupation. [sent-71, score-0.172]

38 If one views the lattice as forming a simple model for geographic factors, the hierarchical model can similarly be interpreted as forming a “topic hierarchy” of occupations, with individuals at the leaves. [sent-72, score-0.485]

39 Thus, for example, the occupations of “banker” and “stock broker” may belong to the same small sub-tree; since the target in one of Milgram’s experiments was a stock broker, it might therefore be a good strategy to send the letter to a banker. [sent-73, score-0.244]

40 Independently of our work here, Watts, Dodds, and Newman have recently studied hierarchical structures for modeling Milgram’s experiment in social networks [24]. [sent-74, score-0.323]

41 We now consider the search problem in a graph G generated from a hierarchical model: A decentralized algorithm has knowledge of the tree T , and knows the location of a target leaf that it must reach; however, it only learns the structure of G as it visits nodes. [sent-75, score-1.048]

42 The exponent α determines how the structures of G and T are related; how does this affect the navigability of G? [sent-76, score-0.222]

43 In the analysis of the lattice model [15], the key property of the optimal exponent was that, from any point, there was a reasonable probability of a long-range link that halved the distance to the target. [sent-77, score-0.529]

44 We make use of a similar idea here: when α = 1, there is always a reasonable probability of finding a long-range link into a strictly smaller sub-tree containing the target. [sent-78, score-0.192]

45 As mentioned above, we focus here on the case of polylogarithmic out-degree, with the case of constant out-degree deferred until later. [sent-79, score-0.44]

46 Theorem 2.1 (a) There is a hierarchical model with exponent α = 1 and polylogarithmic out-degree in which a decentralized algorithm can achieve search time O(log n). [sent-81, score-1.413]

47 (b) For every α ≠ 1, there is no hierarchical model with exponent α and polylogarithmic out-degree in which a decentralized algorithm can achieve polylogarithmic search time. [sent-82, score-1.817]

48 To prove (a), we show that when the search is at a node v whose least common ancestor with the target has height h, there is a high probability that v has a link into the sub-tree of height h−1 containing the target. [sent-88, score-0.742]

49 In this way, the search reaches the target in logarithmically many steps. [sent-89, score-0.252]
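The proof idea corresponds to the following greedy sketch, which reuses lca_height and the links mapping from the hierarchical snippet above; it illustrates the strategy rather than reproducing the paper's algorithm, and it reports failure honestly when no link makes progress.

    def greedy_hierarchical_search(s, t, links, b, max_steps=10_000):
        """Repeatedly follow an out-link whose least common ancestor with t is
        strictly lower, i.e. a link into a smaller sub-tree containing t."""
        v, steps = s, 0
        while v != t and steps < max_steps:
            h = lca_height(v, t, b)
            closer = [w for w in links[v] if lca_height(w, t, b) < h]
            if not closer:
                return None  # stuck; unlikely when the out-degree k is large
            v = min(closer, key=lambda w: lca_height(w, t, b))
            steps += 1
        return steps if v == t else None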

50 To prove (b), we exhibit a sub-tree T′ containing the target such that, with high probability, it takes any decentralized algorithm more than a polylogarithmic number of steps to find a link into T′. [sent-90, score-1.178]

51 3 Group Structures. The analysis of the search problem in a hierarchical model is similar to the analysis of the lattice-based approach in [15], although the two types of models seem superficially quite different. [sent-91, score-0.307]

52 Consider a collection of individuals in a social network, and suppose that we know of certain groups to which individuals belong — people who live in the same town, or work in the same profession, or have some other affiliation in common. [sent-93, score-0.377]

53 In a lattice-based model, there may be a group for each subset of lattice points contained in a common ball (grouping based on proximity); in a hierarchical model, there may be a group for each subset of leaves contained in a common sub-tree. [sent-95, score-0.783]

54 We now discuss the notion of a group structure, to make this precise; we follow a model proposed in joint work with Kempe and Demers [12], where we were concerned with designing gossip protocols for lattices and hierarchies. [sent-96, score-0.33]

55 A group structure consists of an underlying set V of nodes, and a collection of subsets of V (the groups); the groups must satisfy the following two properties, for constants λ < 1 and β > 1. [sent-98, score-0.261]

56 (i) If R is a group of size q ≥ 2 containing a node v, then there is a group R′ ⊆ R containing v that is strictly smaller than R, but has size at least λq. [sent-100, score-0.58]

57 (ii) If R1, R2, R3, . . . are groups that all have size at most q and all contain a common node v, then their union has size at most βq. [sent-104, score-0.21]
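To make properties (i) and (ii) concrete, here is a hedged checker that assumes the groups are given as Python sets over V, with λ < 1 and β > 1 the constants from the definition; the function name and the representation are illustrative.

    def is_group_structure(V, groups, lam, beta):
        """Check properties (i) and (ii) for a collection of groups over V."""
        # (i): every group R of size >= 2 containing v must contain a strictly
        # smaller group R' with v in R' and |R'| >= lam * |R|
        for R in groups:
            if len(R) < 2:
                continue
            for v in R:
                if not any(Rp < R and v in Rp and len(Rp) >= lam * len(R)
                           for Rp in groups):
                    return False
        # (ii): groups of size <= q sharing a node v have union of size <= beta * q
        for v in V:
            for q in {len(R) for R in groups}:
                union = set()
                for R in groups:
                    if v in R and len(R) <= q:
                        union |= R
                if len(union) > beta * q:
                    return False
        return True

For instance, the sub-tree groups of a complete b-ary hierarchy pass this check with λ = 1/b and β = 1, since the groups containing a fixed node are nested.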

58 However, it is easy to construct examples of group structures that do not arise in this way from lattices or hierarchies. [sent-106, score-0.299]

59 Given a group structure (V, {R_i}) and a monotone non-increasing function f(·), we now consider the following process for generating a graph on V. [sent-107, score-0.277]

60 For two nodes v and w, we use q(v, w) to denote the minimum size of a group containing both v and w. [sent-108, score-0.389]

61 For each node v ∈ V, we create a random link to w with probability proportional to f(q(v, w)); repeating this k times independently yields k links out of v. [sent-110, score-0.548]
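A minimal sketch of this generation process under the same set-based representation, assuming every pair of nodes shares at least one group (which holds if V itself is included as a group); the names and the concrete choice f(q) = q^{−α} are illustrative.

    import random

    def min_group_size(v, w, groups):
        """q(v, w): size of the smallest group containing both v and w (assumes
        at least one group, e.g. V itself, contains every pair)."""
        return min(len(R) for R in groups if v in R and w in R)

    def group_induced_links(V, groups, k, alpha=1.0):
        """k out-links per node, endpoint chosen with probability proportional
        to f(q(v, w)) = q(v, w)**(-alpha)."""
        links = {}
        for v in V:
            others = [w for w in V if w != v]
            weights = [min_group_size(v, w, groups) ** (-alpha) for w in others]
            links[v] = random.choices(others, weights=weights, k=k)
        return links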

62 We refer to this as a group-induced model with exponent α if f(q) grows asymptotically like q^{−α}: lim_{q→∞} f(q)/q^{−α′} = 0 for all α′ < α, and lim_{q→∞} q^{−α′}/f(q) = 0 for all α′ > α. [sent-111, score-0.304]

63 A decentralized search algorithm in such a network is given knowledge of the full group structure, and must follow links of G to a designated target t. [sent-112, score-1.201]

64 Theorem 3.1 (a) For every group structure, there is a group-induced model with exponent α = 1 and polylogarithmic out-degree in which a decentralized algorithm can achieve search time O(log n). [sent-116, score-1.419]

65 (b) For every α < 1, there is no group-induced model with exponent α and polylogarithmic out-degree in which a decentralized algorithm can achieve polylogarithmic search time. [sent-117, score-1.676]

66 Notice that in a hierarchical model, the smallest group (sub-tree) containing two nodes v and w has size b^{h(v,w)}, and so Theorem 3.1(a) implies a version of Theorem 2.1(a). [sent-118, score-0.53]

67 Similarly, on a lattice, the smallest group (ball) containing two nodes v and w at lattice distance d has size Θ(d^p), and so Theorem 3.1(a) implies a version of the result from [15], that efficient search is possible in a lattice model when nodes form links with probability d^{−p}. [sent-121, score-0.21] [sent-122, score-0.806]

69 (In the version of the lattice result implied here, there are no nearest-neighbor links at all; but each node has a polylogarithmic number of out-links.) [sent-123, score-1.018]

70 We consider a node v — the current point in the search — for which the smallest group containing v and the target t has size q. [sent-127, score-0.615]

71 Using group structure properties (i) and (ii), we show there is a high probability that v has a link into a group containing t of size between λ²q and λq. [sent-128, score-0.53]

72 In this way, the search passes through groups containing t of sizes that diminish geometrically, and hence it terminates in logarithmic time. [sent-129, score-0.272]

73 This is because there exist group-induced models with exponents α > 1 in which decentralized algorithms can achieve polylogarithmic search time. [sent-132, score-1.142]

74 For example, consider an undirected graph G∗ in which each node has 3 neighbors, and each pair of nodes can be connected by a path of length O(log n). [sent-133, score-0.406]

75 Given a group structure (V, {R_i}) and a cut-off value q, we define a graph H(q) on V by joining any two nodes that belong to a common group of size at most q. [sent-136, score-0.69]

76 Note that H(q) is not a random graph; it is defined simply in terms of the group structure and q. [sent-137, score-0.191]

77 We now argue that if many pairs of nodes are far apart in H(q), for a suitably large value of q, then a decentralized algorithm cannot be efficient when α > 1. [sent-138, score-0.699]

78 Suppose there exist constants γ, δ > 0 so that a constant fraction of all pairs of nodes have shortest-path distance Ω(n^δ) in H(n^γ). [sent-141, score-0.267]

79 Then for every α > 1, there is no group-induced model on (V, {R_i}) with exponent α and a polylogarithmic number of out-links per node in which a decentralized algorithm can achieve polylogarithmic search time. [sent-142, score-1.822]

80 Notice this property holds for group structures arising from both lattices and hierarchies; in a lattice, a constant fraction of all pairs in H(n^{1/2p}) have distance Ω(n^{1/2p}), while in a hierarchy, the graph H(n^γ) is disconnected for every γ < 1. [sent-143, score-0.425]

81 4 Nodes with a Constant Number of Out-Links. Thus far, by giving each node more than a constant number of out-links, we have been able to design very simple search algorithms in networks generated according to the optimal exponent α. [sent-144, score-0.575]

82 From each node, there is a way to make progress toward the target node t, and so the structure of the graph G funnels the search towards its destination. [sent-145, score-0.501]

83 First of all, with high probability, many nodes will have all their links leading “away” from the target in the hierarchy. [sent-147, score-0.509]

84 In this section, we work with a hierarchical model, and construct graphs with constant out-degree k; the value of k will need to be sufficiently large in terms of other parameters of the model. [sent-150, score-0.196]

85 To deal with the problem that t itself may have no incoming links, we relax the search problem to that of finding a cluster of nodes containing t. [sent-152, score-0.436]

86 We place r nodes at each leaf of T, forming a set V of n = mr nodes total, where m denotes the number of leaves of T. [sent-155, score-0.433]

87 We then define a graph G on V as in Section 2: for a non-increasing function f(·), we create k links out of each node v ∈ V, choosing w as an endpoint with probability proportional to f(h(v, w)). [sent-156, score-0.533]

88 As before, we refer to this process as a hierarchical model with exponent α, for the appropriate value of α. [sent-157, score-0.337]

89 We refer to each set of r nodes at a common leaf of T as a cluster, and define the resolution of the hierarchical model to be the value r. [sent-158, score-0.465]
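A hedged sketch of this clustered variant, reusing lca_height from the earlier hierarchical snippet: nodes are pairs (leaf, i) with i < r, and links depend only on the leaves' positions in the hierarchy. All names are illustrative.

    import random

    def clustered_links(m, r, b, k, alpha=1.0):
        """n = m * r nodes, r per leaf of a complete b-ary tree with m leaves;
        k out-links per node, endpoint chosen with probability proportional to
        b**(-alpha * h), where h is the LCA height of the two nodes' leaves."""
        nodes = [(leaf, i) for leaf in range(m) for i in range(r)]
        links = {}
        for v in nodes:
            others = [w for w in nodes if w != v]
            weights = [b ** (-alpha * lca_height(v[0], w[0], b)) for w in others]
            links[v] = random.choices(others, weights=weights, k=k)
        return links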

90 A decentralized algorithm is given knowledge of T , and a target node t; it must reach any node in the cluster containing t. [sent-159, score-0.972]

91 Unlike the previous algorithms we have developed, which only moved forward across links, the algorithm we design here will need to make use of the ability to travel in reverse across any link that it has already followed in the forward direction. [sent-160, score-0.248]

92 Note also that we cannot easily reduce the current search problem to that of Section 2 by collapsing clusters into “super-nodes,” since there are not necessarily links joining nodes within the same cluster. [sent-161, score-0.647]

93 The search task clearly becomes easier as the resolution of the model (i.e., the value of r) increases. [sent-162, score-0.2]

94 Thus, our goal is to achieve polylogarithmic search time in a hierarchical model with polylogarithmic resolution. [sent-165, score-1.159]

95 Theorem 4.1 (a) There is a hierarchical model with exponent α = 1, constant out-degree, and polylogarithmic resolution in which a decentralized algorithm can achieve polylogarithmic search time. [sent-167, score-1.887]

96 (b) For every α ≠ 1, there is no hierarchical model with exponent α, constant out-degree, and polylogarithmic resolution in which a decentralized algorithm can achieve polylogarithmic search time. [sent-168, score-1.887]

97 The search algorithm used to establish part (a) operates in phases. [sent-169, score-0.166]

98 It begins each phase j with a collection of Θ(log n) nodes all belonging to the sub-tree Tj that contains the target t and whose root is at depth j. [sent-170, score-0.299]

99 During phase j, it explores outward from each of these nodes until it has discovered a larger but still polylogarithmic-sized set of nodes belonging to Tj. [sent-171, score-0.344]

100 From among these, there is a high probability that at least Θ(log n) have links into the smaller sub-tree Tj+1 that contains t and whose root is at depth j + 1. [sent-172, score-0.251]
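The phase structure can be sketched loosely as follows. This is a deliberate simplification, not the paper's algorithm: it runs on the leaf-labeled graph of the earlier hierarchical snippets (ignoring clusters), explores only forward out-links, and omits the reverse-traversal bookkeeping described above; all names are illustrative.

    from collections import deque

    def in_subtree(v, t, b, j, depth):
        """True if leaf v lies in T_j, the sub-tree containing t whose root is
        at depth j (leaves labeled as in the earlier sketches)."""
        return v // b ** (depth - j) == t // b ** (depth - j)

    def phase_based_search(s, t, links, b, depth, cap):
        """Phase j grows a set of at most cap nodes discovered inside T_j, then
        seeds phase j + 1 with link endpoints that fall inside T_{j+1}."""
        seeds = {s}
        for j in range(depth):
            explored, frontier = set(seeds), deque(seeds)
            while frontier and len(explored) < cap:
                v = frontier.popleft()
                for w in links[v]:
                    if in_subtree(w, t, b, j, depth) and w not in explored:
                        explored.add(w)
                        frontier.append(w)
            if t in explored:
                return True
            seeds = {w for v in explored for w in links[v]
                     if in_subtree(w, t, b, j + 1, depth)}
            if not seeds:
                # stuck; the real algorithm keeps Theta(log n) seeds w.h.p.
                return False
        return t in seeds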


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('decentralized', 0.496), ('polylogarithmic', 0.404), ('links', 0.251), ('lattice', 0.217), ('nodes', 0.172), ('search', 0.166), ('milgram', 0.165), ('exponent', 0.162), ('group', 0.147), ('node', 0.146), ('hierarchical', 0.141), ('web', 0.122), ('link', 0.122), ('watts', 0.11), ('newman', 0.092), ('strogatz', 0.092), ('target', 0.086), ('social', 0.085), ('freenet', 0.073), ('kempe', 0.073), ('page', 0.072), ('containing', 0.07), ('lattices', 0.064), ('protocols', 0.064), ('individuals', 0.061), ('structures', 0.06), ('graph', 0.059), ('people', 0.058), ('joining', 0.058), ('leaf', 0.056), ('hierarchy', 0.055), ('crawling', 0.055), ('gossip', 0.055), ('huberman', 0.055), ('lukose', 0.055), ('puniyani', 0.055), ('network', 0.055), ('lim', 0.054), ('letter', 0.054), ('paths', 0.052), ('endpoint', 0.048), ('ancestor', 0.048), ('theorem', 0.047), ('leaves', 0.046), ('world', 0.045), ('achieve', 0.044), ('structure', 0.044), ('collection', 0.041), ('scalable', 0.041), ('height', 0.038), ('topic', 0.038), ('networks', 0.037), ('acquaintances', 0.037), ('adamic', 0.037), ('broker', 0.037), ('dodds', 0.037), ('gnutella', 0.037), ('goel', 0.037), ('govindan', 0.037), ('killworth', 0.037), ('liation', 0.037), ('occupations', 0.037), ('outdegree', 0.037), ('sigcomm', 0.037), ('constant', 0.036), ('reverse', 0.036), ('groups', 0.036), ('belong', 0.035), ('acm', 0.035), ('refer', 0.034), ('resolution', 0.034), ('forming', 0.033), ('stock', 0.032), ('exponents', 0.032), ('demers', 0.032), ('linkage', 0.032), ('duncan', 0.032), ('achlioptas', 0.032), ('kleinberg', 0.032), ('forward', 0.031), ('tj', 0.031), ('pairs', 0.031), ('chains', 0.03), ('underlying', 0.029), ('create', 0.029), ('cornell', 0.029), ('ball', 0.029), ('bernard', 0.029), ('path', 0.029), ('common', 0.028), ('six', 0.028), ('di', 0.028), ('cluster', 0.028), ('short', 0.028), ('design', 0.028), ('construct', 0.028), ('distance', 0.028), ('proofs', 0.028), ('graphs', 0.027), ('monotone', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 169 nips-2001-Small-World Phenomena and the Dynamics of Information

Author: Jon M. Kleinberg

Abstract: unknown-abstract

2 0.10842127 184 nips-2001-The Intelligent surfer: Probabilistic Combination of Link and Content Information in PageRank

Author: Matthew Richardson, Pedro Domingos

Abstract: The PageRank algorithm, used in the Google search engine, greatly improves the results of Web search by taking into account the link structure of the Web. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page, if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. We propose to improve PageRank by using a more intelligent surfer, one that is guided by a probabilistic model of the relevance of a page to a query. Efficient execution of our algorithm at query time is made possible by precomputing at crawl time (and thus once for all queries) the necessary terms. Experiments on two large subsets of the Web indicate that our algorithm significantly outperforms PageRank in the (human-rated) quality of the pages returned, while remaining efficient enough to be used in today’s large search engines. 1

3 0.09265995 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

Author: M. Schmitt

Abstract: Recurrent neural networks of analog units are computers for realvalued functions. We study the time complexity of real computation in general recurrent neural networks. These have sigmoidal, linear, and product units of unlimited order as nodes and no restrictions on the weights. For networks operating in discrete time, we exhibit a family of functions with arbitrarily high complexity, and we derive almost tight bounds on the time required to compute these functions. Thus, evidence is given of the computational limitations that time-bounded analog recurrent neural networks are subject to. 1

4 0.084119968 90 nips-2001-Hyperbolic Self-Organizing Maps for Semantic Navigation

Author: Jorg Ontrup, Helge Ritter

Abstract: We introduce a new type of Self-Organizing Map (SOM) to navigate in the Semantic Space of large text collections. We propose a “hyperbolic SOM” (HSOM) based on a regular tesselation of the hyperbolic plane, which is a non-euclidean space characterized by constant negative gaussian curvature. The exponentially increasing size of a neighborhood around a point in hyperbolic space provides more freedom to map the complex information space arising from language into spatial relations. We describe experiments, showing that the HSOM can successfully be applied to text categorization tasks and yields results comparable to other state-of-the-art methods.

5 0.078847617 118 nips-2001-Matching Free Trees with Replicator Equations

Author: Marcello Pelillo

Abstract: Motivated by our recent work on rooted tree matching, in this paper we provide a solution to the problem of matching two free (i.e., unrooted) trees by constructing an association graph whose maximal cliques are in one-to-one correspondence with maximal common subtrees. We then solve the problem using simple replicator dynamics from evolutionary game theory. Experiments on hundreds of uniformly random trees are presented. The results are impressive: despite the inherent inability of these simple dynamics to escape from local optima, they always returned a globally optimal solution.

6 0.077192448 115 nips-2001-Linear-time inference in Hierarchical HMMs

7 0.076205678 149 nips-2001-Probabilistic Abstraction Hierarchies

8 0.061705422 89 nips-2001-Grouping with Bias

9 0.051418841 110 nips-2001-Learning Hierarchical Structures with Linear Relational Embedding

10 0.049843363 93 nips-2001-Incremental A*

11 0.047195278 43 nips-2001-Bayesian time series classification

12 0.046900425 24 nips-2001-Active Information Retrieval

13 0.046313263 135 nips-2001-On Spectral Clustering: Analysis and an algorithm

14 0.046280932 188 nips-2001-The Unified Propagation and Scaling Algorithm

15 0.043733262 193 nips-2001-Unsupervised Learning of Human Motion Models

16 0.043577738 56 nips-2001-Convolution Kernels for Natural Language

17 0.043071382 190 nips-2001-Thin Junction Trees

18 0.041912008 106 nips-2001-Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering

19 0.040419079 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

20 0.039467335 132 nips-2001-Novel iteration schemes for the Cluster Variation Method


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.141), (1, -0.026), (2, 0.011), (3, -0.042), (4, -0.028), (5, -0.097), (6, -0.12), (7, -0.032), (8, -0.115), (9, -0.008), (10, -0.0), (11, -0.043), (12, -0.022), (13, 0.068), (14, -0.004), (15, 0.023), (16, -0.051), (17, 0.058), (18, 0.01), (19, -0.049), (20, -0.039), (21, 0.05), (22, 0.007), (23, 0.082), (24, -0.174), (25, 0.13), (26, -0.016), (27, 0.007), (28, 0.094), (29, 0.156), (30, 0.07), (31, 0.004), (32, 0.037), (33, 0.072), (34, -0.024), (35, -0.125), (36, -0.076), (37, -0.087), (38, 0.01), (39, -0.295), (40, -0.135), (41, 0.053), (42, -0.125), (43, -0.011), (44, 0.104), (45, 0.036), (46, -0.025), (47, 0.149), (48, 0.112), (49, -0.126)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96234673 169 nips-2001-Small-World Phenomena and the Dynamics of Information

Author: Jon M. Kleinberg

Abstract: unknown-abstract

2 0.64535916 184 nips-2001-The Intelligent surfer: Probabilistic Combination of Link and Content Information in PageRank

Author: Matthew Richardson, Pedro Domingos

Abstract: The PageRank algorithm, used in the Google search engine, greatly improves the results of Web search by taking into account the link structure of the Web. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page, if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. We propose to improve PageRank by using a more intelligent surfer, one that is guided by a probabilistic model of the relevance of a page to a query. Efficient execution of our algorithm at query time is made possible by precomputing at crawl time (and thus once for all queries) the necessary terms. Experiments on two large subsets of the Web indicate that our algorithm significantly outperforms PageRank in the (human-rated) quality of the pages returned, while remaining efficient enough to be used in today’s large search engines. 1

3 0.61688572 90 nips-2001-Hyperbolic Self-Organizing Maps for Semantic Navigation

Author: Jorg Ontrup, Helge Ritter

Abstract: We introduce a new type of Self-Organizing Map (SOM) to navigate in the Semantic Space of large text collections. We propose a “hyperbolic SOM” (HSOM) based on a regular tesselation of the hyperbolic plane, which is a non-euclidean space characterized by constant negative gaussian curvature. The exponentially increasing size of a neighborhood around a point in hyperbolic space provides more freedom to map the complex information space arising from language into spatial relations. We describe experiments, showing that the HSOM can successfully be applied to text categorization tasks and yields results comparable to other state-of-the-art methods.

4 0.54321331 93 nips-2001-Incremental A*

Author: S. Koenig, M. Likhachev

Abstract: Incremental search techniques find optimal solutions to series of similar search tasks much faster than is possible by solving each search task from scratch. While researchers have developed incremental versions of uninformed search methods, we develop an incremental version of A*. The first search of Lifelong Planning A* is the same as that of A* but all subsequent searches are much faster because it reuses those parts of the previous search tree that are identical to the new search tree. We then present experimental results that demonstrate the advantages of Lifelong Planning A* for simple route planning tasks. 1 Overview. Artificial intelligence has investigated knowledge-based search techniques that allow one to solve search tasks in large domains. Most of the research on these methods has studied how to solve one-shot search problems. However, search is often a repetitive process, where one needs to solve a series of similar search tasks, for example, because the actual situation turns out to be slightly different from the one initially assumed or because the situation changes over time. An example for route planning tasks is changing traffic conditions. Thus, one needs to replan for the new situation, for example if one always wants to display the least time-consuming route from the airport to the conference center on a web page. In these situations, most search methods replan from scratch, that is, solve the search problems independently. Incremental search techniques share with case-based planning, plan adaptation, repair-based planning, and learning search-control knowledge the property that they find solutions to series of similar search tasks much faster than is possible by solving each search task from scratch. Incremental search techniques, however, differ from the other techniques in that the quality of their solutions is guaranteed to be as good as the quality of the solutions obtained by replanning from scratch. Although incremental search methods are not widely known in artificial intelligence and control, different researchers have developed incremental search versions of uninformed search methods in the algorithms literature. An overview can be found in [FMSN00]. We, on the other hand, develop an incremental version of A*, thus combining ideas from the algorithms literature and the artificial intelligence literature. We call the algorithm Lifelong Planning A* (LPA*), in analogy to “lifelong learning” [Thr98], because it reuses information from previous searches. LPA* uses heuristics to focus the search and always finds a shortest path for the current edge costs. The first search of LPA* is the same as that of A* but all subsequent searches are much faster. LPA* produces at least the search tree that A* builds. However, it achieves a substantial speedup over A* because it reuses those parts of the previous search tree that are identical to the new search tree.
2 The Route Planning Task. Lifelong Planning A* (LPA*) solves the following search task: It applies to finite graph search problems on known graphs whose edge costs can increase or decrease over time. S denotes the finite set of vertices of the graph. succ(s) ⊆ S denotes the set of successors of vertex s ∈ S. Similarly, pred(s) ⊆ S denotes the set of predecessors of vertex s ∈ S. c(s, s′) denotes the cost of moving from vertex s to vertex s′. LPA* always determines a shortest path from a given start vertex s_start ∈ S to a given goal vertex s_goal ∈ S, knowing both the topology of the graph and the current edge costs. We use g*(s) to denote the start distance of vertex s, that is, the length of a shortest path from s_start to s.

5 0.4872005 118 nips-2001-Matching Free Trees with Replicator Equations

Author: Marcello Pelillo

Abstract: Motivated by our recent work on rooted tree matching, in this paper we provide a solution to the problem of matching two free (i.e., unrooted) trees by constructing an association graph whose maximal cliques are in one-to-one correspondence with maximal common subtrees. We then solve the problem using simple replicator dynamics from evolutionary game theory. Experiments on hundreds of uniformly random trees are presented. The results are impressive: despite the inherent inability of these simple dynamics to escape from local optima, they always returned a globally optimal solution.

6 0.44260302 149 nips-2001-Probabilistic Abstraction Hierarchies

7 0.43396759 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

8 0.37804309 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

9 0.35462984 115 nips-2001-Linear-time inference in Hierarchical HMMs

10 0.32565722 76 nips-2001-Fast Parameter Estimation Using Green's Functions

11 0.30675384 110 nips-2001-Learning Hierarchical Structures with Linear Relational Embedding

12 0.28935707 89 nips-2001-Grouping with Bias

13 0.25447017 24 nips-2001-Active Information Retrieval

14 0.25015607 17 nips-2001-A Quantitative Model of Counterfactual Reasoning

15 0.22682087 186 nips-2001-The Noisy Euclidean Traveling Salesman Problem and Learning

16 0.22541094 56 nips-2001-Convolution Kernels for Natural Language

17 0.22399543 64 nips-2001-EM-DD: An Improved Multiple-Instance Learning Technique

18 0.22298323 192 nips-2001-Tree-based reparameterization for approximate inference on loopy graphs

19 0.21970533 91 nips-2001-Improvisation and Learning

20 0.21893571 3 nips-2001-ACh, Uncertainty, and Cortical Inference


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.019), (17, 0.035), (19, 0.038), (27, 0.092), (30, 0.089), (38, 0.019), (54, 0.285), (59, 0.029), (72, 0.049), (79, 0.069), (83, 0.04), (91, 0.148)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.90570384 196 nips-2001-Very loopy belief propagation for unwrapping phase images

Author: Brendan J. Frey, Ralf Koetter, Nemanja Petrovic

Abstract: Since the discovery that the best error-correcting decoding algorithm can be viewed as belief propagation in a cycle-bound graph, researchers have been trying to determine under what circumstances

same-paper 2 0.80878627 169 nips-2001-Small-World Phenomena and the Dynamics of Information

Author: Jon M. Kleinberg

Abstract: unknown-abstract

3 0.59246296 162 nips-2001-Relative Density Nets: A New Way to Combine Backpropagation with HMM's

Author: Andrew D. Brown, Geoffrey E. Hinton

Abstract: Logistic units in the first hidden layer of a feedforward neural network compute the relative probability of a data point under two Gaussians. This leads us to consider substituting other density models. We present an architecture for performing discriminative learning of Hidden Markov Models using a network of many small HMM's. Experiments on speech data show it to be superior to the standard method of discriminatively training HMM's. 1

4 0.59163773 183 nips-2001-The Infinite Hidden Markov Model

Author: Matthew J. Beal, Zoubin Ghahramani, Carl E. Rasmussen

Abstract: We show that it is possible to extend hidden Markov models to have a countably infinite number of hidden states. By using the theory of Dirichlet processes we can implicitly integrate out the infinitely many transition parameters, leaving only three hyperparameters which can be learned from data. These three hyperparameters define a hierarchical Dirichlet process capable of capturing a rich set of transition dynamics. The three hyperparameters control the time scale of the dynamics, the sparsity of the underlying state-transition matrix, and the expected number of distinct hidden states in a finite sequence. In this framework it is also natural to allow the alphabet of emitted symbols to be infinite— consider, for example, symbols being possible words appearing in English text.

5 0.59110308 161 nips-2001-Reinforcement Learning with Long Short-Term Memory

Author: Bram Bakker

Abstract: This paper presents reinforcement learning with a Long Short-Term Memory recurrent neural network: RL-LSTM. Model-free RL-LSTM using Advantage(λ) learning and directed exploration can solve non-Markovian tasks with long-term dependencies between relevant events. This is demonstrated in a T-maze task, as well as in a difficult variation of the pole balancing task. 1

6 0.58830917 123 nips-2001-Modeling Temporal Structure in Classical Conditioning

7 0.5878123 13 nips-2001-A Natural Policy Gradient

8 0.5876109 7 nips-2001-A Dynamic HMM for On-line Segmentation of Sequential Data

9 0.58524776 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

10 0.58332163 89 nips-2001-Grouping with Bias

11 0.58272588 100 nips-2001-Iterative Double Clustering for Unsupervised and Semi-Supervised Learning

12 0.58235085 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

13 0.58210784 102 nips-2001-KLD-Sampling: Adaptive Particle Filters

14 0.58149481 3 nips-2001-ACh, Uncertainty, and Cortical Inference

15 0.58133495 56 nips-2001-Convolution Kernels for Natural Language

16 0.57940102 132 nips-2001-Novel iteration schemes for the Cluster Variation Method

17 0.5791378 157 nips-2001-Rates of Convergence of Performance Gradient Estimates Using Function Approximation and Bias in Reinforcement Learning

18 0.57802171 121 nips-2001-Model-Free Least-Squares Policy Iteration

19 0.5775044 182 nips-2001-The Fidelity of Local Ordinal Encoding

20 0.57749224 95 nips-2001-Infinite Mixtures of Gaussian Process Experts