nips nips2001 nips2001-93 knowledge-graph by maker-knowledge-mining

93 nips-2001-Incremental A*


Source: pdf

Author: S. Koenig, M. Likhachev

Abstract: Incremental search techniques find optimal solutions to series of similar search tasks much faster than is possible by solving each search task from scratch. While researchers have developed incremental versions of uninformed search methods, we develop an incremental version of A*. The first search of Lifelong Planning A* is the same as that of A* but all subsequent searches are much faster because it reuses those parts of the previous search tree that are identical to the new search tree. We then present experimental results that demonstrate the advantages of Lifelong Planning A* for simple route planning tasks.

1 Overview

Artificial intelligence has investigated knowledge-based search techniques that allow one to solve search tasks in large domains. Most of the research on these methods has studied how to solve one-shot search problems. However, search is often a repetitive process, where one needs to solve a series of similar search tasks, for example, because the actual situation turns out to be slightly different from the one initially assumed or because the situation changes over time. An example for route planning tasks is changing traffic conditions. Thus, one needs to replan for the new situation, for example if one always wants to display the least time-consuming route from the airport to the conference center on a web page. In these situations, most search methods replan from scratch, that is, solve the search problems independently. Incremental search techniques share with case-based planning, plan adaptation, repair-based planning, and learning search-control knowledge the property that they find solutions to series of similar search tasks much faster than is possible by solving each search task from scratch. Incremental search techniques, however, differ from the other techniques in that the quality of their solutions is guaranteed to be as good as the quality of the solutions obtained by replanning from scratch.
Although incremental search methods are not widely known in artificial intelligence and control, different researchers have developed incremental search versions of uninformed search methods in the algorithms literature. An overview can be found in [FMSN00]. We, on the other hand, develop an incremental version of A*, thus combining ideas from the algorithms literature and the artificial intelligence literature. We call the algorithm Lifelong Planning A* (LPA*), in analogy to “lifelong learning” [Thr98], because it reuses information from previous searches. LPA* uses heuristics to focus the search and always finds a shortest path for the current edge costs. The first search of LPA* is the same as that of A* but all subsequent searches are much faster. LPA* produces at least the search tree that A* builds. However, it achieves a substantial speedup over A* because it reuses those parts of the previous search tree that are identical to the new search tree.

(We thank Anthony Stentz for his support. The Intelligent Decision-Making Group is partly supported by NSF awards under contracts IIS-9984827, IIS-0098807, and ITR/AP-0113881. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsoring organizations and agencies or the U.S. government.)

2 The Route Planning Task

Lifelong Planning A* (LPA*) solves the following search task: it applies to finite graph search problems on known graphs whose edge costs can increase or decrease over time. S denotes the finite set of vertices of the graph. succ(s) ⊆ S denotes the set of successors of vertex s ∈ S. Similarly, pred(s) ⊆ S denotes the set of predecessors of vertex s ∈ S. c(s, s') denotes the cost of moving from vertex s to vertex s' ∈ succ(s). LPA* always determines a shortest path from a given start vertex s_start ∈ S to a given goal vertex s_goal ∈ S, knowing both the topology of the graph and the current edge costs.
We use g*(s) to denote the start distance of vertex s ∈ S, that is, the length of a shortest path from s_start to s.
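The graph notation of Section 2 can be made concrete for the eight-connected gridworlds used later in the paper. The sketch below is ours, not the paper's (the function names are assumptions): cells are vertices, succ and pred are the up-to-eight neighbors, and edge costs are one between traversable cells and infinity when either endpoint is blocked.

```python
# Sketch of the route-planning graph (our helper names, not from the paper):
# vertices are grid cells, succ/pred are the up-to-eight neighbors, and edge
# costs are 1 between traversable cells and infinity otherwise.
import math

def neighbors(cell, width, height):
    """All up-to-eight grid neighbors of a cell (x, y) inside the grid."""
    x, y = cell
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0) and 0 <= x + dx < width and 0 <= y + dy < height:
                yield (x + dx, y + dy)

def cost(u, v, blocked):
    """c(u, v): 1 for a move between traversable cells, infinity otherwise."""
    return math.inf if (u in blocked or v in blocked) else 1.0

# On an eight-connected grid the graph is symmetric, so succ(s) == pred(s).
succ = pred = neighbors
```

Since moves are reversible here, one function serves as both succ and pred; on a directed graph the two would differ.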

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Incremental search techniques find optimal solutions to series of similar search tasks much faster than is possible by solving each search task from scratch. [sent-5, score-0.628]

2 While researchers have developed incremental versions of uninformed search methods, we develop an incremental version of A*. [sent-6, score-0.421]

3 The first search of Lifelong Planning A* is the same as that of A* but all subsequent searches are much faster because it reuses those parts of the previous search tree that are identical to the new search tree. [sent-7, score-0.692]

4 We then present experimental results that demonstrate the advantages of Lifelong Planning A* for simple route planning tasks. [sent-8, score-0.197]

5 1 Overview. Artificial intelligence has investigated knowledge-based search techniques that allow one to solve search tasks in large domains. [sent-9, score-0.429]

6 Most of the research on these methods has studied how to solve one-shot search problems. [sent-10, score-0.199]

7 However, search is often a repetitive process, where one needs to solve a series of similar search tasks, for example, because the actual situation turns out to be slightly different from the one initially assumed or because the situation changes over time. [sent-11, score-0.445]

8 An example for route planning tasks is changing traffic conditions. [sent-12, score-0.228]

9 In these situations, most search methods replan from scratch, that is, solve the search problems independently. [sent-14, score-0.434]

10 Incremental search techniques share with case-based planning, plan adaptation, repair-based planning, and learning search-control knowledge the property that they find solutions to series of similar search tasks much faster than is possible by solving each search task from scratch. [sent-15, score-0.628]

11 Incremental search techniques, however, differ from the other techniques in that the quality of their solutions is guaranteed to be as good as the quality of the solutions obtained by replanning from scratch. [sent-16, score-0.199]

12 Although incremental search methods are not widely known in artificial intelligence and control, different researchers have developed incremental search versions of uninformed search methods in the algorithms literature. [sent-17, score-0.819]

13 LPA* uses heuristics to focus the search and always finds a shortest path for the current edge costs. [sent-26, score-0.529]

14 The first search of LPA* is the same as that of A* but all subsequent searches are much faster. [sent-27, score-0.232]

15 LPA* produces at least the search tree that A* builds. [sent-28, score-0.225]

16 However, it achieves a substantial speedup over A* because it reuses those parts of the previous search tree that are identical to the new search tree. [sent-29, score-0.46]

17 2 The Route Planning Task. Lifelong Planning A* (LPA*) solves the following search task: It applies to finite graph search problems on known graphs whose edge costs can increase or decrease over time. [sent-30, score-0.521]

18 Similarly, pred(s) denotes the set of predecessors of vertex s. [sent-33, score-0.474]

19 c(s, s') denotes the cost of moving from vertex s to vertex s'. [sent-34, score-0.948]

20 LPA* always determines a shortest path from a given start vertex s_start to a given goal vertex s_goal, knowing both the topology of the graph and the current edge costs. [sent-35, score-1.333]

21 We use g*(s) to denote the start distance of vertex s, that is, the length of a shortest path from s_start to s. [sent-36, score-0.751]

22 To motivate and test LPA*, we use a special case of these search tasks that is easy to visualize. [sent-39, score-0.23]

23 LPA* always determines a shortest path between two given cells of the gridworld, knowing both the topology of the gridworld and which cells are currently blocked. [sent-42, score-0.551]

24 This is a special case of the graph search problems on eight-connected grids whose edge costs are either one or infinity. [sent-43, score-0.322]

25 3 Reusing Information from Previous Searches. The graph search problems can be solved with traditional graph-search methods, such as breadth-first search, if they update the shortest path every time some edge costs change. [sent-46, score-0.523]

26 The original gridworld is shown on top and the changed gridworld is shown at the bottom. [sent-50, score-0.463]

27 In particular, three blocked cells became traversable (namely, B3, C5, and D2) and three traversable cells became blocked (namely, A1, A4, D3). [sent-52, score-0.398]

28 Thus, two percent of the cells changed their status but the obstacle density remained the same. [sent-53, score-0.22]

29 The shortest path changed since one cell on the original shortest path became blocked. [sent-57, score-0.572]

30 Once the start distances of all cells are known, one can easily trace back a shortest path from the start cell to the goal cell by always greedily decreasing the start distance, starting at the goal cell. [sent-58, score-0.735]
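The greedy traceback described here can be sketched as follows. The helper names are ours, and the sketch assumes the start distances g have already been computed:

```python
import math

def trace_shortest_path(start, goal, g, pred, cost):
    """Trace a shortest path backward from goal to start by always moving to
    a predecessor that minimizes g(s') + c(s', s); ties broken arbitrarily."""
    path = [goal]
    s = goal
    while s != start:
        s = min(pred(s), key=lambda p: g.get(p, math.inf) + cost(p, s))
        path.append(s)
    return list(reversed(path))
```

This is exactly the backpointer-free traceback the paper attributes to A* as well: no extra bookkeeping beyond the start distances is needed.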

31 This is similar to how A* traces the shortest path back from s_goal to s_start using the search tree it has constructed. [sent-59, score-0.426]

32 The start distances are shown in each traversable cell of the original and changed gridworlds. [sent-61, score-0.373]

33 Those cells whose start distances in the changed gridworld have changed from the corresponding ones in the original gridworld are shaded gray. [sent-62, score-0.795]

34 There are two different ways of decreasing the search effort for determining the start distances for the changed gridworld. [sent-64, score-0.458]

35 First, some start distances have not changed and thus need not get recomputed. [sent-65, score-0.259]

36 It is a simple matter of restating it to search from the start vertex to the goal vertex. [sent-68, score-0.772]

37 To avoid biasing our experimental results in favor of LPA*, we changed the termination condition of DynamicSWSF-FP so that it stops immediately after it is sure that it has found a shortest path. [sent-70, score-0.305]

38 Second, heuristic knowledge, in the form of approximations of the goal distances, can be used to focus the search and determine that some start distances need not get computed at all. [sent-71, score-0.4]

39 We demonstrate that the two ways of decreasing the search effort are orthogonal by developing LPA* that combines both of them and thus is able to replan faster than either DynamicSWSF-FP or A*. [sent-73, score-0.235]

40 Figure 2 shows in gray those cells whose start distances each of the four algorithms recomputes. [sent-74, score-0.227]

41 During the search in the original gridworld, DynamicSWSF-FP computes the same start distances as breadth-first search during the first search and LPA* computes the same start distances as A*. [sent-76, score-0.905]

42 During the search in the changed gridworld, however, both incremental search (DynamicSWSF-FP) and heuristic search (A*) individually decrease the number of start distances that need to get recomputed compared to breadth-first search, and together (LPA*) decrease the number even more. [sent-77, score-0.986]

43 As for A*, the heuristics h(s) approximate the goal distances of the vertices s. [sent-79, score-0.326]

44 They need to be consistent, that is, satisfy h(s_goal) = 0 and h(s) ≤ c(s, s') + h(s') for all vertices s and s' ∈ succ(s) with s ≠ s_goal. [sent-80, score-0.181]

45 LPA* maintains an estimate g(s) of the start distance g*(s) of each vertex s. [sent-81, score-0.576]

46 A vertex is called locally consistent iff its g-value equals its rhs-value. [sent-90, score-0.586]

47 Thus, this concept is important because the g-values of all vertices equal their start distances iff all vertices are locally consistent. [sent-92, score-0.628]

48 However, LPA* does not make every vertex locally consistent. [sent-93, score-0.552]

49 Instead, it uses the heuristics to focus the search and update only the g-values that are relevant for computing a shortest path from s_start to s_goal. [sent-94, score-0.444]

50 LPA* maintains a priority queue that always contains exactly the locally inconsistent vertices. [sent-95, score-0.693]

51 These are the vertices whose g-values LPA* potentially needs to update to make them locally consistent. [sent-96, score-0.283]

52 The keys of the vertices in the priority queue correspond to the f-values used by A*, and LPA* always expands the vertex in the priority queue with the smallest key, similar to A* that always expands the vertex in the priority queue with the smallest f-value. [sent-97, score-3.019]
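The two-component keys and their lexicographic ordering might be sketched like this (the names are ours; h is assumed to be a consistent heuristic):

```python
def calculate_key(s, g, rhs, h):
    """k(s) = [min(g(s), rhs(s)) + h(s); min(g(s), rhs(s))].
    The first component plays the role of the f-value of A*, the second that
    of its g-value; Python tuples compare lexicographically, which is the
    ordering LPA* uses to pick the vertex to expand next."""
    m = min(g(s), rhs(s))
    return (m + h(s), m)
```

Ties on the first component are broken by the second, mirroring A* expanding equal-f vertices in order of increasing g-value.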

53 The key k(s) of vertex s is a vector with two components, k(s) = [k1(s); k2(s)], where k1(s) = min(g(s), rhs(s)) + h(s, s_goal) and k2(s) = min(g(s), rhs(s)). The pseudocode uses the following functions to manage the priority queue U: [sent-99, score-0.853]

54 U.TopKey() returns the smallest priority of all vertices in priority queue U. [sent-100, score-1.093]

55 U.Pop() deletes the vertex with the smallest priority in priority queue U and returns the vertex. [sent-104, score-1.386]
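A minimal stand-in for the priority queue U with these operations could look like the following. This wrapper is our own sketch (the paper only specifies the interface): a binary heap with lazy deletion, keyed by the two-component LPA* keys.

```python
import heapq
import math

class PriorityQueue:
    """Sketch of the priority queue U from the pseudocode (wrapper is ours):
    a binary heap with lazy deletion supporting Insert, Remove, Update,
    TopKey, and Pop."""

    def __init__(self):
        self._heap = []   # entries are (key, vertex); stale entries linger
        self._key = {}    # current key of each queued vertex

    def insert(self, s, key):
        self._key[s] = key
        heapq.heappush(self._heap, (key, s))

    def remove(self, s):
        self._key.pop(s, None)   # the stale heap entry is pruned later

    def update(self, s, key):
        if self._key.get(s) != key:
            self.insert(s, key)  # old entry becomes stale

    def _prune(self):
        while self._heap and self._key.get(self._heap[0][1]) != self._heap[0][0]:
            heapq.heappop(self._heap)

    def top_key(self):
        self._prune()
        return self._heap[0][0] if self._heap else (math.inf, math.inf)

    def pop(self):
        self._prune()
        key, s = heapq.heappop(self._heap)
        del self._key[s]
        return s
```

Lazy deletion trades a little heap space for avoiding the decrease-key operation that `heapq` lacks; the paper's own experiments used binary heaps as well.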

56 LPA* expands vertices in the order of increasing k1-values and vertices with equal k1-values in order of increasing k2-values. [sent-127, score-0.46]

57 This is similar to A* that expands vertices in the order of increasing f-values (since the heuristics are consistent) and vertices with equal f-values that are on the same branch of the search tree in order of increasing g-values (since it grows the search tree). [sent-128, score-0.928]

58 A locally inconsistent vertex is called locally overconsistent iff g(s) > rhs(s). [sent-129, score-0.726]

59 When LPA* expands a locally overconsistent vertex (lines 12-13), that vertex has the smallest key among all locally inconsistent vertices. [sent-130, score-1.396]

60 Since k1(s) corresponds to the f-value, this implies that LPA* expands overconsistent vertices in the same order as A*. [sent-131, score-0.375]

61 During the expansion of an overconsistent vertex, LPA* sets the g-value of the vertex to its rhs-value and thus to its start distance (line 12), which is the desired value and also makes the vertex locally consistent. [sent-132, score-1.576]

62 A locally inconsistent vertex is called locally underconsistent iff g(s) < rhs(s). When LPA* expands a locally underconsistent vertex (lines 15-16), it simply sets the g-value of the vertex to infinity (line 15). [sent-135, score-1.844]

63 This makes the vertex either locally consistent or locally overconsistent. [sent-136, score-0.63]

64 If the expanded vertex was locally overconsistent, then the change of its g-value can affect the local consistency of its successors (line 13). [sent-137, score-0.6]

65 Similarly, if the expanded vertex was locally underconsistent, then it and its successors can be affected (line 16). [sent-138, score-0.6]

66 LPA* therefore updates rhs-values of these vertices, checks their local consistency, and adds them to or removes them from the priority queue accordingly. [sent-139, score-0.541]
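The rhs-recomputation and queue-membership bookkeeping of UpdateVertex can be sketched as below. The names are ours, and a plain set stands in for the priority queue to keep the sketch short:

```python
import math

def update_vertex(s, s_start, g, rhs, pred, cost, queue):
    """Sketch of UpdateVertex (names ours): recompute rhs(s) from the
    predecessors, then keep s in the queue iff it is locally inconsistent."""
    if s != s_start:
        rhs[s] = min((g.get(p, math.inf) + cost(p, s) for p in pred(s)),
                     default=math.inf)
    if g.get(s, math.inf) != rhs.get(s, math.inf):
        queue.add(s)      # locally inconsistent: (re)queue it
    else:
        queue.discard(s)  # locally consistent: remove it
```

In the full algorithm the queue is a priority queue keyed by the two-component keys; the membership rule, however, is exactly this one.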

67 This is similar to A*, which expands vertices until it expands s_goal, at which point in time the g-value of s_goal equals its start distance and the f-value of the vertex to expand next is no smaller than the f-value of s_goal. [sent-141, score-0.927]

68 It turns out that LPA* expands a vertex at most twice, namely at most once when it is underconsistent and at most once when it is overconsistent. [sent-142, score-0.656]

69 Thus, ComputeShortestPath returns after a number of vertex expansions that is at most twice the number of vertices. [sent-143, score-0.525]

70 Otherwise, one can trace back a shortest path from s_start to s_goal by always moving from the current vertex s, starting at s_goal, to any predecessor s' that minimizes g(s') + c(s', s) until s_start is reached (ties can be broken arbitrarily), similar to what A* can do if it does not use backpointers. [sent-146, score-0.7]

71 The main function Main() first calls Initialize() to initialize the search problem (line 17). [sent-148, score-0.248]

72 Initialize() sets the initial g-values of all vertices to infinity and sets their rhs-values according to Equation 1 (lines 03-04). [sent-149, score-0.181]

73 Thus, initially s_start is the only locally inconsistent vertex and is inserted into the otherwise empty priority queue with a key calculated according to Equation 2 (line 05). [sent-150, score-1.141]

74 This initialization guarantees that the first call to ComputeShortestPath() performs exactly an A* search, that is, expands exactly the same vertices as A* in exactly the same order, provided that A* breaks ties between vertices with the same f-values suitably. [sent-151, score-0.5]

75 Notice that, in an actual implementation, Initialize() only needs to initialize a vertex when it encounters it during the search and thus does not need to initialize all vertices up front. [sent-152, score-0.976]
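Putting the pieces together, a compact sketch of LPA* as described in these sentences might look like the following. This is our simplified rendering, not the paper's pseudocode: it assumes a symmetric graph (so pred equals succ), initializes all vertices up front rather than lazily, and omits the optimizations discussed below.

```python
import heapq
import math

class LPAStar:
    """Minimal LPA* sketch on an explicit graph (our simplified rendering of
    the algorithm described in the text). `graph` maps each vertex to
    {neighbor: cost}; edges are assumed symmetric, so pred(s) == succ(s)."""

    def __init__(self, graph, start, goal, h):
        self.graph, self.start, self.goal, self.h = graph, start, goal, h
        self.g = {s: math.inf for s in graph}    # Initialize(): g = infinity
        self.rhs = {s: math.inf for s in graph}
        self.rhs[start] = 0.0                    # rhs(s_start) = 0
        self.U = []                              # heap with lazy deletion
        self.key = {}
        self._push(start)                        # only inconsistent vertex

    def _calc_key(self, s):
        m = min(self.g[s], self.rhs[s])
        return (m + self.h(s), m)

    def _push(self, s):
        self.key[s] = self._calc_key(s)
        heapq.heappush(self.U, (self.key[s], s))

    def _top_key(self):
        while self.U and self.key.get(self.U[0][1]) != self.U[0][0]:
            heapq.heappop(self.U)                # drop stale entries
        return self.U[0][0] if self.U else (math.inf, math.inf)

    def _update_vertex(self, s):
        if s != self.start:
            self.rhs[s] = min((self.g[p] + c for p, c in self.graph[s].items()),
                              default=math.inf)
        self.key.pop(s, None)                    # lazily remove from queue
        if self.g[s] != self.rhs[s]:             # locally inconsistent
            self._push(s)

    def compute_shortest_path(self):
        while (self._top_key() < self._calc_key(self.goal)
               or self.rhs[self.goal] != self.g[self.goal]):
            key, s = heapq.heappop(self.U)
            del self.key[s]
            if self.g[s] > self.rhs[s]:          # overconsistent: settle it
                self.g[s] = self.rhs[s]
                for t in self.graph[s]:
                    self._update_vertex(t)
            else:                                # underconsistent
                self.g[s] = math.inf
                self._update_vertex(s)
                for t in self.graph[s]:
                    self._update_vertex(t)
        return self.g[self.goal]

    def edge_changed(self, u, v, new_cost):
        """Replanning hook: update one (symmetric) edge cost, then call
        compute_shortest_path() again."""
        self.graph[u][v] = self.graph[v][u] = new_cost
        self._update_vertex(u)
        self._update_vertex(v)
```

With h = 0 the first call behaves like Dijkstra's algorithm; after an edge change, only the affected vertices are touched again, which is the reuse the paper measures.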

76 First, a vertex sometimes gets removed from the priority queue and then immediately reinserted with a different key. [sent-158, score-0.994]

77 For example, a vertex can get removed on line 07 and then be reentered on line 08 . [sent-159, score-0.474]

78 In this case, it is often more efficient to leave the vertex in the priority queue, update its key, and only change its position in the priority queue. [sent-160, score-1.098]

79 Second, when UpdateVertex on line 13 computes the rhs-value for a successor of an overconsistent vertex, it is unnecessary to take the minimum over all of its respective predecessors. [sent-161, score-0.603]

80 It is sufficient to compute the rhs-value as the minimum of its old rhs-value and the sum of the new g-value of the overconsistent vertex and the cost of moving from the overconsistent vertex to the successor. [sent-162, score-1.14]
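This optimized update collapses to a two-way minimum, which can be sketched as follows (names ours):

```python
def updated_rhs(old_rhs, g_expanded, edge_cost):
    """Optimized rhs update for a successor of a just-expanded overconsistent
    vertex: only that vertex's g-value changed (and it only decreased), so
    the minimum over all predecessors collapses to a two-way minimum."""
    return min(old_rhs, g_expanded + edge_cost)
```

The correctness argument is the sentence that follows: every other term of the full minimum is unchanged, so it is already summarized by the old rhs-value.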

81 The reason is that only the g-value of the overconsistent vertex has changed. [sent-163, score-0.57]

82 Third, when UpdateVertex on line 16 computes the rhs-value for a successor of an underconsistent vertex, the only g-value that has changed is the g-value of the underconsistent vertex. [sent-165, score-0.306]

83 Fourth, the second and third optimization concerned the computations of the rhs-values of the successors after the g-value of a vertex has changed. [sent-168, score-0.522]

84 Similar optimizations can be made for the computation of the rhs-value of a vertex after the cost of one of its incoming edges has changed. [sent-169, score-0.505]

85 6 Analytical and Experimental Results. The pseudocode uses the following functions to manage the priority queue U: [sent-171, score-0.354]

86 U.Top() returns a vertex with the smallest priority of all vertices in priority queue U (null if U is empty). [sent-172, score-1.047]

87 U.TopKey() returns the smallest priority of all vertices in priority queue U. [sent-176, score-1.093]

88 U.Update changes the priority of a vertex in priority queue U to a new value. [sent-180, score-1.329]

89 (It does nothing if the current priority of the vertex already equals the new value.) [sent-181, score-0.786]

90 The priority queues of all four algorithms were implemented as binary heaps. [sent-205, score-0.312]

91 Since all algorithms determine the same paths (if they break ties suitably), we need to compare their total search time until a shortest path has been found. [sent-206, score-0.44]

92 Note that we count two vertex expansions, not just one vertex expansion, if LPA* expands the same vertex twice, to avoid biasing our experimental results in favor of LPA*. [sent-208, score-1.57]

93 The start cell is at coordinates (34, 20) and the goal cell is at coordinates (5, 20), where the upper leftmost cell is at coordinates (0, 0). [sent-210, score-0.225]

94 Then, it was changed 500 times in a row, each time by making eight randomly chosen blocked cells traversable and eight randomly chosen traversable cells blocked. [sent-212, score-0.426]

95 Thus, each time one percent of the cells changed their status but the obstacle density remained the same. [sent-213, score-0.22]

96 After each of the 500 changes, the algorithms recomputed a shortest path from the start cell to the goal cell. [sent-214, score-0.367]

97 The table confirms the observations made in Section 3: LPA* outperforms the other three search methods according to all three performance measures. [sent-216, score-0.199]

98-99 [Residue of the results table: per-algorithm averages of vertex expansions (ve), vertex accesses (va), and heap percolates (hp) for heuristic search (A*), incremental search (DynamicSWSF-FP), uninformed breadth-first search, and LPA*; only fragments of the numbers (e.g. 284.9 and 1331.6) survive extraction.]

100 We have also applied LPA* successfully to more complex planning tasks, including the kind of route planning tasks that Focussed Dynamic A* [Ste95] applies to. [sent-241, score-0.572]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('vertex', 0.474), ('lpa', 0.436), ('priority', 0.312), ('queue', 0.208), ('search', 0.199), ('vertices', 0.181), ('gridworld', 0.179), ('lifelong', 0.179), ('planning', 0.145), ('updatevertex', 0.132), ('shortest', 0.126), ('sgoal', 0.12), ('sstart', 0.12), ('changed', 0.105), ('expands', 0.098), ('computeshortestpath', 0.096), ('overconsistent', 0.096), ('yh', 0.095), ('calculatekey', 0.084), ('underconsistent', 0.084), ('incremental', 0.081), ('locally', 0.078), ('distances', 0.078), ('start', 0.076), ('fp', 0.076), ('path', 0.075), ('cells', 0.073), ('traversable', 0.072), ('costs', 0.063), ('hfe', 0.06), ('uninformed', 0.06), ('edge', 0.06), ('cc', 0.058), ('route', 0.052), ('returns', 0.051), ('db', 0.049), ('initialize', 0.049), ('successors', 0.048), ('heuristics', 0.044), ('inconsistent', 0.044), ('cell', 0.042), ('wu', 0.042), ('obstacle', 0.042), ('seb', 0.042), ('cb', 0.041), ('ties', 0.04), ('fseb', 0.036), ('georgia', 0.036), ('gridworlds', 0.036), ('pedb', 0.036), ('replan', 0.036), ('reuses', 0.036), ('bc', 0.035), ('iff', 0.034), ('searches', 0.033), ('successor', 0.033), ('else', 0.032), ('fc', 0.032), ('tasks', 0.031), ('blocked', 0.031), ('erb', 0.031), ('optimizations', 0.031), ('fty', 0.031), ('qh', 0.03), ('hp', 0.03), ('smallest', 0.029), ('biasing', 0.028), ('keys', 0.026), ('maintains', 0.026), ('tree', 0.026), ('va', 0.025), ('recomputed', 0.025), ('key', 0.025), ('always', 0.025), ('forever', 0.024), ('heap', 0.024), ('inserts', 0.024), ('likhachev', 0.024), ('qu', 0.024), ('rqig', 0.024), ('rqige', 0.024), ('runtimes', 0.024), ('sedb', 0.024), ('traversability', 0.024), ('needs', 0.024), ('heuristic', 0.024), ('termination', 0.024), ('changes', 0.023), ('goal', 0.023), ('bb', 0.023), ('became', 0.023), ('favor', 0.022), ('dc', 0.021), ('ve', 0.021), ('removes', 0.021), ('atlanta', 0.021), ('mazes', 0.021), ('manage', 0.021), ('pseudocode', 0.021), ('sed', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999964 93 nips-2001-Incremental A*

Author: S. Koenig, M. Likhachev


2 0.10561366 32 nips-2001-An Efficient, Exact Algorithm for Solving Tree-Structured Graphical Games

Author: Michael L. Littman, Michael J. Kearns, Satinder P. Singh

Abstract: We describe a new algorithm for computing a Nash equilibrium in graphical games, a compact representation for multi-agent systems that we introduced in previous work. The algorithm is the first to compute equilibria both efficiently and exactly for a non-trivial class of graphical games. 1

3 0.055940073 193 nips-2001-Unsupervised Learning of Human Motion Models

Author: Yang Song, Luis Goncalves, Pietro Perona

Abstract: This paper presents an unsupervised learning algorithm that can derive the probabilistic dependence structure of parts of an object (a moving human body in our examples) automatically from unlabeled data. The distinguished part of this work is that it is based on unlabeled data, i.e., the training features include both useful foreground parts and background clutter and the correspondence between the parts and detected features are unknown. We use decomposable triangulated graphs to depict the probabilistic independence of parts, but the unsupervised technique is not limited to this type of graph. In the new approach, labeling of the data (part assignments) is taken as hidden variables and the EM algorithm is applied. A greedy algorithm is developed to select parts and to search for the optimal structure based on the differential entropy of these variables. The success of our algorithm is demonstrated by applying it to generate models of human motion automatically from unlabeled real image sequences.

4 0.049843363 169 nips-2001-Small-World Phenomena and the Dynamics of Information

Author: Jon M. Kleinberg

Abstract: unkown-abstract

5 0.046104319 89 nips-2001-Grouping with Bias

Author: Stella X. Yu, Jianbo Shi

Abstract: With the optimization of pattern discrimination as a goal, graph partitioning approaches often lack the capability to integrate prior knowledge to guide grouping. In this paper, we consider priors from unitary generative models, partially labeled data and spatial attention. These priors are modelled as constraints in the solution space. By imposing uniformity condition on the constraints, we restrict the feasible space to one of smooth solutions. A subspace projection method is developed to solve this constrained eigenproblem. We demonstrate that simple priors can greatly improve image segmentation results. 1

6 0.039267261 118 nips-2001-Matching Free Trees with Replicator Equations

7 0.034873229 128 nips-2001-Multiagent Planning with Factored MDPs

8 0.033276819 94 nips-2001-Incremental Learning and Selective Sampling via Parametric Optimization Framework for SVM

9 0.031526417 7 nips-2001-A Dynamic HMM for On-line Segmentation of Sequential Data

10 0.031148268 177 nips-2001-Switch Packet Arbitration via Queue-Learning

11 0.030501485 106 nips-2001-Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering

12 0.030098535 55 nips-2001-Convergence of Optimistic and Incremental Q-Learning

13 0.029632334 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

14 0.029235911 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction

15 0.028814085 186 nips-2001-The Noisy Euclidean Traveling Salesman Problem and Learning

16 0.028440252 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

17 0.027351309 101 nips-2001-K-Local Hyperplane and Convex Distance Nearest Neighbor Algorithms

18 0.027181756 42 nips-2001-Bayesian morphometry of hippocampal cells suggests same-cell somatodendritic repulsion

19 0.026849991 190 nips-2001-Thin Junction Trees

20 0.026366642 84 nips-2001-Global Coordination of Local Linear Models


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.081), (1, -0.024), (2, 0.03), (3, -0.012), (4, 0.002), (5, -0.035), (6, -0.068), (7, 0.007), (8, -0.026), (9, 0.021), (10, 0.018), (11, -0.036), (12, 0.026), (13, 0.035), (14, 0.016), (15, -0.023), (16, -0.064), (17, 0.062), (18, -0.032), (19, -0.004), (20, -0.071), (21, 0.103), (22, 0.041), (23, 0.098), (24, -0.001), (25, 0.126), (26, -0.075), (27, -0.039), (28, -0.006), (29, -0.046), (30, 0.031), (31, 0.063), (32, 0.06), (33, 0.062), (34, -0.031), (35, -0.014), (36, -0.051), (37, 0.099), (38, -0.127), (39, -0.071), (40, -0.102), (41, 0.037), (42, -0.117), (43, -0.04), (44, 0.045), (45, -0.202), (46, -0.086), (47, 0.074), (48, 0.236), (49, 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97846997 93 nips-2001-Incremental A*

Author: S. Koenig, M. Likhachev

Abstract: Incremental search techniques find optimal solutions to series of similar search tasks much faster than is possible by solving each search task from scratch. While researchers have developed incremental versions of uninformed search methods, we develop an incremental version of A*. The first search of Lifelong Planning A* is the same as that of A* but all subsequent searches are much faster because it reuses those parts of the previous search tree that are identical to the new search tree. We then present experimental results that demonstrate the advantages of Lifelong Planning A* for simple route planning tasks. 1 Overview Artificial intelligence has investigated knowledge-based search techniques that allow one to solve search tasks in large domains. Most of the research on these methods has studied how to solve one-shot search problems. However, search is often a repetitive process, where one needs to solve a series of similar search tasks, for example, because the actual situation turns out to be slightly different from the one initially assumed or because the situation changes over time. An example for route planning tasks is changing traffic conditions. Thus, one needs to replan for the new situation, for example, if one always wants to display the least time-consuming route from the airport to the conference center on a web page. In these situations, most search methods replan from scratch, that is, solve the search problems independently. Incremental search techniques share with case-based planning, plan adaptation, repair-based planning, and learning search-control knowledge the property that they find solutions to series of similar search tasks much faster than is possible by solving each search task from scratch. Incremental search techniques, however, differ from the other techniques in that the quality of their solutions is guaranteed to be as good as the quality of the solutions obtained by replanning from scratch. 
Although incremental search methods are not widely known in artificial intelligence and control, different researchers have developed incremental search versions of uninformed search methods in the algorithms literature. An overview can be found in [FMSN00]. We, on the other hand, develop an incremental version of A*, thus combining ideas from the algorithms literature and the artificial intelligence literature. We call the algorithm Lifelong Planning A* (LPA*), in analogy to “lifelong learning” [Thr98], because it reuses information from previous searches. (We thank Anthony Stentz for his support. The Intelligent Decision-Making Group is partly supported by NSF awards under contracts IIS-9984827, IIS-0098807, and ITR/AP-0113881. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsoring organizations and agencies or the U.S. government.) LPA* uses heuristics to focus the search and always finds a shortest path for the current edge costs. The first search of LPA* is the same as that of A* but all subsequent searches are much faster. LPA* produces at least the search tree that A* builds. However, it achieves a substantial speedup over A* because it reuses those parts of the previous search tree that are identical to the new search tree. 2 The Route Planning Task Lifelong Planning A* (LPA*) solves the following search task: It applies to finite graph search problems on known graphs whose edge costs can increase or decrease over time. S denotes the finite set of vertices of the graph. succ(s) ⊆ S denotes the set of successors of vertex s ∈ S. Similarly, pred(s) ⊆ S denotes the set of predecessors of vertex s ∈ S. 0 < c(s, s′) ≤ ∞ denotes the cost of moving from vertex s to vertex s′. LPA* always determines a shortest path from a given start vertex s_start ∈ S to a given goal vertex s_goal ∈ S, knowing both the topology of the graph and the current edge costs. 
We use g*(s) to denote the start distance of vertex s ∈ S, that is, the length of a shortest path from s_start to s.
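The route-planning task described above can be made concrete with a small sketch of an LPA*-style search. This is a minimal illustration under stated assumptions, not the authors' implementation: all names (LPAStar, update_vertex, and so on) are our own, the graph is a dict of successor dicts, and the heuristic is passed in as a callable.

```python
import heapq
from math import inf

class LPAStar:
    """Minimal LPA*-style incremental search sketch (hypothetical names).

    graph: dict mapping vertex -> {successor: edge cost}.
    h: consistent heuristic estimating the distance from a vertex to the goal.
    """

    def __init__(self, graph, start, goal, h):
        self.graph, self.start, self.goal, self.h = graph, start, goal, h
        self.g = {s: inf for s in graph}    # current best start distances
        self.rhs = {s: inf for s in graph}  # one-step lookahead values
        self.rhs[start] = 0
        self.queue = []                     # priority queue of inconsistent vertices
        heapq.heappush(self.queue, (self.key(start), start))

    def key(self, s):
        k = min(self.g[s], self.rhs[s])
        return (k + self.h(s), k)

    def pred(self, s):
        return [u for u in self.graph if s in self.graph[u]]

    def update_vertex(self, s):
        if s != self.start:
            self.rhs[s] = min((self.g[u] + self.graph[u][s] for u in self.pred(s)),
                              default=inf)
        if self.g[s] != self.rhs[s]:        # locally inconsistent: (re)queue it
            heapq.heappush(self.queue, (self.key(s), s))

    def compute_shortest_path(self):
        while self.queue:
            k, s = self.queue[0]
            if k != self.key(s):            # stale entry from lazy deletion
                heapq.heappop(self.queue)
                continue
            if self.g[self.goal] == self.rhs[self.goal] and k >= self.key(self.goal):
                break                       # goal settled and nothing cheaper remains
            heapq.heappop(self.queue)
            if self.g[s] > self.rhs[s]:     # overconsistent: make consistent
                self.g[s] = self.rhs[s]
                for t in self.graph[s]:
                    self.update_vertex(t)
            else:                           # underconsistent: reset and repair
                self.g[s] = inf
                self.update_vertex(s)
                for t in self.graph[s]:
                    self.update_vertex(t)
        return self.g[self.goal]

    def change_edge_cost(self, u, v, new_cost):
        self.graph[u][v] = new_cost         # edge costs may rise or fall over time
        self.update_vertex(v)
```

After an edge cost changes, only the affected vertices become inconsistent, so a subsequent compute_shortest_path call touches only the part of the previous search tree that the change invalidated; this is the reuse the abstract describes.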

2 0.48613459 169 nips-2001-Small-World Phenomena and the Dynamics of Information

Author: Jon M. Kleinberg

Abstract: unknown-abstract

3 0.45367023 32 nips-2001-An Efficient, Exact Algorithm for Solving Tree-Structured Graphical Games

Author: Michael L. Littman, Michael J. Kearns, Satinder P. Singh

Abstract: We describe a new algorithm for computing a Nash equilibrium in graphical games, a compact representation for multi-agent systems that we introduced in previous work. The algorithm is the first to compute equilibria both efficiently and exactly for a non-trivial class of graphical games.

4 0.37132904 149 nips-2001-Probabilistic Abstraction Hierarchies

Author: Eran Segal, Daphne Koller, Dirk Ormoneit

Abstract: Many domains are naturally organized in an abstraction hierarchy or taxonomy, where the instances in “nearby” classes in the taxonomy are similar. In this paper, we provide a general probabilistic framework for clustering data into a set of classes organized as a taxonomy, where each class is associated with a probabilistic model from which the data was generated. The clustering algorithm simultaneously optimizes three things: the assignment of data instances to clusters, the models associated with the clusters, and the structure of the abstraction hierarchy. A unique feature of our approach is that it utilizes global optimization algorithms for both of the last two steps, reducing the sensitivity to noise and the propensity to local maxima that are characteristic of algorithms such as hierarchical agglomerative clustering that only take local steps. We provide a theoretical analysis for our algorithm, showing that it converges to a local maximum of the joint likelihood of model and data. We present experimental results on synthetic data, and on real data in the domains of gene expression and text.

5 0.35147449 89 nips-2001-Grouping with Bias

Author: Stella X. Yu, Jianbo Shi

Abstract: With the optimization of pattern discrimination as a goal, graph partitioning approaches often lack the capability to integrate prior knowledge to guide grouping. In this paper, we consider priors from unitary generative models, partially labeled data and spatial attention. These priors are modelled as constraints in the solution space. By imposing a uniformity condition on the constraints, we restrict the feasible space to one of smooth solutions. A subspace projection method is developed to solve this constrained eigenproblem. We demonstrate that simple priors can greatly improve image segmentation results.

6 0.34875306 64 nips-2001-EM-DD: An Improved Multiple-Instance Learning Technique

7 0.34835777 110 nips-2001-Learning Hierarchical Structures with Linear Relational Embedding

8 0.31173849 42 nips-2001-Bayesian morphometry of hippocampal cells suggests same-cell somatodendritic repulsion

9 0.30579445 186 nips-2001-The Noisy Euclidean Traveling Salesman Problem and Learning

10 0.30160391 177 nips-2001-Switch Packet Arbitration via Queue-Learning

11 0.29789788 184 nips-2001-The Intelligent surfer: Probabilistic Combination of Link and Content Information in PageRank

12 0.28740296 7 nips-2001-A Dynamic HMM for On-line Segmentation of Sequential Data

13 0.28662768 193 nips-2001-Unsupervised Learning of Human Motion Models

14 0.28654179 158 nips-2001-Receptive field structure of flow detectors for heading perception

15 0.27850437 94 nips-2001-Incremental Learning and Selective Sampling via Parametric Optimization Framework for SVM

16 0.26565042 106 nips-2001-Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering

17 0.25540361 10 nips-2001-A Hierarchical Model of Complex Cells in Visual Cortex for the Binocular Perception of Motion-in-Depth

18 0.2477079 108 nips-2001-Learning Body Pose via Specialized Maps

19 0.24440092 146 nips-2001-Playing is believing: The role of beliefs in multi-agent learning

20 0.23156258 26 nips-2001-Active Portfolio-Management based on Error Correction Neural Networks


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.014), (17, 0.024), (19, 0.54), (27, 0.046), (30, 0.044), (38, 0.016), (59, 0.025), (72, 0.064), (79, 0.027), (83, 0.021), (91, 0.075)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94754523 93 nips-2001-Incremental A*

Author: S. Koenig, M. Likhachev


2 0.9068597 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision

Author: Laurent Itti, Jochen Braun, Christof Koch

Abstract: We present new simulation results, in which a computational model of interacting visual neurons simultaneously predicts the modulation of spatial vision thresholds by focal visual attention, for five dual-task human psychophysics experiments. This new study complements our previous findings that attention activates a winner-take-all competition among early visual neurons within one cortical hypercolumn.

3 0.88046801 83 nips-2001-Geometrical Singularities in the Neuromanifold of Multilayer Perceptrons

Author: Shun-ichi Amari, Hyeyoung Park, Tomoko Ozeki

Abstract: Singularities are ubiquitous in the parameter space of hierarchical models such as multilayer perceptrons. At singularities, the Fisher information matrix degenerates, and the Cramer-Rao paradigm no longer holds, implying that classical model selection theory such as AIC and MDL cannot be applied. It is important to study the relation between the generalization error and the training error at singularities. The present paper demonstrates a method of analyzing these errors both for the maximum likelihood estimator and the Bayesian predictive distribution in terms of Gaussian random fields, by using simple models.

4 0.79836607 119 nips-2001-Means, Correlations and Bounds

Author: Martijn Leisink, Bert Kappen

Abstract: The partition function for a Boltzmann machine can be bounded from above and below. We can use this to bound the means and the correlations. For networks with small weights, the values of these statistics can be restricted to non-trivial regions (i.e. a subset of [-1, 1]). Experimental results show that reasonable bounding occurs for weight sizes where mean field expansions generally give good results.

5 0.76421326 109 nips-2001-Learning Discriminative Feature Transforms to Low Dimensions in Low Dimensions

Author: Kari Torkkola

Abstract: The marriage of Renyi entropy with Parzen density estimation has been shown to be a viable tool in learning discriminative feature transforms. However, it suffers from computational complexity proportional to the square of the number of samples in the training data. This sets a practical limit to using large databases. We suggest immediate divorce of the two methods and remarriage of Renyi entropy with a semi-parametric density estimation method, such as Gaussian Mixture Models (GMMs). This allows all of the computation to take place in the low-dimensional target space, and it reduces computational complexity proportional to the square of the number of components in the mixtures. Furthermore, a convenient extension to Hidden Markov Models as commonly used in speech recognition becomes possible.

6 0.45274651 1 nips-2001-(Not) Bounding the True Error

7 0.39972448 186 nips-2001-The Noisy Euclidean Traveling Salesman Problem and Learning

8 0.39388183 13 nips-2001-A Natural Policy Gradient

9 0.38255563 192 nips-2001-Tree-based reparameterization for approximate inference on loopy graphs

10 0.37833148 103 nips-2001-Kernel Feature Spaces and Nonlinear Blind Source Separation

11 0.37819263 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

12 0.35911834 195 nips-2001-Variance Reduction Techniques for Gradient Estimates in Reinforcement Learning

13 0.35869399 155 nips-2001-Quantizing Density Estimators

14 0.35278642 64 nips-2001-EM-DD: An Improved Multiple-Instance Learning Technique

15 0.35198721 138 nips-2001-On the Generalization Ability of On-Line Learning Algorithms

16 0.35024911 114 nips-2001-Learning from Infinite Data in Finite Time

17 0.35016361 147 nips-2001-Pranking with Ranking

18 0.34810722 68 nips-2001-Entropy and Inference, Revisited

19 0.34808254 157 nips-2001-Rates of Convergence of Performance Gradient Estimates Using Function Approximation and Bias in Reinforcement Learning

20 0.34400871 54 nips-2001-Contextual Modulation of Target Saliency