nips nips2005 nips2005-121 knowledge-graph by maker-knowledge-mining

121 nips-2005-Location-based activity recognition


Source: pdf

Author: Lin Liao, Dieter Fox, Henry Kautz

Abstract: Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract and label a person’s activities and significant places from traces of GPS data. In contrast to existing techniques, our approach simultaneously detects and classifies the significant locations of a person and takes the highlevel context into account. Our system uses relational Markov networks to represent the hierarchical activity model that encodes the complex relations among GPS readings, activities and significant places. We apply FFT-based message passing to perform efficient summation over large numbers of nodes in the networks. We present experiments that show significant improvements over existing techniques. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We show how to extract and label a person’s activities and significant places from traces of GPS data. [sent-2, score-0.696]

2 In contrast to existing techniques, our approach simultaneously detects and classifies the significant locations of a person and takes the highlevel context into account. [sent-3, score-0.204]

3 Our system uses relational Markov networks to represent the hierarchical activity model that encodes the complex relations among GPS readings, activities and significant places. [sent-4, score-0.672]

4 We apply FFT-based message passing to perform efficient summation over large numbers of nodes in the networks. [sent-5, score-0.402]

5 Such data is used to recognize the high-level activities in which a person is engaged and to determine the relationship between activities and locations that are important to the user [1, 6, 8, 3]. [sent-9, score-1.022]

6 Our goal is to segment the user’s day into everyday activities such as “working,” “visiting,” and “travel,” and to recognize and label significant locations that are associated with one or more activities, such as “work place,” “friend’s house,” or “user’s bus stop.” [sent-10, score-0.656]

7 Some significant locations, for example, the place where the user drops off his children at school, may be visited only briefly, and so would be excluded by a high threshold. [sent-16, score-0.247]

8 A lower threshold, however, would include too many insignificant locations, for example, a place where the user briefly waited at a traffic light. [sent-17, score-0.216]

9 Second, concerns for computational efficiency prevented previous approaches from tackling the problem of activity and place labeling in full generality. [sent-19, score-0.446]

10 [1] does not distinguish between places and activities; although [8] does, the implementation limited places to a single activity. [sent-20, score-0.548]

11 Neither approach models or labels the user’s activities when moving between places. [sent-21, score-0.375]

12 [6] and [3] learn transportation patterns, but not place labels. [sent-22, score-0.203]

13 For a simple example, if a system could learn that a person rarely went to a restaurant more than once a day, then it could correctly give a low probability to an interpretation of a day’s data under which the user went to three restaurants. [sent-25, score-0.316]

14 Our previous work [8] used clique templates in relational Markov networks for concisely expressing global features, but the MCMC inference algorithm we used made it costly to reason with aggregate features, such as statistics on the number of times a given activity occurs. [sent-26, score-0.73]

15 The ability to efficiently leverage global features of the data stream could enhance the scope and accuracy of activity recognition. [sent-27, score-0.286]

16 This paper presents a unified approach to automated activity and place labeling which overcomes these limitations. [sent-28, score-0.446]

17 Contributions of this work include the following: • We show how to simultaneously solve the tasks of identifying significant locations and labeling both places and activities from raw GPS data, all in a conditionally trained relational Markov network. [sent-29, score-0.948]

18 Our approach is notable in that nodes representing significant places are dynamically added to the graph during inference. [sent-30, score-0.372]

19 No arbitrary thresholds regarding the time spent at a location or the number of significant places are employed. [sent-31, score-0.342]

20 • Our model creates a complete interpretation of the log of a user’s data, including transportation activities as well as activities performed at particular places. [sent-32, score-0.775]

21 It allows different kinds of activities to be performed at the same location. [sent-33, score-0.347]

22 • We extend our work on using clique templates for global features to support efficient inference by belief propagation. [sent-34, score-0.43]

23 We introduce, in particular, specialized Fast Fourier Transform (FFT) templates for belief propagation over aggregate (counting) features, which reduce computation time by an exponential amount. [sent-35, score-0.237]

24 We begin with a discussion of relational Markov networks and a description of an FFT belief propagation algorithm for aggregate statistical features. [sent-38, score-0.274]

25 RMNs extend CRFs by providing a relational language for describing clique structures and enforcing parameter sharing at the template level. [sent-44, score-0.392]

26 Thereby RMNs provide a very flexible and concise framework for defining the features we use in our activity recognition context. [sent-45, score-0.28]

27 A key concept of RMNs is that of relational clique templates, which specify the structure of a CRF in a concise way. [sent-46, score-0.33]

28 In a nutshell, a clique template C ∈ C is similar to a database query (e.g., an SQL query). [sent-47, score-0.269]

29 Each clique template C is additionally associated with a potential function φC (vC ) that maps values of variables to a non-negative real number. [sent-50, score-0.269]

30 To compute such a conditional distribution, the RMN generates a CRF with the cliques specified by the clique templates. [sent-53, score-0.321]

31 All cliques that originate from the same template must share the same weight vector wC . [sent-54, score-0.234]

32 In the context of place labeling, [8] showed how to use non-zero mean priors in order to transfer weights learned for one person to another person. [sent-60, score-0.244]

33 Performing aggregation would require the generation of cliques that contain all nodes over which the aggregation is performed. [sent-68, score-0.379]

34 2 Efficient summation templates In our model, we address the inference of aggregate cliques at the template level within the framework of BP. [sent-71, score-0.643]

35 Each type of aggregation function is associated with a computation template that specifies how to propagate messages through the clique. [sent-72, score-0.284]

36 To handle summation cliques with potentially large numbers of addends, our summation template dynamically builds a summation tree, which is a pairwise Markov network as shown in Fig. [sent-74, score-0.78]

37 In a summation tree, the leaves are the original addends and each internal node $y_{jk}$ represents the sum of its two children $y_j$ and $y_k$; this sum relation is encoded by an auxiliary node $S_{jk}$ and its potential. [sent-76, score-0.219] [sent-96, score-0.335]

38 Figure 1: (a) Summation tree that represents $y_{\mathrm{sum}} = \sum_{i=1}^{8} y_i$, where the $S_i$’s are auxiliary nodes that ensure the summation relation. [sent-92, score-0.503]

39 (b) Each activity node $a_i$ is connected to $E$ observed local evidence nodes $e_i^1$ to $e_i^E$. [sent-94, score-0.353]

40 Place nodes $p_i$ are generated based on the inferred activities, and each place is connected to all activity nodes that are within a certain distance. [sent-95, score-0.867]

42 It is easy to see that the summation tree guarantees that the root represents $y_{\mathrm{sum}} = \sum_{i=1}^{n} y_i$, where $y_1$ to $y_n$ are the leaves of the tree. [sent-98, score-0.363]

43 To define the BP protocol for summation trees, we need to specify two types of messages: an upward message from an auxiliary node to its parent (e.g., $m_{S_{12} \to y_{12}}$), and a downward message from an auxiliary node to one of its two children (e.g., $m_{S_{12} \to y_1}$). [sent-99, score-0.48] [sent-101, score-0.33]

45 Upward message update: Starting with Equation (3), we can update an upward message $m_{S_{ij} \to y_{ij}}$ as follows. [sent-104, score-0.461]

46 $m_{S_{ij} \to y_{ij}}(y_{ij}) = \sum_{y_i, y_j} \phi_S(y_i, y_j, y_{ij})\, m_{y_i \to S_{ij}}(y_i)\, m_{y_j \to S_{ij}}(y_j) = \sum_{y_i} m_{y_i \to S_{ij}}(y_i)\, m_{y_j \to S_{ij}}(y_{ij} - y_i)$ (4) $= \mathcal{F}^{-1}\big[\mathcal{F}(m_{y_i \to S_{ij}}) \cdot \mathcal{F}(m_{y_j \to S_{ij}})\big]$ (5), where $\phi_S(y_i, y_j, y_{ij})$ is the local potential of $S_{ij}$ encoding the equality $y_{ij} = y_i + y_j$. [sent-105, score-1.795]

47 Therefore, message $m_{S_{ij} \to y_{ij}}$ is the convolution of $m_{y_i \to S_{ij}}$ and $m_{y_j \to S_{ij}}$. [sent-107, score-0.523]

48 The computational complexity of one summation using the FFT is $O(k \log k)$, where $k$ is the maximum number of states in $y_i$ and $y_j$. [sent-110, score-0.415]
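As a concrete illustration of this $O(k \log k)$ update, here is a minimal NumPy sketch (the function name `convolve_messages` and the toy message values are ours, not the authors’ code) that computes the upward message as a zero-padded FFT convolution and cross-checks it against the direct double sum of Equation (4):

```python
import numpy as np

def convolve_messages(m_i, m_j):
    """Upward FFT update: combine two incoming messages over count
    variables y_i and y_j into a message over their sum y_ij = y_i + y_j.
    The direct double sum costs O(k^2); zero-padded FFTs reduce it to
    O(k log k)."""
    n = len(m_i) + len(m_j) - 1            # support size of the sum variable
    spectrum = np.fft.rfft(m_i, n) * np.fft.rfft(m_j, n)
    return np.fft.irfft(spectrum, n)

# Toy messages over counts {0, 1, 2}; the result ranges over {0, ..., 4}.
m_i = np.array([0.2, 0.5, 0.3])
m_j = np.array([0.6, 0.3, 0.1])
m_sum = convolve_messages(m_i, m_j)

# Cross-check against the direct double sum of Equation (4).
direct = np.zeros(5)
for yi, pi in enumerate(m_i):
    for yj, pj in enumerate(m_j):
        direct[yi + yj] += pi * pj
assert np.allclose(m_sum, direct)
```

Note that the output length is the full support of $y_{ij}$, so both FFTs must be zero-padded to that length before the pointwise product.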

49 Downward message update: We also allow messages to pass from sum variables downward to their children. [sent-111, score-0.328]

50 From Equation (3) we get the downward message $m_{S_{ij} \to y_i}$ as $m_{S_{ij} \to y_i}(y_i) = \sum_{y_j, y_{ij}} \phi_S(y_i, y_j, y_{ij})\, m_{y_j \to S_{ij}}(y_j)\, m_{y_{ij} \to S_{ij}}(y_{ij}) = \sum_{y_j} m_{y_j \to S_{ij}}(y_j)\, m_{y_{ij} \to S_{ij}}(y_i + y_j)$ (6) $= \mathcal{F}^{-1}\big[\overline{\mathcal{F}(m_{y_j \to S_{ij}})} \cdot \mathcal{F}(m_{y_{ij} \to S_{ij}})\big]$ (7), where (6) again follows from the sum relation. [sent-117, score-1.209]

51 Note that the downward message $m_{S_{ij} \to y_i}$ turns out to be the correlation of messages $m_{y_j \to S_{ij}}$ and $m_{y_{ij} \to S_{ij}}$. [sent-118, score-0.657]
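The correlation structure of this downward update can be checked in a few lines of NumPy (an illustrative sketch with made-up message values; `correlate_messages` is our name):

```python
import numpy as np

def correlate_messages(m_j, m_ij):
    """Downward update of Equation (6): the message to y_i combines the
    sibling message over y_j with the parent message over y_ij, shifted
    so that y_i = y_ij - y_j.  This is a correlation, not a convolution."""
    k = len(m_ij) - len(m_j) + 1           # support size of y_i
    return np.array([
        sum(m_j[yj] * m_ij[yi + yj] for yj in range(len(m_j)))
        for yi in range(k)
    ])

m_j = np.array([0.6, 0.4])                 # message over y_j in {0, 1}
m_ij = np.array([0.1, 0.2, 0.3, 0.4])      # message over y_ij in {0, ..., 3}
m_i = correlate_messages(m_j, m_ij)        # message over y_i in {0, 1, 2}

# NumPy's cross-correlation gives the same result.
assert np.allclose(m_i, np.correlate(m_ij, m_j, mode='valid'))
```

The same FFT trick applies here, with one spectrum conjugated, which is what turns the convolution of the upward pass into a correlation.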

52 At each level of a summation tree, the number of messages (nodes) is reduced by half and the size of each message is doubled. [sent-121, score-0.428]

53 Suppose the tree has n upward messages at the bottom and the maximum size of a message is k . [sent-122, score-0.365]

54 Therefore, updating all messages in a summation clique takes O(n log2 n) instead of time exponential in n, as would be the case for a non-specialized implementation of aggregation. [sent-124, score-0.484]
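The level-by-level halving can be sketched as follows (a toy stand-in using plain `np.convolve` for the per-node combine; with FFT convolutions at the internal nodes this is the $O(n \log^2 n)$ scheme described above — function and variable names are ours):

```python
import numpy as np

def sum_tree_root_message(leaf_messages):
    """Combine leaf messages pairwise up a binary summation tree.  Each
    level halves the number of messages and roughly doubles their length;
    with FFT convolutions at the internal nodes this yields the
    O(n log^2 n) total cost (plain np.convolve is used here for clarity)."""
    level = [np.asarray(m, dtype=float) for m in leaf_messages]
    while len(level) > 1:
        nxt = [np.convolve(a, b) for a, b in zip(level[::2], level[1::2])]
        if len(level) % 2:                 # odd leftover passes up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

# Four binary count indicators, each with P(y=1) = 0.5: the root message
# is Binomial(4, 0.5) over the counts 0..4.
root = sum_tree_root_message([[0.5, 0.5]] * 4)
assert np.allclose(root, np.array([1, 4, 6, 4, 1]) / 16.0)
```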

55 1 Location-based Activity Model Overview To recognize activities and places, we first segment raw GPS traces by grouping consecutive GPS readings based on their spatial relationship. [sent-126, score-0.491]

56 In this section, we focus on inferring activities and types of significant places after segmentation. [sent-130, score-0.621]

57 To do so, we construct a hierarchical RMN that explicitly encodes the relations between activities and places. [sent-131, score-0.347]

58 At the lower level of the hierarchy, each activity node is connected to various features, summarizing information resulting from the GPS segmentation. [sent-134, score-0.255]

59 These features measure compatibility between types of activities at neighboring nodes in the trace. [sent-136, score-0.494]

60 Our model also aims at determining those places that play a significant role in the activities of a person, such as home, work place, friend’s home, grocery stores, restaurants, and bus stops. [sent-137, score-0.703]

61 Such significant places comprise the upper level of the CRF shown in Fig. [sent-138, score-0.274]

62 However, since these places are not known a priori, we must additionally detect a person’s significant places. [sent-140, score-0.274]

63 To incorporate place detection into our system, we use an iterative algorithm that re-estimates activities and places. [sent-141, score-0.469]

64 Before we describe this algorithm, let us first look at the features that are used to determine the types of significant places under the assumption that the locations and number of these places are known. [sent-142, score-0.679]

65 • The activities that occur at a place strongly indicate the type of the place. [sent-143, score-0.469]

66 Our features consider the frequencies of the different activities at a place. [sent-145, score-0.396]

67 This is done by generating a clique for each place that contains all activity nodes in its vicinity. [sent-146, score-0.6]

68 We add two additional summation cliques that count the number of homes and work places. [sent-150, score-0.39]

69 $a^*_0$ := BP_inference($\mathrm{CRF}_0$) // infer sequence of activities [sent-181, score-0.347]

70 $p^1_i, \ldots, p^K_i$ := generate_places($a^*_{i-1}$) // instantiate places [sent-187, score-0.315]

71 return $a^*_i$, $p^*_i$. Table 1: Algorithm for extracting and labeling activities and significant places. [sent-202, score-0.497]
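The control flow of these extracted Table 1 steps can be sketched as a short Python skeleton (all five callables are hypothetical placeholders for the paper’s components — segmentation, CRF construction, BP inference, and place generation — not implementations of them):

```python
def extract_and_label(gps_trace, segment, build_crf, bp_infer, generate_places):
    """Skeleton of the Table 1 loop: infer activities without place nodes
    first, instantiate places from the significant activities, and then
    alternate inference and place generation until the activity sequence
    stops changing.  All five callables are placeholders."""
    evidence = segment(gps_trace)                        # Steps 2-3
    a_star = bp_infer(build_crf(evidence, places=None))  # Step 4: CRF_0
    while True:
        places = generate_places(a_star)                 # instantiate places
        a_new = bp_infer(build_crf(evidence, places=places))
        if a_new == a_star:                              # structure is stable
            return a_new, places
        a_star = a_new

# Toy stand-ins just to exercise the control flow (not real components).
activities, places = extract_and_label(
    gps_trace=[1, 2],
    segment=lambda trace: trace,
    build_crf=lambda ev, places: (ev, places),
    bp_infer=lambda crf: ['work' if crf[1] else 'unknown' for _ in crf[0]],
    generate_places=lambda a: ['place_1'],
)
assert activities == ['work', 'work'] and places == ['place_1']
```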

72 Note that the above two types of aggregation features can generate large cliques in the CRF, which could make standard inference intractable. [sent-203, score-0.369]

73 In our inference, we use the optimized summation templates discussed in Section 2. [sent-204, score-0.268]

74 2 Place Detection and Labeling Algorithm Table 1 summarizes our algorithm for efficiently constructing a CRF that jointly estimates a person’s activities and the types of his significant places. [sent-207, score-0.347]

75 In Steps 2 and 3, this trace is segmented into activities $a_i$ and their local evidence $e_i^j$, which are then used to generate $\mathrm{CRF}_0$ without significant places. [sent-209, score-0.419]

76 BP inference is first performed in this CRF so as to determine the activity estimate $a^*_0$, which consists of a sequence of locations and the most likely activity performed at each location (Step 4). [sent-210, score-0.589]

77 This is done by classifying individual activities in the sequence according to whether or not they belong to a significant place. [sent-212, score-0.347]

78 All instances at which a significant activity occurs generate a place node. [sent-214, score-0.365]

79 Because a place can be visited multiple times within a sequence, we perform clustering and merge duplicate places into the same place node. [sent-215, score-0.518]
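A minimal greedy sketch of that merge step (purely illustrative — the paper does not specify this particular clustering; `merge_places` and the radius are our assumptions):

```python
import math

def merge_places(locations, radius):
    """Greedy merge: each significant-activity location joins the first
    existing place whose running centroid is within `radius`; otherwise
    it starts a new place.  Returns the merged place centroids."""
    places = []                            # each entry: [sum_x, sum_y, count]
    for x, y in locations:
        for p in places:
            cx, cy = p[0] / p[2], p[1] / p[2]
            if math.hypot(x - cx, y - cy) <= radius:
                p[0] += x; p[1] += y; p[2] += 1
                break
        else:
            places.append([x, y, 1])
    return [(p[0] / p[2], p[1] / p[2]) for p in places]

# Two visits to (roughly) the same spot collapse into one place node.
merged = merge_places([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)], radius=0.5)
assert len(merged) == 2
```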

80 These places are added to the model and BP is performed in this complete CRF. [sent-217, score-0.274]

81 Since a CRFi can have a different structure than the previous CRFi−1 , it might generate a different activity sequence. [sent-218, score-0.243]

82 If this is the case, the algorithm returns to Step 5 and re-generates the set of places using the improved activity sequence. [sent-219, score-0.476]

83 Extracting significant places We compare our model with a widely-used approach that uses a time threshold to determine whether or not a location is significant [1, 6, 8, 3]. [sent-226, score-0.336]

84 Varying the time threshold from 1 minute to 10 minutes, we measure the false positive and false negative locations extracted from the GPS traces. [sent-230, score-0.281]

85 As shown in Fig. 2(a), any fixed threshold is unsatisfactory: low thresholds yield many false positives, and high thresholds result in many false negatives. [sent-232, score-0.26]

86 In contrast, our model performs much better: it generates only 4 false positives and 3 false negatives. [sent-233, score-0.209]

87 Labeling places and activities In our system the labels of activities generate instances of places, which then help to better estimate the activities occurring in their spatial area. [sent-235, score-1.386]

88 The results are given with and without taking the detected places into account. [sent-237, score-0.274]

89 More specifically, without places are results achieved by CRF0 generated by Step 4 of the algorithm in Table 1, and results with places are those achieved after model convergence. [sent-238, score-0.548]

90 Second, performing joint inference over activities and places increases the quality of inference. [sent-242, score-0.688]

91 The reason for this is that a place node connects all the activities occurring in its spatial area so that these activities can be labeled in a more consistent way. [sent-243, score-0.925]

92 A further evaluation of the detected places showed that our system achieved 90. [sent-244, score-0.274]

93 6% accuracy in place detection and labeling (see [7] for more results). [sent-245, score-0.279]

94 Efficiency of inference We compared our optimized BP algorithm using FFT summation cliques with inference based on MCMC and regular BP, using the model and data from [8]. [sent-246, score-0.459]

95 As can be seen, naive BP becomes extremely slow with only 20 nodes and MCMC works only for up to 500 nodes, while our algorithm can perform summation over 2,000 variables within a few minutes. [sent-252, score-0.206]

96 Furthermore, once these locations are determined, they help to better detect low-level activities occurring in their vicinity. [sent-256, score-0.459]

97 Summation cliques are extremely important to introduce long-term, soft constraints into activity recognition. [sent-257, score-0.345]

98 We show how to incorporate such cliques into belief propagation using bi-directional FFT computations. [sent-258, score-0.22]

99 The clique templates of RMNs are well suited to specify such clique-specific inference mechanisms and we are developing additional techniques, including clique-specific MCMC and local dynamic programming. [sent-259, score-0.331]

100 We demonstrate that the model can be trained from a group of persons and then applied successfully to a different person, achieving more than 85% accuracy in determining low-level activities and above 90% accuracy in detecting and labeling significant places. [sent-261, score-0.587]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('activities', 0.347), ('places', 0.274), ('crf', 0.26), ('sij', 0.209), ('activity', 0.202), ('summation', 0.182), ('clique', 0.178), ('gps', 0.169), ('bp', 0.158), ('myj', 0.149), ('cliques', 0.143), ('yij', 0.136), ('fft', 0.13), ('yj', 0.127), ('messages', 0.124), ('relational', 0.123), ('message', 0.122), ('person', 0.122), ('labeling', 0.122), ('place', 0.122), ('msij', 0.111), ('rmn', 0.111), ('rmns', 0.111), ('cant', 0.108), ('yi', 0.106), ('nodes', 0.098), ('user', 0.094), ('template', 0.091), ('ee', 0.088), ('templates', 0.086), ('false', 0.085), ('bus', 0.082), ('downward', 0.082), ('locations', 0.082), ('fourier', 0.082), ('transportation', 0.081), ('upward', 0.081), ('crfi', 0.074), ('myi', 0.074), ('myij', 0.074), ('aggregate', 0.074), ('aggregation', 0.069), ('inference', 0.067), ('homes', 0.065), ('signi', 0.061), ('fox', 0.059), ('liao', 0.059), ('wc', 0.059), ('day', 0.059), ('mcmc', 0.055), ('vc', 0.055), ('node', 0.053), ('belief', 0.05), ('features', 0.049), ('transform', 0.049), ('fc', 0.048), ('persons', 0.048), ('instantiate', 0.048), ('crfs', 0.048), ('traces', 0.047), ('home', 0.044), ('convolution', 0.042), ('auxiliary', 0.042), ('generate', 0.041), ('positives', 0.039), ('readings', 0.039), ('tree', 0.038), ('addends', 0.037), ('friend', 0.037), ('leisure', 0.037), ('pickup', 0.037), ('went', 0.037), ('ysum', 0.037), ('car', 0.037), ('location', 0.036), ('accuracy', 0.035), ('markov', 0.034), ('pk', 0.034), ('thresholds', 0.032), ('geographic', 0.032), ('sleep', 0.032), ('sjk', 0.032), ('trace', 0.031), ('children', 0.031), ('occurring', 0.03), ('recognize', 0.03), ('week', 0.029), ('minute', 0.029), ('concise', 0.029), ('mb', 0.029), ('taskar', 0.029), ('yjk', 0.029), ('segment', 0.028), ('extracting', 0.028), ('label', 0.028), ('propagation', 0.027), ('threshold', 0.026), ('connects', 0.026), ('restaurant', 0.026), ('naive', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 121 nips-2005-Location-based activity recognition

Author: Lin Liao, Dieter Fox, Henry Kautz

Abstract: Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract and label a person’s activities and significant places from traces of GPS data. In contrast to existing techniques, our approach simultaneously detects and classifies the significant locations of a person and takes the highlevel context into account. Our system uses relational Markov networks to represent the hierarchical activity model that encodes the complex relations among GPS readings, activities and significant places. We apply FFT-based message passing to perform efficient summation over large numbers of nodes in the networks. We present experiments that show significant improvements over existing techniques. 1

2 0.10979994 46 nips-2005-Consensus Propagation

Author: Benjamin V. Roy, Ciamac C. Moallemi

Abstract: We propose consensus propagation, an asynchronous distributed protocol for averaging numbers across a network. We establish convergence, characterize the convergence rate for regular graphs, and demonstrate that the protocol exhibits better scaling properties than pairwise averaging, an alternative that has received much recent attention. Consensus propagation can be viewed as a special case of belief propagation, and our results contribute to the belief propagation literature. In particular, beyond singly-connected graphs, there are very few classes of relevant problems for which belief propagation is known to converge. 1

3 0.099007986 96 nips-2005-Inference with Minimal Communication: a Decision-Theoretic Variational Approach

Author: O. P. Kreidl, Alan S. Willsky

Abstract: Given a directed graphical model with binary-valued hidden nodes and real-valued noisy observations, consider deciding upon the maximum a-posteriori (MAP) or the maximum posterior-marginal (MPM) assignment under the restriction that each node broadcasts only to its children exactly one single-bit message. We present a variational formulation, viewing the processing rules local to all nodes as degrees-of-freedom, that minimizes the loss in expected (MAP or MPM) performance subject to such online communication constraints. The approach leads to a novel message-passing algorithm to be executed offline, or before observations are realized, which mitigates the performance loss by iteratively coupling all rules in a manner implicitly driven by global statistics. We also provide (i) illustrative examples, (ii) assumptions that guarantee convergence and efficiency and (iii) connections to active research areas. 1

4 0.096585982 125 nips-2005-Message passing for task redistribution on sparse graphs

Author: K. Wong, Zhuo Gao, David Tax

Abstract: The problem of resource allocation in sparse graphs with real variables is studied using methods of statistical physics. An efficient distributed algorithm is devised on the basis of insight gained from the analysis and is examined using numerical simulations, showing excellent performance and full agreement with the theoretical results.

5 0.086989082 184 nips-2005-Structured Prediction via the Extragradient Method

Author: Ben Taskar, Simon Lacoste-Julian, Michael I. Jordan

Abstract: We present a simple and scalable algorithm for large-margin estimation of structured models, including an important class of Markov networks and combinatorial models. We formulate the estimation problem as a convex-concave saddle-point problem and apply the extragradient method, yielding an algorithm with linear convergence using simple gradient and projection calculations. The projection step can be solved using combinatorial algorithms for min-cost quadratic flow. This makes the approach an efficient alternative to formulations based on reductions to a quadratic program (QP). We present experiments on two very different structured prediction tasks: 3D image segmentation and word alignment, illustrating the favorable scaling properties of our algorithm. 1

6 0.084142514 70 nips-2005-Fast Information Value for Graphical Models

7 0.081774928 200 nips-2005-Variable KD-Tree Algorithms for Spatial Pattern Search and Discovery

8 0.070906222 171 nips-2005-Searching for Character Models

9 0.068612292 141 nips-2005-Norepinephrine and Neural Interrupts

10 0.061002966 129 nips-2005-Modeling Neural Population Spiking Activity with Gibbs Distributions

11 0.059699398 131 nips-2005-Multiple Instance Boosting for Object Detection

12 0.058876272 92 nips-2005-Hyperparameter and Kernel Learning for Graph Based Semi-Supervised Classification

13 0.058272876 189 nips-2005-Tensor Subspace Analysis

14 0.057196438 11 nips-2005-A Hierarchical Compositional System for Rapid Object Detection

15 0.056072447 204 nips-2005-Walk-Sum Interpretation and Analysis of Gaussian Belief Propagation

16 0.055171546 175 nips-2005-Sequence and Tree Kernels with Statistical Feature Mining

17 0.055008616 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity

18 0.054668687 123 nips-2005-Maximum Margin Semi-Supervised Learning for Structured Variables

19 0.052432682 45 nips-2005-Conditional Visual Tracking in Kernel Space

20 0.051370718 65 nips-2005-Estimating the wrong Markov random field: Benefits in the computation-limited setting


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.186), (1, 0.007), (2, 0.006), (3, 0.05), (4, -0.037), (5, -0.02), (6, -0.102), (7, 0.17), (8, 0.069), (9, 0.143), (10, 0.047), (11, -0.108), (12, -0.04), (13, -0.016), (14, 0.023), (15, 0.013), (16, -0.01), (17, 0.02), (18, 0.038), (19, -0.027), (20, -0.045), (21, 0.05), (22, 0.065), (23, 0.05), (24, 0.04), (25, -0.034), (26, -0.045), (27, 0.086), (28, -0.035), (29, -0.114), (30, -0.067), (31, -0.074), (32, 0.001), (33, 0.043), (34, -0.04), (35, -0.053), (36, 0.034), (37, 0.022), (38, 0.061), (39, 0.074), (40, 0.007), (41, -0.14), (42, -0.073), (43, -0.072), (44, 0.098), (45, 0.021), (46, 0.138), (47, -0.007), (48, 0.099), (49, -0.012)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95359397 121 nips-2005-Location-based activity recognition

Author: Lin Liao, Dieter Fox, Henry Kautz

Abstract: Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract and label a person’s activities and significant places from traces of GPS data. In contrast to existing techniques, our approach simultaneously detects and classifies the significant locations of a person and takes the highlevel context into account. Our system uses relational Markov networks to represent the hierarchical activity model that encodes the complex relations among GPS readings, activities and significant places. We apply FFT-based message passing to perform efficient summation over large numbers of nodes in the networks. We present experiments that show significant improvements over existing techniques. 1

2 0.68049961 46 nips-2005-Consensus Propagation

Author: Benjamin V. Roy, Ciamac C. Moallemi

Abstract: We propose consensus propagation, an asynchronous distributed protocol for averaging numbers across a network. We establish convergence, characterize the convergence rate for regular graphs, and demonstrate that the protocol exhibits better scaling properties than pairwise averaging, an alternative that has received much recent attention. Consensus propagation can be viewed as a special case of belief propagation, and our results contribute to the belief propagation literature. In particular, beyond singly-connected graphs, there are very few classes of relevant problems for which belief propagation is known to converge. 1

3 0.65779823 96 nips-2005-Inference with Minimal Communication: a Decision-Theoretic Variational Approach

Author: O. P. Kreidl, Alan S. Willsky

Abstract: Given a directed graphical model with binary-valued hidden nodes and real-valued noisy observations, consider deciding upon the maximum a-posteriori (MAP) or the maximum posterior-marginal (MPM) assignment under the restriction that each node broadcasts only to its children exactly one single-bit message. We present a variational formulation, viewing the processing rules local to all nodes as degrees-of-freedom, that minimizes the loss in expected (MAP or MPM) performance subject to such online communication constraints. The approach leads to a novel message-passing algorithm to be executed offline, or before observations are realized, which mitigates the performance loss by iteratively coupling all rules in a manner implicitly driven by global statistics. We also provide (i) illustrative examples, (ii) assumptions that guarantee convergence and efficiency and (iii) connections to active research areas. 1

4 0.55300617 204 nips-2005-Walk-Sum Interpretation and Analysis of Gaussian Belief Propagation

Author: Dmitry Malioutov, Alan S. Willsky, Jason K. Johnson

Abstract: This paper presents a new framework based on walks in a graph for analysis and inference in Gaussian graphical models. The key idea is to decompose correlations between variables as a sum over all walks between those variables in the graph. The weight of each walk is given by a product of edgewise partial correlations. We provide a walk-sum interpretation of Gaussian belief propagation in trees and of the approximate method of loopy belief propagation in graphs with cycles. This perspective leads to a better understanding of Gaussian belief propagation and of its convergence in loopy graphs. 1

5 0.54129767 125 nips-2005-Message passing for task redistribution on sparse graphs

Author: K. Wong, Zhuo Gao, David Tax

Abstract: The problem of resource allocation in sparse graphs with real variables is studied using methods of statistical physics. An efficient distributed algorithm is devised on the basis of insight gained from the analysis and is examined using numerical simulations, showing excellent performance and full agreement with the theoretical results.

6 0.48714599 70 nips-2005-Fast Information Value for Graphical Models

7 0.47273114 184 nips-2005-Structured Prediction via the Extragradient Method

8 0.45861709 127 nips-2005-Mixture Modeling by Affinity Propagation

9 0.41914573 171 nips-2005-Searching for Character Models

10 0.40804365 11 nips-2005-A Hierarchical Compositional System for Rapid Object Detection

11 0.3928397 200 nips-2005-Variable KD-Tree Algorithms for Spatial Pattern Search and Discovery

12 0.35815653 104 nips-2005-Laplacian Score for Feature Selection

13 0.35762298 163 nips-2005-Recovery of Jointly Sparse Signals from Few Random Projections

14 0.33074799 174 nips-2005-Separation of Music Signals by Harmonic Structure Modeling

15 0.32738554 189 nips-2005-Tensor Subspace Analysis

16 0.30808014 92 nips-2005-Hyperparameter and Kernel Learning for Graph Based Semi-Supervised Classification

17 0.30192134 43 nips-2005-Comparing the Effects of Different Weight Distributions on Finding Sparse Representations

18 0.29645395 45 nips-2005-Conditional Visual Tracking in Kernel Space

19 0.29615688 68 nips-2005-Factorial Switching Kalman Filters for Condition Monitoring in Neonatal Intensive Care

20 0.29174453 162 nips-2005-Rate Distortion Codes in Sensor Networks: A System-level Analysis


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.062), (10, 0.04), (27, 0.042), (31, 0.062), (34, 0.076), (35, 0.324), (39, 0.027), (55, 0.06), (57, 0.03), (69, 0.055), (73, 0.05), (88, 0.062), (91, 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.79730558 121 nips-2005-Location-based activity recognition

Author: Lin Liao, Dieter Fox, Henry Kautz

Abstract: Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract and label a person’s activities and significant places from traces of GPS data. In contrast to existing techniques, our approach simultaneously detects and classifies the significant locations of a person and takes the highlevel context into account. Our system uses relational Markov networks to represent the hierarchical activity model that encodes the complex relations among GPS readings, activities and significant places. We apply FFT-based message passing to perform efficient summation over large numbers of nodes in the networks. We present experiments that show significant improvements over existing techniques. 1

2 0.72504002 142 nips-2005-Oblivious Equilibrium: A Mean Field Approximation for Large-Scale Dynamic Games

Author: Gabriel Y. Weintraub, Lanier Benkard, Benjamin Van Roy

Abstract: We propose a mean-field approximation that dramatically reduces the computational complexity of solving stochastic dynamic games. We provide conditions that guarantee our method approximates an equilibrium as the number of agents grows. We then derive a performance bound to assess how well the approximation performs for any given number of agents. We apply our method to an important class of problems in applied microeconomics. We show with numerical experiments that we are able to greatly expand the set of economic problems that can be analyzed computationally.

3 0.68302137 90 nips-2005-Hot Coupling: A Particle Approach to Inference and Normalization on Pairwise Undirected Graphs

Author: Firas Hamze, Nando de Freitas

Abstract: This paper presents a new sampling algorithm for approximating functions of variables representable as undirected graphical models of arbitrary connectivity with pairwise potentials, as well as for estimating the notoriously difficult partition function of the graph. The algorithm fits into the framework of sequential Monte Carlo methods rather than the more widely used MCMC, and relies on constructing a sequence of intermediate distributions which get closer to the desired one. While the idea of using “tempered” proposals is known, we construct a novel sequence of target distributions where, rather than dropping a global temperature parameter, we sequentially couple individual pairs of variables that are, initially, sampled exactly from a spanning tree of the variables. We present experimental results on inference and estimation of the partition function for sparse and densely-connected graphs.
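Sequential Monte Carlo estimators of the partition function, as in this abstract, move samples through a sequence of intermediate distributions while accumulating importance weights. The sketch below uses the simplest such sequence, a temperature ladder (annealed importance sampling), rather than the paper's variable-by-variable coupling scheme; the `energy` function and brute-force state list are toy stand-ins:

```python
import math, random

def ais_log_z(energy, states, n_temps=50, n_runs=200, seed=0):
    """Annealed-importance-sampling estimate of log Z for a discrete
    model p(x) ~ exp(-E(x)), via tempered intermediates
    p_k(x) ~ exp(-beta_k * E(x))."""
    rng = random.Random(seed)
    betas = [k / n_temps for k in range(n_temps + 1)]
    log_weights = []
    for _ in range(n_runs):
        x = rng.choice(states)           # exact sample at beta = 0 (uniform)
        logw = math.log(len(states))     # Z_0 = |states| for the uniform base
        for b0, b1 in zip(betas, betas[1:]):
            logw += -(b1 - b0) * energy(x)       # weight update for this rung
            # One Metropolis step targeting p_{b1} (uniform proposal).
            y = rng.choice(states)
            if math.log(rng.random() + 1e-300) < -b1 * (energy(y) - energy(x)):
                x = y
        log_weights.append(logw)
    m = max(log_weights)                 # log-mean-exp of the weights
    return m + math.log(sum(math.exp(w - m) for w in log_weights) / n_runs)

# Two coupled binary variables with E = -1 when they agree, 0 otherwise;
# the true partition function is 2e + 2, so true log Z ~= 2.0066.
est = ais_log_z(lambda s: -1.0 if s[0] == s[1] else 0.0,
                [(a, b) for a in (0, 1) for b in (0, 1)])
```

The paper's point is that coupling pairs of variables sequentially, starting from an exactly sampled spanning tree, yields intermediate distributions that stay closer together than a single global temperature sweep like the one above.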

4 0.44324777 133 nips-2005-Nested sampling for Potts models

Author: Iain Murray, David MacKay, Zoubin Ghahramani, John Skilling

Abstract: Nested sampling is a new Monte Carlo method by Skilling [1] intended for general Bayesian computation. Nested sampling provides a robust alternative to annealing-based methods for computing normalizing constants. It can also generate estimates of other quantities such as posterior expectations. The key technical requirement is an ability to draw samples uniformly from the prior subject to a constraint on the likelihood. We provide a demonstration with the Potts model, an undirected graphical model.
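The "key technical requirement" named in the abstract, drawing from the prior subject to a likelihood constraint, is what the inner loop below does by naive rejection. A toy one-dimensional sketch (uniform prior on [0, 1], likelihood L(t) = t, so the true evidence is 1/2), using Skilling's deterministic prior-mass shrinkage X_i = exp(-i/N); the function names are illustrative:

```python
import math, random

def nested_sampling(likelihood, n_live=200, n_iter=500, seed=3):
    """Toy nested-sampling estimate of the evidence Z = E_prior[L]."""
    rng = random.Random(seed)
    live = [rng.random() for _ in range(n_live)]   # uniform prior on [0, 1]
    Z, X_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(live, key=likelihood)
        L_worst = likelihood(worst)
        X = math.exp(-i / n_live)        # deterministic prior-mass shrinkage
        Z += L_worst * (X_prev - X)      # this shell's contribution
        X_prev = X
        # Replace the worst point by a prior draw subject to L > L_worst
        # (naive rejection; real implementations use constrained MCMC).
        theta = rng.random()
        while likelihood(theta) <= L_worst:
            theta = rng.random()
        live[live.index(worst)] = theta
    # Remaining prior mass, approximated by the surviving live points.
    Z += X_prev * sum(likelihood(t) for t in live) / n_live
    return Z

Z = nested_sampling(lambda t: t)   # true evidence: integral of t on [0,1] = 1/2
```

Rejection sampling is only viable here because the toy prior is trivial; for the Potts model the constrained draw is the hard part the paper addresses.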

5 0.43965703 72 nips-2005-Fast Online Policy Gradient Learning with SMD Gain Vector Adaptation

Author: Jin Yu, Douglas Aberdeen, Nicol N. Schraudolph

Abstract: Reinforcement learning by direct policy gradient estimation is attractive in theory but in practice leads to notoriously ill-behaved optimization problems. We improve its robustness and speed of convergence with stochastic meta-descent, a gain vector adaptation method that employs fast Hessian-vector products. In our experiments the resulting algorithms outperform previously employed online stochastic, offline conjugate, and natural policy gradient methods.
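The "fast Hessian-vector products" that stochastic meta-descent relies on are computed exactly via the R-operator in an autodiff framework; the sketch below substitutes the cheaper finite-difference version, Hv ≈ (∇f(w + εv) − ∇f(w))/ε, which likewise costs only two gradient evaluations and never forms the Hessian (the names and the quadratic test function are illustrative, not from the paper):

```python
def hessian_vector_product(grad, w, v, eps=1e-6):
    """Approximate H v as a finite difference of gradients:
    (grad(w + eps*v) - grad(w)) / eps.  Two gradient calls, no Hessian."""
    g0 = grad(w)
    g1 = grad([wi + eps * vi for wi, vi in zip(w, v)])
    return [(a - b) / eps for a, b in zip(g1, g0)]

# Quadratic f(w) = 0.5 * w^T A w, so grad f = A w and the Hessian is A.
A = [[2.0, 1.0], [1.0, 3.0]]
grad = lambda w: [sum(A[i][j] * w[j] for j in range(2)) for i in range(2)]
hv = hessian_vector_product(grad, [1.0, -1.0], [1.0, 0.0])
# For a quadratic the product is exact up to rounding: H v = A v = [2.0, 1.0]
```

SMD multiplies such products into a per-parameter gain vector, so curvature information steers the step sizes at essentially the cost of ordinary stochastic gradient descent.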

6 0.43850291 132 nips-2005-Nearest Neighbor Based Feature Selection for Regression and its Application to Neural Activity

7 0.43844721 154 nips-2005-Preconditioner Approximations for Probabilistic Graphical Models

8 0.43157351 144 nips-2005-Off-policy Learning with Options and Recognizers

9 0.43110844 96 nips-2005-Inference with Minimal Communication: a Decision-Theoretic Variational Approach

10 0.42964885 177 nips-2005-Size Regularized Cut for Data Clustering

11 0.42663217 184 nips-2005-Structured Prediction via the Extragradient Method

12 0.42524096 200 nips-2005-Variable KD-Tree Algorithms for Spatial Pattern Search and Discovery

13 0.4249405 78 nips-2005-From Weighted Classification to Policy Search

14 0.42486286 102 nips-2005-Kernelized Infomax Clustering

15 0.42098525 9 nips-2005-A Domain Decomposition Method for Fast Manifold Learning

16 0.4198643 153 nips-2005-Policy-Gradient Methods for Planning

17 0.4197199 178 nips-2005-Soft Clustering on Graphs

18 0.41904348 74 nips-2005-Faster Rates in Regression via Active Learning

19 0.41889054 47 nips-2005-Consistency of one-class SVM and related algorithms

20 0.41876003 43 nips-2005-Comparing the Effects of Different Weight Distributions on Finding Sparse Representations