nips nips2009 nips2009-150 knowledge-graph by maker-knowledge-mining

150 nips-2009-Maximum likelihood trajectories for continuous-time Markov chains


Source: pdf

Author: Theodore J. Perkins

Abstract: Continuous-time Markov chains are used to model systems in which transitions between states as well as the time the system spends in each state are random. Many computational problems related to such chains have been solved, including determining state distributions as a function of time, parameter estimation, and control. However, the problem of inferring most likely trajectories, where a trajectory is a sequence of states as well as the amount of time spent in each state, appears unsolved. We study three versions of this problem: (i) an initial value problem, in which an initial state is given and we seek the most likely trajectory until a given final time, (ii) a boundary value problem, in which initial and final states and times are given, and we seek the most likely trajectory connecting them, and (iii) trajectory inference under partial observability, analogous to finding maximum likelihood trajectories for hidden Markov models. We show that maximum likelihood trajectories are not always well-defined, and describe a polynomial time test for well-definedness. When well-definedness holds, we show that each of the three problems can be solved in polynomial time, and we develop efficient dynamic programming algorithms for doing so.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Continuous-time Markov chains are used to model systems in which transitions between states as well as the time the system spends in each state are random. [sent-3, score-0.647]

2 However, the problem of inferring most likely trajectories, where a trajectory is a sequence of states as well as the amount of time spent in each state, appears unsolved. [sent-5, score-0.68]

3 A continuous-time Markov chain (CTMC) is a model of a dynamical system which, upon entering some state, remains in that state for a random real-valued amount of time (called the dwell time or occupancy time) and then transitions randomly to a new state. [sent-9, score-1.332]

4 Or, the state may correspond to the numbers of different types of molecules in an interacting system, and transitions are the result of chemical reactions between molecules [2]. [sent-14, score-0.49]

5 Estimating the parameters of a CTMC from fully observed data involves estimating state transition probabilities, just as for DTMCs, but adds estimation of the state dwell time distributions. [sent-19, score-1.245]

6 When the state of a CTMC is observed periodically through time, but some transitions between observation times may go unseen, the parameter estimation problem can also be solved through embedding techniques [8]. [sent-21, score-0.49]

7 In scenarios such as manufacturing or robot navigation, one may assume that the state transitions or dwell times are under at least partial control. [sent-22, score-1.15]

8 When control choices are made once for each state entered, dynamic programming and related methods can be used to develop optimal control strategies [9]. [sent-23, score-0.361]

9 Another fundamental and well-studied problem for CTMCs is to compute, given an initial state and time, the state distribution or most likely state at a later time. [sent-25, score-0.825]

10 These specify a system of ordinary differential equations that describe how the probabilities of being in each state change over time. [sent-28, score-0.349]

11 The process obtained by choosing the number of transitions, and then producing a trajectory with that many transitions from the DTMC, has the same state distribution as the original CTMC. [sent-31, score-0.766]

12 These approaches act by simply simulating trajectories from the CTMC to produce empirical, numerical estimates of state distributions or other features of the dynamics. [sent-34, score-0.354]

13 Definitions: A CTMC is defined by four things: (i) a finite state set S, (ii) initial state probabilities Ps for s ∈ S, (iii) state transition probabilities Pss′ for s, s′ ∈ S, and (iv) state dwell time parameters λs for each s ∈ S. [sent-38, score-1.767]

14 Let St ∈ S denote the state of the system at time t ∈ [0, +∞). [sent-39, score-0.389]

15 The rules for the evolution of the system are that it starts in state S0 , which is chosen according to the distribution Ps . [sent-40, score-0.346]

16 At any time t, when the system is in state St = s, the system stays in state s for a random amount of time that is exponentially distributed with parameter λs . [sent-41, score-0.796]

17 When the system finally leaves state s, the next state of the system is s′ ≠ s with probability Pss′ . [sent-42, score-0.652]

18 A trajectory of the CTMC is a sequence of states along with the dwell times in all but the last state: U = (s0 , t0 , s1 , t1 , . . . , tk−1 , sk ). [sent-43, score-1.487]

19 The meaning of this trajectory is that the system started in state s0 , where it stayed for time t0 , then transitioned to state s1 , where it stayed for time t1 , and so on. [sent-47, score-1.114]

20 Eventually, the system reaches state sk , where it remains. [sent-48, score-0.475]

21 Let Ut = (s0 , t0 , . . . , skt −1 , tkt −1 , skt ) be a random variable describing the trajectory of the system up until time t. [sent-52, score-0.661]

22 In particular, this means that there are kt state transitions up until time t (where kt is itself a random variable), the system enters state skt sometime at or before time t, and remains in state skt until sometime after time t. [sent-53, score-1.364]
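
The generative process in items 13–22 is easy to make concrete. Below is a minimal sampling sketch (not from the paper; the function name and dict-based parameterization are my own) that produces a trajectory Ut in the list form used above:

```python
import random

def sample_trajectory(P0, P, lam, t_end, seed=0):
    """Sample Ut = [s0, t0, s1, t1, ..., s_kt] up to time t_end.

    P0:  dict state -> initial probability Ps
    P:   dict state -> dict of next-state probabilities Pss'
    lam: dict state -> dwell rate lambda_s (dwell ~ Exponential(lambda_s))
    """
    rng = random.Random(seed)
    s = rng.choices(list(P0), weights=list(P0.values()))[0]
    traj, elapsed = [], 0.0
    while True:
        dwell = rng.expovariate(lam[s])
        if elapsed + dwell >= t_end:          # still dwelling in s at time t_end
            traj.append(s)                    # last state carries no dwell time
            return traj
        traj += [s, dwell]
        elapsed += dwell
        s = rng.choices(list(P[s]), weights=list(P[s].values()))[0]
```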

23 Otherwise, the terms inside the first parentheses account for the likelihood of the dwell times and the state transitions in the sequence, and the term inside the second parentheses accounts for the probability that the dwell time in the final state does not complete before time t. [sent-55, score-2.343]
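
Item 23 describes the two factors of Equation (1); a direct transcription of that description (a sketch, reusing the list form from the sampler above) is:

```python
import math

def trajectory_likelihood(traj, P, lam, t):
    """l(Ut = U | S0) for U = [s0, t0, ..., t_{k-1}, s_k]:
    dwell densities and transition probabilities (the "first parentheses"),
    times the probability the final dwell outlasts t (the "second parentheses")."""
    states, dwells = traj[0::2], traj[1::2]
    like = 1.0
    for s, d, s_next in zip(states, dwells, states[1:]):
        like *= lam[s] * math.exp(-lam[s] * d) * P[s][s_next]
    return like * math.exp(-lam[states[-1]] * (t - sum(dwells)))
```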

24 When the system is observed and it is in state s ∈ S, the observer sees observation o ∈ O with probability Pso . [sent-61, score-0.396]

25 Consider a trajectory of the system U = (s0 , t0 , s1 , t1 , . . . , tk−1 , sk ). [sent-67, score-0.476]

26 Let U(t) denote the state of the system at time t implied by that sequence. [sent-70, score-0.538]

27 The first step in this development is to show that we can analytically optimize the dwell times if we are given the state sequence. [sent-72, score-0.962]

28 Following that, we develop a dynamic program to find optimal state sequences, assuming that the dwell times are set to their optimal values relative to the state sequence. [sent-74, score-1.239]

29 Maximum likelihood dwell times: Consider a particular trajectory U = (s0 , t0 , s1 , t1 , . . . , tk−1 , sk ). [sent-76, score-1.203]

30 Let us assume that S0 = s0 , as we have no need to consider U starting from the wrong state, and let us maximize l(Ut = U|S0 ) with respect to the dwell times. [sent-81, score-0.682]

31 This is the set of all feasible dwell times for the states up until state sk . [sent-86, score-1.185]

32 That is, the system transitions instantaneously through the states s0 , s1 , . . . , sk−1 . [sent-99, score-0.311]

33 The system then dwells in state sk for (at least) time t. [sent-102, score-0.471]

34 Intuitively, this says that if state sj has the largest expected dwell time (corresponding to the smallest λ parameter), then the most likely setting of dwell times is obtained by assuming all of the time t is spent in state sj , and all other transitions happen instantaneously. [sent-105, score-2.296]

35 This is not unintuitive, although it is dissatisfying in the sense that the most likely set of dwell times is not typical of the chain's behavior. [sent-106, score-0.83]

36 Nevertheless, being able to solve explicitly for the most likely dwell times for a given state sequence makes it much easier to find the most likely Ut . [sent-109, score-1.205]
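
Items 31–36 determine the maximizing dwell times in closed form. A sketch of that rule (my paraphrase, with zero dwells standing in for the infinitesimal dwells of item 45):

```python
def ml_dwell_times(states, lam, t):
    """For a fixed state sequence (s0, ..., sk), place all of t in the state
    with the smallest rate (largest expected dwell); every other dwell is 0."""
    j = min(range(len(states)), key=lambda i: lam[states[i]])
    dwells = [0.0] * (len(states) - 1)
    if j < len(dwells):   # slowest state is interior, so it absorbs all of t
        dwells[j] = t     # otherwise all of t is spent in the final state sk
    return dwells
```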

37 Dynamic programming for the most likely state sequence: Substituting back our solution for the ti into Equation (1), and continuing our assumption that s0 = S0 , we obtain: max over (t0 , . . . , tk−1 ) ∈ Ttk of l(Ut = U|S0 ) equals Π_{i=0}^{k−1} λsi Psi si+1 e^{−λsk t} if λsk ≤ λsi for all 0 ≤ i < k, and Π_{i=0}^{k−1} λsi Psi si+1 e^{−(min_{i<k} λsi ) t} otherwise; in both cases this is Π_{i=0}^{k−1} λsi Psi si+1 e^{−(min_{0≤i≤k} λsi ) t} (9). [sent-112, score-0.565]

38 This leads to a dynamic program for finding the state sequence that maximizes the likelihood. [sent-115, score-0.326]

39 The main difference with a more typical scenario is that to score an extension we need to know not just the score and final state of the shorter path, but also the smallest dwell time parameter along that path. [sent-117, score-1.039]

40 Define Fk (s, λ) to be the maximum likelihood of any trajectory that makes k ∈ {0, 1, . . . } state transitions, ends at state sk = s, and for which the smallest dwell time parameter of any state along the trajectory is λ. [sent-121, score-2.006]

41 The fact that a trajectory ends at state s implies that the minimum dwell time parameter along the trajectory can be no greater than λs . [sent-125, score-1.778]

42 That is, the length k trajectory must already have a dwell time parameter of λ along it. [sent-129, score-1.147]

43 The next two terms account for the dwell in s and the transition probability to s′ . [sent-133, score-0.724]

44 The final term accounts for any difference between the smallest dwell time parameters along the k and k + 1 transition trajectories. [sent-134, score-0.829]
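
Items 37–44 describe the dynamic program only loosely, so the following is a hedged sketch of it: DP states are pairs (current state, smallest rate seen so far), path scores accumulate λsi Psi si+1, and the factor e^{−(min λ) t} from (9) is applied when a candidate answer is scored. The cap k_max on transitions is my stand-in for the paper's polynomial bound:

```python
import math

def most_likely_state_sequence(s0, P, lam, t, k_max):
    # (end state, min rate along path) -> (accumulated score, path so far)
    layer = {(s0, lam[s0]): (1.0, (s0,))}
    best = (math.exp(-lam[s0] * t), (s0,))   # baseline: stay in s0 the whole time
    for _ in range(k_max):
        nxt = {}
        for (s, m), (v, path) in layer.items():
            for s2, p in P[s].items():
                key = (s2, min(m, lam[s2]))
                w = v * lam[s] * p           # extend path: dwell factor * transition
                if w > nxt.get(key, (0.0, ()))[0]:
                    nxt[key] = (w, path + (s2,))
        layer = nxt
        for (s, m), (v, path) in layer.items():
            cand = v * math.exp(-m * t)      # apply e^{-(min lambda) t} once
            if cand > best[0]:
                best = (cand, path)
    return best[1], best[0]                  # (state sequence, max likelihood)
```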

45 If the reader is not comfortable with a dwell time exactly equal to zero, one may instead take ti = 0 as a shorthand for an infinitesimal but positive dwell time. [sent-135, score-1.568]

46 States are also labeled with their dwell time parameters. [sent-140, score-0.745]

47 Because the set of possible states, S, is finite, so is the set of possible dwell time parameters, λs for s ∈ S. [sent-141, score-0.745]

48 Such a value implies that the most likely state sequence ends at state s after k state transitions. [sent-145, score-0.856]

49 We can use a traceback to reconstitute the full sequence of states, and the result of the previous section to obtain the most likely dwell times. [sent-146, score-0.828]

50 First, suppose that we know the system is in state x at time zero and in state z at time t. [sent-150, score-0.681]

51 If we ignore the issue of dwell times and consider only the transition probabilities, then the path (x, y, z) seems more probable. [sent-152, score-0.813]

52 However, if we consider the dwell times as well, the story can change. [sent-154, score-0.733]

53 Note that λy = 1/10, so that the expected dwell time in state y is 10. [sent-156, score-0.974]

54 If the chain enters state y, the chance of it leaving y before time t = 1 is quite small. [sent-157, score-0.371]

55 It prefers to place all the dwell time t in state y, because that state is most likely to have a long dwell time. [sent-161, score-1.982]

56 However, the total score of this trajectory is still only 0. [sent-162, score-0.4]
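
A quick numeric check of items 50–56. Only λy = 1/10 is given in this summary; the transition probabilities below are assumed for illustration, chosen so that (x, y, z) wins on transition probabilities alone:

```python
import math

lam = {"x": 1.0, "y": 0.1, "z": 1.0}   # lambda_y = 1/10 as in item 53
Pxy, Pyz, Pxz = 2/3, 2/3, 1/3          # assumed values, not from the paper
t = 1.0

# Path (x, z): optimal dwells use min(lam_x, lam_z) = 1 in the exponent.
score_xz = lam["x"] * Pxz * math.exp(-min(lam["x"], lam["z"]) * t)
# Path (x, y, z): all of t placed in y, the slowest state (item 55).
score_xyz = lam["x"] * Pxy * lam["y"] * Pyz * math.exp(-lam["y"] * t)
print(score_xz, score_xyz)  # ~0.123 vs ~0.040: dwell likelihoods reverse the ranking
```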

57 Next, suppose that we know S0 = a and that we are interested in knowing the most likely trajectory out until time t, regardless of the final state of that trajectory. [sent-169, score-0.768]

58 There is only one possible state sequence containing k transitions for each k = 0, 1, 2, . . . [sent-171, score-0.418]

59 The likelihood of any such sequence turns out to be independent of the dwell times (assuming the dwell times total no more than time t): (Π_{i=0}^{k−1} λ e^{−λti }) e^{−λ(t − Σi ti )} = e^{−λt} λ^k (11). If λ < 1, this implies the optimal trajectory has the system remaining at state a. [sent-174, score-2.515]

60 Intuitively, because the likelihood of a dwell time can be greater than one, the likelihood of a trajectory can be increased by including short dwells in states with high dwell parameters λ. [sent-177, score-2.092]

61 If there is a cycle of states (s0 , s1 , . . . , sk = s0 ) such that Π_{i=0}^{k−1} Psi si+1 λsi > 1, then maximum likelihood trajectories do not exist. [sent-181, score-0.374]

62 Figure 2: Abstract example of a continuous-time trajectory of a chain, along with observations taken at fixed time intervals. [sent-182, score-0.631]

63 Rather, a sequence of trajectories with ever-increasing likelihood can be found starting from any state from which the cycle is reachable. [sent-183, score-0.455]
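
Item 61's cycle condition suggests the polynomial-time well-definedness test mentioned in the abstract: taking logs turns Π P λ > 1 into a positive-weight cycle, which Bellman-Ford-style relaxation detects. This is my construction from that condition, not necessarily the paper's algorithm:

```python
import math

def ml_trajectories_well_defined(P, lam):
    """True iff no cycle has prod(lam[s_i] * P[s_i][s_{i+1}]) > 1, i.e. no
    positive cycle under edge weight log(lam[s] * P[s][s'])."""
    states = list(lam)
    edges = [(s, s2, math.log(lam[s] * p))
             for s in states for s2, p in P.get(s, {}).items() if p > 0]
    dist = {s: 0.0 for s in states}   # every state is a potential cycle entry
    for _ in range(len(states) + 1):
        changed = False
        for s, s2, w in edges:
            if dist[s] + w > dist[s2] + 1e-12:
                dist[s2], changed = dist[s] + w, True
        if not changed:
            return True               # relaxation converged: no improving cycle
    return False                      # still improving after |S|+1 rounds: cycle exists
```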

64 Suppose we are given observations (o1 , τ1 , . . . , om , τm ) and want to find the most likely trajectory U. [sent-190, score-0.503]

65 The following can be straightforwardly generalized to allow the first observation to take place sometime after the trajectory begins. [sent-192, score-0.485]

66 We restrict attention to trajectories U = (s0 , t0 , . . . , tk−1 , sk ) where Σi ti ≤ τm , so that we do not concern ourselves with extrapolating the trajectory beyond the final observation time. [sent-196, score-0.655]

67 The only differences between the second parentheses and Equation (1) are that we now include the probability of starting in state s0 , and we have implicitly assumed that Σi ti ≤ τm , as mentioned above. [sent-198, score-0.333]

68 Decomposing trajectory likelihood by observation intervals: For simplicity, let us further restrict attention to trajectories U that do not include a transition into a state si precisely at any observation time τj . [sent-202, score-1.19]

69 The likelihood of the trajectory can be written in terms of the events in each observation interval. [sent-204, score-0.558]

70 For example, consider the trajectory and observations depicted in Figure 2. [sent-205, score-0.454]

71 In the first interval, the system starts in state s0 and transitions to s1 , where it stays until time τ2 . [sent-206, score-0.547]

72 In the second observation interval, the system never leaves state s1 . [sent-208, score-0.396]

73 Finally, in the third interval, the system continues in state s1 before transitioning to state s2 and then s3 , where it remains until the final observation. [sent-210, score-0.582]

74 Let Ui = (si0 , ti0 , . . . , siki ) denote the sequence of states and dwell times of trajectory U during the time interval [τi , τi+1 ). [sent-216, score-1.363]

75 The first dwell time ti0 , if any, is measured with respect to the start of the time interval. [sent-217, score-0.808]

76 The component of the likelihood of the whole trajectory U attributable to the ith time interval is nothing other than l(Uτi+1 −τi = Ui |S0 = si0 ). [sent-218, score-0.598]

77 Thus, the likelihood of the whole trajectory can be written as l(Uτm = U) = Ps0 Π_{i=1}^{m−1} l(Uτi+1 −τi = Ui |S0 = si0 ) (14). [sent-219, score-0.47]
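
Given the per-interval likelihood of item 76, Equation (14) is a one-liner. This sketch reuses the trajectory_likelihood function from the earlier code block and assumes each interval fragment Ui is in the same list form:

```python
def decomposed_likelihood(fragments, taus, P, lam, Ps):
    """Eq. (14): l(U_tau_m = U) = P_s0 * prod_i l(U over [tau_i, tau_{i+1}) | s_i0).
    fragments[i] is U_i for the interval [taus[i], taus[i+1])."""
    like = Ps[fragments[0][0]]
    for i, frag in enumerate(fragments):
        like *= trajectory_likelihood(frag, P, lam, taus[i + 1] - taus[i])
    return like
```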

78 The terms inside the product account for the likelihood of the ith interval of the trajectory, and the probability of the (i + 1)st observation, given the state at the end of the ith interval of the trajectory. [sent-221, score-0.45]

79 If U is to maximize the conditional likelihood, it had better be that the fragment of the trajectory between those two times, Ui , is a maximum likelihood trajectory from state U(τi ) to state U(τi+1 ) in time τi+1 − τi . [sent-224, score-1.42]

80 If it is not, then an alternative, higher likelihood trajectory fragment could be swapped into U, resulting in a higher conditional likelihood. [sent-225, score-0.493]

81 Let us define Ht (s, s′ ) = max_U l(Ut = U |S0 = s, St = s′ ) (16) to be the maximum achievable likelihood by any trajectory from state s to state s′ in time t. [sent-226, score-1.035]

82 Specifically, define Ji (s) to be the likelihood of the most likely trajectory covering the time interval [τ1 , τi ], accounting for the first i observations, and ending at state s. [sent-230, score-0.924]

83 We can then reconstruct the most likely trajectory by finding the s that maximizes Jm (s) and tracing back to the beginning. [sent-237, score-0.476]
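
Items 81–83 describe a Viterbi-style recursion over observation intervals. A sketch of (17)–(19) as read from this summary (the exact placement of the observation terms is my guess; H would itself come from the boundary value solver above):

```python
def most_likely_given_observations(obs, taus, states, Ps, Pobs, H):
    """J[i][s]: likelihood of the best trajectory through the first i+1
    observations ending at state s. H(dt, s, s2) = max_U l(U_dt = U | s, s2)."""
    J = [{s: Ps[s] * Pobs[s][obs[0]] for s in states}]
    back = []
    for i in range(1, len(obs)):
        dt, col, ptr = taus[i] - taus[i - 1], {}, {}
        for s2 in states:
            s_best = max(states, key=lambda s: J[-1][s] * H(dt, s, s2))
            col[s2] = J[-1][s_best] * H(dt, s_best, s2) * Pobs[s2][obs[i]]
            ptr[s2] = s_best
        J.append(col)
        back.append(ptr)
    end = max(states, key=lambda s: J[-1][s])
    path = [end]
    for ptr in reversed(back):      # traceback of states at observation times
        path.append(ptr[path[-1]])
    return path[::-1], J[-1][end]
```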

84 We assume that λa = λb = 1, that the system always starts in state x, and that when we observe the system, we get a real-valued Gaussian observation with standard deviation 1 and means 0, 10, 3, 100 and 100 for states x, y, z, a and b respectively. [sent-241, score-0.47]

85 First, if we assume the time interval between observations is t = 1, and we consider observations OA , then the most likely trajectory has the system in state x up through the 10th observation, after which it instantly transitions to state z and remains there. [sent-244, score-1.407]

86 This makes sense, as the lower observations at the start of the series are more likely in state x. [sent-245, score-0.38]

87 If we consider instead observations OB , which has a high observation at time t = 11, the procedure infers that the system was in state y at that time. [sent-246, score-0.513]

88 Moreover, it predicts that the system switches into y immediately after the 10th observation, and stays there until just before the 12th observation, taking advantage of the fact that longer dwell times are more likely in state y than in the other states. [sent-247, score-1.156]

89 If we consider observations OC , which have a spike at t = 5, the transit to state y is moved earlier, and state z is used to explain observations at t = 6 onward, even though the first few are relatively unlikely in that state. [sent-248, score-0.59]

90 If we return to observations OA , but assume that the time interval between observations is t = 2, then the most likely trajectory is different than it is for t = 1. [sent-253, score-0.712]

91 Although the same states are used to explain the observations, the most likely trajectory has the system transitioning from x to y immediately after the 10th observation and dwelling there until just before the 11th observation, where the state becomes z. [sent-254, score-0.973]

92 This is because, as explained previously, this is the more likely trajectory from x to z given t = 2. [sent-255, score-0.476]

93 If we assume the time interval between observations is t = 20, then a wider range of observations during the trajectory are attributed to state y. [sent-256, score-0.844]

94 Intuitively, this is because, although the observations are somewhat unlikely under state y, it is extremely unlikely for the system to dwell for so long in state z as to account for all of the observations from the 11th onward. [sent-257, score-1.393]

95 5 Discussion We have provided correct, efficient algorithms for inferring most likely trajectories of CTMCs given either initial or initial and final states of the chain, or given noisy/partial observations of the chain. [sent-258, score-0.414]

96 However, the time complexity of the calculations increases as the time step shrinks, which can be a problem if we are interested in long time intervals and/or there are states with very short expected dwell times, necessitating very small time steps. [sent-263, score-1.008]

97 A related problem on which we are working is to find the most probable state sequence of a continuous-time chain under similar informational assumptions. [sent-264, score-0.336]

98 By this, we mean that the dwell times, rather than being optimized, are marginalized out, so that we are left with only the sequence of states and not the particular times they occurred. [sent-265, score-0.856]

99 In many applications, this state sequence may be of greater interest than the dwell times—especially since, as we have shown, maximum likelihood dwell times are often infinitesimal and hence non-representative of typical system behavior. [sent-266, score-1.908]

100 Because state sequences have probabilities rather than likelihoods, a most probable state sequence will always exist. [sent-268, score-0.554]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('dwell', 0.682), ('trajectory', 0.379), ('state', 0.229), ('ctmc', 0.152), ('sk', 0.149), ('ti', 0.141), ('transitions', 0.14), ('si', 0.139), ('trajectories', 0.107), ('ctmcs', 0.106), ('oa', 0.106), ('system', 0.097), ('likely', 0.097), ('psi', 0.091), ('likelihood', 0.091), ('fk', 0.08), ('ttk', 0.076), ('states', 0.074), ('observation', 0.07), ('ut', 0.067), ('oc', 0.066), ('interval', 0.065), ('time', 0.063), ('ob', 0.061), ('chemical', 0.061), ('dtmc', 0.061), ('dtmcs', 0.061), ('happening', 0.061), ('skt', 0.061), ('chain', 0.058), ('tk', 0.057), ('observations', 0.054), ('pu', 0.054), ('times', 0.051), ('sequence', 0.049), ('dynamic', 0.048), ('parentheses', 0.047), ('kinetics', 0.045), ('ottawa', 0.045), ('pss', 0.045), ('chains', 0.044), ('transition', 0.042), ('initial', 0.041), ('markov', 0.04), ('nal', 0.04), ('path', 0.038), ('sometime', 0.036), ('ps', 0.036), ('ui', 0.035), ('st', 0.035), ('oi', 0.034), ('programming', 0.032), ('dwells', 0.03), ('manufacturing', 0.03), ('molecules', 0.03), ('pxz', 0.03), ('pyz', 0.03), ('boundary', 0.029), ('tj', 0.028), ('cycle', 0.028), ('stochastic', 0.027), ('maximum', 0.027), ('om', 0.027), ('stayed', 0.027), ('organisms', 0.027), ('transitioning', 0.027), ('control', 0.026), ('ion', 0.024), ('pxy', 0.024), ('sequences', 0.024), ('unlikely', 0.024), ('ends', 0.023), ('ji', 0.023), ('probabilities', 0.023), ('along', 0.023), ('seek', 0.023), ('hospital', 0.023), ('fragment', 0.023), ('concise', 0.023), ('arg', 0.022), ('observability', 0.022), ('viterbi', 0.022), ('score', 0.021), ('depicted', 0.021), ('sj', 0.021), ('enters', 0.021), ('evolution', 0.02), ('evolutionary', 0.02), ('scan', 0.02), ('accounts', 0.019), ('simulating', 0.018), ('producing', 0.018), ('nding', 0.018), ('events', 0.018), ('spent', 0.018), ('stays', 0.018), ('robot', 0.018), ('kt', 0.018), ('max', 0.017), ('likelihoods', 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999994 150 nips-2009-Maximum likelihood trajectories for continuous-time Markov chains

Author: Theodore J. Perkins

Abstract: Continuous-time Markov chains are used to model systems in which transitions between states as well as the time the system spends in each state are random. Many computational problems related to such chains have been solved, including determining state distributions as a function of time, parameter estimation, and control. However, the problem of inferring most likely trajectories, where a trajectory is a sequence of states as well as the amount of time spent in each state, appears unsolved. We study three versions of this problem: (i) an initial value problem, in which an initial state is given and we seek the most likely trajectory until a given final time, (ii) a boundary value problem, in which initial and final states and times are given, and we seek the most likely trajectory connecting them, and (iii) trajectory inference under partial observability, analogous to finding maximum likelihood trajectories for hidden Markov models. We show that maximum likelihood trajectories are not always well-defined, and describe a polynomial time test for well-definedness. When well-definedness holds, we show that each of the three problems can be solved in polynomial time, and we develop efficient dynamic programming algorithms for doing so.

2 0.08965607 41 nips-2009-Bayesian Source Localization with the Multivariate Laplace Prior

Author: Marcel V. Gerven, Botond Cseke, Robert Oostenveld, Tom Heskes

Abstract: We introduce a novel multivariate Laplace (MVL) distribution as a sparsity promoting prior for Bayesian source localization that allows the specification of constraints between and within sources. We represent the MVL distribution as a scale mixture that induces a coupling between source variances instead of their means. Approximation of the posterior marginals using expectation propagation is shown to be very efficient due to properties of the scale mixture representation. The computational bottleneck amounts to computing the diagonal elements of a sparse matrix inverse. Our approach is illustrated using a mismatch negativity paradigm for which MEG data and a structural MRI have been acquired. We show that spatial coupling leads to sources which are active over larger cortical areas as compared with an uncoupled prior.

3 0.086311862 53 nips-2009-Complexity of Decentralized Control: Special Cases

Author: Martin Allen, Shlomo Zilberstein

Abstract: The worst-case complexity of general decentralized POMDPs, which are equivalent to partially observable stochastic games (POSGs), is very high, both for the cooperative and competitive cases. Some reductions in complexity have been achieved by exploiting independence relations in some models. We show that these results are somewhat limited: when these independence assumptions are relaxed in very small ways, complexity returns to that of the general case.

4 0.083737351 242 nips-2009-The Infinite Partially Observable Markov Decision Process

Author: Finale Doshi-velez

Abstract: The Partially Observable Markov Decision Process (POMDP) framework has proven useful in planning domains where agents must balance actions that provide knowledge and actions that provide reward. Unfortunately, most POMDPs are complex structures with a large number of parameters. In many real-world problems, both the structure and the parameters are difficult to specify from domain knowledge alone. Recent work in Bayesian reinforcement learning has made headway in learning POMDP models; however, this work has largely focused on learning the parameters of the POMDP model. We define an infinite POMDP (iPOMDP) model that does not require knowledge of the size of the state space; instead, it assumes that the number of visited states will grow as the agent explores its world and only models visited states explicitly. We demonstrate the iPOMDP on several standard problems.

5 0.081976332 132 nips-2009-Learning in Markov Random Fields using Tempered Transitions

Author: Ruslan Salakhutdinov

Abstract: Markov random fields (MRF’s), or undirected graphical models, provide a powerful framework for modeling complex dependencies among random variables. Maximum likelihood learning in MRF’s is hard due to the presence of the global normalizing constant. In this paper we consider a class of stochastic approximation algorithms of the Robbins-Monro type that use Markov chain Monte Carlo to do approximate maximum likelihood learning. We show that using MCMC operators based on tempered transitions enables the stochastic approximation algorithm to better explore highly multimodal distributions, which considerably improves parameter estimates in large, densely-connected MRF’s. Our results on MNIST and NORB datasets demonstrate that we can successfully learn good generative models of high-dimensional, richly structured data that perform well on digit and object recognition tasks.

6 0.07179787 225 nips-2009-Sparsistent Learning of Varying-coefficient Models with Structural Changes

7 0.069439292 203 nips-2009-Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks

8 0.066297084 250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference

9 0.060432043 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

10 0.060378134 60 nips-2009-Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation

11 0.060210817 97 nips-2009-Free energy score space

12 0.05939129 54 nips-2009-Compositionality of optimal control laws

13 0.059082292 57 nips-2009-Conditional Random Fields with High-Order Features for Sequence Labeling

14 0.058421358 218 nips-2009-Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining

15 0.052679654 61 nips-2009-Convex Relaxation of Mixture Regression with Efficient Algorithms

16 0.052584857 145 nips-2009-Manifold Embeddings for Model-Based Reinforcement Learning under Partial Observability

17 0.050634906 141 nips-2009-Local Rules for Global MAP: When Do They Work ?

18 0.050392896 137 nips-2009-Learning transport operators for image manifolds

19 0.050343521 15 nips-2009-A Rate Distortion Approach for Semi-Supervised Conditional Random Fields

20 0.046513181 247 nips-2009-Time-rescaling methods for the estimation and assessment of non-Poisson neural encoding models


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.123), (1, -0.007), (2, 0.078), (3, -0.084), (4, -0.053), (5, -0.033), (6, 0.064), (7, 0.004), (8, -0.019), (9, -0.042), (10, 0.016), (11, 0.074), (12, 0.02), (13, -0.004), (14, -0.005), (15, 0.006), (16, -0.004), (17, -0.082), (18, -0.098), (19, -0.076), (20, 0.043), (21, 0.031), (22, -0.072), (23, 0.029), (24, -0.021), (25, -0.026), (26, 0.015), (27, 0.147), (28, 0.036), (29, 0.029), (30, -0.072), (31, 0.03), (32, -0.126), (33, -0.092), (34, -0.002), (35, -0.044), (36, 0.141), (37, 0.011), (38, -0.058), (39, -0.056), (40, -0.029), (41, 0.027), (42, -0.053), (43, 0.068), (44, -0.022), (45, 0.047), (46, -0.01), (47, 0.039), (48, 0.027), (49, -0.114)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95550877 150 nips-2009-Maximum likelihood trajectories for continuous-time Markov chains

Author: Theodore J. Perkins

Abstract: Continuous-time Markov chains are used to model systems in which transitions between states as well as the time the system spends in each state are random. Many computational problems related to such chains have been solved, including determining state distributions as a function of time, parameter estimation, and control. However, the problem of inferring most likely trajectories, where a trajectory is a sequence of states as well as the amount of time spent in each state, appears unsolved. We study three versions of this problem: (i) an initial value problem, in which an initial state is given and we seek the most likely trajectory until a given final time, (ii) a boundary value problem, in which initial and final states and times are given, and we seek the most likely trajectory connecting them, and (iii) trajectory inference under partial observability, analogous to finding maximum likelihood trajectories for hidden Markov models. We show that maximum likelihood trajectories are not always well-defined, and describe a polynomial time test for well-definedness. When well-definedness holds, we show that each of the three problems can be solved in polynomial time, and we develop efficient dynamic programming algorithms for doing so. 1

2 0.57307506 197 nips-2009-Randomized Pruning: Efficiently Calculating Expectations in Large Dynamic Programs

Author: Alexandre Bouchard-côté, Slav Petrov, Dan Klein

Abstract: Pruning can massively accelerate the computation of feature expectations in large models. However, any single pruning mask will introduce bias. We present a novel approach which employs a randomized sequence of pruning masks. Formally, we apply auxiliary variable MCMC sampling to generate this sequence of masks, thereby gaining theoretical guarantees about convergence. Because each mask is generally able to skip large portions of an underlying dynamic program, our approach is particularly compelling for high-degree algorithms. Empirically, we demonstrate our method on bilingual parsing, showing decreasing bias as more masks are incorporated, and outperforming fixed tic-tac-toe pruning.

3 0.4986867 41 nips-2009-Bayesian Source Localization with the Multivariate Laplace Prior

Author: Marcel V. Gerven, Botond Cseke, Robert Oostenveld, Tom Heskes

Abstract: We introduce a novel multivariate Laplace (MVL) distribution as a sparsity promoting prior for Bayesian source localization that allows the specification of constraints between and within sources. We represent the MVL distribution as a scale mixture that induces a coupling between source variances instead of their means. Approximation of the posterior marginals using expectation propagation is shown to be very efficient due to properties of the scale mixture representation. The computational bottleneck amounts to computing the diagonal elements of a sparse matrix inverse. Our approach is illustrated using a mismatch negativity paradigm for which MEG data and a structural MRI have been acquired. We show that spatial coupling leads to sources which are active over larger cortical areas as compared with an uncoupled prior.

4 0.4629752 54 nips-2009-Compositionality of optimal control laws

Author: Emanuel Todorov

Abstract: We present a theory of compositionality in stochastic optimal control, showing how task-optimal controllers can be constructed from certain primitives. The primitives are themselves feedback controllers pursuing their own agendas. They are mixed in proportion to how much progress they are making towards their agendas and how compatible their agendas are with the present task. The resulting composite control law is provably optimal when the problem belongs to a certain class. This class is rather general and yet has a number of unique properties – one of which is that the Bellman equation can be made linear even for non-linear or discrete dynamics. This gives rise to the compositionality developed here. In the special case of linear dynamics and Gaussian noise our framework yields analytical solutions (i.e. non-linear mixtures of LQG controllers) without requiring the final cost to be quadratic. More generally, a natural set of control primitives can be constructed by applying SVD to Green’s function of the Bellman equation. We illustrate the theory in the context of human arm movements. The ideas of optimality and compositionality are both very prominent in the field of motor control, yet they have been difficult to reconcile. Our work makes this possible.

5 0.45949319 250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference

Author: Khashayar Rohanimanesh, Sameer Singh, Andrew McCallum, Michael J. Black

Abstract: Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses has exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local minima—the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL). Rather than setting parameters to maximize the likelihood of the training data, parameters of the factor graph are treated as a log-linear function approximator and learned with methods of temporal difference (TD); MAP inference is performed by executing the resulting policy on held out test data. Our method allows efficient gradient updates since only factors in the neighborhood of variables affected by an action need to be computed—we bypass the need to compute marginals entirely. Our method yields dramatic empirical success, producing new state-of-the-art results on a complex joint model of ontology alignment, with a 48% reduction in error over state-of-the-art in that domain.

6 0.458085 188 nips-2009-Perceptual Multistability as Markov Chain Monte Carlo Inference

7 0.4572587 60 nips-2009-Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation

8 0.4383145 132 nips-2009-Learning in Markov Random Fields using Tempered Transitions

9 0.42580041 247 nips-2009-Time-rescaling methods for the estimation and assessment of non-Poisson neural encoding models

10 0.4203153 57 nips-2009-Conditional Random Fields with High-Order Features for Sequence Labeling

11 0.40686071 242 nips-2009-The Infinite Partially Observable Markov Decision Process

12 0.40306926 51 nips-2009-Clustering sequence sets for motif discovery

13 0.4011986 159 nips-2009-Multi-Step Dyna Planning for Policy Evaluation and Control

14 0.3950499 140 nips-2009-Linearly constrained Bayesian matrix factorization for blind source separation

15 0.39276242 217 nips-2009-Sharing Features among Dynamical Systems with Beta Processes

16 0.39168099 225 nips-2009-Sparsistent Learning of Varying-coefficient Models with Structural Changes

17 0.38991198 218 nips-2009-Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining

18 0.38104635 56 nips-2009-Conditional Neural Fields

19 0.37534335 53 nips-2009-Complexity of Decentralized Control: Special Cases

20 0.37328941 203 nips-2009-Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.014), (21, 0.014), (24, 0.033), (25, 0.058), (35, 0.042), (36, 0.124), (39, 0.053), (58, 0.057), (61, 0.058), (71, 0.107), (81, 0.014), (82, 0.269), (86, 0.057)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.79107237 58 nips-2009-Constructing Topological Maps using Markov Random Fields and Loop-Closure Detection

Author: Roy Anati, Kostas Daniilidis

Abstract: We present a system which constructs a topological map of an environment given a sequence of images. This system includes a novel image similarity score which uses dynamic programming to match images using both the appearance and relative positions of local features simultaneously. Additionally, an MRF is constructed to model the probability of loop-closures. A locally optimal labeling is found using Loopy-BP. Finally we outline a method to generate a topological map from loop closure data. Results, presented on four urban sequences and one indoor sequence, outperform the state of the art.

same-paper 2 0.77127552 150 nips-2009-Maximum likelihood trajectories for continuous-time Markov chains

Author: Theodore J. Perkins

Abstract: Continuous-time Markov chains are used to model systems in which transitions between states as well as the time the system spends in each state are random. Many computational problems related to such chains have been solved, including determining state distributions as a function of time, parameter estimation, and control. However, the problem of inferring most likely trajectories, where a trajectory is a sequence of states as well as the amount of time spent in each state, appears unsolved. We study three versions of this problem: (i) an initial value problem, in which an initial state is given and we seek the most likely trajectory until a given final time, (ii) a boundary value problem, in which initial and final states and times are given, and we seek the most likely trajectory connecting them, and (iii) trajectory inference under partial observability, analogous to finding maximum likelihood trajectories for hidden Markov models. We show that maximum likelihood trajectories are not always well-defined, and describe a polynomial time test for well-definedness. When well-definedness holds, we show that each of the three problems can be solved in polynomial time, and we develop efficient dynamic programming algorithms for doing so.

3 0.62937385 250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference

Author: Khashayar Rohanimanesh, Sameer Singh, Andrew McCallum, Michael J. Black

Abstract: Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses has exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local minima—the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL). Rather than setting parameters to maximize the likelihood of the training data, parameters of the factor graph are treated as a log-linear function approximator and learned with methods of temporal difference (TD); MAP inference is performed by executing the resulting policy on held out test data. Our method allows efficient gradient updates since only factors in the neighborhood of variables affected by an action need to be computed—we bypass the need to compute marginals entirely. Our method yields dramatic empirical success, producing new state-of-the-art results on a complex joint model of ontology alignment, with a 48% reduction in error over state-of-the-art in that domain.

4 0.58198762 215 nips-2009-Sensitivity analysis in HMMs with application to likelihood maximization

Author: Pierre-arnaud Coquelin, Romain Deguest, Rémi Munos

Abstract: This paper considers a sensitivity analysis in Hidden Markov Models with continuous state and observation spaces. We propose an Infinitesimal Perturbation Analysis (IPA) on the filtering distribution with respect to some parameters of the model. We describe a methodology for using any algorithm that estimates the filtering density, such as Sequential Monte Carlo methods, to design an algorithm that estimates its gradient. The resulting IPA estimator is proven to be asymptotically unbiased, consistent and has computational complexity linear in the number of particles. We consider an application of this analysis to the problem of identifying unknown parameters of the model given a sequence of observations. We derive an IPA estimator for the gradient of the log-likelihood, which may be used in a gradient method for the purpose of likelihood maximization. We illustrate the method with several numerical experiments.

5 0.58193201 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

Author: Francois Caron, Arnaud Doucet

Abstract: Over recent years Dirichlet processes and the associated Chinese restaurant process (CRP) have found many applications in clustering while the Indian buffet process (IBP) is increasingly used to describe latent feature models. These models are attractive because they ensure exchangeability (over samples). We propose here extensions of these models where the dependency between samples is given by a known decomposable graph. These models have appealing properties and can be easily learned using Monte Carlo techniques. 1 Motivation The CRP and IBP have found numerous applications in machine learning over recent years [5, 10]. We consider here the case where the data we are interested in are ‘locally’ dependent; these dependencies being represented by a known graph G where each data point/object is associated to a vertex. These local dependencies can correspond to any conceptual or real (e.g. space, time) metric. For example, in the context of clustering, we might want to propose a prior distribution on partitions enforcing that data which are ‘close’ in the graph are more likely to be in the same cluster. Similarly, in the context of latent feature models, we might be interested in a prior distribution on features enforcing that data which are ‘close’ in the graph are more likely to possess similar features. The ‘standard’ CRP and IBP correspond to the case where the graph G is complete; that is it is fully connected. In this paper, we generalize the CRP and IBP to decomposable graphs. The resulting generalized versions of the CRP and IBP enjoy attractive properties. Each clique of the graph follows marginally a CRP or an IBP process and explicit expressions for the joint prior distribution on the graph is available. It makes it easy to learn those models using straightforward generalizations of Markov chain Monte Carlo (MCMC) or Sequential Monte Carlo (SMC) algorithms proposed to perform inference for the CRP and IBP [5, 10, 14]. The rest of the paper is organized as follows. In Section 2, we review the popular Dirichlet multinomial allocation model and the Dirichlet Process (DP) partition distribution. We propose an extension of these two models to decomposable graphical models. In Section 3 we discuss nonparametric latent feature models, reviewing briefly the construction in [5] and extending it to decomposable graphs. We demonstrate these models in Section 4 on two applications: an alternative to the hierarchical DP model [12] and a time-varying matrix factorization problem. 2 Prior distributions for partitions on decomposable graphs Assume we have n observations. When performing clustering, we associate to each of this observation an allocation variable zi ∈ [K] = {1, . . . , K}. Let Πn be the partition of [n] = {1, . . . , n} defined by the equivalence relation i ↔ j ⇔ zi = zj . The resulting partition Πn = {A1 , . . . , An(Πn ) } 1 is an unordered collection of disjoint non-empty subsets Aj of [n], j = 1, . . . , n(Πn ), where ∪j Aj = [n] and n(Πn ) is the number of subsets for partition Πn . We also denote by Pn be the set of all partitions of [n] and let nj , j = 1, . . . , n(Πn ), be the size of the subset Aj . Each allocation variable zi is associated to a vertex/site of an undirected graph G, which is assumed to be known. In the standard case where the graph G is complete, we first review briefly here two popular prior distributions on z1:n , equivalently on Πn . 
We then extend these models to undirected decomposable graphs; see [2, 8] for an introduction to decomposable graphs. Finally we briefly discuss the directed case. Note that the models proposed here are completely different from the hyper multinomial-Dirichlet in [2] and its recent DP extension [6]. 2.1 Dirichlet multinomial allocation model and DP partition distribution Assume for the time being that K is finite. When the graph is complete, a popular choice for the allocation variables is to consider a Dirichlet multinomial allocation model [11] θ θ , . . . , ), zi |π ∼ π (1) K K where D is the standard Dirichlet distribution and θ > 0. Integrating out π, we obtain the following Dirichlet multinomial prior distribution π ∼ D( Pr(z1:n ) = K j=1 Γ(θ) Γ(nj + θ K) (2) θ Γ(θ + n)Γ( K )K and then, using the straightforward equality Pr(Πn ) = PK where PK = {Πn ∈ Pn |n(Πn ) ≤ K}, we obtain K! (K−n(Πn ))! Pr(z1:n ) valid for for all Πn ∈ n(Π ) Pr(Πn ) = θ Γ(θ) j=1n Γ(nj + K ) K! . θ (K − n(Πn ))! Γ(θ + n)Γ( K )n(Πn ) (3) DP may be seen as a generalization of the Dirichlet multinomial model when the number of components K → ∞; see for example [10]. In this case the distribution over the partition Πn of [n] is given by [11] n(Π ) θn(Πn ) j=1n Γ(nj ) . (4) Pr(Πn ) = n i=1 (θ + i − 1) Let Π−k = {A1,−k , . . . , An(Π−k ),−k } be the partition induced by removing item k to Πn and nj,−k be the size of cluster j for j = 1, . . . , n(Π−k ). It follows from (4) that an item k is assigned to an existing cluster j, j = 1, . . . , n(Π−k ), with probability proportional to nj,−k / (n − 1 + θ) and forms a new cluster with probability θ/ (n − 1 + θ). This property is the basis of the CRP. We now extend the Dirichlet multinomial allocation and the DP partition distribution models to decomposable graphs. 2.2 Markov combination of Dirichlet multinomial and DP partition distributions Let G be a decomposable undirected graph, C = {C1 , . . . , Cp } a perfect ordering of the cliques and S = {S2 , . . . , Cp } the associated separators. It can be easily checked that if the marginal distribution of zC for each clique C ∈ C is defined by (2) then these distributions are consistent as they yield the same distribution (2) over the separators. Therefore, the unique Markov distribution over G with Dirichlet multinomial distribution over the cliques is defined by [8] Pr(zC ) S∈S Pr(zS ) C∈C Pr(z1:n ) = (5) where for each complete set B ⊆ G, we have Pr(zB ) given by (2). It follows that we have for any Πn ∈ PK Γ(θ) K! Pr(Πn ) = (K − n(Πn ))! C∈C Γ(θ) S∈S 2 K j=1 θ Γ(nj,C + K ) θ Γ(θ+nC )Γ( K )K K j=1 θ Γ(nj,S + K ) θ Γ(θ+nS )Γ( K )K (6) where for each complete set B ⊆ G, nj,B is the number of items associated to cluster j, j = 1, . . . , K in B and nB is the total number of items in B. Within each complete set B, the allocation variables define a partition distributed according to the Dirichlet-multinomial distribution. We now extend this approach to DP partition distributions; that is we derive a joint distribution over Πn such that the distribution of ΠB over each complete set B of the graph is given by (4) with θ > 0. Such a distribution satisfies the consistency condition over the separators as the restriction of any partition distributed according to (4) still follows (4) [7]. G Proposition. Let Pn be the set of partitions Πn ∈ Pn such that for each decomposition A, B, and any (i, j) ∈ A × B, i ↔ j ⇒ ∃k ∈ A ∩ B such that k ↔ i ↔ j. 
As K → ∞, the prior distribution G over partitions (6) is given for each Πn ∈ Pn by Pr(Πn ) = θn(Πn ) n(ΠC ) Γ(nj,C ) j=1 nC i=1 (θ+i−1) n(ΠS ) Γ(nj,S ) j=1 nS (θ+i−1) i=1 C∈C S∈S (7) where n(ΠB ) is the number of clusters in the complete set B. Proof. From (6), we have θ n(ΠC ) K(K − 1) . . . (K − n(Πn ) + 1) Pr(Πn ) = K C∈C n(ΠC )− S∈S n(ΠS ) C∈C θ n(ΠS ) S∈S n(ΠC ) θ Γ(nj,C + K ) j=1 nC (θ+i−1) i=1 n(ΠS ) θ Γ(nj,S + K ) j=1 nS (θ+i−1) i=1 Thus when K → ∞, we obtain (7) if n(Πn ) = C∈C n(ΠC ) − S∈S n(ΠS ) and 0 otherwise. We have n(Πn ) ≤ C∈C n(ΠC ) − S∈S n(ΠS ) for any Πn ∈ Pn and the subset of Pn verifying G n(Πn ) = C∈C n(ΠC ) − S∈S n(ΠS ) corresponds to the set Pn . Example. Let the notation i ∼ j (resp. i j) indicates an edge (resp. no edge) between two sites. Let n = 3 and G be the decomposable graph defined by the relations 1 ∼ 2, 2 ∼ 3 and 1 3. G The set P3 is then equal to {{{1, 2, 3}}; {{1, 2}, {3}}; {{1}, {2, 3}}; {{1}, {2}, {3}}}. Note that G the partition {{1, 3}, {2}} does not belong to P3 . Indeed, as there is no edge between 1 and 3, they cannot be in the same cluster if 2 is in another cluster. The cliques are C1 = {1, 2} and C2 = {2, 3} Pr(ΠC1 ) Pr(ΠC2 ) hence we can and the separator is S2 = {2}. The distribution is given by Pr(Π3 ) = Pr(ΠS ) 2 check that we obtain Pr({1, 2, 3}) = (θ + 1)−2 , Pr({1, 2}, {3}) = Pr({1, 2}, {3}) = θ(θ + 1)−2 and Pr({1}, {2}, {3}) = θ2 (θ + 1)−2 . Let now define the full conditional distributions. Based on (7) the conditional assignment of an item k is proportional to the conditional over the cliques divided by the conditional over the separators. G Let denote G−k the undirected graph obtained by removing vertex k from G. Suppose that Πn ∈ Pn . G−k If Π−k ∈ Pn−1 , then do not change the value of item k. Otherwise, item k is assigned to cluster j / where j = 1, . . . , n(Π−k ) with probability proportional to {C∈C|n−k,j,C >0} n−k,j,C {S∈S|n−k,j,S >0} n−k,j,S (8) and to a new cluster with probability proportional to θ, where n−k,j,C is the number of items in the set C \ {k} belonging to cluster j. The updating process is illustrated by the Chinese wedding party process1 in Fig. 1. The results of this section can be extended to the Pitman-Yor process, and more generally to species sampling models. Example (continuing). Given Π−2 = {A1 = {1}, A2 = {3}}, we have −1 Pr( item 2 assigned to A1 = {1}| Π−2 ) = Pr( item 2 assigned to A2 = {3}| Π−2 ) = (θ + 2) −1 and Pr( item 2 assigned to new cluster A3 | Π−2 ) = θ (θ + 2) . Given Π−2 = {A1 = {1, 3}}, item 2 is assigned to A1 with probability 1. 1 Note that this representation describes the full conditionals while the CRP represents the sequential updat- ing. 3 (a) (b) (d) (c) (e) Figure 1: Chinese wedding party. Consider a group of n guests attending a wedding party. Each of the n guests may belong to one or several cliques, i.e. maximal groups of people such that everybody knows everybody. The belonging of each guest to the different cliques is represented by color patches on the figures, and the graphical representation of the relationship between the guests is represented by the graphical model (e). (a) Suppose that the guests are already seated such that two guests cannot be together at the same table is they are not part of the same clique, or if there does not exist a group of other guests such that they are related (“Any friend of yours is a friend of mine”). 
(b) The guest number k leaves his table and either (c) joins a table where there are guests from the same clique as him, with probability proportional to the product of the number of guests from each clique over the product of the number of guests belonging to several cliques on that table or (d) he joins a new table with probability proportional to θ. 2.3 Monte Carlo inference 2.3.1 MCMC algorithm Using the full conditionals, a single site Gibbs sampler can easily be designed to approximate the posterior distribution Pr(Πn |z1:n ). Given a partition Πn , an item k is taken out of the partition. If G−k Π−k ∈ Pn−1 , item k keeps the same value. Otherwise, the item will be assigned to a cluster j, / j = 1, . . . , n(Π−k ), with probability proportional to p(z{k}∪Aj,−k ) × p(zAj,−k ) {C∈C|n−k,j,C >0} n−k,j,C {S∈S|n−k,j,S >0} n−k,j,S (9) and the item will be assigned to a new cluster with probability proportional to p(z{k} ) × θ. Similarly to [3], we can also define a procedure to sample from p(θ|n(Πn ) = k)). We assume that θ ∼ G(a, b) and use p auxiliary variables x1 , . . . , xp . The procedure is as follows. • For j = 1, . . . , p, sample xj |k, θ ∼ Beta(θ + nSj , nCj − nSj ) • Sample θ|k, x1:p ∼ G(a + k, b − j log xj ) 2.3.2 Sequential Monte Carlo We have so far only treated the case of an undirected decomposable graph G. We can formulate a sequential updating rule for the corresponding perfect directed version D of G. Indeed, let (a1 , . . . a|V | ) be a perfect ordering and pa(ak ) be the set of parents of ak which is by definition complete. Let Πk−1 = {A1,k−1 , . . . , An(Πk−1 ),k−1 } denote the partition of the first k−1 vertices a1:k−1 and let nj,pa(ak ) be the number of elements with value j in the set pa(ak ), j = 1, . . . , n(Πk−1 ). Then the vertex ak joins the set j with probability nj,pa(ak ) / θ + cluster with probability θ/ θ + q q nq,pa(ak ) and creates a new nq,pa(ak ) . One can then design a particle filter/SMC method in a similar fashion as [4]. Consider a set of (i) (i) (i) (i) N N particles Πk−1 with weights wk−1 ∝ Pr(Πk−1 , z1:k−1 ) ( i=1 wk−1 = 1) that approximate (i) the posterior distribution Pr(Πk−1 |z1:k−1 ). For each particle i, there are n(Πk−1 ) + 1 possible 4 (i,j) allocations for component ak . We denote Πk the partition obtained by associating component ak (i,j) to cluster j. The weight associated to Πk is given by  nj,pa(ak ) (i)  if j = 1, . . . , n(Πk−1 ) θ+ q nq,pa(ak ) (i,j) (i) p(z{ak }∪Aj,k−1 ) wk−1 = wk−1 × (10) (i) θ  θ+ n p(zAj,k−1 ) if j = n(Πk−1 ) + 1 q q,pa(ak ) (i,j) Then we can perform a deterministic resampling step by keeping the N particles Πk with highest (i,j) (i) (i) weights wk−1 . Let Πk be the resampled particles and wk the associated normalized weights. 3 Prior distributions for infinite binary matrices on decomposable graphs Assume we have n objects; each of these objects being associated to the vertex of a graph G. To K each object is associated a K-dimensional binary vector zn = (zn,1 , . . . , zn,K ) ∈ {0, 1} where zn,i = 1 if object n possesses feature i and zn,i = 0 otherwise. These vectors zt form a binary n × K matrix denoted Z1:n . We denote by ξ1:n the associated equivalence class of left-ordered matrices and let EK be the set of left-ordered matrices with at most K features. In the standard case where the graph G is complete, we review briefly here two popular prior distributions on Z1:n , equivalently on ξ1:n : the Beta-Bernoulli model and the IBP [5]. We then extend these models to undirected decomposable graphs. 
This can be used for example to define a time-varying IBP as illustrated in Section 4. 3.1 Beta-Bernoulli and IBP distributions The Beta-Bernoulli distribution over the allocation Z1:n is K Pr(Z1:n ) = α + K )Γ(n − nj + 1) α Γ(n + 1 + K ) α K Γ(nj j=1 (11) where nj is the number of objects having feature j. It follows that Pr(ξ1:n ) = K K! 2n −1 h=0 α K Γ(nj α + K )Γ(n − nj + 1) α Γ(n + 1 + K ) Kh ! j=1 (12) where Kh is the number of features possessing the history h (see [5] for details). The nonparametric model is obtained by taking the limit when K → ∞ Pr(ξ1:n ) = αK K+ + 2n −1 h=1 Kh ! exp(−αHn ) where K + is the total number of features and Hn = 3.2 (n − nj )!(nj − 1)! n! j=1 n 1 k=1 k . (13) The IBP follows from (13). Markov combination of Beta-Bernoulli and IBP distributions Let G be a decomposable undirected graph, C = {C1 , . . . , Cp } a perfect ordering of the cliques and S = {S2 , . . . , Cp } the associated separators. As in the Dirichlet-multinomial case, it is easily seen that if for each clique C ∈ C, the marginal distribution is defined by (11), then these distributions are consistent as they yield the same distribution (11) over the separators. Therefore, the unique Markov distribution over G with Beta-Bernoulli distribution over the cliques is defined by [8] Pr(ZC ) S∈S Pr(ZS ) C∈C Pr(Z1:n ) = (14) where Pr(ZB ) given by (11) for each complete set B ⊆ G. The prior over ξ1:n is thus given, for ξ1:n ∈ EK , by Pr(ξ1:n ) = K! 2n −1 h=0 Kh ! α K α Γ(nj,C + K )Γ(nC −nj,C +1) α Γ(nC +1+ K ) α α Γ(nj,S + K )Γ(nS −nj,S +1) K K α j=1 Γ(nS +1+ K ) K j=1 C∈C S∈S 5 (15) where for each complete set B ⊆ G, nj,B is the number of items having feature j, j = 1, . . . , K in the set B and nB is the whole set of objects in set B. Taking the limit when K → ∞, we obtain after a few calculations Pr(ξ1:n ) = α + K[n] exp [−α ( C HnC − 2n −1 h=1 Kh ! HnS )] × C∈C + KC (nC −nj,C )!(nj,C −1)! j=1 nC ! S∈S S + KS (nS −nj,S )!(nj,S −1)! j=1 nS ! + + + + if K[n] = C KC − S KS and 0 otherwise, where KB is the number of different features possessed by objects in B. G Let En be the subset of En such that for each decomposition A, B and any (u, v) ∈ A × B: {u and v possess feature j} ⇒ ∃k ∈ A ∩ B such that {k possesses feature j}. Let ξ−k be the left-ordered + matrix obtained by removing object k from ξn and K−k be the total number of different features in G−k + ξ−k . For each feature j = 1, . . . , K−k , if ξ−k ∈ En−1 then we have   b C∈C nj,C if i = 1 S∈C nj,S Pr(ξk,j = i) = (16)  b C∈C (nC −nj,C ) if i = 0 (nS −nj,S ) S∈C nS where b is the appropriate normalizing constant then the customer k tries Poisson α {S∈S|k∈S} nC {C∈C|k∈C} new dishes. We can easily generalize this construction to a directed version D of G using arguments similar to those presented in Section 2; see Section 4 for an application to time-varying matrix factorization. 4 4.1 Applications Sharing clusters among relative groups: An alternative to HDP Consider that we are given d groups with nj data yi,j in each group, i = 1, . . . , nj , j = 1, . . . , d. We consider latent cluster variables zi,j that define the partition of the data. We will use alternatively the notation θi,j = Uzi,j in the following. Hierarchical Dirichlet Process [12] (HDP) is a very popular model for sharing clusters among related groups. It is based on a hierarchy of DPs G0 ∼ DP (γ, H), Gj |G0 ∼ DP (α, G0 ) j = 1, . . . d θi,j |Gj ∼ Gj , yi,j |θi,j ∼ f (θi,j ) i = 1, . . . , nj . 
4 Applications

4.1 Sharing clusters among related groups: An alternative to HDP

Consider that we are given $d$ groups with $n_j$ data points $y_{i,j}$ in each group, $i = 1, \ldots, n_j$, $j = 1, \ldots, d$. We consider latent cluster variables $z_{i,j}$ that define the partition of the data, and we will alternatively use the notation $\theta_{i,j} = U_{z_{i,j}}$ in the following. The Hierarchical Dirichlet Process [12] (HDP) is a very popular model for sharing clusters among related groups. It is based on a hierarchy of DPs:

$$G_0 \sim DP(\gamma, H), \quad G_j | G_0 \sim DP(\alpha, G_0), \; j = 1, \ldots, d$$
$$\theta_{i,j} | G_j \sim G_j, \quad y_{i,j} | \theta_{i,j} \sim f(\theta_{i,j}), \; i = 1, \ldots, n_j.$$

Under conjugacy assumptions, $G_0$, $G_j$ and $U$ can be integrated out, and we can approximate the marginal posterior of $(z_{i,j})$ given $y = (y_{i,j})$ with Gibbs sampling, using the Chinese restaurant franchise to sample from the full conditional $p(z_{i,j} | z_{-\{i,j\}}, y)$.

Using the graph formulation defined in Section 2, we propose an alternative to HDP. Let $\theta_{0,1}, \ldots, \theta_{0,N}$ be $N$ auxiliary variables belonging to what we call group 0. We define each clique $C_j$ ($j = 1, \ldots, d$) to be composed of elements from group $j$ and elements from group 0. This defines a decomposable graphical model whose separator is given by the elements of group 0. We can rewrite the model in a way quite similar to HDP:

$$G_0 \sim DP(\alpha, H), \quad \theta_{0,i} | G_0 \sim G_0, \; i = 1, \ldots, N$$
$$G_j | \theta_{0,1}, \ldots, \theta_{0,N} \sim DP\Big(\alpha + N, \; \tfrac{\alpha}{\alpha + N} H + \tfrac{1}{\alpha + N} \textstyle\sum_{i=1}^{N} \delta_{\theta_{0,i}}\Big), \; j = 1, \ldots, d$$
$$\theta_{i,j} | G_j \sim G_j, \quad y_{i,j} | \theta_{i,j} \sim f(\theta_{i,j}), \; i = 1, \ldots, n_j.$$

For any subset $A$ and $j \neq k \in \{1, \ldots, d\}$, we have $\mathrm{corr}(G_j(A), G_k(A)) = \frac{N}{\alpha + N}$. Again, under conjugacy conditions, we can integrate out $G_0$, $G_j$ and $U$ and approximate the marginal posterior distribution over the partition using the Chinese wedding party process defined in Section 2. Note that for the latent variables $z_{i,j}$, $j = 1, \ldots, d$, associated to data, this is the usual CRP update. As in HDP, multiple layers can be added to the model. Figures 2(a) and 2(b) give, respectively, the graphical DP alternative to HDP and to the 2-layer HDP.

Figure 2: Hierarchical graphs of dependency with (a) one layer and (b) two layers of hierarchy: (a) graphical DP alternative to HDP, (b) graphical DP alternative to 2-layer HDP.

If $N = 0$, then $G_j \sim DP(\alpha, H)$ for all $j$; this is equivalent to setting $\gamma \to \infty$ in HDP. If $N \to \infty$, then $G_j = G_0$ for all $j$ with $G_0 \sim DP(\alpha, H)$; this is equivalent to setting $\alpha \to \infty$ in HDP. One interesting feature of the model is that, contrary to HDP, the marginal distribution of $G_j$ at any layer of the tree is $DP(\alpha, H)$. As a consequence, the total number of clusters scales logarithmically with the size of each group (as in the usual DP), whereas it scales doubly logarithmically in HDP. Contrary to HDP, there are at most $N$ clusters shared between different groups. Our model is in that sense reminiscent of [9], where only a limited number of clusters can be shared. Note, however, that contrary to [9] we have a simple CRP-like process. The proposed methodology can be straightforwardly extended to the infinite HMM [12].

The main issue of the proposed model is the setting of the number $N$ of auxiliary parameters. A related issue is that, to achieve high correlation between groups, we need a large number of auxiliary variables. Nonetheless, the computational time used to sample the auxiliary variables is negligible compared to the time used for the latent variables associated to data; moreover, it can easily be parallelized. The proposed model offers a far richer framework and ensures that, at each level of the tree, the marginal distribution of the partition is given by a DP partition model. A forward simulation of the hierarchy is sketched below.
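To make the forward model concrete, here is a minimal truncated stick-breaking simulation of the hierarchy above (our own sketch, not the paper's code; function names are ours). We take $H = \mathcal{N}(0, 1)$ and return, for each group, the atoms and weights of a truncated $G_j$; the key step is that the base measure of $G_j$ draws from $H$ with probability $\alpha/(\alpha + N)$ and otherwise picks one of the $N$ auxiliary atoms uniformly:

```python
import numpy as np

def dp_stick(conc, base_draw, trunc, rng):
    """Truncated stick-breaking draw from DP(conc, base_draw)."""
    betas = rng.beta(1.0, conc, size=trunc)
    w = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    atoms = np.array([base_draw() for _ in range(trunc)])
    return atoms, w / w.sum()   # renormalize the truncated weights

def graphical_dp(alpha, N, d, trunc=500, rng=None):
    """Forward-simulate G_1, ..., G_d from the graphical DP alternative to HDP."""
    rng = rng or np.random.default_rng()
    H = rng.normal                                    # base measure H = N(0, 1)
    g0_atoms, g0_w = dp_stick(alpha, H, trunc, rng)   # G_0 ~ DP(alpha, H)
    theta0 = rng.choice(g0_atoms, size=N, p=g0_w)     # auxiliary group-0 draws
    def base_j():
        # alpha/(alpha+N) H + 1/(alpha+N) sum_i delta_{theta_{0,i}}
        if rng.random() < alpha / (alpha + N):
            return H()
        return theta0[rng.integers(N)]
    return [dp_stick(alpha + N, base_j, trunc, rng) for _ in range(d)]
```

Setting $N$ close to 0 makes the $G_j$ essentially independent $DP(\alpha, H)$ draws, while large $N$ ties them to the shared auxiliary atoms, matching the limiting cases discussed above.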
4.2 Time-varying matrix factorization

Let $X_{1:n}$ be an observed matrix of dimension $n \times D$. We want to find a representation of this matrix in terms of two latent matrices: $Z_{1:n}$ of dimension $n \times K$ and $Y$ of dimension $K \times D$. Here $Z_{1:n}$ is a binary matrix, whereas $Y$ is a matrix of latent features. By assuming that $Y \sim \mathcal{N}(0, \sigma_Y^2 I_{K \times D})$ and $X_{1:n} = Z_{1:n} Y + \varepsilon_n$ where $\varepsilon_n \sim \mathcal{N}(0, \sigma_X^2 I_{n \times D})$, we obtain

$$p(X_{1:n} | Z_{1:n}) \propto \frac{\exp\big(-\frac{1}{2\sigma_X^2} \mathrm{tr}(X_{1:n}^T \Sigma_n^{-1} X_{1:n})\big)}{\big|Z_{1:n}^{+T} Z_{1:n}^{+} + (\sigma_X^2/\sigma_Y^2) I_{K_n^+}\big|^{D/2}\, \sigma_X^{(n - K_n^+) D}\, \sigma_Y^{K_n^+ D}} \qquad (17)$$

where $\Sigma_n^{-1} = I - Z_{1:n}^{+} \big(Z_{1:n}^{+T} Z_{1:n}^{+} + (\sigma_X^2/\sigma_Y^2) I_{K_n^+}\big)^{-1} Z_{1:n}^{+T}$, $K_n^+$ is the number of non-zero columns of $Z_{1:n}$, and $Z_{1:n}^{+}$ is the first $K_n^+$ columns of $Z_{1:n}$. To avoid having to set $K$, [5, 14] assume that $Z_{1:n}$ follows an IBP. The resulting posterior distribution $p(Z_{1:n} | X_{1:n})$ can be estimated through MCMC [5] or SMC [14].

We consider here a different model, where the object $X_t$ is assumed to arrive at time index $t$, and we want a prior distribution on $Z_{1:n}$ ensuring that objects close in time are more likely to possess similar features. To achieve this, we consider the simple directed graphical model $D$ of Fig. 3, where the site numbering corresponds to a time index and a perfect numbering of $D$ is $(1, 2, \ldots)$. The set of parents $\mathrm{pa}(t)$ is composed of the $r$ preceding sites $\{t - r, \ldots, t - 1\}$. The time-varying IBP for sampling from $p(Z_{1:n})$ associated to this directed graph follows from (16) and proceeds as follows (a code sketch is given at the end of this subsection).

At time $t = 1$:
• Sample $K_1^{new} \sim \text{Poisson}(\alpha)$, set $z_{1,i} = 1$ for $i = 1, \ldots, K_1^{new}$, and set $K_1^+ = K_1^{new}$.

At times $t = 2, \ldots, r$:
• For $k = 1, \ldots, K_t^+$, sample $z_{t,k} \sim \text{Ber}(n_{1:t-1,k}/t)$ and $K_t^{new} \sim \text{Poisson}(\alpha/t)$.

At times $t = r + 1, \ldots, n$:
• For $k = 1, \ldots, K_t^+$, sample $z_{t,k} \sim \text{Ber}(n_{t-r:t-1,k}/(r+1))$ and $K_t^{new} \sim \text{Poisson}(\alpha/(r+1))$.

Figure 3: Directed graph.

Here $K_t^+$ is the total number of features appearing from time $\max(1, t - r)$ to $t - 1$, and $n_{t-r:t-1,k}$ is the restriction of $n_{1:t-1,k}$ to the last $r$ customers. Using (17) and the prior distribution of $Z_{1:n}$, which can be sampled using the time-varying IBP described above, we can easily design an SMC method to sample from $p(Z_{1:n} | X_{1:n})$; we do not detail it here. Note that, contrary to [14], our algorithm does not require inverting a matrix whose dimension grows linearly with the size of the data, but only a matrix of dimension $r \times r$.

To illustrate the model and the SMC algorithm, we create 200 $6 \times 6$ images using a ground truth $Y$ consisting of 4 different $6 \times 6$ latent images. The $200 \times 4$ binary matrix was generated from $\Pr(z_{t,k} = 1) = \pi_{t,k}$, where $\pi_t = (0.6, 0.5, 0, 0)$ for $t = 1, \ldots, 30$, $\pi_t = (0.4, 0.8, 0.4, 0)$ for $t = 31, \ldots, 50$, and $\pi_t = (0, 0.3, 0.6, 0.6)$ for $t = 51, \ldots, 200$. The order of the model is set to $r = 50$. The feature occurrences $Z_{1:n}$, the true features $Y$, and their estimates are represented in Figure 4. Two spurious features are detected by the model (features 2 and 5 in Fig. 4(c)) but quickly discarded (Fig. 4(d)). The algorithm is able to correctly estimate the varying prior occurrences of the features over time.

Figure 4: (a) True features, (b) true feature occurrences, (c) MAP estimate $Z_{MAP}$, and (d) associated $E[Y | Z_{MAP}]$.

Figure 5: (a) $E[X_t | \pi_t, Y]$ and (b) $E[X_t | X_{1:t-1}]$ at $t = 20, 50, 100, 200$.
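Here is a minimal sketch of the time-varying IBP prior sampler described by the bulleted steps above (our own rendition; names and the array layout are ours). Features outside the length-$r$ window have zero count in the window, so sampling $\text{Ber}(0)$ for every existing column is equivalent to restricting to the $K_t^+$ active features:

```python
import numpy as np

def sample_tv_ibp(alpha, n, r, rng=None):
    """Sample Z_{1:n} from the time-varying IBP of order r."""
    rng = rng or np.random.default_rng()
    Z = np.zeros((n, 0), dtype=int)
    for t in range(1, n + 1):                    # t is the (1-based) time index
        m = min(t, r + 1)                        # t for t <= r, else r + 1
        window = Z[max(0, t - 1 - r):t - 1]      # the r preceding rows
        counts = window.sum(axis=0)              # n_{t-r:t-1,k}
        # Keep existing feature k with probability n_{t-r:t-1,k} / m.
        z_t = (rng.random(Z.shape[1]) < counts / m).astype(int)
        k_new = rng.poisson(alpha / m)           # Poisson(alpha) at t = 1
        k_old = Z.shape[1]
        Z = np.hstack([Z, np.zeros((n, k_new), dtype=int)])
        Z[t - 1, :k_old] = z_t
        Z[t - 1, k_old:] = 1                     # new features start active
    return Z

# Usage matching the experiment's scale: Z = sample_tv_ibp(alpha=2.0, n=200, r=50)
```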
5 Related work and Discussion

The fixed-lag version of the time-varying DP of Caron et al. [1] is a special case of the proposed model when $G$ is given by Fig. 3. The bivariate DP of Walker and Muliere [13] is also a special case, when $G$ has only two cliques.

In this paper, we have assumed that the structure of the graph was known beforehand, and we have shown that many flexible models arise from this framework. It would be interesting in the future to investigate the case where the graphical structure is unknown and must be estimated from the data.

Acknowledgment

The authors thank the reviewers for their comments, which helped to improve the writing of the paper.

References

[1] F. Caron, M. Davy, and A. Doucet. Generalized Polya urn for time-varying Dirichlet process mixtures. In Uncertainty in Artificial Intelligence, 2007.
[2] A.P. Dawid and S.L. Lauritzen. Hyper Markov laws in the statistical analysis of decomposable graphical models. The Annals of Statistics, 21:1272–1317, 1993.
[3] M.D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90:577–588, 1995.
[4] P. Fearnhead. Particle filters for mixture models with an unknown number of components. Statistics and Computing, 14:11–21, 2004.
[5] T.L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems, 2006.
[6] D. Heinz. Building hyper Dirichlet processes for graphical models. Electronic Journal of Statistics, 3:290–315, 2009.
[7] J.F.C. Kingman. Random partitions in population genetics. Proceedings of the Royal Society of London, 361:1–20, 1978.
[8] S.L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[9] P. Müller, F. Quintana, and G. Rosner. A method for combining inference across related nonparametric Bayesian models. Journal of the Royal Statistical Society B, 66:735–749, 2004.
[10] R.M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
[11] J. Pitman. Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields, 102:145–158, 1995.
[12] Y.W. Teh, M.I. Jordan, M.J. Beal, and D.M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581, 2006.
[13] S. Walker and P. Muliere. A bivariate Dirichlet process. Statistics and Probability Letters, 64:1–7, 2003.
[14] F. Wood and T.L. Griffiths. Particle filtering for nonparametric Bayesian matrix factorization. In Advances in Neural Information Processing Systems, 2007.
