nips nips2009 nips2009-145 knowledge-graph by maker-knowledge-mining

145 nips-2009-Manifold Embeddings for Model-Based Reinforcement Learning under Partial Observability


Source: pdf

Author: Keith Bush, Joelle Pineau

Abstract: Interesting real-world datasets often exhibit nonlinear, noisy, continuous-valued states that are unexplorable, are poorly described by first principles, and are only partially observable. If partial observability can be overcome, these constraints suggest the use of model-based reinforcement learning. We experiment with manifold embeddings to reconstruct the observable state-space in the context of offline, model-based reinforcement learning. We demonstrate that the embedding of a system can change as a result of learning, and we argue that the best performing embeddings well-represent the dynamics of both the uncontrolled and adaptively controlled system. We apply this approach to learn a neurostimulation policy that suppresses epileptic seizures on animal brain slices. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 If partial observability can be overcome, these constraints suggest the use of model-based reinforcement learning. [sent-6, score-0.261]

2 We experiment with manifold embeddings to reconstruct the observable state-space in the context of offline, model-based reinforcement learning. [sent-7, score-0.578]

3 We demonstrate that the embedding of a system can change as a result of learning, and we argue that the best performing embeddings well-represent the dynamics of both the uncontrolled and adaptively controlled system. [sent-8, score-0.679]

4 We apply this approach to learn a neurostimulation policy that suppresses epileptic seizures on animal brain slices. [sent-9, score-0.833]

5 1 Introduction The accessibility of large quantities of off-line discrete-time dynamic data—state-action sequences drawn from real-world domains—represents an untapped opportunity for widespread adoption of reinforcement learning. [sent-10, score-0.191]

6 If we assume that the reward function is part of the problem description, then to learn from this data we must ensure the Markov property is preserved before we approximate the optimal policy with respect to the reward function in a model-free or model-based way. [sent-13, score-0.415]

7 As part of model-based reinforcement learning, function approximation additionally assumes that similar actions map to nearby future states from nearby current states [10]. [sent-16, score-0.238]

8 Impressive performance and scalability of local modelbased approaches [1, 2] and global model-free approaches [6, 17] have been achieved by exploiting the locality of dynamics in fully observable state-space representations of challenging real-world problems. [sent-17, score-0.314]

9 In partially observable systems, however, locality is not preserved without additional context. [sent-18, score-0.262]

10 Rather, we desire a general framework for reconstructing state-spaces of partially observable systems which guarantees the preservation of locality. [sent-20, score-0.238]

11 Nonlinear dynamic analysis has long used manifold embeddings to reconstruct locally Euclidean state-spaces of unforced, partially observable systems [24, 18] and has identified ways of finding these embeddings non-parametrically [7, 12]. [sent-21, score-0.768]

12 Dynamicists have also used embeddings as generative models of partially observable unforced systems [16] by numerically integrating over the resultant embedding. [sent-22, score-0.489]

13 1 Recent advances have extended the theory of manifold embeddings to encompass deterministically and stochastically forced systems [21, 22]. [sent-23, score-0.339]

14 A natural next step is to apply these latest theoretical tools to reconstruct and control partially observable forced systems. [sent-24, score-0.337]

15 We do this by first identifying an appropriate embedding for the system of interest and then leveraging the resultant locality to perform reinforcement learning in a modelbased way. [sent-25, score-0.697]

16 We believe it may be more practical to address reinforcement learning under partial observability in a model-based way because it facilitates reasoning about domain knowledge and off-line validation of the embedding parameters. [sent-26, score-0.651]

17 First, we study the use of embeddings to learn control policies in a partially observable variant of the well-known Mountain Car domain. [sent-28, score-0.524]

18 Second, we demonstrate the embedding-driven, model-based technique to learn an effective and efficient neurostimulation policy for the treatment of epilepsy. [sent-29, score-0.516]

19 2 Methods In this section we combine reinforcement learning, partial observability, and manifold embeddings into a single mathematical formalism. [sent-31, score-0.453]

20 We then describe non-parametric means of identifying the manifold embedding of a system and how the resultant embedding may be used as a local model. [sent-32, score-0.894]

21 forced system) having a state vector, s ∈ RM , which evolves according to a nonlinear differential equation but is discretized in time and integrated numerically according to the map, f . [sent-38, score-0.196]

22 Consider an agent that interacts with the environment by selecting action, a, according to a policy function, π. [sent-39, score-0.358]

23 Consider also that there exists a reward function, g, which informs the agent of the scalar goodness of taking an action with respect to the goal of some multi-step decision task. [sent-40, score-0.191]

24 Nonlinear dynamic systems theory provides a means of reconstructing complete state observability from incomplete state via the method of delayed embeddings, formalized by Takens’ Theorem [24]. [sent-50, score-0.265]

25 Assuming that the state update f and the policy π are deterministic functions, Equation 1 may be substituted into Equation 2 to compose a new function, φ, with s(t + 1) = f(s(t), π(s(t))) = φ(s(t)) (Eqn. 6), which specifies the discrete-time evolution of the agent acting on the environment. [sent-53, score-0.456]

26 The delay-coordinate vector sE(t) = [s(t), s(t − 1), . . . , s(t − (E − 1))], with E > 2M (Eqn. 8), is constructed such that sE lies on a subset of RE which is an embedding of s. [sent-58, score-0.357]
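To make the delay-coordinate construction above concrete, the sketch below builds the embedded vectors from a scalar observation sequence. It is a minimal illustration, not the authors' code; the lag and dimension values in the example are arbitrary.

```python
import numpy as np

def delay_embed(obs, E, lag=1):
    """Stack E delayed copies of a 1-D observation sequence into
    delay-coordinate vectors [s(t), s(t - lag), ..., s(t - (E - 1) * lag)]."""
    obs = np.asarray(obs, dtype=float)
    n = len(obs) - (E - 1) * lag
    if n <= 0:
        raise ValueError("sequence too short for this E and lag")
    # Column 0 is the most recent sample, column E-1 the oldest.
    return np.stack([obs[i * lag : i * lag + n] for i in reversed(range(E))], axis=1)

# Example: embed a noisy sine wave in E = 3 dimensions.
t = np.linspace(0.0, 20.0, 1000)
s = np.sin(t) + 0.01 * np.random.randn(t.size)
S_E = delay_embed(s, E=3, lag=5)
print(S_E.shape)  # (990, 3)
```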

27 Because embeddings preserve the connectivity of the original vector space, in the context of RL the mapping ψ, sE(t + 1) = ψ(sE(t)) (Eqn. 9), may be substituted for f (Eqn. [sent-59, score-0.264]

28 3 Non-parametric Identification of Manifold Embeddings Takens’ Theorem does not define how to compute the embedding dimension of arbitrary sequences of observations, nor does it provide a test to determine if the theorem is applicable. [sent-62, score-0.399]

29 Finding high-quality embedding parameters of challenging domains, such as chaotic and noise-corrupted nonlinear signals, occupies much of the fields of subspace identification and nonlinear dynamic analysis. [sent-65, score-0.512]

30 As will be seen in succeeding sections, this method finds embeddings which are both accurate in theoretical tests and useful in practice. [sent-69, score-0.22]

31 Given a sequence of state observations of length S, we choose a sufficiently large fixed embedding dimension, Ê. [sent-71, score-0.411]

32 The Tmin value of the first local maximum of this sequence is the approximate embedding window, Tmin, of the observation sequence. [sent-87, score-0.387]

33 The approximate embedding dimension, E, is the number of non-trivial singular values of σ(Tmin), where we define non-trivial as a value greater than the long-term trend of σÊ with respect to Tmin. [sent-88, score-0.422]
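A minimal sketch of the non-parametric identification step described above: sweep candidate embedding windows, compute the singular-value spectrum of the centered delay matrix at each window, take Tmin at the first local maximum of the second singular value, and count the non-trivial singular values there. The fixed threshold used for "non-trivial" is a stand-in for the long-term-trend criterion in the text, and the test signal is synthetic.

```python
import numpy as np

def delay_embed(obs, E, lag):
    n = len(obs) - (E - 1) * lag
    return np.stack([obs[i * lag : i * lag + n] for i in reversed(range(E))], axis=1)

def embedding_spectrum(obs, E_hat, windows):
    """Normalized singular values of the delay matrix for each candidate window."""
    spectra = []
    for w in windows:
        lag = max(1, int(round(w / (E_hat - 1))))  # spread E_hat delays across window w
        X = delay_embed(obs, E_hat, lag)
        X = X - X.mean(axis=0)
        sv = np.linalg.svd(X, compute_uv=False)
        spectra.append(sv / sv.sum())
    return np.array(spectra)

obs = np.sin(np.linspace(0.0, 40.0, 2000)) + 0.01 * np.random.randn(2000)
windows = list(range(2, 60, 2))
spec = embedding_spectrum(obs, E_hat=15, windows=windows)

# Tmin: first local maximum of the second singular value across windows.
sigma2 = spec[:, 1]
i_max = next((i for i in range(1, len(sigma2) - 1)
              if sigma2[i] >= sigma2[i - 1] and sigma2[i] > sigma2[i + 1]),
             int(np.argmax(sigma2)))
Tmin = windows[i_max]
# E: number of singular values above an illustrative threshold at Tmin.
E = int((spec[i_max] > 0.01).sum())
print(Tmin, E)
```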

34 4 Generative Local Models from Embeddings The preservation of locality and dynamics afforded by the embedding allows an approximation of the underlying dynamic system. [sent-94, score-0.564]

35 Consider a dataset D as a set of temporally aligned sequences of state observations s(t), action observations a(t), and reward observations r(t), t ∈ {1, . . . [sent-98, score-0.667]

36 This noise biases the derivative estimate in RE , via the embedding rule (Eqn. [sent-119, score-0.357]
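One plausible reading of the generative local model described here is a nearest-neighbor predictor in the embedded space: the one-step displacement (a locally averaged derivative estimate) and reward of an embedded query point are taken from its neighbors in the dataset. The class below is a hedged sketch under that assumption, restricted to discrete actions; it is not the authors' exact estimator. A simulated trajectory is obtained by repeatedly feeding the predicted state back into `step`, which is how the model substitutes for the real system during learning.

```python
import numpy as np

class LocalModel:
    """k-nearest-neighbor one-step model over embedded states."""

    def __init__(self, X, A, R, k=5):
        # X[t]: embedded state at time t (shape T x E), A[t]: discrete action, R[t]: reward.
        self.X = X[:-1]
        self.A = A[:-1]
        self.R = R[:-1]
        self.dX = X[1:] - X[:-1]   # observed one-step displacements in the embedding
        self.k = k

    def step(self, x, a):
        """Predict (next embedded state, reward) for taking action a at embedded state x.
        Assumes at least one recorded transition exists for each action."""
        mask = self.A == a
        d = np.linalg.norm(self.X[mask] - x, axis=1)
        idx = np.argsort(d)[: self.k]
        dx = self.dX[mask][idx].mean(axis=0)   # locally averaged derivative estimate
        r = float(self.R[mask][idx].mean())
        return x + dx, r
```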

37 5 Summary of Approach Our approach is to combine the practices of dynamic analysis and RL to construct useful policies in partially observable, real-world domains via off-line learning. [sent-123, score-0.213]

38 During the learning phase, we identify the optimal policy on the local model with respect to the rewards, R(m(t)) ≡ r(t), via batch Q-learning. [sent-126, score-0.365]

39 For a state vector x(i) in RE at simulation time i, and an associated action, a(i), the reward and Q-value of this state can be indexed by either Equation 11 or 12, depending on whether the action is discrete or continuous. [sent-128, score-0.304]
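The learning phase described above (batch Q-learning over the local model, with rewards and Q-values indexed by model points) might look like the following sketch. The tabular indexing by nearest recorded point, the discount, and the sweep count are assumptions made for illustration; `model` is any object exposing a `step(x, a) -> (next_x, reward)` interface such as the LocalModel sketch earlier.

```python
import numpy as np

def batch_q_learning(X, n_actions, model, gamma=0.95, alpha=0.1, sweeps=50):
    """Tabular batch Q-learning where states are the recorded embedded points
    and successor states are generated by the local model."""
    T = len(X)
    Q = np.zeros((T, n_actions))
    for _ in range(sweeps):
        for t in np.random.permutation(T):
            for a in range(n_actions):
                x_next, r = model.step(X[t], a)
                # Index the successor by its nearest recorded model point.
                j = int(np.argmin(np.linalg.norm(X - x_next, axis=1)))
                Q[t, a] += alpha * (r + gamma * Q[j].max() - Q[t, a])
    return Q

def greedy_action(Q, X, x):
    """Act greedily at the recorded point nearest the current embedded state."""
    t = int(np.argmin(np.linalg.norm(X - x, axis=1)))
    return int(Q[t].argmax())
```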

40 3 Case Study: Mountain Car The Mountain Car problem is a second-order, nonlinear dynamic system with low-dimensional, continuous-valued state and action spaces. [sent-130, score-0.288]
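For reference, a minimal simulator of the standard Mountain Car dynamics (the common textbook formulation). The partially observable variant studied here hides part of the state; the `observe` function below assumes the agent sees only the position, which is our reading and should be treated as an assumption.

```python
import numpy as np

def mountain_car_step(x, v, a):
    """One step of the standard Mountain Car dynamics; a is an action in {-1, 0, +1}."""
    v = np.clip(v + 0.001 * a - 0.0025 * np.cos(3.0 * x), -0.07, 0.07)
    x = np.clip(x + v, -1.2, 0.6)
    if x == -1.2:
        v = 0.0            # inelastic collision with the left wall
    done = x >= 0.5        # goal region on the right hill
    return x, v, done

def observe(x, v):
    # Partial observability: only the position is exposed; velocity must be
    # recovered implicitly through a delay embedding of the position signal.
    return x
```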

41 (b) Learning performance as a function of embedding parameters and quantity of training data. [sent-171, score-0.357]

42 During the modeling phase, we record this domain under a random control policy for 10,000 time-steps (∆t = 0. [sent-178, score-0.415]

43 We then compute the spectral embedding of the observations (Tmin = [0. [sent-181, score-0.415]

44 We conclude that the embedding of Mountain Car under the random policy requires dimension E = 3 with a maximum embedding window of Tmin = 1. [sent-188, score-1.114]

45 To evaluate learning phase outcomes with respect to modeling phase outcomes, we perform an experiment where we model the randomly collected observations using embedding parameters drawn from the product of the sets Tmin = {0. [sent-190, score-0.536]

46 We use batch Q-learning to identify the optimal policy in a model-based way—in Equation 5 the transition between a state-action pair and the resulting state-reward pair is drawn from the model (η = 0. [sent-197, score-0.335]

47 After learning converges, we execute the learned policy on the real system for 10,000 time-steps, recording the mean path-to-goal length over all goals reached. [sent-199, score-0.405]

48 We summarize the results of these experiments by log-scale plots, Figures 1(b) and (c), for embeddings of dimension two and three, respectively. [sent-201, score-0.262]

49 We compare learning performance against three measures: the maximum-performing policy achievable given the dynamics of the system (path-to-goal = 63 steps), the best (99th percentile) learned policy for each quantity of training data for each embedding dimension, and the random policy. [sent-202, score-1.125]

50 Performance positively relates to the quantity of off-line training data for all embedding parameters. [sent-205, score-0.357]

51 Learning performance of 3-dimensional embeddings dominates all but the shortest 2-dimensional embeddings. [sent-208, score-0.22]

52 These observations indicate that the parameters of the embedding ultimately determine the effectiveness of RL under partial observability. [sent-209, score-0.421]

53 What is surprising is that the best performing parameter configurations are linked to dynamic characteristics of the system under both a random policy and the learned policy. [sent-211, score-0.454]

54 To support this claim we collected 1,000 sample observations of the best policy (E = 3, Tmin = 0. [sent-212, score-0.336]

55 We computed and plotted the embedding spectrum and first two dimensions of the embedding in Figure 1(d). [sent-215, score-0.788]

56 We compare these results to similar plots for the random policy in Figure 1(a). [sent-216, score-0.307]

57 We observe that the spectrum of the learned system has shifted such that the optimal embedding parameters require a shorter embedding window, Tmin = 0. [sent-217, score-0.886]

58 We confirm this by observing the embedding directly, Figure 1(d). [sent-225, score-0.357]

59 Unlike the random policy, which includes both an unstable spiral fixed point and limit cycle structure and requires a 3-dimensional embedding to preserve locality, the learned policy exhibits a 2-dimensional unstable spiral fixed point. [sent-226, score-0.762]

60 An agent may learn to project into a 2-dimensional plane of the 3-dimensional space, thus decreasing its embedding dimension if the training data supports a 2-dimensional policy. [sent-229, score-0.45]

61 It can also select between 2-dimensional embeddings having window sizes of Tmin = {0. [sent-233, score-0.271]

62 Researchers now recognize seizures as artifacts of abnormal neural dynamics and rely heavily on the nonlinear dynamic systems analysis and control literature to understand and treat seizures [4]. [sent-240, score-0.647]

63 For example, fixed-frequency electrical stimulation of slices of the rat hippocampus under artificially induced epilepsy has been demonstrated to suppress the frequency, duration, or amplitude of seizures [9, 5]. [sent-242, score-0.565]

64 Next generation epilepsy treatments, derived from machine learning, promise maximal seizure suppression via minimal electrical stimulation by adapting control policies to patients’ unique neural dynamics. [sent-243, score-0.627]

65 Without first principles, neuroscientists have only vague notions of what effective neurostimulation treatments should look like. [sent-245, score-0.221]

66 Even if effective policies could be envisioned, exploration of the vast space of policy parameters is impractical without computational models. [sent-246, score-0.396]

67 Given labeled field potential recordings of brain slices under fixed-frequency electrical stimulation policies of 0. [sent-248, score-0.37]

68 0 Hz is currently known to be the most robust suppression policy for the brain slice model we use [9, 5]). [sent-253, score-0.45]

69 We first compute the embedding spectrum of our dataset assuming Ê = 15, presented in Figure 2(b). [sent-256, score-0.431]

70 Using our knowledge of the interaction between embedding parameters and learning, we select the embedding dimension E = 3 and embedding window Tmin = 1. [sent-257, score-1.164]

71 Note that the strong maximum of σ2 at Tmin = 110 seconds is the result of the periodicity of seizures in our small training dataset. [sent-259, score-0.27]

72 We select a shorter embedding window and rely on integration of the local model to unmask long-term dynamics. [sent-261, score-0.47]

73 Figure 2: Graphical summary of the modeling phase of our adaptive neurostimulation study (axis label: 3rd Principal Component). [sent-289, score-0.26]

74 (b) The embedding spectrum of the fixed-frequency stimulation dataset. [sent-292, score-0.61]

75 The strong maximum of σ2 at Tmin = 110 seconds is an artifact of the periodicity of seizures in the dataset. [sent-294, score-0.27]

76 (c) The resultant neurostimulation model constructed from embedding the dataset with parameters (E = 3, Tmin = 1. [sent-298, score-0.59]

77 Rather than building the model directly from the embedding (E = 3, Tmin = 1. [sent-303, score-0.357]

78 05), we perform a change of basis on the embedding (Ê = 15, Tmin = 1. [sent-304, score-0.357]

79 Also, unlike the previous case study, we convert stimulation events in the training data from discrete frequencies to a continuous scale of time-elapsed-since-stimulation. [sent-307, score-0.202]
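A straightforward way to realize this conversion, assuming stimulation events are available as a binary per-time-step indicator; the sampling period `dt` is illustrative.

```python
import numpy as np

def elapsed_since_stimulation(stim_events, dt=1.0):
    """Map a binary stimulation-event sequence to a continuous
    time-elapsed-since-last-stimulation signal."""
    elapsed = np.empty(len(stim_events), dtype=float)
    t_since = np.inf            # no stimulation observed yet
    for i, stim in enumerate(stim_events):
        t_since = 0.0 if stim else t_since + dt
        elapsed[i] = t_since
    return elapsed

print(elapsed_since_stimulation([0, 1, 0, 0, 1, 0]))  # [inf  0.  1.  2.  0.  1.]
```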

80 We structure the reward function to penalize each electrical stimulation by −1 and each visited seizure state by −20. [sent-319, score-0.477]
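That reward structure translates directly into code; the labeling of a time step as a seizure state is assumed to come from the annotated field-potential recordings.

```python
def reward(stimulated, in_seizure):
    """Penalize each electrical stimulation by -1 and each visited seizure state by -20."""
    return -1.0 * bool(stimulated) - 20.0 * bool(in_seizure)

assert reward(stimulated=False, in_seizure=False) == 0.0
assert reward(stimulated=True, in_seizure=False) == -1.0
assert reward(stimulated=True, in_seizure=True) == -21.0
```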

81 The policy learned by the agent also reduces the percent of seizure states to 5. [sent-326, score-0.596]

82 In simulation, therefore, the learned policy achieves the goal. [sent-330, score-0.359]

83 We then deployed the learned policy on real brain slices to test on-line seizure suppression performance. [sent-331, score-0.675]

84 The policy was tested over four trials on two unique brain slices extracted from the same animal. [sent-332, score-0.404]

85 In all trials seizures were effectively suppressed after a short transient period, during which the policy and slice achieved equilibrium. [sent-338, score-0.57]

86 (Note: seizures occurring at the onset of stimulation are common artifacts of neurostimulation). [sent-339, score-0.398]

87 Figure 3: Field potential trace of a real seizure suppression experiment using a policy learned from simulation. Panels: (a) Control Phase, (b) Policy Phase 1, (c) Recovery Phase, (d) Policy Phase 2; stimulation markers shown; scale bars: 2 mV, 60 sec. [sent-341, score-0.613]

88 (a) A control phase used to determine baseline seizure activity. [sent-344, score-0.288]

89 (c) A recovery phase to ensure slice viability after stimulation and recompute baseline seizure activity. [sent-346, score-0.513]

90 Approaches for efficient function approximation, basis function construction, and discovery of embeddings have been the topic of significant investigation [3, 11, 20, 15, 13]. [sent-350, score-0.22]

91 Most of this work has been limited to the fully observable (MDP) case and has not been extended to partially observable environments. [sent-351, score-0.311]

92 The question of state space representation in partially observable domains was tackled under the POMDP framework [14] and recently in the PSR framework [19]. [sent-352, score-0.276]

93 The contribution of our work is to integrate embeddings with model-based RL to solve real-world problems. [sent-358, score-0.22]

94 We do this by leveraging locality preserving qualities of embeddings to construct dynamic models of the system to be controlled. [sent-359, score-0.39]

95 While these models do not improve the quality of off-line learning that is possible, they permit embedding validation and reasoning over the domain, either to constrain the learning problem or to anticipate the effects of the learned policy on the dynamics of the controlled system. [sent-360, score-0.797]

96 To demonstrate our approach, we applied it to learn a neurostimulation treatment of epilepsy, a challenging real-world domain. [sent-361, score-0.209]

97 We showed that the policy learned off-line from an embedding-based, local model can be successfully transferred on-line. [sent-362, score-0.389]

98 Looking to the future, we anticipate the ability to adjust the embedding a priori using a non-parametric policy gradient approach over the local model. [sent-364, score-0.719]

99 Automatic basis function construction for approximate dynamic programming and reinforcement learning. [sent-428, score-0.191]

100 False neighbors and false strands: A reliable minimum embedding dimension algorithm. [sent-433, score-0.399]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('tmin', 0.531), ('embedding', 0.357), ('policy', 0.307), ('embeddings', 0.22), ('seizures', 0.219), ('neurostimulation', 0.185), ('stimulation', 0.179), ('seizure', 0.162), ('reinforcement', 0.142), ('se', 0.13), ('observable', 0.124), ('mountain', 0.089), ('hz', 0.086), ('action', 0.086), ('rl', 0.085), ('epilepsy', 0.084), ('takens', 0.084), ('observability', 0.084), ('phase', 0.075), ('locality', 0.075), ('spectrum', 0.074), ('car', 0.066), ('policies', 0.066), ('singular', 0.065), ('forced', 0.063), ('partially', 0.063), ('suppression', 0.057), ('dynamics', 0.056), ('manifold', 0.056), ('slices', 0.055), ('reward', 0.054), ('state', 0.054), ('nonlinear', 0.053), ('learned', 0.052), ('window', 0.051), ('agent', 0.051), ('control', 0.051), ('epileptic', 0.051), ('periodicity', 0.051), ('dynamic', 0.049), ('resultant', 0.048), ('system', 0.046), ('tx', 0.046), ('slice', 0.044), ('substituted', 0.044), ('guration', 0.044), ('dimension', 0.042), ('brain', 0.042), ('treatments', 0.036), ('reconstruct', 0.036), ('partial', 0.035), ('domains', 0.035), ('sec', 0.035), ('rewards', 0.034), ('aamas', 0.034), ('barriers', 0.034), ('manchester', 0.034), ('panuccio', 0.034), ('psr', 0.034), ('stimulations', 0.034), ('unforced', 0.034), ('comprise', 0.033), ('domain', 0.033), ('integration', 0.032), ('montreal', 0.031), ('simulation', 0.03), ('local', 0.03), ('jong', 0.029), ('mcgill', 0.029), ('atkeson', 0.029), ('viability', 0.029), ('suppresses', 0.029), ('lasting', 0.029), ('neurological', 0.029), ('modelbased', 0.029), ('stefan', 0.029), ('spectral', 0.029), ('observations', 0.029), ('electrical', 0.028), ('principal', 0.028), ('batch', 0.028), ('preservation', 0.027), ('principles', 0.027), ('termed', 0.027), ('indexed', 0.026), ('dynamical', 0.026), ('equation', 0.026), ('anticipate', 0.025), ('tuples', 0.025), ('trajectory', 0.025), ('record', 0.024), ('states', 0.024), ('recompute', 0.024), ('reconstructing', 0.024), ('delay', 0.024), ('treatment', 0.024), ('nearby', 0.024), ('frequencies', 0.023), ('exploration', 0.023), ('spiral', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999988 145 nips-2009-Manifold Embeddings for Model-Based Reinforcement Learning under Partial Observability

Author: Keith Bush, Joelle Pineau

Abstract: Interesting real-world datasets often exhibit nonlinear, noisy, continuous-valued states that are unexplorable, are poorly described by first principles, and are only partially observable. If partial observability can be overcome, these constraints suggest the use of model-based reinforcement learning. We experiment with manifold embeddings to reconstruct the observable state-space in the context of offline, model-based reinforcement learning. We demonstrate that the embedding of a system can change as a result of learning, and we argue that the best performing embeddings well-represent the dynamics of both the uncontrolled and adaptively controlled system. We apply this approach to learn a neurostimulation policy that suppresses epileptic seizures on animal brain slices. 1

2 0.22287776 113 nips-2009-Improving Existing Fault Recovery Policies

Author: Guy Shani, Christopher Meek

Abstract: An automated recovery system is a key component in a large data center. Such a system typically employs a hand-made controller created by an expert. While such controllers capture many important aspects of the recovery process, they are often not systematically optimized to reduce costs such as server downtime. In this paper we describe a passive policy learning approach for improving existing recovery policies without exploration. We explain how to use data gathered from the interactions of the hand-made controller with the system, to create an improved controller. We suggest learning an indefinite horizon Partially Observable Markov Decision Process, a model for decision making under uncertainty, and solve it using a point-based algorithm. We describe the complete process, starting with data gathering, model learning, model checking procedures, and computing a policy. 1

3 0.19403619 250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference

Author: Khashayar Rohanimanesh, Sameer Singh, Andrew McCallum, Michael J. Black

Abstract: Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses has exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these samplingbased methods suffer from local minima—the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL). Rather than setting parameters to maximize the likelihood of the training data, parameters of the factor graph are treated as a log-linear function approximator and learned with methods of temporal difference (TD); MAP inference is performed by executing the resulting policy on held out test data. Our method allows efficient gradient updates since only factors in the neighborhood of variables affected by an action need to be computed—we bypass the need to compute marginals entirely. Our method yields dramatic empirical success, producing new state-of-the-art results on a complex joint model of ontology alignment, with a 48% reduction in error over state-of-the-art in that domain. 1

4 0.12877941 242 nips-2009-The Infinite Partially Observable Markov Decision Process

Author: Finale Doshi-velez

Abstract: The Partially Observable Markov Decision Process (POMDP) framework has proven useful in planning domains where agents must balance actions that provide knowledge and actions that provide reward. Unfortunately, most POMDPs are complex structures with a large number of parameters. In many real-world problems, both the structure and the parameters are difficult to specify from domain knowledge alone. Recent work in Bayesian reinforcement learning has made headway in learning POMDP models; however, this work has largely focused on learning the parameters of the POMDP model. We define an infinite POMDP (iPOMDP) model that does not require knowledge of the size of the state space; instead, it assumes that the number of visited states will grow as the agent explores its world and only models visited states explicitly. We demonstrate the iPOMDP on several standard problems. 1

5 0.12175519 12 nips-2009-A Generalized Natural Actor-Critic Algorithm

Author: Tetsuro Morimura, Eiji Uchibe, Junichiro Yoshimoto, Kenji Doya

Abstract: Policy gradient Reinforcement Learning (RL) algorithms have received substantial attention, seeking stochastic policies that maximize the average (or discounted cumulative) reward. In addition, extensions based on the concept of the Natural Gradient (NG) show promising learning efficiency because these regard metrics for the task. Though there are two candidate metrics, Kakade’s Fisher Information Matrix (FIM) for the policy (action) distribution and Morimura’s FIM for the stateaction joint distribution, but all RL algorithms with NG have followed Kakade’s approach. In this paper, we describe a generalized Natural Gradient (gNG) that linearly interpolates the two FIMs and propose an efficient implementation for the gNG learning based on a theory of the estimating function, the generalized Natural Actor-Critic (gNAC) algorithm. The gNAC algorithm involves a near optimal auxiliary function to reduce the variance of the gNG estimates. Interestingly, the gNAC can be regarded as a natural extension of the current state-of-the-art NAC algorithm [1], as long as the interpolating parameter is appropriately selected. Numerical experiments showed that the proposed gNAC algorithm can estimate gNG efficiently and outperformed the NAC algorithm.

6 0.11679803 209 nips-2009-Robust Value Function Approximation Using Bilinear Programming

7 0.11649932 53 nips-2009-Complexity of Decentralized Control: Special Cases

8 0.097841553 218 nips-2009-Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining

9 0.092450134 134 nips-2009-Learning to Explore and Exploit in POMDPs

10 0.091344588 60 nips-2009-Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation

11 0.083500713 106 nips-2009-Heavy-Tailed Symmetric Stochastic Neighbor Embedding

12 0.075098813 221 nips-2009-Solving Stochastic Games

13 0.068508469 200 nips-2009-Reconstruction of Sparse Circuits Using Multi-neuronal Excitation (RESCUME)

14 0.068287641 52 nips-2009-Code-specific policy gradient rules for spiking neurons

15 0.063089356 137 nips-2009-Learning transport operators for image manifolds

16 0.061894268 159 nips-2009-Multi-Step Dyna Planning for Policy Evaluation and Control

17 0.056941144 54 nips-2009-Compositionality of optimal control laws

18 0.056878578 107 nips-2009-Help or Hinder: Bayesian Models of Social Goal Inference

19 0.05582073 127 nips-2009-Learning Label Embeddings for Nearest-Neighbor Multi-class Classification with an Application to Speech Recognition

20 0.055455331 169 nips-2009-Nonlinear Learning using Local Coordinate Coding


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.154), (1, -0.01), (2, 0.174), (3, -0.134), (4, -0.262), (5, 0.026), (6, 0.038), (7, -0.012), (8, 0.004), (9, 0.055), (10, 0.12), (11, 0.129), (12, 0.035), (13, 0.069), (14, -0.054), (15, -0.061), (16, -0.029), (17, -0.002), (18, -0.031), (19, -0.041), (20, 0.001), (21, 0.001), (22, 0.15), (23, 0.001), (24, -0.131), (25, 0.04), (26, -0.122), (27, 0.052), (28, 0.021), (29, 0.054), (30, -0.016), (31, -0.048), (32, 0.057), (33, 0.056), (34, -0.069), (35, -0.018), (36, -0.025), (37, 0.102), (38, -0.041), (39, 0.008), (40, 0.09), (41, -0.008), (42, 0.183), (43, 0.024), (44, -0.086), (45, -0.044), (46, 0.008), (47, -0.009), (48, -0.018), (49, -0.06)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96301681 145 nips-2009-Manifold Embeddings for Model-Based Reinforcement Learning under Partial Observability

Author: Keith Bush, Joelle Pineau

Abstract: Interesting real-world datasets often exhibit nonlinear, noisy, continuous-valued states that are unexplorable, are poorly described by first principles, and are only partially observable. If partial observability can be overcome, these constraints suggest the use of model-based reinforcement learning. We experiment with manifold embeddings to reconstruct the observable state-space in the context of offline, model-based reinforcement learning. We demonstrate that the embedding of a system can change as a result of learning, and we argue that the best performing embeddings well-represent the dynamics of both the uncontrolled and adaptively controlled system. We apply this approach to learn a neurostimulation policy that suppresses epileptic seizures on animal brain slices. 1

2 0.83325469 113 nips-2009-Improving Existing Fault Recovery Policies

Author: Guy Shani, Christopher Meek

Abstract: An automated recovery system is a key component in a large data center. Such a system typically employs a hand-made controller created by an expert. While such controllers capture many important aspects of the recovery process, they are often not systematically optimized to reduce costs such as server downtime. In this paper we describe a passive policy learning approach for improving existing recovery policies without exploration. We explain how to use data gathered from the interactions of the hand-made controller with the system, to create an improved controller. We suggest learning an indefinite horizon Partially Observable Markov Decision Process, a model for decision making under uncertainty, and solve it using a point-based algorithm. We describe the complete process, starting with data gathering, model learning, model checking procedures, and computing a policy. 1

3 0.72694725 159 nips-2009-Multi-Step Dyna Planning for Policy Evaluation and Control

Author: Hengshuai Yao, Shalabh Bhatnagar, Dongcui Diao, Richard S. Sutton, Csaba Szepesvári

Abstract: In this paper we introduce a multi-step linear Dyna-style planning algorithm. The key element of the multi-step linear Dyna is a multi-step linear model that enables multi-step projection of a sampled feature and multi-step planning based on the simulated multi-step transition experience. We propose two multi-step linear models. The first iterates the one-step linear model, but is generally computationally complex. The second interpolates between the one-step model and the infinite-step model (which turns out to be the LSTD solution), and can be learned efficiently online. Policy evaluation on Boyan Chain shows that multi-step linear Dyna learns a policy faster than single-step linear Dyna, and generally learns faster as the number of projection steps increases. Results on Mountain-car show that multi-step linear Dyna leads to much better online performance than single-step linear Dyna and model-free algorithms; however, the performance of multi-step linear Dyna does not always improve as the number of projection steps increases. Our results also suggest that previous attempts on extending LSTD for online control were unsuccessful because LSTD looks infinite steps into the future, and suffers from the model errors in non-stationary (control) environments.

4 0.69826943 12 nips-2009-A Generalized Natural Actor-Critic Algorithm

Author: Tetsuro Morimura, Eiji Uchibe, Junichiro Yoshimoto, Kenji Doya

Abstract: Policy gradient Reinforcement Learning (RL) algorithms have received substantial attention, seeking stochastic policies that maximize the average (or discounted cumulative) reward. In addition, extensions based on the concept of the Natural Gradient (NG) show promising learning efficiency because these regard metrics for the task. Though there are two candidate metrics, Kakade’s Fisher Information Matrix (FIM) for the policy (action) distribution and Morimura’s FIM for the stateaction joint distribution, but all RL algorithms with NG have followed Kakade’s approach. In this paper, we describe a generalized Natural Gradient (gNG) that linearly interpolates the two FIMs and propose an efficient implementation for the gNG learning based on a theory of the estimating function, the generalized Natural Actor-Critic (gNAC) algorithm. The gNAC algorithm involves a near optimal auxiliary function to reduce the variance of the gNG estimates. Interestingly, the gNAC can be regarded as a natural extension of the current state-of-the-art NAC algorithm [1], as long as the interpolating parameter is appropriately selected. Numerical experiments showed that the proposed gNAC algorithm can estimate gNG efficiently and outperformed the NAC algorithm.

5 0.67453676 134 nips-2009-Learning to Explore and Exploit in POMDPs

Author: Chenghui Cai, Xuejun Liao, Lawrence Carin

Abstract: A fundamental objective in reinforcement learning is the maintenance of a proper balance between exploration and exploitation. This problem becomes more challenging when the agent can only partially observe the states of its environment. In this paper we propose a dual-policy method for jointly learning the agent behavior and the balance between exploration exploitation, in partially observable environments. The method subsumes traditional exploration, in which the agent takes actions to gather information about the environment, and active learning, in which the agent queries an oracle for optimal actions (with an associated cost for employing the oracle). The form of the employed exploration is dictated by the specific problem. Theoretical guarantees are provided concerning the optimality of the balancing of exploration and exploitation. The effectiveness of the method is demonstrated by experimental results on benchmark problems.

6 0.64810508 60 nips-2009-Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation

7 0.62848586 250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference

8 0.55186021 209 nips-2009-Robust Value Function Approximation Using Bilinear Programming

9 0.53946018 218 nips-2009-Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining

10 0.4884333 242 nips-2009-The Infinite Partially Observable Markov Decision Process

11 0.46234113 106 nips-2009-Heavy-Tailed Symmetric Stochastic Neighbor Embedding

12 0.41827974 16 nips-2009-A Smoothed Approximate Linear Program

13 0.40949306 54 nips-2009-Compositionality of optimal control laws

14 0.38040805 53 nips-2009-Complexity of Decentralized Control: Special Cases

15 0.37771702 137 nips-2009-Learning transport operators for image manifolds

16 0.35430387 150 nips-2009-Maximum likelihood trajectories for continuous-time Markov chains

17 0.34374782 127 nips-2009-Learning Label Embeddings for Nearest-Neighbor Multi-class Classification with an Application to Speech Recognition

18 0.32220769 14 nips-2009-A Parameter-free Hedging Algorithm

19 0.32211715 221 nips-2009-Solving Stochastic Games

20 0.304975 169 nips-2009-Nonlinear Learning using Local Coordinate Coding


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.015), (21, 0.01), (24, 0.024), (25, 0.063), (35, 0.075), (36, 0.085), (39, 0.057), (57, 0.011), (58, 0.077), (61, 0.058), (64, 0.194), (71, 0.079), (81, 0.047), (86, 0.092), (91, 0.012)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.8586992 123 nips-2009-Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process

Author: Finale Doshi-velez, Shakir Mohamed, Zoubin Ghahramani, David A. Knowles

Abstract: Nonparametric Bayesian models provide a framework for flexible probabilistic modelling of complex datasets. Unfortunately, the high-dimensional averages required for Bayesian methods can be slow, especially with the unbounded representations used by nonparametric models. We address the challenge of scaling Bayesian inference to the increasingly large datasets found in real-world applications. We focus on parallelisation of inference in the Indian Buffet Process (IBP), which allows data points to have an unbounded number of sparse latent features. Our novel MCMC sampler divides a large data set between multiple processors and uses message passing to compute the global likelihoods and posteriors. This algorithm, the first parallel inference scheme for IBP-based models, scales to datasets orders of magnitude larger than have previously been possible. 1

same-paper 2 0.85432523 145 nips-2009-Manifold Embeddings for Model-Based Reinforcement Learning under Partial Observability

Author: Keith Bush, Joelle Pineau

Abstract: Interesting real-world datasets often exhibit nonlinear, noisy, continuous-valued states that are unexplorable, are poorly described by first principles, and are only partially observable. If partial observability can be overcome, these constraints suggest the use of model-based reinforcement learning. We experiment with manifold embeddings to reconstruct the observable state-space in the context of offline, model-based reinforcement learning. We demonstrate that the embedding of a system can change as a result of learning, and we argue that the best performing embeddings well-represent the dynamics of both the uncontrolled and adaptively controlled system. We apply this approach to learn a neurostimulation policy that suppresses epileptic seizures on animal brain slices. 1

3 0.81315809 40 nips-2009-Bayesian Nonparametric Models on Decomposable Graphs

Author: Francois Caron, Arnaud Doucet

Abstract: Over recent years Dirichlet processes and the associated Chinese restaurant process (CRP) have found many applications in clustering while the Indian buffet process (IBP) is increasingly used to describe latent feature models. These models are attractive because they ensure exchangeability (over samples). We propose here extensions of these models where the dependency between samples is given by a known decomposable graph. These models have appealing properties and can be easily learned using Monte Carlo techniques. 1 Motivation The CRP and IBP have found numerous applications in machine learning over recent years [5, 10]. We consider here the case where the data we are interested in are ‘locally’ dependent; these dependencies being represented by a known graph G where each data point/object is associated to a vertex. These local dependencies can correspond to any conceptual or real (e.g. space, time) metric. For example, in the context of clustering, we might want to propose a prior distribution on partitions enforcing that data which are ‘close’ in the graph are more likely to be in the same cluster. Similarly, in the context of latent feature models, we might be interested in a prior distribution on features enforcing that data which are ‘close’ in the graph are more likely to possess similar features. The ‘standard’ CRP and IBP correspond to the case where the graph G is complete; that is it is fully connected. In this paper, we generalize the CRP and IBP to decomposable graphs. The resulting generalized versions of the CRP and IBP enjoy attractive properties. Each clique of the graph follows marginally a CRP or an IBP process and explicit expressions for the joint prior distribution on the graph is available. It makes it easy to learn those models using straightforward generalizations of Markov chain Monte Carlo (MCMC) or Sequential Monte Carlo (SMC) algorithms proposed to perform inference for the CRP and IBP [5, 10, 14]. The rest of the paper is organized as follows. In Section 2, we review the popular Dirichlet multinomial allocation model and the Dirichlet Process (DP) partition distribution. We propose an extension of these two models to decomposable graphical models. In Section 3 we discuss nonparametric latent feature models, reviewing briefly the construction in [5] and extending it to decomposable graphs. We demonstrate these models in Section 4 on two applications: an alternative to the hierarchical DP model [12] and a time-varying matrix factorization problem. 2 Prior distributions for partitions on decomposable graphs Assume we have n observations. When performing clustering, we associate to each of this observation an allocation variable zi ∈ [K] = {1, . . . , K}. Let Πn be the partition of [n] = {1, . . . , n} defined by the equivalence relation i ↔ j ⇔ zi = zj . The resulting partition Πn = {A1 , . . . , An(Πn ) } 1 is an unordered collection of disjoint non-empty subsets Aj of [n], j = 1, . . . , n(Πn ), where ∪j Aj = [n] and n(Πn ) is the number of subsets for partition Πn . We also denote by Pn be the set of all partitions of [n] and let nj , j = 1, . . . , n(Πn ), be the size of the subset Aj . Each allocation variable zi is associated to a vertex/site of an undirected graph G, which is assumed to be known. In the standard case where the graph G is complete, we first review briefly here two popular prior distributions on z1:n , equivalently on Πn . 
We then extend these models to undirected decomposable graphs; see [2, 8] for an introduction to decomposable graphs. Finally we briefly discuss the directed case. Note that the models proposed here are completely different from the hyper multinomial-Dirichlet in [2] and its recent DP extension [6]. 2.1 Dirichlet multinomial allocation model and DP partition distribution Assume for the time being that K is finite. When the graph is complete, a popular choice for the allocation variables is to consider a Dirichlet multinomial allocation model [11] θ θ , . . . , ), zi |π ∼ π (1) K K where D is the standard Dirichlet distribution and θ > 0. Integrating out π, we obtain the following Dirichlet multinomial prior distribution π ∼ D( Pr(z1:n ) = K j=1 Γ(θ) Γ(nj + θ K) (2) θ Γ(θ + n)Γ( K )K and then, using the straightforward equality Pr(Πn ) = PK where PK = {Πn ∈ Pn |n(Πn ) ≤ K}, we obtain K! (K−n(Πn ))! Pr(z1:n ) valid for for all Πn ∈ n(Π ) Pr(Πn ) = θ Γ(θ) j=1n Γ(nj + K ) K! . θ (K − n(Πn ))! Γ(θ + n)Γ( K )n(Πn ) (3) DP may be seen as a generalization of the Dirichlet multinomial model when the number of components K → ∞; see for example [10]. In this case the distribution over the partition Πn of [n] is given by [11] n(Π ) θn(Πn ) j=1n Γ(nj ) . (4) Pr(Πn ) = n i=1 (θ + i − 1) Let Π−k = {A1,−k , . . . , An(Π−k ),−k } be the partition induced by removing item k to Πn and nj,−k be the size of cluster j for j = 1, . . . , n(Π−k ). It follows from (4) that an item k is assigned to an existing cluster j, j = 1, . . . , n(Π−k ), with probability proportional to nj,−k / (n − 1 + θ) and forms a new cluster with probability θ/ (n − 1 + θ). This property is the basis of the CRP. We now extend the Dirichlet multinomial allocation and the DP partition distribution models to decomposable graphs. 2.2 Markov combination of Dirichlet multinomial and DP partition distributions Let G be a decomposable undirected graph, C = {C1 , . . . , Cp } a perfect ordering of the cliques and S = {S2 , . . . , Cp } the associated separators. It can be easily checked that if the marginal distribution of zC for each clique C ∈ C is defined by (2) then these distributions are consistent as they yield the same distribution (2) over the separators. Therefore, the unique Markov distribution over G with Dirichlet multinomial distribution over the cliques is defined by [8] Pr(zC ) S∈S Pr(zS ) C∈C Pr(z1:n ) = (5) where for each complete set B ⊆ G, we have Pr(zB ) given by (2). It follows that we have for any Πn ∈ PK Γ(θ) K! Pr(Πn ) = (K − n(Πn ))! C∈C Γ(θ) S∈S 2 K j=1 θ Γ(nj,C + K ) θ Γ(θ+nC )Γ( K )K K j=1 θ Γ(nj,S + K ) θ Γ(θ+nS )Γ( K )K (6) where for each complete set B ⊆ G, nj,B is the number of items associated to cluster j, j = 1, . . . , K in B and nB is the total number of items in B. Within each complete set B, the allocation variables define a partition distributed according to the Dirichlet-multinomial distribution. We now extend this approach to DP partition distributions; that is we derive a joint distribution over Πn such that the distribution of ΠB over each complete set B of the graph is given by (4) with θ > 0. Such a distribution satisfies the consistency condition over the separators as the restriction of any partition distributed according to (4) still follows (4) [7]. G Proposition. Let Pn be the set of partitions Πn ∈ Pn such that for each decomposition A, B, and any (i, j) ∈ A × B, i ↔ j ⇒ ∃k ∈ A ∩ B such that k ↔ i ↔ j. 
As K → ∞, the prior distribution G over partitions (6) is given for each Πn ∈ Pn by Pr(Πn ) = θn(Πn ) n(ΠC ) Γ(nj,C ) j=1 nC i=1 (θ+i−1) n(ΠS ) Γ(nj,S ) j=1 nS (θ+i−1) i=1 C∈C S∈S (7) where n(ΠB ) is the number of clusters in the complete set B. Proof. From (6), we have θ n(ΠC ) K(K − 1) . . . (K − n(Πn ) + 1) Pr(Πn ) = K C∈C n(ΠC )− S∈S n(ΠS ) C∈C θ n(ΠS ) S∈S n(ΠC ) θ Γ(nj,C + K ) j=1 nC (θ+i−1) i=1 n(ΠS ) θ Γ(nj,S + K ) j=1 nS (θ+i−1) i=1 Thus when K → ∞, we obtain (7) if n(Πn ) = C∈C n(ΠC ) − S∈S n(ΠS ) and 0 otherwise. We have n(Πn ) ≤ C∈C n(ΠC ) − S∈S n(ΠS ) for any Πn ∈ Pn and the subset of Pn verifying G n(Πn ) = C∈C n(ΠC ) − S∈S n(ΠS ) corresponds to the set Pn . Example. Let the notation i ∼ j (resp. i j) indicates an edge (resp. no edge) between two sites. Let n = 3 and G be the decomposable graph defined by the relations 1 ∼ 2, 2 ∼ 3 and 1 3. G The set P3 is then equal to {{{1, 2, 3}}; {{1, 2}, {3}}; {{1}, {2, 3}}; {{1}, {2}, {3}}}. Note that G the partition {{1, 3}, {2}} does not belong to P3 . Indeed, as there is no edge between 1 and 3, they cannot be in the same cluster if 2 is in another cluster. The cliques are C1 = {1, 2} and C2 = {2, 3} Pr(ΠC1 ) Pr(ΠC2 ) hence we can and the separator is S2 = {2}. The distribution is given by Pr(Π3 ) = Pr(ΠS ) 2 check that we obtain Pr({1, 2, 3}) = (θ + 1)−2 , Pr({1, 2}, {3}) = Pr({1, 2}, {3}) = θ(θ + 1)−2 and Pr({1}, {2}, {3}) = θ2 (θ + 1)−2 . Let now define the full conditional distributions. Based on (7) the conditional assignment of an item k is proportional to the conditional over the cliques divided by the conditional over the separators. G Let denote G−k the undirected graph obtained by removing vertex k from G. Suppose that Πn ∈ Pn . G−k If Π−k ∈ Pn−1 , then do not change the value of item k. Otherwise, item k is assigned to cluster j / where j = 1, . . . , n(Π−k ) with probability proportional to {C∈C|n−k,j,C >0} n−k,j,C {S∈S|n−k,j,S >0} n−k,j,S (8) and to a new cluster with probability proportional to θ, where n−k,j,C is the number of items in the set C \ {k} belonging to cluster j. The updating process is illustrated by the Chinese wedding party process1 in Fig. 1. The results of this section can be extended to the Pitman-Yor process, and more generally to species sampling models. Example (continuing). Given Π−2 = {A1 = {1}, A2 = {3}}, we have −1 Pr( item 2 assigned to A1 = {1}| Π−2 ) = Pr( item 2 assigned to A2 = {3}| Π−2 ) = (θ + 2) −1 and Pr( item 2 assigned to new cluster A3 | Π−2 ) = θ (θ + 2) . Given Π−2 = {A1 = {1, 3}}, item 2 is assigned to A1 with probability 1. 1 Note that this representation describes the full conditionals while the CRP represents the sequential updat- ing. 3 (a) (b) (d) (c) (e) Figure 1: Chinese wedding party. Consider a group of n guests attending a wedding party. Each of the n guests may belong to one or several cliques, i.e. maximal groups of people such that everybody knows everybody. The belonging of each guest to the different cliques is represented by color patches on the figures, and the graphical representation of the relationship between the guests is represented by the graphical model (e). (a) Suppose that the guests are already seated such that two guests cannot be together at the same table is they are not part of the same clique, or if there does not exist a group of other guests such that they are related (“Any friend of yours is a friend of mine”). 
(b) The guest number k leaves his table and either (c) joins a table where there are guests from the same clique as him, with probability proportional to the product of the number of guests from each clique over the product of the number of guests belonging to several cliques on that table or (d) he joins a new table with probability proportional to θ. 2.3 Monte Carlo inference 2.3.1 MCMC algorithm Using the full conditionals, a single site Gibbs sampler can easily be designed to approximate the posterior distribution Pr(Πn |z1:n ). Given a partition Πn , an item k is taken out of the partition. If G−k Π−k ∈ Pn−1 , item k keeps the same value. Otherwise, the item will be assigned to a cluster j, / j = 1, . . . , n(Π−k ), with probability proportional to p(z{k}∪Aj,−k ) × p(zAj,−k ) {C∈C|n−k,j,C >0} n−k,j,C {S∈S|n−k,j,S >0} n−k,j,S (9) and the item will be assigned to a new cluster with probability proportional to p(z{k} ) × θ. Similarly to [3], we can also define a procedure to sample from p(θ|n(Πn ) = k)). We assume that θ ∼ G(a, b) and use p auxiliary variables x1 , . . . , xp . The procedure is as follows. • For j = 1, . . . , p, sample xj |k, θ ∼ Beta(θ + nSj , nCj − nSj ) • Sample θ|k, x1:p ∼ G(a + k, b − j log xj ) 2.3.2 Sequential Monte Carlo We have so far only treated the case of an undirected decomposable graph G. We can formulate a sequential updating rule for the corresponding perfect directed version D of G. Indeed, let (a1 , . . . a|V | ) be a perfect ordering and pa(ak ) be the set of parents of ak which is by definition complete. Let Πk−1 = {A1,k−1 , . . . , An(Πk−1 ),k−1 } denote the partition of the first k−1 vertices a1:k−1 and let nj,pa(ak ) be the number of elements with value j in the set pa(ak ), j = 1, . . . , n(Πk−1 ). Then the vertex ak joins the set j with probability nj,pa(ak ) / θ + cluster with probability θ/ θ + q q nq,pa(ak ) and creates a new nq,pa(ak ) . One can then design a particle filter/SMC method in a similar fashion as [4]. Consider a set of (i) (i) (i) (i) N N particles Πk−1 with weights wk−1 ∝ Pr(Πk−1 , z1:k−1 ) ( i=1 wk−1 = 1) that approximate (i) the posterior distribution Pr(Πk−1 |z1:k−1 ). For each particle i, there are n(Πk−1 ) + 1 possible 4 (i,j) allocations for component ak . We denote Πk the partition obtained by associating component ak (i,j) to cluster j. The weight associated to Πk is given by  nj,pa(ak ) (i)  if j = 1, . . . , n(Πk−1 ) θ+ q nq,pa(ak ) (i,j) (i) p(z{ak }∪Aj,k−1 ) wk−1 = wk−1 × (10) (i) θ  θ+ n p(zAj,k−1 ) if j = n(Πk−1 ) + 1 q q,pa(ak ) (i,j) Then we can perform a deterministic resampling step by keeping the N particles Πk with highest (i,j) (i) (i) weights wk−1 . Let Πk be the resampled particles and wk the associated normalized weights. 3 Prior distributions for infinite binary matrices on decomposable graphs Assume we have n objects; each of these objects being associated to the vertex of a graph G. To K each object is associated a K-dimensional binary vector zn = (zn,1 , . . . , zn,K ) ∈ {0, 1} where zn,i = 1 if object n possesses feature i and zn,i = 0 otherwise. These vectors zt form a binary n × K matrix denoted Z1:n . We denote by ξ1:n the associated equivalence class of left-ordered matrices and let EK be the set of left-ordered matrices with at most K features. In the standard case where the graph G is complete, we review briefly here two popular prior distributions on Z1:n , equivalently on ξ1:n : the Beta-Bernoulli model and the IBP [5]. We then extend these models to undirected decomposable graphs. 
This can be used for example to define a time-varying IBP as illustrated in Section 4. 3.1 Beta-Bernoulli and IBP distributions The Beta-Bernoulli distribution over the allocation Z1:n is K Pr(Z1:n ) = α + K )Γ(n − nj + 1) α Γ(n + 1 + K ) α K Γ(nj j=1 (11) where nj is the number of objects having feature j. It follows that Pr(ξ1:n ) = K K! 2n −1 h=0 α K Γ(nj α + K )Γ(n − nj + 1) α Γ(n + 1 + K ) Kh ! j=1 (12) where Kh is the number of features possessing the history h (see [5] for details). The nonparametric model is obtained by taking the limit when K → ∞ Pr(ξ1:n ) = αK K+ + 2n −1 h=1 Kh ! exp(−αHn ) where K + is the total number of features and Hn = 3.2 (n − nj )!(nj − 1)! n! j=1 n 1 k=1 k . (13) The IBP follows from (13). Markov combination of Beta-Bernoulli and IBP distributions Let G be a decomposable undirected graph, C = {C1 , . . . , Cp } a perfect ordering of the cliques and S = {S2 , . . . , Cp } the associated separators. As in the Dirichlet-multinomial case, it is easily seen that if for each clique C ∈ C, the marginal distribution is defined by (11), then these distributions are consistent as they yield the same distribution (11) over the separators. Therefore, the unique Markov distribution over G with Beta-Bernoulli distribution over the cliques is defined by [8] Pr(ZC ) S∈S Pr(ZS ) C∈C Pr(Z1:n ) = (14) where Pr(ZB ) given by (11) for each complete set B ⊆ G. The prior over ξ1:n is thus given, for ξ1:n ∈ EK , by Pr(ξ1:n ) = K! 2n −1 h=0 Kh ! α K α Γ(nj,C + K )Γ(nC −nj,C +1) α Γ(nC +1+ K ) α α Γ(nj,S + K )Γ(nS −nj,S +1) K K α j=1 Γ(nS +1+ K ) K j=1 C∈C S∈S 5 (15) where for each complete set B ⊆ G, nj,B is the number of items having feature j, j = 1, . . . , K in the set B and nB is the whole set of objects in set B. Taking the limit when K → ∞, we obtain after a few calculations Pr(ξ1:n ) = α + K[n] exp [−α ( C HnC − 2n −1 h=1 Kh ! HnS )] × C∈C + KC (nC −nj,C )!(nj,C −1)! j=1 nC ! S∈S S + KS (nS −nj,S )!(nj,S −1)! j=1 nS ! + + + + if K[n] = C KC − S KS and 0 otherwise, where KB is the number of different features possessed by objects in B. G Let En be the subset of En such that for each decomposition A, B and any (u, v) ∈ A × B: {u and v possess feature j} ⇒ ∃k ∈ A ∩ B such that {k possesses feature j}. Let ξ−k be the left-ordered + matrix obtained by removing object k from ξn and K−k be the total number of different features in G−k + ξ−k . For each feature j = 1, . . . , K−k , if ξ−k ∈ En−1 then we have   b C∈C nj,C if i = 1 S∈C nj,S Pr(ξk,j = i) = (16)  b C∈C (nC −nj,C ) if i = 0 (nS −nj,S ) S∈C nS where b is the appropriate normalizing constant then the customer k tries Poisson α {S∈S|k∈S} nC {C∈C|k∈C} new dishes. We can easily generalize this construction to a directed version D of G using arguments similar to those presented in Section 2; see Section 4 for an application to time-varying matrix factorization. 4 4.1 Applications Sharing clusters among relative groups: An alternative to HDP Consider that we are given d groups with nj data yi,j in each group, i = 1, . . . , nj , j = 1, . . . , d. We consider latent cluster variables zi,j that define the partition of the data. We will use alternatively the notation θi,j = Uzi,j in the following. Hierarchical Dirichlet Process [12] (HDP) is a very popular model for sharing clusters among related groups. It is based on a hierarchy of DPs G0 ∼ DP (γ, H), Gj |G0 ∼ DP (α, G0 ) j = 1, . . . d θi,j |Gj ∼ Gj , yi,j |θi,j ∼ f (θi,j ) i = 1, . . . , nj . 
Under conjugacy assumptions, G0 , Gj and U can be integrated out and we can approximate the marginal posterior of (zi,j ) given y = (yi,j ) with Gibbs sampling using the Chinese restaurant franchise to sample from the full conditional p(zi,j |z−{i,j} , y). Using the graph formulation defined in Section 2, we propose an alternative to HDP. Let θ0,1 , . . . , θ0,N be N auxiliary variables belonging to what we call group 0. We define each clique Cj (j = 1, . . . , d) to be composed of elements from group j and elements from group 0. This defines a decomposable graphical model whose separator is given by the elements of group 0. We can rewrite the model in a way quite similar to HDP G0 ∼ DP (α, H), θ0,i |G0 ∼ G0 i = 1, ..., N α α Gj |θ0,1 , . . . , θ0,N ∼ DP (α + N, α+N H + α+N θi,j |Gj ∼ Gj , yi,j |θi,j ∼ f (θi,j ) i = 1, . . . , nj N i=1 δθ0,i ) j = 1, . . . d, N For any subset A and j = k ∈ {1, . . . , p} we have corr(Gj (A), Gk (A)) = α+N . Again, under conjugacy conditions, we can integrate out G0 , Gj and U and approximate the marginal posterior distribution over the partition using the Chinese wedding party process defined in Section 2. Note that for latent variables zi,j , j = 1, . . . , d, associated to data, this is the usual CRP update. As in HDP, multiple layers can be added to the model. Figures 2 (a) and (b) resp. give the graphical DP alternative to HDP and 2-layer HDP. 6 z0 root z0 root corpora docs z1 z2 z1 z2 z3 z1,1 z1,2 z2,1 z2,2 z2,3 docs (a) Graphical DP alternative to HDP (b) Graphical DP alternative to 2-layer HDP Figure 2: Hierarchical Graphs of dependency with (a) one layer and (b) two layers of hierarchy. If N = 0, then Gj ∼ DP (α, H) for all j and this is equivalent to setting γ → ∞ in HDP. If N → ∞ then Gj = G0 for all j, G0 ∼ DP (α, H). This is equivalent to setting α → ∞ in the HDP. One interesting feature of the model is that, contrary to HDP, the marginal distribution of Gj at any layer of the tree is DP (α, H). As a consequence, the total number of clusters scales logarithmically (as in the usual DP) with the size of each group, whereas it scales doubly logarithmically in HDP. Contrary to HDP, there are at most N clusters shared between different groups. Our model is in that sense reminiscent of [9] where only a limited number of clusters can be shared. Note however that contrary to [9] we have a simple CRP-like process. The proposed methodology can be straightforwardly extended to the infinite HMM [12]. The main issue of the proposed model is the setting of the number N of auxiliary parameters. Another issue is that to achieve high correlation, we need a large number of auxiliary variables. Nonetheless, the computational time used to sample from auxiliary variables is negligible compared to the time used for latent variables associated to data. Moreover, it can be easily parallelized. The model proposed offers a far richer framework and ensures that at each level of the tree, the marginal distribution of the partition is given by a DP partition model. 4.2 Time-varying matrix factorization Let X1:n be an observed matrix of dimension n × D. We want to find a representation of this matrix in terms of two latent matrices Z1:n of dimension n × K and Y of dimension K × D. Here Z1:n 2 is a binary matrix whereas Y is a matrix of latent features. 
By assuming that Y ∼ N(0, σ_Y^2 I_{K×D}) and X_{1:n} = Z_{1:n} Y + σ_X ε_n, where ε_n ∼ N(0, I_{n×D}), we obtain

p(X_{1:n} \mid Z_{1:n}) \propto \frac{\big| Z_{1:n}^{+T} Z_{1:n}^{+} + (\sigma_X^2/\sigma_Y^2) I_{K_n^+} \big|^{-D/2}}{\sigma_X^{(n - K_n^+) D} \, \sigma_Y^{K_n^+ D}} \exp\Big( -\frac{1}{2\sigma_X^2} \mathrm{tr}\big( X_{1:n}^T \Sigma_n^{-1} X_{1:n} \big) \Big)    (17)

where \Sigma_n^{-1} = I - Z_{1:n}^{+} \big( Z_{1:n}^{+T} Z_{1:n}^{+} + (\sigma_X^2/\sigma_Y^2) I_{K_n^+} \big)^{-1} Z_{1:n}^{+T}, K_n^+ is the number of non-zero columns of Z_{1:n}, and Z_{1:n}^{+} is the matrix of the first K_n^+ columns of Z_{1:n}. To avoid having to set K, [5, 14] assume that Z_{1:n} follows an IBP. The resulting posterior distribution p(Z_{1:n} | X_{1:n}) can be estimated through MCMC [5] or SMC [14].

We consider here a different model, where the object X_t is assumed to arrive at time index t and we want a prior distribution on Z_{1:n} ensuring that objects close in time are more likely to possess similar features. To achieve this, we consider the simple directed graphical model D of Fig. 3, where the site numbering corresponds to a time index and a perfect numbering of D is (1, 2, ...). The set of parents pa(t) is composed of the r preceding sites {t − r, ..., t − 1}. The time-varying IBP to sample from p(Z_{1:n}) associated to this directed graph follows from (16) and proceeds as follows.

[Figure 3: Directed graph; each site t has the r preceding sites {t − r, ..., t − 1} as parents.]

At time t = 1:
• Sample K_1^{new} ∼ Poisson(α), set z_{1,i} = 1 for i = 1, ..., K_1^{new} and set K_1^+ = K_1^{new}.

At times t = 2, ..., r:
• For k = 1, ..., K_t^+, sample z_{t,k} ∼ Ber(n_{1:t−1,k}/t) and K_t^{new} ∼ Poisson(α/t).

At times t = r + 1, ..., n:
• For k = 1, ..., K_t^+, sample z_{t,k} ∼ Ber(n_{t−r:t−1,k}/(r + 1)) and K_t^{new} ∼ Poisson(α/(r + 1)).

Here K_t^+ is the total number of features appearing from time max(1, t − r) to t − 1, and n_{t−r:t−1,k} is the restriction of n_{1:t−1,k} to the r last customers.

Using (17) and the prior distribution of Z_{1:n}, which can be sampled using the time-varying IBP described above, we can easily design an SMC method to sample from p(Z_{1:n} | X_{1:n}); we do not detail it here. Note that, contrary to [14], our algorithm does not require inverting a matrix whose dimension grows linearly with the size of the data, but only a matrix of dimension r × r.

To illustrate the model and the SMC algorithm, we create 200 images of size 6 × 6 using a ground truth Y consisting of 4 different 6 × 6 latent images. The 200 × 4 binary matrix was generated from Pr(z_{t,k} = 1) = π_{t,k}, where π_t = (0.6, 0.5, 0, 0) if t = 1, ..., 30, π_t = (0.4, 0.8, 0.4, 0) if t = 31, ..., 50 and π_t = (0, 0.3, 0.6, 0.6) if t = 51, ..., 200. The order of the model is set to r = 50. The feature occurrences Z_{1:n}, the true features Y, and their estimates are represented in Figure 4. Two spurious features are detected by the model (features 2 and 5 in Fig. 4(c)) but quickly discarded (Fig. 4(d)). The algorithm is able to correctly estimate the varying prior occurrences of the features over time.

[Figure 4: (a) True features, (b) true feature occurrences, (c) MAP estimate Z_MAP and (d) associated E[Y | Z_MAP].]

[Figure 5: (a) E[X_t | π_t, Y] and (b) E[X_t | X_{1:t−1}] at t = 20, 50, 100, 200.]

5 Related work and discussion

The fixed-lag version of the time-varying DP of Caron et al. [1] is a special case of the proposed model when G is given by Fig. 3. The bivariate DP of Walker and Muliere [13] is also a special case when G has only two cliques.
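For concreteness, here is a small sketch of the time-varying IBP prior sampler described in Section 4.2 above, written directly from the three-stage description (t = 1, then t = 2, ..., r, then t > r); the function name, the dictionary-of-rows representation, and the numeric settings in the example call are assumptions made for illustration, not code from the paper.

```python
import numpy as np

def sample_tv_ibp(n, alpha, r, rng=None):
    """Draw a binary feature matrix Z (n x K_total) from the time-varying IBP:
    at time t, a feature seen in the r preceding customers is kept with
    probability (its count in that window) / min(t, r + 1), and
    Poisson(alpha / min(t, r + 1)) new features are introduced."""
    rng = rng or np.random.default_rng()
    rows, K_total = [], 0            # rows[t] maps feature index -> 1
    for t in range(1, n + 1):
        denom = min(t, r + 1)
        window = rows[max(0, t - 1 - r):t - 1]   # the r preceding customers
        z_t = {}
        # resample features that appeared in the window
        active = set(k for row in window for k in row)
        for k in active:
            count = sum(row.get(k, 0) for row in window)
            if rng.random() < count / denom:
                z_t[k] = 1
        # introduce new features
        k_new = rng.poisson(alpha / denom)
        for k in range(K_total, K_total + k_new):
            z_t[k] = 1
        K_total += k_new
        rows.append(z_t)
    # convert to a dense 0/1 matrix
    Z = np.zeros((n, K_total), dtype=int)
    for t, row in enumerate(rows):
        for k in row:
            Z[t, k] = 1
    return Z

# Example call: 200 customers with order r = 50, as in the illustration above
Z = sample_tv_ibp(n=200, alpha=2.0, r=50)
```

Feeding such prior draws into the collapsed likelihood (17) is the role of the SMC scheme mentioned above; that part is not sketched here.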
In this paper, we have assumed that the structure of the graph was known beforehand, and we have shown that many flexible models arise from this framework. It would be interesting in the future to investigate the case where the graphical structure is unknown and must be estimated from the data.

Acknowledgments
The authors thank the reviewers for their comments, which helped to improve the writing of the paper.

References
[1] F. Caron, M. Davy, and A. Doucet. Generalized Polya urn for time-varying Dirichlet process mixtures. In Uncertainty in Artificial Intelligence, 2007.
[2] A.P. Dawid and S.L. Lauritzen. Hyper Markov laws in the statistical analysis of decomposable graphical models. The Annals of Statistics, 21:1272–1317, 1993.
[3] M.D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90:577–588, 1995.
[4] P. Fearnhead. Particle filters for mixture models with an unknown number of components. Statistics and Computing, 14:11–21, 2004.
[5] T.L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems, 2006.
[6] D. Heinz. Building hyper Dirichlet processes for graphical models. Electronic Journal of Statistics, 3:290–315, 2009.
[7] J.F.C. Kingman. Random partitions in population genetics. Proceedings of the Royal Society of London, 361:1–20, 1978.
[8] S.L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[9] P. Müller, F. Quintana, and G. Rosner. A method for combining inference across related nonparametric Bayesian models. Journal of the Royal Statistical Society B, 66:735–749, 2004.
[10] R.M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
[11] J. Pitman. Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields, 102:145–158, 1995.
[12] Y.W. Teh, M.I. Jordan, M.J. Beal, and D.M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581, 2006.
[13] S. Walker and P. Muliere. A bivariate Dirichlet process. Statistics and Probability Letters, 64:1–7, 2003.
[14] F. Wood and T.L. Griffiths. Particle filtering for nonparametric Bayesian matrix factorization. In Advances in Neural Information Processing Systems, 2007.

4 0.70336592 113 nips-2009-Improving Existing Fault Recovery Policies

Author: Guy Shani, Christopher Meek

Abstract: An automated recovery system is a key component in a large data center. Such a system typically employs a hand-made controller created by an expert. While such controllers capture many important aspects of the recovery process, they are often not systematically optimized to reduce costs such as server downtime. In this paper we describe a passive policy learning approach for improving existing recovery policies without exploration. We explain how to use data gathered from the interactions of the hand-made controller with the system to create an improved controller. We suggest learning an indefinite-horizon Partially Observable Markov Decision Process, a model for decision making under uncertainty, and solving it using a point-based algorithm. We describe the complete process, starting with data gathering and continuing through model learning, model checking, and policy computation. 1

5 0.70244616 250 nips-2009-Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference

Author: Khashayar Rohanimanesh, Sameer Singh, Andrew McCallum, Michael J. Black

Abstract: Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses causes exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local minima: the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL). Rather than setting parameters to maximize the likelihood of the training data, the parameters of the factor graph are treated as a log-linear function approximator and learned with methods of temporal difference (TD); MAP inference is performed by executing the resulting policy on held-out test data. Our method allows efficient gradient updates since only factors in the neighborhood of variables affected by an action need to be computed; we bypass the need to compute marginals entirely. Our method yields dramatic empirical success, producing new state-of-the-art results on a complex joint model of ontology alignment, with a 48% reduction in error over the previous state-of-the-art in that domain. 1
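The TD component mentioned in this abstract can be illustrated generically; the sketch below is a plain TD(0) update with a linear value approximator on a toy chain, not the authors' factor-graph system, and every name and number in it is an illustrative assumption.

```python
import numpy as np

def td0_linear(features, transitions, rewards, lr=0.05, gamma=0.95, epochs=20):
    """Generic TD(0) with a linear value function V(s) = w . phi(s).
    `features[s]` is the feature vector of state s; `transitions` and `rewards`
    are the observed (s, s') pairs and the rewards received on those steps."""
    w = np.zeros(len(next(iter(features.values()))))
    for _ in range(epochs):
        for (s, s_next), r in zip(transitions, rewards):
            phi, phi_next = features[s], features[s_next]
            td_error = r + gamma * np.dot(w, phi_next) - np.dot(w, phi)
            w += lr * td_error * phi   # gradient step on the TD error
    return w

# Tiny synthetic chain just to exercise the update rule
feats = {s: np.eye(4)[s] for s in range(4)}        # one-hot state features
trans = [(0, 1), (1, 2), (2, 3), (3, 3)]
rews = [0.0, 0.0, 0.0, 1.0]
w = td0_linear(feats, trans, rews)
```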

6 0.6938259 19 nips-2009-A joint maximum-entropy model for binary neural population patterns and continuous signals

7 0.6936444 155 nips-2009-Modelling Relational Data using Bayesian Clustered Tensor Factorization

8 0.69192445 162 nips-2009-Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling

9 0.68559474 132 nips-2009-Learning in Markov Random Fields using Tempered Transitions

10 0.685031 70 nips-2009-Discriminative Network Models of Schizophrenia

11 0.68471217 28 nips-2009-An Additive Latent Feature Model for Transparent Object Recognition

12 0.68283105 99 nips-2009-Functional network reorganization in motor cortex can be explained by reward-modulated Hebbian learning

13 0.68112415 3 nips-2009-AUC optimization and the two-sample problem

14 0.68108588 18 nips-2009-A Stochastic approximation method for inference in probabilistic graphical models

15 0.68091047 41 nips-2009-Bayesian Source Localization with the Multivariate Laplace Prior

16 0.67974007 158 nips-2009-Multi-Label Prediction via Sparse Infinite CCA

17 0.67882156 204 nips-2009-Replicated Softmax: an Undirected Topic Model

18 0.67772037 168 nips-2009-Non-stationary continuous dynamic Bayesian networks

19 0.67666674 215 nips-2009-Sensitivity analysis in HMMs with application to likelihood maximization

20 0.67659634 174 nips-2009-Nonparametric Latent Feature Models for Link Prediction