nips nips2007 nips2007-162 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Chris Atkeson, Benjamin Stephens
Abstract: We combine three threads of research on approximate dynamic programming: sparse random sampling of states, value function and policy approximation using local models, and using local trajectory optimizers to globally optimize a policy and associated value function. Our focus is on finding steady state policies for deterministic time invariant discrete time control problems with continuous states and actions often found in robotics. In this paper we show that we can now solve problems we couldn’t solve previously. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract: We combine three threads of research on approximate dynamic programming: sparse random sampling of states, value function and policy approximation using local models, and using local trajectory optimizers to globally optimize a policy and associated value function. [sent-10, score-1.83]
2 Our focus is on finding steady state policies for deterministic time invariant discrete time control problems with continuous states and actions often found in robotics. [sent-11, score-0.678]
3 Dynamic programming provides a way to find globally optimal control laws, given a one step cost (a.k.a. the reward or loss function). [sent-14, score-0.389]
4 We focus on control problems with continuous states and actions, deterministic time invariant discrete time dynamics xk+1 = f(xk , uk ), and a time invariant one step cost function L(x, u). [sent-18, score-0.518]
5 We explore approximating the value function and policy using many local models. [sent-23, score-0.538]
6 An example problem: We use one link pendulum swingup in this introduction to provide the reader with a visualizable example of a value function and policy. [sent-24, score-0.748]
7 In one link pendulum swingup a motor at the base of the pendulum swings a rigid arm from the downward stable equilibrium to the upright unstable equilibrium and balances the arm there (Figure 1). [sent-25, score-0.88]
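To make the problem class concrete, here is a minimal sketch of a one link pendulum swingup problem in the form used in this paper, with deterministic discrete time dynamics xk+1 = f(xk, uk) and a time invariant one step cost L(x, u). The Euler integration, physical parameters, and quadratic cost weights below are illustrative assumptions rather than the authors' values.

```python
import numpy as np

# Hypothetical one link pendulum swingup problem: x = (theta, theta_dot), with
# theta measured from the hanging-down position, so the goal is (pi, 0).
# All parameters and cost weights are assumptions made for illustration.

DT = 0.01                      # integration time step (s)
MASS, LENGTH, G = 1.0, 1.0, 9.81
GOAL = np.array([np.pi, 0.0])  # upright, at rest

def f(x, u):
    """Deterministic time invariant discrete time dynamics (forward Euler)."""
    theta, theta_dot = x
    theta_ddot = (u - MASS * G * LENGTH * np.sin(theta)) / (MASS * LENGTH ** 2)
    return np.array([theta + DT * theta_dot, theta_dot + DT * theta_ddot])

def L(x, u, q=1.0, r=0.01):
    """Time invariant one step cost: quadratic in state error and torque."""
    err = x - GOAL
    return DT * (q * err @ err + r * u * u)

x0 = np.array([0.0, 0.0])      # hanging straight down, at rest
print(f(x0, 2.0), L(x0, 2.0))
```

Any such (f, L) pair defines the kind of deterministic control problem treated in the rest of the paper.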
8 Figure 2 shows the value function and policy generated by dynamic programming. [sent-33, score-0.51]
9 Figure 1: Configurations from the simulated one link pendulum optimal trajectory every half a second and at the end of the trajectory. [sent-116, score-0.792]
10 Figure 2:right shows such a randomly generated set of states superimposed on a contour plot of the value function for one link swingup. [sent-118, score-0.606]
11 One technique that we apply here is to represent full trajectories from each sampled state to the goal, and to refine each trajectory using local trajectory optimization [8]. [sent-120, score-1.13]
12 Figure 2:right shows a set of optimized trajectories from the sampled states to the goal. [sent-121, score-0.364]
13 One key aspect of the local trajectory optimizer we use is that it provides a local quadratic model of the value function and a local linear model of the policy at the sampled state. [sent-122, score-1.271]
14 Dynamic programming can be done for linear quadratic regulator (LQR) problems even with hundreds of dimensions and it is not necessary to build a grid of states [9]. [sent-128, score-0.499]
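As a concrete reference point for the LQR machinery invoked here, the sketch below iterates the discrete time Riccati recursion to a fixed point, yielding a linear policy u = -Kx and a quadratic value model x'Px; the linearized pendulum matrices are assumed values, not taken from the paper.

```python
import numpy as np

# Minimal discrete time LQR sketch: dynamics x_{k+1} = A x + B u, cost x'Qx + u'Ru.
# In the setting of the paper, A and B would come from linearizing f about the goal.

def lqr(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # gain matrix
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
    return K, P  # policy u = -K x, quadratic value model x'Px

# Assumed linearization of the one link pendulum about the upright equilibrium.
dt, g, l = 0.01, 9.81, 1.0
A = np.array([[1.0, dt], [g / l * dt, 1.0]])
B = np.array([[0.0], [dt]])
K, P = lqr(A, B, Q=np.diag([1.0, 0.1]), R=np.array([[0.01]]))
print("LQR gain:", K)
```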
15 Another goal of our work is to develop methods that can take advantage of situations where only a small amount of global interaction is necessary to enable local planners capable of solving local problems to find globally optimal solutions. [sent-135, score-0.573]
16 2 Related Work. Random state selection: Random grids and random sampling are well known in numerical integration, finite element methods, and partial differential equations. [sent-136, score-0.34]
17 Rust applied random sampling of states to dynamic programming [1, 10]. [sent-137, score-0.523]
18 He showed that random sampling of states can avoid the curse of dimensionality for stochastic dynamic programming problems with a finite set of discrete actions. [sent-138, score-0.587]
19 The practicality and usefulness of random sampling of states in deterministic dynamic programming with continuous actions (the focus of our paper) remains an open question. [sent-141, score-0.652]
20 Alternatives to random sampling of states are irregular or adaptive grids [13], but in our experience they still require too many representational resources as the problem dimensionality increases. [sent-143, score-0.379]
21 In reinforcement learning random sampling of states is sometimes used to provide training data for function approximation of the value function. [sent-144, score-0.441]
22 In model-free approaches exploration is used to find actions and states that lead to better outcomes. [sent-146, score-0.371]
23 The optimal trajectory is shown as a yellow line in the value function plot, and as a black line with a yellow border in the policy plot. [sent-151, score-0.813]
24 Right: Random states (dots) and trajectories (black lines) used to plan one link swingup, superimposed on a contour map of the value function. [sent-154, score-0.647]
25 In the field of Partially Observable Markov Decision Processes (POMDPs) there has been some work on randomly sampling belief states, and also using local models of the value function and its first derivative at each randomly sampled belief state (for example [2, 3, 4, 5, 6, 7]). [sent-155, score-0.597]
26 Thrun explored random sampling of belief states where the underlying states and actions were continuous [7]. [sent-156, score-0.729]
27 He used a nearest neighbor scheme to perform value function interpolation, and a coverage test to decide whether to accept a new random state (is a new random state far enough from existing states?). [sent-157, score-0.568]
28 In robot planning for obstacle avoidance random sampling of states is now quite popular [14]. [sent-160, score-0.435]
29 3 Combining Random State Sampling With Local Optimization. The process of using the Bellman equation to update a representation of the value function by minimizing over all actions at a state is referred to as value iteration. [sent-164, score-0.387]
30 Standard value iteration represents the value function and associated policy using multidimensional tables, with each entry in the table corresponding to a particular state. [sent-165, score-0.477]
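For contrast with the sampled-state approach introduced next, here is a bare-bones tabular value iteration sketch over a hypothetical discretization; next_state_index and one_step_cost are placeholder functions standing in for a gridded model of the dynamics and cost.

```python
import numpy as np

# Tabular value iteration: the value function and policy are tables indexed by a
# discretized state, and the Bellman backup minimizes over a discrete action set.

def value_iteration(n_states, n_actions, next_state_index, one_step_cost,
                    gamma=1.0, sweeps=100):
    V = np.zeros(n_states)
    policy = np.zeros(n_states, dtype=int)
    for _ in range(sweeps):
        for s in range(n_states):
            q = [one_step_cost(s, a) + gamma * V[next_state_index(s, a)]
                 for a in range(n_actions)]
            policy[s] = int(np.argmin(q))   # minimize over all actions at this state
            V[s] = q[policy[s]]             # in-place (Gauss-Seidel style) update
    return V, policy
```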
31 In our approach we randomly select states, and associate with each state a local quadratic model of the value function and a local linear model of the policy. [sent-166, score-0.566]
32 In our current implementation the value function and policy are represented through a combination of sampled and parametric representations, building global approximations by combining local models. [sent-169, score-0.631]
33 The second is our analog of using the Bellman equation: use the cost of a trajectory starting from the state under consideration and following the current global policy. [sent-173, score-0.619]
34 As in a Bellman update, there is a way to globally optimize the value of a state by considering many possible “actions”. [sent-176, score-0.361]
35 In our approach we consider many local policies associated with different stored states. [sent-177, score-0.434]
36 Taking advantage of goal states: For problems with goal states there are several ways to speed up convergence. [sent-178, score-0.366]
37 In cases where LQR techniques apply [9], we use the policy obtained by solving the corresponding LQR control problem at the goal as the default policy everywhere, to which the policy computed by dynamic programming is added. [sent-179, score-1.368]
38 [15] plots an example of a default policy and the policy generated by dynamic programming for comparison. [sent-180, score-0.942]
39 If the dynamic programming process can get within the LQR radius of the goal, it can use only the default policy to go the rest of the way. [sent-184, score-0.607]
40 In this paper the swingup problems use an LQR default policy, which was limited for each action dimension to ±5 Nm. [sent-187, score-0.411]
41 We note that for the swingup problems shown here the default LQR policy is capable of balancing the inverted pendulum at the goal, but is not capable of swinging up the pendulum to the goal. [sent-189, score-1.152]
42 We also initially only generate the value function and policy in the region near the goal. [sent-190, score-0.406]
43 We propose a hybrid tabular and parametric approach: parametric local models of the value function and policy are represented at sampled locations. [sent-195, score-0.597]
44 The local linear model for the policy is û^p(x) = u0^p − K^p x (Eq. 2), where u0^p is the constant term of the local policy, and K^p is the first derivative of the local policy and also the gain matrix for a local linear controller. [sent-198, score-1.489]
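The local models stored at a sampled state can be packaged as below. Whether the models are expressed in global coordinates (as Eq. 2 is written) or relative to the stored state is a representational choice; this sketch uses state-relative coordinates, which is an assumption made for clarity.

```python
import numpy as np

# Local quadratic value model and local linear policy attached to a stored state.
# V0, Vx, Vxx and u0, K would come from the local trajectory optimizer.

class LocalModel:
    def __init__(self, x0, V0, Vx, Vxx, u0, K):
        self.x0, self.V0, self.Vx, self.Vxx = x0, V0, Vx, Vxx
        self.u0, self.K = u0, K

    def value(self, x):
        """Local quadratic model of the value function about x0."""
        dx = x - self.x0
        return self.V0 + self.Vx @ dx + 0.5 * dx @ self.Vxx @ dx

    def policy(self, x):
        """Local linear policy: u(x) = u0 - K (x - x0)."""
        return self.u0 - self.K @ (x - self.x0)
```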
45 Creating the local model: These local models of the value function can be created using Differential Dynamic Programming (DDP) [17, 18, 8, 16]. [sent-199, score-0.335]
46 This local trajectory optimization process is similar to linear quadratic regulator design in that a local model of the value function is produced. [sent-200, score-0.791]
47 In DDP, value function and policy models are produced at each point along a trajectory. [sent-201, score-0.406]
48 After the backward sweep, forward integration can be used to update the trajectory itself: ui^new = ui − ∆ui − Ki (xi^new − xi). [sent-206, score-0.461]
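A generic DDP/iLQR-style sketch of the backward sweep and the forward integration step described above, under the assumption that the derivatives of the dynamics (fx, fu) and of the one step cost (lx, lu, lxx, luu, lux) have already been evaluated along the current trajectory; regularization and line search are omitted. This is an illustrative reconstruction, not the authors' implementation.

```python
import numpy as np

def backward_sweep(fx, fu, lx, lu, lxx, luu, lux, Vx, Vxx):
    """Backward pass: returns feedforward corrections du and feedback gains K.
    Vx, Vxx are the derivatives of the terminal value model."""
    N = len(fx)
    du, K = [None] * N, [None] * N
    for i in reversed(range(N)):
        Qx  = lx[i]  + fx[i].T @ Vx
        Qu  = lu[i]  + fu[i].T @ Vx
        Qxx = lxx[i] + fx[i].T @ Vxx @ fx[i]
        Quu = luu[i] + fu[i].T @ Vxx @ fu[i]
        Qux = lux[i] + fu[i].T @ Vxx @ fx[i]
        Quu_inv = np.linalg.inv(Quu)
        du[i] = Quu_inv @ Qu               # open-loop correction (Delta u_i)
        K[i]  = Quu_inv @ Qux              # local feedback gain K_i
        Vx  = Qx  - K[i].T @ Quu @ du[i]   # local value model propagated backward
        Vxx = Qxx - K[i].T @ Quu @ K[i]
    return du, K

def forward_pass(f, x0, u, du, K, x_ref):
    """Re-integrate the trajectory with u_i_new = u_i - du_i - K_i (x_i_new - x_i)."""
    x, xs, us = x0, [x0], []
    for i in range(len(u)):
        ui = u[i] - du[i] - K[i] @ (x - x_ref[i])
        x = f(x, ui)
        us.append(ui)
        xs.append(x)
    return xs, us
```

The quadratic value model (Vx, Vxx) and linear policy (du, K) produced at the start of the trajectory are exactly the kind of local models stored with each sampled state.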
49 In problems that have a goal state, we can generate a trajectory from each stored state all the way to the goal. [sent-208, score-0.816]
50 The cost of this trajectory is an upper bound on the true value of the state, and is used to bound the estimated value for the old state. [sent-209, score-0.581]
51 Utilizing the local models: For the purpose of explaining our algorithm, let’s assume we already have a set of sampled states, each of which has a local model of the value function and the policy. [sent-210, score-0.394]
52 We use a kd-tree to efficiently find nearest neighbors, but there are many other approaches that will find nearby stored states efficiently. [sent-213, score-0.576]
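A minimal sketch of the nearest-neighbor lookup over stored states, assuming SciPy's kd-tree (the text only says a kd-tree is used; the library choice and the placeholder samples are assumptions).

```python
import numpy as np
from scipy.spatial import cKDTree

stored_states = np.random.uniform(-np.pi, np.pi, size=(1000, 2))  # placeholder samples
tree = cKDTree(stored_states)          # built once, rebuilt as states are added

def nearest_stored(x, k=1):
    """Return the indices of (and distances to) the k nearest stored states."""
    dist, idx = tree.query(x, k=k)
    return idx, dist

idx, dist = nearest_stored(np.array([0.5, -0.1]))
print("nearest stored state:", stored_states[idx], "at distance", dist)
```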
53 In the future we will investigate using other methods to combine local model predictions from nearby stored states: distance weighted averaging (kernel regression), linear locally weighted regression, and quadratic locally weighted regression for value functions. [sent-214, score-0.634]
54 Creating new random states: For tasks with a goal state, we initialize the set of stored states by storing the goal state itself. [sent-215, score-0.799]
55 We have explored a number of distributions to select additional states from: uniform within bounds on the states; Gaussian with the mean at the goal; sampling near existing states; and sampling from an underlying low resolution regular grid. [sent-216, score-0.47]
56 We have also explored changing the distribution we generate candidate states from as the algorithm progresses, for example using a mixture of Gaussians with the Gaussians centered on existing stored states. [sent-220, score-0.562]
57 Acceptance criteria for candidate states: We have several criteria to accept or reject states to be permanently stored. [sent-222, score-0.441]
58 Otherwise, a trajectory is created from the candidate state using the current approximated policy, and then locally optimized. [sent-227, score-0.638]
59 If the value of that trajectory is above Vlimit , the candidate state is rejected. [sent-228, score-0.65]
60 If the value of the trajectory is within 10% of the predicted value, the candidate state is rejected. [sent-229, score-0.65]
61 For problems with a goal state, if the trajectory does not reach the goal the candidate state is rejected. [sent-231, score-0.715]
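The acceptance test just described can be summarized as below; Vlimit, the 10% tolerance, and the helper callables approx_value (which evaluates the current global approximation at the candidate) and optimize_trajectory_from (which creates a trajectory with the current approximated policy, locally optimizes it, and reports its cost and whether it reached the goal) are placeholders introduced for illustration.

```python
V_LIMIT = 1e3        # assumed current value limit
REL_TOL = 0.10       # "within 10% of the predicted value"

def accept_candidate(x_candidate, approx_value, optimize_trajectory_from):
    v_pred = approx_value(x_candidate)               # prediction from nearby local models
    cost, reached_goal = optimize_trajectory_from(x_candidate)
    if cost > V_LIMIT:                               # trajectory value above Vlimit: reject
        return False
    if abs(cost - v_pred) <= REL_TOL * abs(v_pred):  # already well predicted: reject
        return False
    if not reached_goal:                             # must reach the goal state: reject
        return False
    return True                                      # store the state and its local models
```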
62 It is possible to take the distance to the nearest sampled state into account in the acceptance criteria for new samples. [sent-237, score-0.375]
63 Rejecting states that are too close to existing states will increase the error in representing the value function, but may be a way to prevent too many samples near complex regions of the value function that have little practical effect. [sent-239, score-0.602]
64 For example, we often do not need much accuracy in representing the value function near policy discontinuities, where the value function has discontinuities in its spatial derivative (“creases”). [sent-240, score-0.685]
65 Creating a trajectory from a state: We create a trajectory from a candidate state or refine a trajectory from a stored state in the same way. [sent-243, score-1.671]
66 The first step is to use the current approximated policy until the goal or a time limit is reached. [sent-244, score-0.388]
67 In the current implementation this involves finding the stored state nearest to the current state in the trajectory and using its locally linear policy to compute the action on each time step. [sent-245, score-1.357]
68 In the current implementation we do not save the trajectory but only the local models from its start. [sent-248, score-0.52]
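Creating a trajectory by following the current approximated policy can be sketched as a rollout that, at every time step, finds the nearest stored state and applies its locally linear policy; f, one_step_cost, tree, and models are assumed to exist as in the earlier sketches.

```python
import numpy as np

def rollout(x0, f, one_step_cost, tree, models, goal, max_steps=2000, goal_tol=1e-3):
    """Follow the current approximated policy until the goal or a time limit is reached."""
    x, cost = x0, 0.0
    for _ in range(max_steps):
        _, idx = tree.query(x)          # stored state nearest to the current state
        u = models[idx].policy(x)       # its locally linear policy
        cost += one_step_cost(x, u)
        x = f(x, u)
        if np.linalg.norm(x - goal) < goal_tol:
            return cost, True           # reached the goal; cost upper-bounds the true value
    return cost, False                  # time limit reached
```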
69 Figure 3: Configurations from the simulated two link pendulum optimal swingup trajectory every fifth of a second and at the end of the trajectory. [sent-345, score-0.792]
70 If the cost of the new trajectory is more than the currently stored value for the state, we reject the new value, as the values all come from actual trajectories and are upper bounds for the true value. [sent-346, score-0.813]
71 Combining parallel greedy local optimizers to perform global optimization: As currently described, the algorithm finds a locally optimal policy, but not necessarily a globally optimal policy. [sent-348, score-0.539]
72 For example, the stored states could be divided into two sets of nearest neighbors. [sent-349, score-0.542]
73 One set could have a suboptimal policy, but have no interaction with the other set of states that had a globally optimal policy since no nearest neighbor relations joined the two sets. [sent-350, score-0.896]
74 We expect the locally optimal policies to be fairly good because we 1) gradually increase the solved volume and 2) use local optimizers. [sent-351, score-0.399]
75 Given local optimization of actions, gradually increasing the solved volume will result in a globally optimal policy if the boundary of this volume never touches a non adjacent section of itself. [sent-352, score-0.784]
76 Figure 2 shows the creases in the value function (discontinuities in the spatial derivative) and corresponding discontinuities in the policy that typically result when the constant cost contour touches a non-adjacent section of itself as Vlimit is increased. [sent-353, score-0.694]
77 In theory, the approach we have described will produce a globally optimal policy if it has infinite resolution and all the stored states form a densely connected set in terms of nearest neighbor relations [8]. [sent-354, score-1.18]
78 By enforcing consistency of the local value function models across all nearest neighbor pairs, we can create a globally consistent value function estimate. [sent-355, score-0.557]
79 We can enforce consistency of nearest neighbor value functions by 1) using the policy of one state of a pair to reoptimize the trajectory of the other state of the pair and vice versa, and 2) adding more stored states in between nearest neighbors that continue to disagree [8]. [sent-358, score-1.738]
80 To increase the likelihood of finding a globally optimal policy with a limited resolution of stored states, we need an analog to exploration and to global minimization with respect to actions found in the Bellman equation. [sent-361, score-0.986]
81 We approximate this process by periodically reoptimizing each stored state using the policies of other stored states. [sent-362, score-0.676]
82 As more and more states are stored, and many alternative stored states are considered in optimizing any given stored state, a wide range of actions are considered for each state. [sent-363, score-1.015]
83 Each state could use the policy of a nearest neighbor, of a randomly chosen neighbor with the distribution being distance dependent, or of another state chosen randomly with no consideration of distance (which is what we currently do). [sent-366, score-0.871]
84 [8] describes how to follow a policy of another stored state if its trajectory is stored, or can be recomputed as needed. [sent-367, score-1.068]
85 In this work we explored a different approach that does not require each stored state to save its trajectory or recompute it. [sent-368, score-0.792]
86 To “follow” the policy of another state, we follow the locally linear policy for that state until the trajectory begins to go away from the state. [sent-369, score-1.234]
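A hedged sketch of this periodic cross-reoptimization: each stored state is rolled out under the policy of another, randomly chosen stored state, the resulting trajectory is locally reoptimized, and the stored value and models are replaced only if the new trajectory is cheaper (trajectory costs are upper bounds on the true value). The helper callables are placeholders, not the authors' API.

```python
import random

def reoptimize_stored_states(stored, rollout_with_policy_of, local_trajectory_opt):
    for s in stored:
        other = random.choice(stored)                     # another stored state, chosen at random
        traj = rollout_with_policy_of(other, start=s.x0)  # "follow" that state's linear policy
        cost, new_models = local_trajectory_opt(traj)     # refine with the local optimizer
        if cost < s.value:                                # accept only if the new value is lower
            s.value, s.models = cost, new_models
```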
87 Figure 4: Configurations from the simulated three link pendulum optimal trajectory every tenth of a second and at the end of the trajectory. [sent-468, score-0.792]
88 4 Results. In addition to the one link swingup example presented in the introduction, we present results on two link swingup (4 dimensional state) and three link swingup (6 dimensional state). [sent-469, score-1.422]
89 One link pendulum swingup: For the one link swingup case, the random state approach found a globally optimal trajectory (the same trajectory found by our grid based approaches [15]) after adding only 63 random states. [sent-472, score-2.044]
90 Figure 2:right shows the distribution of states and their trajectories superimposed on a contour map of the value function for one link swingup. [sent-473, score-0.647]
91 Two link pendulum swingup: For the two link swingup case, the random state approach finds what we believe is a globally optimal trajectory (the same trajectory found by our grid based approaches [15]) after storing an average of 12000 random states (Figure 3). [sent-474, score-2.33]
92 In this case the state has four dimensions (a position and velocity for each joint) and a two dimensional action (a torque at each joint). [sent-475, score-0.335]
93 The approximately 12000 sampled states should be compared to the millions of states used in grid-based approaches. [sent-481, score-0.519]
94 A 60x60x60x60 grid with almost 13 million states failed to find a trajectory as good as this one, while a 100x100x100x100 grid with 100 million states did find the same trajectory. [sent-482, score-0.945]
95 In 13 runs with different random number generator seeds, the mean number of states stored at convergence is 11430. [sent-483, score-0.491]
96 Three link pendulum swingup: For the three link swingup case, the random state approach found a good trajectory after storing less than 22000 random states (Figure 4). [sent-485, score-1.716]
97 We were not able to solve this problem using regular grid-based approaches due to limited state resolution: 22x22x22x22x38x44 = 391,676,032 states filled our largest memory. [sent-487, score-0.408]
98 5 Conclusion. We have combined random sampling of states and local trajectory optimization to create a promising approach to practical dynamic programming for robot control problems. [sent-490, score-1.104]
99 Using local trajectory optimizers to speed up global optimization in dynamic programming. [sent-540, score-0.666]
100 Nonparametric representation of policies and value functions: A trajectory based approach. [sent-588, score-0.504]
wordName wordTfidf (topN-words)
[('trajectory', 0.359), ('policy', 0.335), ('swingup', 0.292), ('states', 0.23), ('stored', 0.228), ('lqr', 0.203), ('pendulum', 0.203), ('link', 0.182), ('fi', 0.159), ('state', 0.146), ('globally', 0.144), ('vxi', 0.136), ('local', 0.132), ('qi', 0.129), ('dynamic', 0.104), ('actions', 0.099), ('vlimit', 0.097), ('default', 0.089), ('torque', 0.085), ('nearest', 0.084), ('discontinuities', 0.082), ('bellman', 0.08), ('cost', 0.08), ('programming', 0.079), ('sampling', 0.077), ('trajectories', 0.075), ('candidate', 0.074), ('policies', 0.074), ('value', 0.071), ('uu', 0.068), ('grid', 0.063), ('locally', 0.059), ('sampled', 0.059), ('luu', 0.058), ('lxx', 0.058), ('reoptimization', 0.058), ('tvxx', 0.058), ('ux', 0.058), ('resolution', 0.056), ('storing', 0.056), ('controller', 0.056), ('neighbor', 0.055), ('velocity', 0.054), ('goal', 0.053), ('robot', 0.052), ('quadratic', 0.051), ('atkeson', 0.051), ('ddp', 0.051), ('torques', 0.051), ('ui', 0.051), ('position', 0.05), ('optimal', 0.048), ('dynamics', 0.048), ('gradually', 0.048), ('contour', 0.048), ('criteria', 0.047), ('regulator', 0.046), ('differential', 0.045), ('derivative', 0.044), ('reject', 0.043), ('planning', 0.043), ('exploration', 0.042), ('superimposed', 0.041), ('xx', 0.041), ('taylor', 0.04), ('cga', 0.039), ('creases', 0.039), ('lx', 0.039), ('prm', 0.039), ('rrts', 0.039), ('touches', 0.039), ('xtvxx', 0.039), ('acceptance', 0.039), ('grids', 0.039), ('solved', 0.038), ('control', 0.038), ('optimizers', 0.037), ('currently', 0.037), ('sweep', 0.036), ('nearby', 0.034), ('curse', 0.034), ('thread', 0.034), ('couldn', 0.034), ('vxx', 0.034), ('randomly', 0.034), ('global', 0.034), ('ki', 0.034), ('random', 0.033), ('squared', 0.033), ('solve', 0.032), ('creating', 0.031), ('invariant', 0.031), ('problems', 0.03), ('reinforcement', 0.03), ('explored', 0.03), ('continuous', 0.03), ('converged', 0.03), ('rust', 0.029), ('save', 0.029), ('lu', 0.029)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000001 162 nips-2007-Random Sampling of States in Dynamic Programming
Author: Chris Atkeson, Benjamin Stephens
Abstract: We combine three threads of research on approximate dynamic programming: sparse random sampling of states, value function and policy approximation using local models, and using local trajectory optimizers to globally optimize a policy and associated value function. Our focus is on finding steady state policies for deterministic time invariant discrete time control problems with continuous states and actions often found in robotics. In this paper we show that we can now solve problems we couldn’t solve previously. 1
2 0.23800966 163 nips-2007-Receding Horizon Differential Dynamic Programming
Author: Yuval Tassa, Tom Erez, William D. Smart
Abstract: The control of high-dimensional, continuous, non-linear dynamical systems is a key problem in reinforcement learning and control. Local, trajectory-based methods, using techniques such as Differential Dynamic Programming (DDP), are not directly subject to the curse of dimensionality, but generate only local controllers. In this paper,we introduce Receding Horizon DDP (RH-DDP), an extension to the classic DDP algorithm, which allows us to construct stable and robust controllers based on a library of local-control trajectories. We demonstrate the effectiveness of our approach on a series of high-dimensional problems using a simulated multi-link swimming robot. These experiments show that our approach effectively circumvents dimensionality issues, and is capable of dealing with problems of (at least) 24 state and 9 action dimensions. 1
3 0.19414766 5 nips-2007-A Game-Theoretic Approach to Apprenticeship Learning
Author: Umar Syed, Robert E. Schapire
Abstract: We study the problem of an apprentice learning to behave in an environment with an unknown reward function by observing the behavior of an expert. We follow on the work of Abbeel and Ng [1] who considered a framework in which the true reward function is assumed to be a linear combination of a set of known and observable features. We give a new algorithm that, like theirs, is guaranteed to learn a policy that is nearly as good as the expert’s, given enough examples. However, unlike their algorithm, we show that ours may produce a policy that is substantially better than the expert’s. Moreover, our algorithm is computationally faster, is easier to implement, and can be applied even in the absence of an expert. The method is based on a game-theoretic view of the problem, which leads naturally to a direct application of the multiplicative-weights algorithm of Freund and Schapire [2] for playing repeated matrix games. In addition to our formal presentation and analysis of the new algorithm, we sketch how the method can be applied when the transition function itself is unknown, and we provide an experimental demonstration of the algorithm on a toy video-game environment. 1
4 0.19352019 168 nips-2007-Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods
Author: Alessandro Lazaric, Marcello Restelli, Andrea Bonarini
Abstract: Learning in real-world domains often requires to deal with continuous state and action spaces. Although many solutions have been proposed to apply Reinforcement Learning algorithms to continuous state problems, the same techniques can be hardly extended to continuous action spaces, where, besides the computation of a good approximation of the value function, a fast method for the identification of the highest-valued action is needed. In this paper, we propose a novel actor-critic approach in which the policy of the actor is estimated through sequential Monte Carlo methods. The importance sampling step is performed on the basis of the values learned by the critic, while the resampling step modifies the actor’s policy. The proposed approach has been empirically compared to other learning algorithms into several domains; in this paper, we report results obtained in a control problem consisting of steering a boat across a river. 1
5 0.18890733 91 nips-2007-Fitted Q-iteration in continuous action-space MDPs
Author: András Antos, Csaba Szepesvári, Rémi Munos
Abstract: We consider continuous state, continuous action batch reinforcement learning where the goal is to learn a good policy from a sufficiently rich trajectory generated by some policy. We study a variant of fitted Q-iteration, where the greedy action selection is replaced by searching for a policy in a restricted set of candidate policies by maximizing the average action values. We provide a rigorous analysis of this algorithm, proving what we believe is the first finite-time bound for value-function based algorithms for continuous state and action problems. 1
6 0.1833733 102 nips-2007-Incremental Natural Actor-Critic Algorithms
7 0.1759776 34 nips-2007-Bayesian Policy Learning with Trans-Dimensional MCMC
8 0.13558713 215 nips-2007-What makes some POMDP problems easy to approximate?
9 0.12844783 148 nips-2007-Online Linear Regression and Its Application to Model-Based Reinforcement Learning
10 0.12476208 124 nips-2007-Managing Power Consumption and Performance of Computing Systems Using Reinforcement Learning
11 0.12123439 185 nips-2007-Stable Dual Dynamic Programming
12 0.11893582 98 nips-2007-Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion
13 0.11449058 30 nips-2007-Bayes-Adaptive POMDPs
14 0.11296435 204 nips-2007-Theoretical Analysis of Heuristic Search Methods for Online POMDPs
15 0.097826332 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding
16 0.096541218 151 nips-2007-Optimistic Linear Programming gives Logarithmic Regret for Irreducible MDPs
17 0.09191817 52 nips-2007-Competition Adds Complexity
18 0.082704633 86 nips-2007-Exponential Family Predictive Representations of State
19 0.081793055 100 nips-2007-Hippocampal Contributions to Control: The Third Way
20 0.066130556 174 nips-2007-Selecting Observations against Adversarial Objectives
topicId topicWeight
[(0, -0.237), (1, -0.296), (2, 0.078), (3, -0.096), (4, -0.219), (5, 0.086), (6, 0.018), (7, -0.017), (8, 0.006), (9, 0.071), (10, -0.08), (11, -0.012), (12, 0.043), (13, -0.019), (14, 0.026), (15, 0.041), (16, -0.038), (17, 0.007), (18, 0.063), (19, -0.068), (20, 0.018), (21, 0.048), (22, -0.093), (23, 0.043), (24, -0.032), (25, -0.107), (26, 0.038), (27, 0.042), (28, -0.056), (29, -0.011), (30, -0.007), (31, -0.106), (32, 0.116), (33, 0.049), (34, -0.123), (35, -0.093), (36, -0.0), (37, -0.042), (38, 0.029), (39, 0.085), (40, -0.08), (41, 0.032), (42, 0.007), (43, -0.031), (44, -0.053), (45, -0.032), (46, 0.133), (47, -0.028), (48, -0.021), (49, -0.096)]
simIndex simValue paperId paperTitle
same-paper 1 0.97377646 162 nips-2007-Random Sampling of States in Dynamic Programming
Author: Chris Atkeson, Benjamin Stephens
Abstract: We combine three threads of research on approximate dynamic programming: sparse random sampling of states, value function and policy approximation using local models, and using local trajectory optimizers to globally optimize a policy and associated value function. Our focus is on finding steady state policies for deterministic time invariant discrete time control problems with continuous states and actions often found in robotics. In this paper we show that we can now solve problems we couldn’t solve previously. 1
2 0.85824764 163 nips-2007-Receding Horizon Differential Dynamic Programming
Author: Yuval Tassa, Tom Erez, William D. Smart
Abstract: The control of high-dimensional, continuous, non-linear dynamical systems is a key problem in reinforcement learning and control. Local, trajectory-based methods, using techniques such as Differential Dynamic Programming (DDP), are not directly subject to the curse of dimensionality, but generate only local controllers. In this paper,we introduce Receding Horizon DDP (RH-DDP), an extension to the classic DDP algorithm, which allows us to construct stable and robust controllers based on a library of local-control trajectories. We demonstrate the effectiveness of our approach on a series of high-dimensional problems using a simulated multi-link swimming robot. These experiments show that our approach effectively circumvents dimensionality issues, and is capable of dealing with problems of (at least) 24 state and 9 action dimensions. 1
3 0.68724757 5 nips-2007-A Game-Theoretic Approach to Apprenticeship Learning
Author: Umar Syed, Robert E. Schapire
Abstract: We study the problem of an apprentice learning to behave in an environment with an unknown reward function by observing the behavior of an expert. We follow on the work of Abbeel and Ng [1] who considered a framework in which the true reward function is assumed to be a linear combination of a set of known and observable features. We give a new algorithm that, like theirs, is guaranteed to learn a policy that is nearly as good as the expert’s, given enough examples. However, unlike their algorithm, we show that ours may produce a policy that is substantially better than the expert’s. Moreover, our algorithm is computationally faster, is easier to implement, and can be applied even in the absence of an expert. The method is based on a game-theoretic view of the problem, which leads naturally to a direct application of the multiplicative-weights algorithm of Freund and Schapire [2] for playing repeated matrix games. In addition to our formal presentation and analysis of the new algorithm, we sketch how the method can be applied when the transition function itself is unknown, and we provide an experimental demonstration of the algorithm on a toy video-game environment. 1
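The following schematic (not the paper's exact algorithm; the constants, the best-response solver, and the feature expectations are all placeholder assumptions) shows the shape of a multiplicative-weights loop over reward features, in which weight grows on features where the learner still trails the expert.

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 5, 50                          # number of reward features, iterations
mu_expert = rng.uniform(0.2, 0.8, K)  # hypothetical expert feature expectations


def best_response_features(w):
    """Stand-in for 'solve the MDP under reward w . phi and return the learned
    policy's feature expectations'; here it is just a noisy copy of the expert."""
    return np.clip(mu_expert + 0.1 * rng.standard_normal(K), 0.0, 1.0)


# Schematic multiplicative-weights loop over reward features
# (the step size is illustrative, not the one derived in the paper).
w = np.ones(K) / K
eta = np.sqrt(np.log(K) / T)
mixture = []
for t in range(T):
    mu = best_response_features(w)
    mixture.append(mu)
    # The adversary up-weights features on which the learner still falls
    # short of the expert.
    w *= np.exp(eta * (mu_expert - mu))
    w /= w.sum()

print("avg feature gap:", np.mean(np.vstack(mixture), axis=0) - mu_expert)
```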
4 0.68623674 124 nips-2007-Managing Power Consumption and Performance of Computing Systems Using Reinforcement Learning
Author: Gerald Tesauro, Rajarshi Das, Hoi Chan, Jeffrey Kephart, David Levine, Freeman Rawson, Charles Lefurgy
Abstract: Electrical power management in large-scale IT systems such as commercial datacenters is an application area of rapidly growing interest from both an economic and ecological perspective, with billions of dollars and millions of metric tons of CO2 emissions at stake annually. Businesses want to save power without sacrificing performance. This paper presents a reinforcement learning approach to simultaneous online management of both performance and power consumption. We apply RL in a realistic laboratory testbed using a Blade cluster and dynamically varying HTTP workload running on a commercial web applications middleware platform. We embed a CPU frequency controller in the Blade servers’ firmware, and we train policies for this controller using a multi-criteria reward signal depending on both application performance and CPU power consumption. Our testbed scenario posed a number of challenges to successful use of RL, including multiple disparate reward functions, limited decision sampling rates, and pathologies arising when using multiple sensor readings as state variables. We describe innovative practical solutions to these challenges, and demonstrate clear performance improvements over both hand-designed policies and obvious “cookbook” RL implementations. 1
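As an illustration of a multi-criteria reward of this kind, the toy function below combines a response-time penalty with a power penalty; the functional form, target, and weight are assumptions chosen for illustration only, not the reward used in the paper.

```python
def reward(response_time_ms, power_watts,
           rt_target_ms=200.0, power_weight=0.05):
    """Illustrative multi-criteria reward: penalize exceeding a response-time
    target and, separately, penalize power draw. Constants are hypothetical."""
    perf_term = -max(0.0, response_time_ms - rt_target_ms) / rt_target_ms
    power_term = -power_weight * power_watts
    return perf_term + power_term


print(reward(250.0, 120.0))  # slow and power-hungry: strongly negative
print(reward(150.0, 60.0))   # meets the target at lower power: mild penalty
```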
5 0.65572506 98 nips-2007-Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion
Author: J. Z. Kolter, Pieter Abbeel, Andrew Y. Ng
Abstract: We consider apprenticeship learning—learning from expert demonstrations—in the setting of large, complex domains. Past work in apprenticeship learning requires that the expert demonstrate complete trajectories through the domain. However, in many problems even an expert has difficulty controlling the system, which makes this approach infeasible. For example, consider the task of teaching a quadruped robot to navigate over extreme terrain; demonstrating an optimal policy (i.e., an optimal set of foot locations over the entire terrain) is a highly non-trivial task, even for an expert. In this paper we propose a method for hierarchical apprenticeship learning, which allows the algorithm to accept isolated advice at different hierarchical levels of the control task. This type of advice is often feasible for experts to give, even if the expert is unable to demonstrate complete trajectories. This allows us to extend the apprenticeship learning paradigm to much larger, more challenging domains. In particular, in this paper we apply the hierarchical apprenticeship learning algorithm to the task of quadruped locomotion over extreme terrain, and achieve, to the best of our knowledge, results superior to any previously published work. 1
6 0.64678943 52 nips-2007-Competition Adds Complexity
7 0.62810159 168 nips-2007-Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods
8 0.60014999 91 nips-2007-Fitted Q-iteration in continuous action-space MDPs
9 0.58062798 34 nips-2007-Bayesian Policy Learning with Trans-Dimensional MCMC
10 0.56888378 185 nips-2007-Stable Dual Dynamic Programming
11 0.56850219 148 nips-2007-Online Linear Regression and Its Application to Model-Based Reinforcement Learning
12 0.53983217 174 nips-2007-Selecting Observations against Adversarial Objectives
13 0.49081242 100 nips-2007-Hippocampal Contributions to Control: The Third Way
14 0.47945192 102 nips-2007-Incremental Natural Actor-Critic Algorithms
15 0.46600661 178 nips-2007-Simulated Annealing: Rigorous finite-time guarantees for optimization on continuous domains
16 0.44722056 30 nips-2007-Bayes-Adaptive POMDPs
17 0.42772689 4 nips-2007-A Constraint Generation Approach to Learning Stable Linear Dynamical Systems
18 0.42759195 215 nips-2007-What makes some POMDP problems easy to approximate?
19 0.41045007 28 nips-2007-Augmented Functional Time Series Representation and Forecasting with Gaussian Processes
20 0.4051027 171 nips-2007-Scan Strategies for Meteorological Radars
topicId topicWeight
[(5, 0.034), (9, 0.011), (13, 0.019), (16, 0.021), (18, 0.013), (21, 0.045), (31, 0.434), (34, 0.021), (35, 0.021), (47, 0.118), (53, 0.025), (83, 0.078), (85, 0.017), (87, 0.012), (90, 0.049)]
simIndex simValue paperId paperTitle
same-paper 1 0.86588985 162 nips-2007-Random Sampling of States in Dynamic Programming
Author: Chris Atkeson, Benjamin Stephens
Abstract: We combine three threads of research on approximate dynamic programming: sparse random sampling of states, value function and policy approximation using local models, and using local trajectory optimizers to globally optimize a policy and associated value function. Our focus is on finding steady state policies for deterministic time invariant discrete time control problems with continuous states and actions often found in robotics. In this paper we show that we can now solve problems we couldn’t solve previously. 1
2 0.78610736 214 nips-2007-Variational inference for Markov jump processes
Author: Manfred Opper, Guido Sanguinetti
Abstract: Markov jump processes play an important role in a large number of application domains. However, realistic systems are analytically intractable and they have traditionally been analysed using simulation-based techniques, which do not provide a framework for statistical inference. We propose a mean field approximation to perform posterior inference and parameter estimation. The approximation allows a practical solution to the inference problem, while still retaining a good degree of accuracy. We illustrate our approach on two biologically motivated systems.
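As background on the object being approximated: a Markov jump process on a discrete state space with transition rates f(x | x') evolves its marginal distribution according to the master equation below. The paper's contribution, a mean field variational approximation of the posterior over such processes, is not reproduced here.

```latex
% Master equation of a Markov jump process with rates f(x | x'):
\frac{\mathrm{d}p_t(x)}{\mathrm{d}t}
  = \sum_{x' \neq x} \Big[ f(x \mid x')\, p_t(x') - f(x' \mid x)\, p_t(x) \Big]
```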
3 0.76693815 144 nips-2007-On Ranking in Survival Analysis: Bounds on the Concordance Index
Author: Harald Steck, Balaji Krishnapuram, Cary Dehing-oberije, Philippe Lambin, Vikas C. Raykar
Abstract: In this paper, we show that classical survival analysis involving censored data can naturally be cast as a ranking problem. The concordance index (CI), which quantifies the quality of rankings, is the standard performance measure for model assessment in survival analysis. In contrast, the standard approach to learning the popular proportional hazard (PH) model is based on Cox’s partial likelihood. We devise two bounds on CI–one of which emerges directly from the properties of PH models–and optimize them directly. Our experimental results suggest that all three methods perform about equally well, with our new approach giving slightly better results. We also explain why a method designed to maximize the Cox’s partial likelihood also ends up (approximately) maximizing the CI. 1
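The concordance index itself is easy to state in code; the sketch below is a straightforward O(n^2) implementation for right-censored data, independent of any particular survival model.

```python
import numpy as np


def concordance_index(times, events, risk_scores):
    """Concordance index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter observed
    time actually experienced the event; the pair is concordant when that
    subject also has the higher predicted risk. Ties in risk count as 0.5.
    """
    n = len(times)
    num, den = 0.0, 0
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:  # comparable pair
                den += 1
                if risk_scores[i] > risk_scores[j]:
                    num += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    num += 0.5
    return num / den if den else float("nan")


# Tiny example: higher predicted risk should mean earlier event.
t = np.array([2.0, 5.0, 7.0, 9.0])
e = np.array([1, 1, 0, 1])          # 0 = censored
r = np.array([0.9, 0.6, 0.4, 0.1])  # predicted risks
print(concordance_index(t, e, r))   # 1.0 for a perfect ranking
```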
4 0.69775862 80 nips-2007-Ensemble Clustering using Semidefinite Programming
Author: Vikas Singh, Lopamudra Mukherjee, Jiming Peng, Jinhui Xu
Abstract: We consider the ensemble clustering problem where the task is to ‘aggregate’ multiple clustering solutions into a single consolidated clustering that maximizes the shared information among given clustering solutions. We obtain several new results for this problem. First, we note that the notion of agreement under such circumstances can be better captured using an agreement measure based on a 2D string encoding rather than voting-strategy-based methods proposed in the literature. Using this generalization, we first derive a nonlinear optimization model to maximize the new agreement measure. We then show that our optimization problem can be transformed into a strict 0-1 Semidefinite Program (SDP) via novel convexification techniques, which can subsequently be relaxed to a polynomial-time solvable SDP. Our experiments indicate improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. We discuss evaluations on clustering and image segmentation databases. 1
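For a familiar point of reference, the snippet below computes a standard pairwise agreement between two clusterings (the Rand index): the fraction of object pairs that the two labelings treat the same way, either grouped together in both or separated in both. This is shown only as a baseline notion of agreement; it is not the 2D-string-encoding measure proposed in the paper.

```python
import numpy as np
from itertools import combinations


def pairwise_agreement(labels_a, labels_b):
    """Fraction of object pairs on which two clusterings agree (both place the
    pair in the same cluster, or both place it in different clusters)."""
    agree, total = 0, 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += int(same_a == same_b)
        total += 1
    return agree / total


a = np.array([0, 0, 1, 1, 2])
b = np.array([1, 1, 0, 0, 0])
print(pairwise_agreement(a, b))  # cluster labels may be permuted; only pairs matter
```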
5 0.5170784 163 nips-2007-Receding Horizon Differential Dynamic Programming
Author: Yuval Tassa, Tom Erez, William D. Smart
Abstract: The control of high-dimensional, continuous, non-linear dynamical systems is a key problem in reinforcement learning and control. Local, trajectory-based methods, using techniques such as Differential Dynamic Programming (DDP), are not directly subject to the curse of dimensionality, but generate only local controllers. In this paper, we introduce Receding Horizon DDP (RH-DDP), an extension to the classic DDP algorithm, which allows us to construct stable and robust controllers based on a library of local-control trajectories. We demonstrate the effectiveness of our approach on a series of high-dimensional problems using a simulated multi-link swimming robot. These experiments show that our approach effectively circumvents dimensionality issues, and is capable of dealing with problems of (at least) 24 state and 9 action dimensions. 1
6 0.47115031 168 nips-2007-Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods
7 0.43354094 174 nips-2007-Selecting Observations against Adversarial Objectives
8 0.42161894 105 nips-2007-Infinite State Bayes-Nets for Structured Domains
9 0.41602129 34 nips-2007-Bayesian Policy Learning with Trans-Dimensional MCMC
10 0.41221383 52 nips-2007-Competition Adds Complexity
11 0.4064886 122 nips-2007-Locality and low-dimensions in the prediction of natural experience from fMRI
12 0.40574959 100 nips-2007-Hippocampal Contributions to Control: The Third Way
13 0.39685136 89 nips-2007-Feature Selection Methods for Improving Protein Structure Prediction with Rosetta
14 0.39048037 151 nips-2007-Optimistic Linear Programming gives Logarithmic Regret for Irreducible MDPs
15 0.38897318 76 nips-2007-Efficient Convex Relaxation for Transductive Support Vector Machine
16 0.38779637 16 nips-2007-A learning framework for nearest neighbor search
17 0.38729802 185 nips-2007-Stable Dual Dynamic Programming
18 0.38710001 98 nips-2007-Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion
19 0.38693604 93 nips-2007-GRIFT: A graphical model for inferring visual classification features from human data
20 0.38651395 165 nips-2007-Regret Minimization in Games with Incomplete Information