229 nips-2010-Reward Design via Online Gradient Ascent


Author: Jonathan Sorg, Richard L. Lewis, Satinder P. Singh

Abstract: Recent work has demonstrated that when artificial agents are limited in their ability to achieve their goals, the agent designer can benefit by making the agent’s goals different from the designer’s. This gives rise to the optimization problem of designing the artificial agent’s goals—in the RL framework, designing the agent’s reward function. Existing attempts at solving this optimal reward problem do not leverage experience gained online during the agent’s lifetime, nor do they take advantage of knowledge about the agent’s structure. In this work, we develop a gradient ascent approach with formal convergence guarantees for approximately solving the optimal reward problem online during an agent’s lifetime. We show that our method generalizes a standard policy gradient approach, and we demonstrate its ability to improve reward functions in agents with various forms of limitations.

1 The Optimal Reward Problem

In this work, we consider the scenario of an agent designer building an autonomous agent. The designer has his or her own goals, which must be translated into goals for the autonomous agent. We represent goals using the Reinforcement Learning (RL) formalism of the reward function. This leads to the optimal reward problem of designing the agent’s reward function so as to maximize the objective reward received by the agent designer.

Typically, the designer assigns his or her own reward to the agent. However, there is ample work demonstrating the benefit of assigning a reward that does not match the designer’s. For example, work on reward shaping [11] has shown how to modify rewards to accelerate learning without altering the optimal policy, and PAC-MDP methods [5, 20], including approximate Bayesian methods [7, 19], add bonuses to the objective reward to achieve optimism under uncertainty. These approaches explicitly or implicitly assume that the asymptotic behavior of the agent should be the same as that which would occur using the objective reward function. These methods do not explicitly consider the optimal reward problem; however, they do show improved performance through reward modification.

In our recent work that does explicitly consider the optimal reward problem [18], we analyzed an explicit hypothesis about the benefit of reward design—that it helps mitigate the performance loss caused by computational constraints (bounds) on agent architectures. We considered various types of agent limitations—limits on planning depth, failure to account for partial observability, and other erroneous modeling assumptions—and demonstrated empirically the benefits of good reward functions in each case. Crucially, in bounded agents, the optimal reward function often leads to behavior that is different from the asymptotic behavior achieved with the objective reward function.

In this work, we develop an algorithm, Policy Gradient for Reward Design (PGRD), for improving reward functions for a family of bounded agents that behave according to repeated local (from the current state) model-based planning. We show that this algorithm is capable of improving the reward functions of agents whose computational limitations necessitate small bounds on the depth of planning, as well as of agents that use an inaccurate model (which may be inaccurate due to computationally motivated approximations).
PGRD has few parameters, improves the reward function online during an agent’s lifetime, takes advantage of knowledge about the agent’s structure (through the gradient computation), and is linear in the number of reward function parameters.

Notation. Formally, we consider discrete-time partially-observable environments with a finite number of hidden states s ∈ S, actions a ∈ A, and observations o ∈ O; these finite-set assumptions are useful for our theorems, but our algorithm can handle infinite sets in practice. The environment’s dynamics are governed by a state-transition function $P(s'|s,a)$ that defines a distribution over next states $s'$ conditioned on current state $s$ and action $a$, and an observation function $\Omega(o|s)$ that defines a distribution over observations $o$ conditioned on current state $s$.

The agent designer’s goals are specified via the objective reward function $R_O$. At each time step, the designer receives reward $R_O(s_t) \in [0,1]$ based on the current state $s_t$ of the environment, where the subscript denotes time. The designer’s objective return is the expected mean objective reward obtained over an infinite horizon, i.e., $\lim_{N\to\infty} E\big[\tfrac{1}{N}\sum_{t=0}^{N} R_O(s_t)\big]$. In the standard view of RL, the agent uses the same reward function as the designer to align the interests of the agent and the designer. Here we allow for a separate agent reward function $R(\cdot)$. An agent’s reward function can in general be defined in terms of the history of actions and observations, but is often more pragmatically defined in terms of some abstraction of history. We define the agent’s reward function precisely in Section 2.

Optimal Reward Problem. An RL agent attempts to act so as to maximize its own cumulative reward, or return. Crucially, as a result, the sequence of environment states $\{s_t\}_{t=0}^{\infty}$ is affected by the choice of reward function; therefore, the agent designer’s return is affected as well. The optimal reward problem arises from the fact that while the objective reward function is fixed as part of the problem description, the agent’s reward function is a choice to be made by the designer. We capture this choice abstractly by letting the reward be parameterized by some vector of parameters $\theta$ chosen from a space of parameters $\Theta$. Each $\theta \in \Theta$ specifies a reward function $R(\cdot\,;\theta)$, which in turn produces a distribution over environment-state sequences via whatever RL method the agent uses. The expected return obtained by the designer for choice $\theta$ is $U(\theta) = \lim_{N\to\infty} E\big[\tfrac{1}{N}\sum_{t=0}^{N} R_O(s_t) \,\big|\, R(\cdot\,;\theta)\big]$. The optimal reward parameters are given by the solution to the optimal reward problem [16, 17, 18]:
$$\theta^* = \arg\max_{\theta\in\Theta} U(\theta) = \arg\max_{\theta\in\Theta} \lim_{N\to\infty} E\Big[\frac{1}{N}\sum_{t=0}^{N} R_O(s_t) \,\Big|\, R(\cdot\,;\theta)\Big]. \quad (1)$$

Our previous research on solving the optimal reward problem has focused primarily on the properties of the optimal reward function and its correspondence to the agent architecture and the environment [16, 17, 18]. This work has used inefficient exhaustive search methods for finding good approximations to $\theta^*$ (though there is recent work on using genetic algorithms to do this [6, 9, 12]). Our primary contribution in this paper is a new convergent online stochastic gradient method for finding approximately optimal reward functions. To our knowledge, this is the first algorithm that improves reward functions in an online setting—during a single agent’s lifetime. In Section 2, we present the PGRD algorithm, prove its convergence, and relate it to OLPOMDP [2], a policy gradient algorithm.
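To make the objective in Equation (1), and the contrast with the offline search used in prior work, concrete, below is a minimal sketch of estimating the designer’s return $U(\theta)$ from a single long rollout and of an exhaustive search over candidate reward parameters. The `env`/`agent` interfaces, the horizon, and the candidate set are illustrative assumptions of the sketch, not anything specified in the paper.

```python
import numpy as np

def estimate_objective_return(env, agent, horizon=100_000):
    """Monte Carlo estimate of U(theta): the mean *objective* reward R_O(s_t)
    accumulated while the agent acts to maximize its own reward R(.; theta).

    Assumed (hypothetical) interfaces:
      env.reset() -> observation
      env.step(action) -> (observation, objective_reward)  # R_O in [0, 1]
      agent.act(observation) -> action
    """
    obs = env.reset()
    total = 0.0
    for _ in range(horizon):
        obs, objective_reward = env.step(agent.act(obs))
        total += objective_reward
    return total / horizon

def exhaustive_reward_search(candidate_thetas, make_agent, env, horizon=50_000):
    """Offline baseline in the spirit of the exhaustive search used in prior
    work: evaluate each candidate reward parameter vector over a fresh agent
    lifetime and keep the best.  PGRD instead adapts theta online, within a
    single lifetime."""
    best_theta, best_return = None, -np.inf
    for theta in candidate_thetas:
        ret = estimate_objective_return(env, make_agent(theta), horizon)
        if ret > best_return:
            best_theta, best_return = theta, ret
    return best_theta, best_return
```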
In Section 3, we present experiments demonstrating PGRD’s ability to approximately solve the optimal reward problem online.

2 PGRD: Policy Gradient for Reward Design

PGRD builds on the following insight: the agent’s planning algorithm procedurally converts the reward function into behavior; thus, the reward function can be viewed as a specific parameterization of the agent’s policy. Using this insight, PGRD updates the reward parameters by estimating the gradient of the objective return with respect to the reward parameters, $\nabla_\theta U(\theta)$, from experience, using standard policy gradient techniques. In fact, we show that PGRD can be viewed as an (independently interesting) generalization of the policy gradient method OLPOMDP [2]. Specifically, we show that OLPOMDP is a special case of PGRD when the planning depth d is zero. In this section, we first present the family of local planning agents for which PGRD improves the reward function. Next, we develop PGRD and prove its convergence. Finally, we show that PGRD generalizes OLPOMDP and discuss how adding planning to OLPOMDP affects the space of policies available to the optimization method.

Figure 1: PGRD (Policy Gradient for Reward Design) Algorithm
Input: $T$, $\theta_0$, $\{\alpha_t\}_{t=0}^{\infty}$, $\beta$, $\gamma$
1: $o_0, i_0$ = initializeStart();
2: for $t = 0, 1, 2, 3, \ldots$ do
3:   $\forall a$: $Q_t(a;\theta_t)$ = plan($i_t$, $o_t$, $T$, $R(i_t,\cdot,\cdot;\theta_t)$, $d$, $\gamma$);
4:   $a_t \sim \mu(a|i_t; Q_t)$;
5:   $r_{t+1}, o_{t+1}$ = takeAction($a_t$);
6:   $z_{t+1} = \beta z_t + \frac{\nabla_{\theta_t}\mu(a_t|i_t;Q_t)}{\mu(a_t|i_t;Q_t)}$;
7:   $\theta_{t+1} = \theta_t + \alpha_t(r_{t+1} z_{t+1} - \lambda\theta_t)$;
8:   $i_{t+1}$ = updateInternalState($i_t$, $a_t$, $o_{t+1}$);
9: end

A Family of Limited Agents with Internal State. Given a Markov model T defined over the observation space O and action space A, denote by $T(o'|o,a)$ the probability of next observation $o'$ given that the agent takes action $a$ after observing $o$. Our agents use the model T to plan. We do not assume that the model T is an accurate model of the environment. The use of an incorrect model is one type of agent limitation we examine in our experiments. In general, agents can use non-Markov models defined in terms of the history of observations and actions; we leave this for future work.

The agent maintains an internal state feature vector $i_t$ that is updated at each time step using $i_{t+1}$ = updateInternalState($i_t$, $a_t$, $o_{t+1}$). The internal state allows the agent to use reward functions that depend on the agent’s history. We consider rewards of the form $R(i_t, o, a; \theta_t) = \theta_t^{\top}\phi(i_t, o, a)$, where $\theta_t$ is the reward parameter vector at time t, and $\phi(i_t, o, a)$ is a vector of features based on internal state $i_t$, planning state $o$, and action $a$. Note that if $\phi$ is a vector of binary indicator features, this representation allows for arbitrary reward functions and thus the representation is completely general. Many existing methods use reward functions that depend on history. Reward functions based on empirical counts of observations, as in PAC-MDP approaches [5, 20], provide some examples; see [14, 15, 13] for others. We present a concrete example in our empirical section.

At each time step t, the agent’s planning algorithm, plan, performs depth-d planning using the model T and reward function $R(i_t, o, a; \theta_t)$ with current internal state $i_t$ and reward parameters $\theta_t$. Specifically, the agent computes a d-step Q-value function $Q^d(i_t, o_t, a; \theta_t)$ for all $a \in A$, where $Q^d(i_t, o, a; \theta_t) = R(i_t, o, a; \theta_t) + \gamma \sum_{o' \in O} T(o'|o,a) \max_{b \in A} Q^{d-1}(i_t, o', b; \theta_t)$ and $Q^0(i_t, o, a; \theta_t) = R(i_t, o, a; \theta_t)$.
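As a concrete illustration of the plan step in line 3 of Figure 1, here is a minimal sketch of the depth-d lookahead just described, assuming the linear reward $R(i,o,a;\theta)=\theta^{\top}\phi(i,o,a)$, a tabular model stored as nested dictionaries `T[o][a] = {o': prob}`, and a `phi(i, o, a)` function returning a NumPy vector. These interfaces are conveniences of the sketch, not the paper’s notation.

```python
import numpy as np

def plan(i_t, o_t, T, phi, theta, d, gamma, actions):
    """Depth-d lookahead from the current observation o_t, as in line 3 of
    Figure 1.  Implements the recursion
        Q^0(i,o,a) = R(i,o,a; theta)
        Q^d(i,o,a) = R(i,o,a; theta)
                     + gamma * sum_o' T(o'|o,a) * max_b Q^{d-1}(i,o',b),
    with the internal state i_t and parameters theta held fixed while
    planning.  Cost grows with the size of the depth-d tree rooted at o_t."""

    def reward(o, a):
        return float(theta @ phi(i_t, o, a))

    def q(o, a, depth):
        if depth == 0:
            return reward(o, a)
        backup = 0.0
        for o_next, prob in T[o][a].items():
            backup += prob * max(q(o_next, b, depth - 1) for b in actions)
        return reward(o, a) + gamma * backup

    # Scoring function Q_t(a; theta_t) for every action at the current observation.
    return np.array([q(o_t, a, d) for a in actions])
```

For d = 0 this returns only the immediate parameterized rewards, which matches the special case discussed below.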
We emphasize that the internal state $i_t$ and reward parameters $\theta_t$ are held invariant while planning. Note that the d-step Q-values are only computed for the current observation $o_t$, in effect by building a depth-d tree rooted at $o_t$. In the d = 0 special case, the planning procedure completely ignores the model T and returns $Q^0(i_t, o_t, a; \theta_t) = R(i_t, o_t, a; \theta_t)$. Regardless of the value of d, we treat the end result of planning as providing a scoring function $Q_t(a; \theta_t)$, where the dependence on d, $i_t$, and $o_t$ is dropped from the notation. To allow for gradient calculations, our agents act according to the Boltzmann (soft-max) stochastic policy parameterized by Q: $\mu(a|i_t; Q_t) \overset{\mathrm{def}}{=} \frac{e^{\tau Q_t(a;\theta_t)}}{\sum_b e^{\tau Q_t(b;\theta_t)}}$, where $\tau$ is a temperature parameter that determines how stochastically the agent selects the action with the highest score. When the planning depth d is small due to computational limitations, the agent cannot account for events beyond the planning depth. We examine this limitation in our experiments.

Gradient Ascent. To develop a gradient algorithm for improving the reward function, we need to compute the gradient of the objective return with respect to $\theta$: $\nabla_\theta U(\theta)$. The main insight is to break the gradient calculation into the calculation of two gradients. The first is the gradient of the objective return with respect to the policy $\mu$, and the second is the gradient of the policy with respect to the reward function parameters $\theta$. The first gradient is exactly what is computed in standard policy gradient approaches [2]. The second gradient is challenging because the transformation from reward parameters to policy involves a model-based planning procedure. We draw from the work of Neu and Szepesvári [10], which shows that this gradient computation resembles planning itself. We develop PGRD, presented in Figure 1, explicitly as a generalization of OLPOMDP, a policy gradient algorithm developed by Bartlett and Baxter [2], because of its foundational simplicity relative to other policy-gradient algorithms such as those based on actor-critic methods (e.g., [4]). Notably, the reward parameters are the only parameters being learned in PGRD.

PGRD follows the form of OLPOMDP (Algorithm 1 in Bartlett and Baxter [2]) but generalizes it in three places. In Figure 1 line 3, the agent plans to compute the policy, rather than storing the policy directly. In line 6, the gradient of the policy with respect to the parameters accounts for the planning procedure. In line 8, the agent maintains a general notion of internal state that allows for richer parameterization of policies than typically considered (similar to Aberdeen and Baxter [1]). The algorithm takes as parameters a sequence of learning rates $\{\alpha_k\}$, a decaying-average parameter $\beta$, and a regularization parameter $\lambda > 0$ which keeps the reward parameters $\theta$ bounded throughout learning. Given a sequence of calculations of the gradient of the policy with respect to the parameters, $\nabla_{\theta_t}\mu(a_t|i_t; Q_t)$, the remainder of the algorithm climbs the gradient of objective return $\nabla_\theta U(\theta)$ using OLPOMDP machinery. In the next subsection, we discuss how to compute $\nabla_{\theta_t}\mu(a_t|i_t; Q_t)$.

Computing the Gradient of the Policy with respect to Reward. For the Boltzmann distribution, the gradient of the policy with respect to the reward parameters is given by $\nabla_{\theta_t}\mu(a|i_t; Q_t) = \tau\cdot\mu(a|Q_t)\big[\nabla_{\theta_t}Q_t(a;\theta_t) - \sum_{b\in A}\mu(b|Q_t)\,\nabla_{\theta_t}Q_t(b;\theta_t)\big]$, where $\tau$ is the Boltzmann temperature (see [10]). Thus, computing $\nabla_{\theta_t}\mu(a|i_t; Q_t)$ reduces to computing $\nabla_{\theta_t}Q_t(a;\theta_t)$.
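The soft-max policy and the gradient identity above are straightforward to compute once the per-action gradients $\nabla_{\theta_t}Q_t(a;\theta_t)$ are available (they are supplied by the recursion in Theorem 1 below). A minimal sketch follows, with the Q-gradients passed in as an |A| × |θ| NumPy matrix; that layout is an assumption of the sketch.

```python
import numpy as np

def boltzmann_policy(q_values, tau):
    """mu(a | i_t; Q_t) proportional to exp(tau * Q_t(a; theta_t))."""
    z = tau * np.asarray(q_values, dtype=float)
    z = z - z.max()                    # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def policy_gradient_wrt_reward_params(q_values, grad_q, tau):
    """grad_theta mu(a | i_t; Q_t) for every action a, given
    grad_q[a] = grad_theta Q_t(a; theta_t)  (an |A| x |theta| matrix).

    Implements tau * mu(a) * [ grad Q(a) - sum_b mu(b) grad Q(b) ].
    Returns the |A| x |theta| matrix of policy gradients and mu itself."""
    mu = boltzmann_policy(q_values, tau)
    baseline = mu @ grad_q             # sum_b mu(b) * grad_theta Q_t(b)
    return tau * mu[:, None] * (grad_q - baseline[None, :]), mu
```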
The value of $Q_t$ depends on the reward parameters $\theta_t$, the model, and the planning depth. However, as we present below, the process of computing the gradient closely resembles the process of planning itself, and the two computations can be interleaved. Theorem 1, presented below, is an adaptation of Proposition 4 from Neu and Szepesvári [10]. It presents the gradient computation for depth-d planning as well as for infinite-depth discounted planning. We assume that the gradient of the reward function with respect to the parameters is bounded: $\sup_{\theta,o,i,a} \|\nabla_\theta R(i,o,a;\theta)\| < \infty$. The proof of the theorem follows directly from Proposition 4 of Neu and Szepesvári [10].

Theorem 1. Except on a set of measure zero, for any depth d, the gradient $\nabla_\theta Q^d(o,a;\theta)$ exists and is given by the recursion (where we have dropped the dependence on i for simplicity)
$$\nabla_\theta Q^d(o,a;\theta) = \nabla_\theta R(o,a;\theta) + \gamma \sum_{o'\in O} T(o'|o,a) \sum_{b\in A} \pi^{d-1}(b|o')\, \nabla_\theta Q^{d-1}(o',b;\theta), \quad (2)$$
where $\nabla_\theta Q^0(o,a;\theta) = \nabla_\theta R(o,a;\theta)$ and $\pi^d(a|o) \in \arg\max_a Q^d(o,a;\theta)$ is any policy that is greedy with respect to $Q^d$. The result also holds for $\nabla_\theta Q^*(o,a;\theta) = \nabla_\theta \lim_{d\to\infty} Q^d(o,a;\theta)$.

The Q-function will not be differentiable when there are multiple optimal policies. This is reflected in the arbitrary choice of $\pi$ in the gradient calculation. However, it was shown by Neu and Szepesvári [10] that even at values of $\theta$ where the Q-function is not differentiable, the above computation produces a valid calculation of a subgradient; we discuss this below in our proof of convergence of PGRD.

Convergence of PGRD (Figure 1). Given a particular fixed reward function $R(\cdot;\theta)$, transition model T, and planning depth, there is a corresponding fixed randomized policy $\mu(a|i;\theta)$—where we have explicitly represented the reward’s dependence on the internal state vector i in the policy parameterization and dropped Q from the notation as it is redundant given that everything else is fixed. Denote the agent’s internal-state update as a (usually deterministic) distribution $\psi(i'|i,a,o)$. Given a fixed reward parameter vector $\theta$, the joint environment-state–internal-state transitions can be modeled as a Markov chain with a $|S||I| \times |S||I|$ transition matrix $M(\theta)$ whose entries are given by $M_{\langle s,i\rangle,\langle s',i'\rangle}(\theta) = p(\langle s',i'\rangle \mid \langle s,i\rangle; \theta) = \sum_{o,a} \psi(i'|i,a,o)\,\Omega(o|s')\,P(s'|s,a)\,\mu(a|i;\theta)$.

We make the following assumptions about the agent and the environment:

Assumption 1. The transition matrix $M(\theta)$ of the joint environment-state–internal-state Markov chain has a unique stationary distribution $\pi(\theta) = [\pi_{\langle s_1,i_1\rangle}(\theta), \pi_{\langle s_2,i_2\rangle}(\theta), \ldots, \pi_{\langle s_{|S|},i_{|I|}\rangle}(\theta)]$ satisfying the balance equations $\pi(\theta)M(\theta) = \pi(\theta)$, for all $\theta \in \Theta$.

Assumption 2. During its execution, PGRD (Figure 1) does not reach a value of $i_t$ and $\theta_t$ at which $\mu(a_t|i_t, Q_t)$ is not differentiable with respect to $\theta_t$.

It follows from Assumption 1 that the objective return, $U(\theta)$, is independent of the start state. The original OLPOMDP convergence proof [2] has a similar condition that only considers environment states. Intuitively, this condition allows PGRD to handle history-dependence of a reward function in the same manner that it handles partial observability in an environment. Assumption 2 accounts for the fact that a planning algorithm may not be fully differentiable everywhere. However, Theorem 1 showed that infinite- and bounded-depth planning is differentiable almost everywhere (in a measure-theoretic sense).
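For the linear reward used by our agents, $\nabla_\theta R(i,o,a;\theta) = \phi(i,o,a)$, so the recursion in Equation (2) can be computed alongside the Q-values themselves. Below is a minimal sketch that interleaves the two computations, reusing the assumed tabular-model and feature-map interfaces from the planning sketch above; breaking ties deterministically with argmax is one valid choice of the greedy policy $\pi^{d-1}$.

```python
import numpy as np

def plan_with_gradient(i_t, o_t, T, phi, theta, d, gamma, actions):
    """Jointly compute Q^d(o_t, a; theta) and grad_theta Q^d(o_t, a; theta)
    via Equation (2), for the linear reward R = theta^T phi (so that
    grad_theta R(i, o, a; theta) = phi(i, o, a))."""

    def recurse(o, a, depth):
        q_val = float(theta @ phi(i_t, o, a))
        grad = np.array(phi(i_t, o, a), dtype=float)     # grad of Q^0 is phi
        if depth == 0:
            return q_val, grad
        for o_next, prob in T[o][a].items():
            children = [recurse(o_next, b, depth - 1) for b in actions]
            child_qs = np.array([c[0] for c in children])
            b_star = int(np.argmax(child_qs))            # greedy pi^{d-1}
            q_val += gamma * prob * child_qs[b_star]
            grad += gamma * prob * children[b_star][1]
        return q_val, grad

    results = [recurse(o_t, a, d) for a in actions]
    q = np.array([r[0] for r in results])
    grad_q = np.stack([r[1] for r in results])           # |A| x |theta|
    return q, grad_q
```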
Furthermore, this assumption is perhaps stronger than necessary, as stochastic approximation algorithms, which provide the theory upon which OLPOMDP is based, have been shown to converge using subgradients [8].

In order to state the convergence theorem, we must define the approximate gradient which OLPOMDP calculates. Let the approximate gradient estimate be $\nabla_\theta^\beta U(\theta) \overset{\mathrm{def}}{=} \lim_{T\to\infty} \frac{1}{T}\sum_{t=1}^{T} r_t z_t$ for a fixed $\theta$ and PGRD parameter $\beta$, where $z_t$ (in Figure 1) represents a time-decaying average of the $\nabla_{\theta_t}\mu(a_t|i_t, Q_t)$ calculations. It was shown by Bartlett and Baxter [2] that $\nabla_\theta^\beta U(\theta)$ is close to the true value $\nabla_\theta U(\theta)$ for large values of $\beta$. Theorem 2 proves that PGRD converges to a stable equilibrium point based on this approximate gradient measure. This equilibrium point will typically correspond to some local optimum in the return function $U(\theta)$. Given our development and assumptions, the theorem is a straightforward extension of Theorem 6 from Bartlett and Baxter [2] (proof omitted).

Theorem 2. Given $\beta \in [0,1)$, $\lambda > 0$, and a sequence of step sizes $\alpha_t$ satisfying $\sum_{t=0}^{\infty}\alpha_t = \infty$ and $\sum_{t=0}^{\infty}\alpha_t^2 < \infty$, PGRD produces a sequence of reward parameters $\theta_t$ such that $\theta_t \to L$ as $t \to \infty$ a.s., where $L$ is the set of stable equilibrium points of the differential equation $\frac{\partial\theta}{\partial t} = \nabla_\theta^\beta U(\theta) - \lambda\theta$.

PGRD generalizes OLPOMDP. As stated above, OLPOMDP, when it uses a Boltzmann distribution in its policy representation (a common case), is a special case of PGRD when the planning depth is zero. First, notice that in the case of depth-0 planning, $Q^0(i,o,a;\theta) = R(i,o,a;\theta)$, regardless of the transition model and reward parameterization. We can also see from Theorem 1 that $\nabla_\theta Q^0(i,o,a;\theta) = \nabla_\theta R(i,o,a;\theta)$. Because $R(i,o,a;\theta)$ can be parameterized arbitrarily, PGRD can be configured to match standard OLPOMDP with any policy parameterization that also computes a score function for the Boltzmann distribution.

In our experiments, we demonstrate that choosing a planning depth d > 0 can be beneficial over using OLPOMDP (d = 0). In the remainder of this section, we show theoretically that choosing d > 0 does not hurt in the sense that it does not reduce the space of policies available to the policy gradient method. Specifically, we show that when using an expressive enough reward parameterization, PGRD’s space of policies is not restricted relative to OLPOMDP’s space of policies. We prove the result for infinite planning, but the extension to depth-limited planning is straightforward.

Theorem 3. There exists a reward parameterization such that, for an arbitrary transition model T, the space of policies representable by PGRD with infinite planning is identical to the space of policies representable by PGRD with depth 0 planning.

Proof. Ignoring internal state for now (holding it constant), let $C(o,a)$ be an arbitrary reward function used by PGRD with depth 0 planning. Let $R(o,a;\theta)$ be a reward function for PGRD with infinite ($d = \infty$) planning. The depth-$\infty$ agent uses the planning result $Q^*(o,a;\theta)$ to act, while the depth-0 agent uses the function $C(o,a)$ to act. Therefore, it suffices to show that one can always choose $\theta$ such that the planning solution $Q^*(o,a;\theta)$ equals $C(o,a)$. For all $o \in O$, $a \in A$, set $R(o,a;\theta) = C(o,a) - \gamma\sum_{o'} T(o'|o,a)\max_{a'} C(o',a')$. Substituting $Q^*$ for $C$, this is the Bellman optimality equation [22] for infinite-horizon planning. Setting $R(o,a;\theta)$ as above is possible if it is parameterized by a table with an entry for each observation–action pair.
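Putting these pieces together, the following is a compact sketch of one PGRD lifetime (lines 3–8 of Figure 1). It relies on the hypothetical plan_with_gradient and policy_gradient_wrt_reward_params helpers sketched above; the environment interface, the update_internal_state callback, and the clipping of θ to [−1, 1] (which mirrors the bounding used in our experiments in Section 3) are assumptions of the sketch rather than part of the algorithm’s statement.

```python
import numpy as np

def pgrd(env, T, phi, theta0, d, gamma, alpha, beta, lam, tau, n_steps,
         actions, update_internal_state, seed=0):
    """One agent lifetime of PGRD (Figure 1), using the assumed helpers
    plan_with_gradient and policy_gradient_wrt_reward_params."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    z = np.zeros_like(theta)                       # eligibility trace z_t
    obs = env.reset()
    internal = None                                # initial internal state
    for t in range(n_steps):
        # Line 3: depth-d planning (and its gradient) with current theta.
        q, grad_q = plan_with_gradient(internal, obs, T, phi, theta,
                                       d, gamma, actions)
        grad_mu, mu = policy_gradient_wrt_reward_params(q, grad_q, tau)
        # Line 4: sample an action from the Boltzmann policy.
        a_idx = rng.choice(len(actions), p=mu)
        # Line 5: act; the environment reports the *objective* reward.
        obs_next, r_objective = env.step(actions[a_idx])
        # Line 6: decaying-average likelihood-ratio trace.
        z = beta * z + grad_mu[a_idx] / mu[a_idx]
        # Line 7: ascend the approximate gradient of the objective return.
        theta = theta + alpha * (r_objective * z - lam * theta)
        theta = np.clip(theta, -1.0, 1.0)          # bound theta (Section 3)
        # Line 8: update the agent's internal state.
        internal = update_internal_state(internal, actions[a_idx], obs_next)
        obs = obs_next
    return theta
```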
Theorem 3 also shows that the effect of an arbitrarily poor model can be overcome with a good choice of reward function. This is because a Boltzmann distribution can, allowing for an arbitrary scoring function C, represent any policy. We demonstrate this ability of PGRD in our experiments.

3 Experiments

The primary objective of our experiments is to demonstrate that PGRD is able to use experience online to improve the reward function parameters, thereby improving the agent’s obtained objective return. Specifically, we compare the objective return achieved by PGRD to the objective return achieved by PGRD with the reward adaptation turned off. In both cases, the reward function is initialized to the objective reward function. A secondary objective is to demonstrate that when a good model is available, adding the ability to plan—even for small depths—improves performance relative to the baseline algorithm of OLPOMDP (or equivalently PGRD with depth d = 0).

Foraging Domain for Experiments 1 to 3: The foraging environment illustrated in Figure 2(a) is a 3 × 3 grid world with 3 dead-end corridors (rows) separated by impassable walls. The agent (bird) has four available actions corresponding to the cardinal directions. Movement in the intended direction fails with probability 0.1, resulting in movement in a random direction. If the resulting direction is blocked by a wall or the boundary, the action results in no movement. There is a food source (worm) located in one of the three right-most locations at the end of each corridor. The agent has an eat action, which consumes the worm when the agent is at the worm’s location. After the agent consumes the worm, a new worm appears randomly in one of the other two potential worm locations.

Figure 2: A) Foraging Domain, B) Performance of PGRD with observation-action reward features, C) Performance of PGRD with recency reward features.

Objective Reward for the Foraging Domain: The designer’s goal is to maximize the average number of worms eaten per time step. Thus, the objective reward function $R_O$ provides a reward of 1.0 when the agent eats a worm, and a reward of 0 otherwise. The objective return is defined as in Equation (1).

Experimental Methodology: We tested PGRD for depth-limited planning agents of depths 0–6. Recall that PGRD for the agent with planning depth 0 is the OLPOMDP algorithm. For each depth, we jointly optimized over the PGRD algorithm parameters, α and β (we use a fixed α throughout learning). We tested values for α on an approximate logarithmic scale in the range $(10^{-6}, 10^{-2})$, as well as the special value of α = 0, which corresponds to an agent that does not adapt its reward function. We tested β values in the set {0, 0.4, 0.7, 0.9, 0.95, 0.99}. Following common practice [3], we set the λ parameter to 0. We explicitly bounded the reward parameters and capped the reward function output, both to the range [−1, 1]. We used a Boltzmann temperature parameter of τ = 100 and planning discount factor γ = 0.95. Because we initialized θ so that the initial reward function was the objective reward function, PGRD with α = 0 was equivalent to a standard depth-limited planning agent.
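Before turning to the individual experiments, here is a minimal sketch of the foraging dynamics described above. The exact wall layout (corridors assumed to be joined along the left-most column), the starting cell, and the coordinate convention are assumptions made only so the sketch is runnable; they are not specified in the text.

```python
import numpy as np

class ForagingWorld:
    """Sketch of the 3x3 foraging grid: three dead-end corridors (rows),
    a worm at the right-most end of one corridor, four movement actions
    that slip with probability 0.1, and an eat action."""
    ACTIONS = ("up", "down", "left", "right", "eat")
    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    WORM_CELLS = ((0, 2), (1, 2), (2, 2))      # end of each corridor

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.agent = (0, 0)                    # assumed start cell
        self.worm = self.WORM_CELLS[self.rng.integers(3)]
        return (self.agent, self.worm)

    def step(self, action):
        objective_reward = 0.0
        if action == "eat":
            if self.agent == self.worm:
                objective_reward = 1.0         # designer's reward R_O
                others = [c for c in self.WORM_CELLS if c != self.worm]
                self.worm = others[self.rng.integers(2)]
        else:
            if self.rng.random() < 0.1:        # slip to a random direction
                action = str(self.rng.choice(list(self.MOVES)))
            dr, dc = self.MOVES[action]
            r, c = self.agent[0] + dr, self.agent[1] + dc
            # Rows are dead-end corridors: vertical movement is only allowed
            # along the connecting left-most column (an assumed layout).
            connected = dr == 0 or self.agent[1] == 0
            if 0 <= r <= 2 and 0 <= c <= 2 and connected:
                self.agent = (r, c)
        return (self.agent, self.worm), objective_reward
```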
Experiment 1: A fully observable environment with a correct model learned online. In this experiment, we improve the reward function in an agent whose only limitation is planning depth, using (1) a general reward parameterization based on the current observation and (2) a more compact reward parameterization which also depends on the history of observations.

Observation: The agent observes the full state, which is given by the pair o = (l, w), where l is the agent’s location and w is the worm’s location.

Learning a Correct Model: Although the convergence theorem for PGRD relies on the agent having a fixed model, the algorithm itself is readily applied to the case of learning a model online. In this experiment, the agent’s model T is learned online based on empirical transition probabilities between observations (recall this is a fully observable environment). Let $n_{o,a,o'}$ be the number of times that $o'$ was reached after taking action $a$ having observed $o$. The agent models the probability of seeing $o'$ as $T(o'|o,a) = \frac{n_{o,a,o'}}{\sum_{o''} n_{o,a,o''}}$.

Reward Parameterizations: Recall that $R(i,o,a;\theta) = \theta^{\top}\phi(i,o,a)$, for some $\phi(i,o,a)$. (1) In the observation-action parameterization, $\phi(i,o,a)$ is a binary feature vector with one binary feature for each observation-action pair—internal state is ignored. This is effectively a table representation over all reward functions indexed by (o, a). As shown in Theorem 3, the observation-action feature representation is capable of producing arbitrary policies over the observations. In large problems, such a parameterization would not be feasible. (2) The recency parameterization is a more compact representation which uses features that rely on the history of observations. The feature vector is $\phi(i,o,a) = [R_O(o,a),\, 1,\, \phi_{c_l}(l,i),\, \phi_{c_{l,a}}(l,a,i)]$, where $R_O(o,a)$ is the objective reward function defined as above. The feature $\phi_{c_l}(l,i) = 1 - 1/c(l,i)$, where $c(l,i)$ is the number of time steps since the agent has visited location l, as represented in the agent’s internal state i. Its value is normalized to the range [0, 1) and is high when the agent has not been to location l recently. The feature $\phi_{c_{l,a}}(l,a,i) = 1 - 1/c(l,a,i)$ is similarly defined with respect to the time since the agent has taken action a in location l. Features based on recency counts encourage persistent exploration [21, 18].

Results & Discussion: Figure 2(b) and Figure 2(c) present results for agents that use the observation-action parameterization and the recency parameterization of the reward function, respectively. The horizontal axis is the number of time steps of experience. The vertical axis is the objective return, i.e., the average objective reward per time step. Each curve is an average over 130 trials. The values of d and the associated optimal algorithm parameters for each curve are noted in the figures. First, note that with d = 6, the agent is unbounded, because food is never more than 6 steps away. Therefore, the agent does not benefit from adapting the reward function parameters (given that we initialize to the objective reward function). Indeed, the d = 6, α = 0 agent performs as well as the best reward-optimizing agent. The performance for d = 6 improves with experience because the model improves with experience (and from the curves it is seen that the model gets quite accurate in about 1500 time steps). The largest objective return obtained for d = 6 is also the best objective return that can be obtained for any value of d.
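As an illustration of the count-based model and the recency features just described, here is a minimal sketch. The dictionary-based model representation, the way the recency counts are passed in, and the handling of never-visited or unvisited entries are assumptions of the sketch.

```python
import numpy as np
from collections import defaultdict

class EmpiricalModel:
    """Count-based transition model, as in Experiment 1:
    T(o'|o,a) = n(o,a,o') / sum_o'' n(o,a,o''), learned online."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(float))

    def update(self, o, a, o_next):
        self.counts[(o, a)][o_next] += 1.0

    def prob(self, o, a):
        c = self.counts[(o, a)]
        total = sum(c.values())
        if total == 0:
            return {o: 1.0}        # unvisited (o, a): assume a self-loop
        return {o_next: n / total for o_next, n in c.items()}

def recency_features(objective_reward, steps_since_loc, steps_since_loc_act):
    """Sketch of the recency parameterization
        phi = [R_O(o, a), 1, 1 - 1/c(l, i), 1 - 1/c(l, a, i)],
    where the two counts (time steps since location l, and since the pair
    (l, a), were last visited) are assumed to be tracked in the agent's
    internal state.  Never-visited entries are mapped to 1 (maximally
    stale), an assumption of the sketch."""
    def recency(c):
        if c is None or not np.isfinite(c):
            return 1.0
        return 1.0 - 1.0 / max(float(c), 1.0)
    return np.array([objective_reward, 1.0,
                     recency(steps_since_loc), recency(steps_since_loc_act)])
```

An internal-state update (line 8 of Figure 1) built on this sketch would increment the stored counters each step and reset the relevant ones when location l, or the pair (l, a), is visited.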
Several results can be observed in both Figures 2(b) and (c). 1) Each curve that uses α > 0 (solid lines) improves with experience. This is a demonstration of our primary contribution, that PGRD is able to effectively improve the reward function with experience. That the improvement over time is not just due to model learning is seen in the fact that for each value of d < 6, the curve for α > 0 (solid line), which adapts the reward parameters, does significantly better than the corresponding curve for α = 0 (dashed line); the α = 0 agents still learn the model. 2) For both α = 0 and α > 0 agents, the objective return obtained by agents with equivalent amounts of experience increases monotonically as d is increased (though to maintain readability we only show selected values of d in each figure). This demonstrates our secondary contribution, that the ability to plan in PGRD significantly improves performance over standard OLPOMDP (PGRD with d = 0).

There are also some interesting differences between the results for the two different reward function parameterizations. With the observation-action parameterization, we noted that there always exists a setting of θ for all d that will yield optimal objective return. This is seen in Figure 2(b) in that all solid-line curves approach optimal objective return. In contrast, the more compact recency reward parameterization does not afford this guarantee, and indeed for small values of d (d < 3) the solid-line curves in Figure 2(c) converge to less than optimal objective return. Notably, OLPOMDP (d = 0) does not perform well with this feature set. On the other hand, for planning depths 3 ≤ d < 6, the PGRD agents with the recency parameterization achieve optimal objective return faster than the corresponding PGRD agents with the observation-action parameterization. Finally, we note that this experiment validates our claim that PGRD can improve reward functions that depend on history.

Experiment 2: A fully observable environment and poor given model. Our theoretical analysis showed that PGRD with an incorrect model and the observation–action reward parameterization should (modulo local maxima issues) do just as well asymptotically as it would with a correct model. Here we illustrate this theoretical result empirically on the same foraging domain and objective reward function used in Experiment 1. We also test our hypothesis that a poor model should slow down the rate of learning relative to a correct model.

Poor Model: We gave the agents a fixed incorrect model of the foraging environment that assumes there are no internal walls separating the 3 corridors.

Reward Parameterization: We used the observation–action reward parameterization. With a poor model it is no longer interesting to initialize θ so that the initial reward function is the objective reward function, because even for d = 6 such an agent would do poorly. Furthermore, we found that this initialization leads to excessively bad exploration and therefore poor learning of how to modify the reward. Thus, we initialize θ to uniform random values near 0, in the range $(-10^{-3}, 10^{-3})$.

Results: Figure 3(a) plots the objective return as a function of the number of steps of experience. Each curve is an average over 36 trials. As hypothesized, the bad model slows learning by a factor of more than 10 (notice the difference in the x-axis scales from those in Figure 2). Here, deeper planning results in slower learning, and indeed the d = 0 agent that does not use the model at all learns the fastest.
However, also as hypothesized, because they used the expressive observation–action parameterization, agents of all planning depths mitigate the damage caused by the poor model and eventually converge to the optimal objective return.

Experiment 3: Partially observable foraging world. Here we evaluate PGRD’s ability to learn in a partially observable version of the foraging domain. In addition, the agents learn a model under the erroneous (and computationally convenient) assumption that the domain is fully observable.

Figure 3: A) Performance of PGRD with a poor model, B) Performance of PGRD in a partially observable world with recency reward features, C) Performance of PGRD in Acrobot.

Partial Observation: Instead of viewing the location of the worm at all times, the agent can now only see the worm when it is colocated with it: its observation is o = (l, f), where f indicates whether the agent is colocated with the food.

Learning an Incorrect Model: The model is learned just as in Experiment 1. Because of the erroneous full-observability assumption, the model will hallucinate worms at all the corridor ends based on the empirical frequency of having encountered them there.

Reward Parameterization: We used the recency parameterization; due to the partial observability, agents with the observation–action feature set perform poorly in this environment. The parameters θ are initialized such that the initial reward function equals the objective reward function.

Results & Discussion: Figure 3(b) plots the mean of 260 trials. As seen in the solid-line curves, PGRD improves the objective return at all depths (only a small amount for d = 0 and significantly more for d > 0). In fact, agents which do not adapt the reward are hurt by planning (relative to d = 0). This experiment demonstrates that the combination of planning and reward improvement can be beneficial even when the model is erroneous. Because of the partial observability, optimal behavior in this environment achieves less objective return than in Experiment 1.

Experiment 4: Acrobot. In this experiment we test PGRD in the Acrobot environment [22], a common benchmark task in the RL literature and one that has previously been used in the testing of policy gradient approaches [23]. This experiment demonstrates PGRD in an environment in which an agent must be limited due to the size of the state space, and further demonstrates that adding model-based planning to policy gradient approaches can improve performance.

Domain: The version of Acrobot we use is as specified by Sutton and Barto [22]. It is a two-link robot arm in which the position of one shoulder-joint is fixed and the agent’s control is limited to 3 actions which apply torque to the elbow-joint.

Observation: The fully-observable state space is 4-dimensional, with two joint angles $\psi_1$ and $\psi_2$, and two joint velocities $\dot\psi_1$ and $\dot\psi_2$.

Objective Reward: The designer receives an objective reward of 1.0 when the tip is one arm’s length above the fixed shoulder-joint, after which the bot is reset to its initial resting position.
Model: We provide the agent with a perfect model of the environment. Because the environment is continuous, value iteration is intractable, and computational limitations prevent planning deep enough to compute the optimal action in any state. The feature vector contains 13 entries. One feature corresponds to the objective reward signal. For each action, there are 5 features corresponding to each of the state features plus an additional feature representing the height of the tip: $\phi(i,o,a) = [R_O(o), \{\psi_1(o), \psi_2(o), \dot\psi_1(o), \dot\psi_2(o), h(o)\}_a]$. The height feature has been used in previous work as an alternative definition of objective reward [23].

Results & Discussion: We plot the mean of 80 trials in Figure 3(c). Agents that use the fixed (α = 0) objective reward function with bounded-depth planning perform according to the bottom two curves. Allowing PGRD and OLPOMDP to adapt the parameters θ leads to improved objective return, as seen in the top two curves in Figure 3(c). Finally, the PGRD d = 6 agent outperforms the standard OLPOMDP agent (PGRD with d = 0), further demonstrating that PGRD outperforms OLPOMDP.

Overall Conclusion: We developed PGRD, a new method for approximately solving the optimal reward problem in bounded planning agents that can be applied in an online setting. We showed that PGRD is a generalization of OLPOMDP and demonstrated that it both improves reward functions in limited agents and outperforms the model-free OLPOMDP approach.

References

[1] Douglas Aberdeen and Jonathan Baxter. Scalable Internal-State Policy-Gradient Methods for POMDPs. In Proceedings of the Nineteenth International Conference on Machine Learning, 2002.
[2] Peter L. Bartlett and Jonathan Baxter. Stochastic optimization of controlled partially observable Markov decision processes. In Proceedings of the 39th IEEE Conference on Decision and Control, 2000.
[3] Jonathan Baxter, Peter L. Bartlett, and Lex Weaver. Experiments with Infinite-Horizon, Policy-Gradient Estimation, 2001.
[4] Shalabh Bhatnagar, Richard S. Sutton, M. Ghavamzadeh, and Mark Lee. Natural actor-critic algorithms. Automatica, 2009.
[5] Ronen I. Brafman and Moshe Tennenholtz. R-MAX – A General Polynomial Time Algorithm for Near-Optimal Reinforcement Learning. Journal of Machine Learning Research, 3:213–231, 2001.
[6] S. Elfwing, Eiji Uchibe, K. Doya, and H. I. Christensen. Co-evolution of Shaping Rewards and Meta-Parameters in Reinforcement Learning. Adaptive Behavior, 16(6):400–412, 2008.
[7] J. Zico Kolter and Andrew Y. Ng. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th International Conference on Machine Learning, pages 513–520, 2009.
[8] Harold J. Kushner and G. George Yin. Stochastic Approximation and Recursive Algorithms and Applications. Springer, 2nd edition, 2010.
[9] Çetin Meriçli, Tekin Meriçli, and H. Levent Akın. A Reward Function Generation Method Using Genetic Algorithms: A Robot Soccer Case Study (Extended Abstract). In Proc. of the 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010), number 2, pages 1513–1514, 2010.
[10] Gergely Neu and Csaba Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, pages 295–302, 2007.
[11] Andrew Y. Ng, Stuart J. Russell, and D. Harada. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the 16th International Conference on Machine Learning, pages 278–287, 1999.
[12] Scott Niekum, Andrew G. Barto, and Lee Spector. Genetic Programming for Reward Function Search. IEEE Transactions on Autonomous Mental Development, 2(2):83–90, 2010.
[13] Pierre-Yves Oudeyer, Frederic Kaplan, and Verena V. Hafner. Intrinsic Motivation Systems for Autonomous Mental Development. IEEE Transactions on Evolutionary Computation, 11(2):265–286, April 2007.
[14] Jürgen Schmidhuber. Curious model-building control systems. In IEEE International Joint Conference on Neural Networks, pages 1458–1463, 1991.
[15] Satinder Singh, Andrew G. Barto, and Nuttapong Chentanez. Intrinsically Motivated Reinforcement Learning. In Advances in Neural Information Processing Systems 17 (NIPS), pages 1281–1288, 2005.
[16] Satinder Singh, Richard L. Lewis, and Andrew G. Barto. Where Do Rewards Come From? In Proceedings of the Annual Conference of the Cognitive Science Society, pages 2601–2606, 2009.
[17] Satinder Singh, Richard L. Lewis, Andrew G. Barto, and Jonathan Sorg. Intrinsically Motivated Reinforcement Learning: An Evolutionary Perspective. IEEE Transactions on Autonomous Mental Development, 2(2):70–82, 2010.
[18] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Internal Rewards Mitigate Agent Boundedness. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[19] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Variance-Based Rewards for Approximate Bayesian Reinforcement Learning. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, 2010.
[20] Alexander L. Strehl and Michael L. Littman. An analysis of model-based Interval Estimation for Markov Decision Processes. Journal of Computer and System Sciences, 74(8):1309–1331, 2008.
[21] Richard S. Sutton. Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming. In The Seventh International Conference on Machine Learning, pages 216–224, 1990.
[22] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, 1998.
[23] Lex Weaver and Nigel Tao. The Optimal Reward Baseline for Gradient-Based Reinforcement Learning. In Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, pages 538–545, 2001.

Reference: text


Summary: the most important sentenses genereted by tfidf model

sentIndex sentText sentNum sentScore

1 edu Abstract Recent work has demonstrated that when artificial agents are limited in their ability to achieve their goals, the agent designer can benefit by making the agent’s goals different from the designer’s. [sent-7, score-0.627]

2 This gives rise to the optimization problem of designing the artificial agent’s goals—in the RL framework, designing the agent’s reward function. [sent-8, score-0.529]

3 Existing attempts at solving this optimal reward problem do not leverage experience gained online during the agent’s lifetime nor do they take advantage of knowledge about the agent’s structure. [sent-9, score-0.582]

4 In this work, we develop a gradient ascent approach with formal convergence guarantees for approximately solving the optimal reward problem online during an agent’s lifetime. [sent-10, score-0.596]

5 We show that our method generalizes a standard policy gradient approach, and we demonstrate its ability to improve reward functions in agents with various forms of limitations. [sent-11, score-0.836]

6 We represent goals using the Reinforcement Learning (RL) formalism of the reward function. [sent-14, score-0.537]

7 This leads to the optimal reward problem of designing the agent’s reward function so as to maximize the objective reward received by the agent designer. [sent-15, score-1.923]

8 Typically, the designer assigns his or her own reward to the agent. [sent-16, score-0.613]

9 However, there is ample work which demonstrates the benefit of assigning reward which does not match the designer’s. [sent-17, score-0.516]

10 For example, work on reward shaping [11] has shown how to modify rewards to accelerate learning without altering the optimal policy, and PAC-MDP methods [5, 20] including approximate Bayesian methods [7, 19] add bonuses to the objective reward to achieve optimism under uncertainty. [sent-18, score-1.126]

11 These approaches explicitly or implicitly assume that the asymptotic behavior of the agent should be the same as that which would occur using the objective reward function. [sent-19, score-0.891]

12 These methods do not explicitly consider the optimal reward problem; however, they do show improved performance through reward modification. [sent-20, score-1.018]

13 In our recent work that does explicitly consider the optimal reward problem [18], we analyzed an explicit hypothesis about the benefit of reward design—that it helps mitigate the performance loss caused by computational constraints (bounds) on agent architectures. [sent-21, score-1.36]

14 We considered various types of agent limitations—limits on planning depth, failure to account for partial observability, and other erroneous modeling assumptions—and demonstrated the benefits of good reward functions in each case empirically. [sent-22, score-1.059]

15 Crucially, in bounded agents, the optimal reward function often leads to behavior that is different from the asymptotic behavior achieved with the objective reward function. [sent-23, score-1.086]

16 In this work, we develop an algorithm, Policy Gradient for Reward Design (PGRD), for improving reward functions for a family of bounded agents that behave according to repeated local (from the current state) model-based planning. [sent-24, score-0.657]

17 We show that this algorithm is capable of improving the reward functions in agents with computational limitations necessitating small bounds on the depth of planning, and also from the use of an inaccurate model (which may be inaccurate due to computationally-motivated approximations). [sent-25, score-0.755]

18 PGRD has few parameters, improves the reward 1 function online during an agent’s lifetime, takes advantage of knowledge about the agent’s structure (through the gradient computation), and is linear in the number of reward function parameters. [sent-26, score-1.106]

19 The agent designer’s goals are specified via the objective reward function RO . [sent-30, score-0.927]

20 At each time step, the designer receives reward RO (st ) ∈ [0, 1] based on the current state st of the environment, where the subscript denotes time. [sent-31, score-0.661]

21 The designer’s objective return is the expected mean objective reward N 1 obtained over an infinite horizon, i. [sent-32, score-0.71]

22 In the standard view of RL, the agent uses the same reward function as the designer to align the interests of the agent and the designer. [sent-35, score-1.257]

23 Here we allow for a separate agent reward function R(· ). [sent-36, score-0.823]

24 An agent’s reward function can in general be defined in terms of the history of actions and observations, but is often more pragmatically defined in terms of some abstraction of history. [sent-37, score-0.534]

25 We define the agent’s reward function precisely in Section 2. [sent-38, score-0.501]

26 Crucially, as a result, the sequence of environment-states {st }∞ is affected by t=0 the choice of reward function; therefore, the agent designer’s return is affected as well. [sent-41, score-0.896]

27 The optimal reward problem arises from the fact that while the objective reward function is fixed as part of the problem description, the reward function is a choice to be made by the designer. [sent-42, score-1.587]

28 We capture this choice abstractly by letting the reward be parameterized by some vector of parameters θ chosen from space of parameters Θ. [sent-43, score-0.501]

29 Each θ ∈ Θ specifies a reward function R(· ; θ) which in turn produces a distribution over environment state sequences via whatever RL method the agent uses. [sent-44, score-0.91]

30 The optimal reward parameters are given by the solution to the optimal reward problem [16, 17, 18]: θ∗ = arg max U(θ) = arg max lim E θ∈Θ θ∈Θ N →∞ 1 N N RO (st ) R(·; θ) . [sent-46, score-1.034]

31 (1) t=0 Our previous research on solving the optimal reward problem has focused primarily on the properties of the optimal reward function and its correspondence to the agent architecture and the environment [16, 17, 18]. [sent-47, score-1.415]

32 Our primary contribution in this paper is a new convergent online stochastic gradient method for finding approximately optimal reward functions. [sent-49, score-0.596]

33 To our knowledge, this is the first algorithm that improves reward functions in an online setting—during a single agent’s lifetime. [sent-50, score-0.546]

34 In Section 3, we present experiments demonstrating PGRD’s ability to approximately solve the optimal reward problem online. [sent-52, score-0.531]

35 2 PGRD: Policy Gradient for Reward Design PGRD builds on the following insight: the agent’s planning algorithm procedurally converts the reward function into behavior; thus, the reward function can be viewed as a specific parameterization of the agent’s policy. [sent-53, score-1.312]

36 Using this insight, PGRD updates the reward parameters by estimating the gradient of the objective return with respect to the reward parameters, θ U(θ), from experience, using standard policy gradient techniques. [sent-54, score-1.366]

37 In this section, we first present the family of local planning agents for which PGRD improves the reward function. [sent-57, score-0.884]

38 The internal state allows the agent to use reward functions T that depend on the agent’s history. [sent-70, score-0.899]

39 We consider rewards of the form R(it , o, a; θt ) = θt φ(it , o, a), where θt is the reward parameter vector at time t, and φ(it , o, a) is a vector of features based on internal state it , planning state o, and action a. [sent-71, score-0.892]

40 Note that if φ is a vector of binary indicator features, this representation allows for arbitrary reward functions and thus the representation is completely general. [sent-72, score-0.501]

41 Many existing methods use reward functions that depend on history. [sent-73, score-0.501]

42 At each time step t, the agent’s planning algorithm, plan, performs depth-d planning using the model T and reward function R(it , o, a; θt ) with current internal state it and reward parameters θt . [sent-76, score-1.508]

43 We emphasize that the internal state it and reward parameters θt are held invariant while planning. [sent-78, score-0.577]

44 When the planning depth d is small due to computational limitations, the agent cannot account for events beyond the planning depth. [sent-83, score-0.808]

45 To develop a gradient algorithm for improving the reward function, we need to compute the gradient of the objective return with respect to θ: θ U(θ). [sent-86, score-0.773]

46 The first is the gradient of the objective return with respect to the policy µ, and the second is the gradient of the policy with respect to the reward function parameters θ. [sent-88, score-0.97]

47 The second gradient is challenging because the transformation from reward parameters to policy involves a model-based planning procedure. [sent-90, score-0.88]

48 Notably, the reward parameters are the only parameters being learned in PGRD. [sent-95, score-0.501]

49 In line 8, the agent maintains a general notion of internal state that allows for richer parameterization of policies than typically considered (similar to Aberdeen and Baxter [1]). [sent-99, score-0.523]

50 The algorithm takes as parameters a sequence of learning rates {αk }, a decaying-average parameter β, and regularization parameter λ > 0 which keeps the the reward parameters θ bounded throughout learning. [sent-100, score-0.501]

51 For the Boltzmann distribution, the gradient of the policy with respect to the reward parameters is given by the equation θt µ(a|it ; Qt ) = τ · µ(a|Qt )[ θt Qt (a|it ; θt ) − θt Qt (b; θt )], where τ is the Boltzmann b∈A temperature (see [10]). [sent-104, score-0.682]

52 The value of Qt depends on the reward parameters θt , the model, and the planning depth. [sent-106, score-0.716]

53 We assume that the gradient of the reward function with respect to the parameters is bounded: supθ,o,i,a θ R(i, o, a, θ) < ∞. [sent-110, score-0.56]

54 Given a fixed reward parameter vector θ, the joint environment-state–internal-state transitions can be modeled as a Markov chain with a |S||I| × |S||I| transition matrix M (θ) whose entries are given by M s,i , s ,i (θ) = p( s , i | s, i ; θ) = o,a ψ(i |i, a, o)Ω(o|s )P (s |s, a)µ(a|i; θ). [sent-121, score-0.518]

55 Intuitively, this condition allows PGRD to handle history-dependence of a reward function in the same manner that it handles partial observability in an environment. [sent-131, score-0.541]

56 Given β ∈ [0, 1), λ > 0, and a sequence of step sizes αt satisfying t=0 αt = ∞ and ∞ 2 t=0 (αt ) < ∞, PGRD produces a sequence of reward parameters θt such that θt → L as t → ∞ a. [sent-142, score-0.501]

57 First, notice that in the case of depth-0 planning, Q0 (i, o, a; θ) = R(i, o, a, θ), regardless of the transition model and reward parameterization. [sent-147, score-0.518]

58 Specifically, we show that when using an expressive enough reward parameterization, PGRD’s space of policies is not restricted relative to OLPOMDP’s space of policies. [sent-152, score-0.531]

59 There exists a reward parameterization such that, for an arbitrary transition model T , the space of policies representable by PGRD with infinite planning is identical to the space of policies representable by PGRD with depth 0 planning. [sent-155, score-0.974]

60 Ignoring internal state for now (holding it constant), let C(o, a) be an arbitrary reward function used by PGRD with depth 0 planning. [sent-157, score-0.633]

61 Let R(o, a; θ) be a reward function for PGRD with infinite (d = ∞) planning. [sent-158, score-0.501]

62 The depth-∞ agent uses the planning result Q∗ (o, a; θ) to act, while the depth-0 agent uses the function C(o, a) to act. [sent-159, score-0.859]

63 Theorem 3 also shows that the effect of an arbitrarily poor model can be overcome with a good choice of reward function. [sent-164, score-0.528]

64 3 Experiments The primary objective of our experiments is to demonstrate that PGRD is able to use experience online to improve the reward function parameters, thereby improving the agent’s obtained objective return. [sent-167, score-0.699]

65 Specifically, we compare the objective return achieved by PGRD to the objective return achieved by PGRD with the reward adaptation turned off. [sent-168, score-0.783]

66 In both cases, the reward function is initialized to the objective reward function. [sent-169, score-1.07]

67 01 & D=1, α=0 0 1000 2000 3000 4000 5000 D=0, α=0 Time Steps Figure 2: A) Foraging Domain, B) Performance of PGRD with observation-action reward features, C) Performance of PGRD with recency reward features blocked by a wall or the boundary, the action results in no movement. [sent-182, score-1.119]

68 The agent has an eat action, which consumes the worm when the agent is at the worm’s location. [sent-184, score-0.76]

69 After the agent consumes the worm, a new worm appears randomly in one of the other two potential worm locations. [sent-185, score-0.537]

70 Thus, the objective reward function RO provides a reward of 1. [sent-187, score-1.07]

71 0 when the agent eats a worm, and a reward of 0 otherwise. [sent-188, score-0.823]

72 Recall that PGRD for the agent with planning depth 0 is the OLPOMDP algorithm. [sent-191, score-0.593]

73 We tested values for α on an approximate logarithmic scale in the range (10−6 , 10−2 ) as well as the special value of α = 0, which corresponds to an agent that does not adapt its reward function. [sent-193, score-0.823]

74 We explicitly bound the reward parameters and capped the reward function output both to the range [−1, 1]. [sent-201, score-1.002]

75 Because we initialized θ so that the initial reward function was the objective reward function, PGRD with α = 0 was equivalent to a standard depth-limited planning agent. [sent-204, score-1.285]

76 In this experiment, we improve the reward function in an agent whose only limitation is planning depth, using (1) a general reward parameterization based on the current observation and (2) a more compact reward parameterization which also depends on the history of observations. [sent-206, score-2.267]

77 This is effectively a table representation over all reward functions indexed by (o, a). [sent-214, score-0.501]

78 The feature vector is φ(i, o, a) = [RO (o, a), 1, φcl (l, i), φcl,a (l, a, i)], where RO (o, a) is the objective reward function defined as above. [sent-218, score-0.569]

79 6 Results & Discussion: Figure 2(b) and Figure 2(c) present results for agents that use the observationaction parameterization and the recency parameterization of the reward function respectively. [sent-223, score-0.904]

80 Therefore, the agent does not benefit from adapting the reward function parameters (given that we initialize to the objective reward function). [sent-231, score-1.392]

81 This is a demonstration of our primary contribution, that PGRD is able to effectively improve the reward function with experience. [sent-237, score-0.501]

82 In contrast, the more compact recency reward parameterization does not afford this guarantee and indeed for small values of d (< 3), the solid-line curves in Figure 2(c) converge to less than optimal objective return. [sent-244, score-0.767]

83 On the other hand, for planning depths 3 ≤ d < 6, the PGRD agents with the recency parameterization achieve optimal objective return faster than the corresponding PGRD agent with the observation-action parameterization. [sent-246, score-1.038]

84 Finally, we note that this experiment validates our claim that PGRD can improve reward functions that depend on history. [sent-247, score-0.522]

85 Our theoretical analysis showed that PGRD with an incorrect model and the observation–action reward parameterization should (modulo local maxima issues) do just as well asymptotically as it would with a correct model. [sent-249, score-0.613]

86 Here we illustrate this theoretical result empirically on the same foraging domain and objective reward function used in Experiment 1. [sent-250, score-0.639]

87 With a poor model it is no longer interesting to initialize θ so that the initial reward function is the objective reward function because even for d = 6 such an agent would do poorly. [sent-254, score-1.419]

88 Here, deeper planning results in slower learning and indeed the d = 0 agent that does not use the model at all learns the fastest. [sent-260, score-0.537]

89 However, also as hypothesized, because they used the expressive observation–action parameterization, agents of all planning depths mitigate the damage caused by the poor model and eventually converge to the optimal objective return. [sent-261, score-0.525]

90 The parameters θ are initialized such that the initial reward function equals the objective reward function. [sent-279, score-1.07]

91 In fact, agents which don’t adapt the reward are hurt by planning (relative to d = 0). [sent-282, score-0.859]

92 This experiment demonstrates that the combination of planning and reward improvement can be beneficial even when the model is erroneous. [sent-283, score-0.752]

93 This experiment demonstrates PGRD in an environment in which an agent must be limited due to the size of the state space and further demonstrates that adding model-based planning to policy gradient approaches can improve performance. [sent-287, score-0.839]

94 Objective Reward: The designer receives an objective reward of 1.0 when the tip is one arm's length above the fixed shoulder-joint, after which the bot is reset to its initial resting position. [sent-291, score-0.681]
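A sketch of this objective reward as an indicator on the tip height, assuming the standard Sutton and Barto Acrobot with unit-length links, for which the tip is one arm's length above the fixed joint exactly when −cos ψ1 − cos(ψ1 + ψ2) ≥ 1; the angle convention (ψ1 measured from the downward vertical, ψ2 relative to the first link) is an assumption.

    import math

    def acrobot_objective_reward(psi1, psi2, goal_height=1.0):
        # Tip height above the fixed shoulder joint for a two-link arm with
        # unit link lengths (assumed Sutton-Barto angle convention).
        tip_height = -math.cos(psi1) - math.cos(psi1 + psi2)
        return 1.0 if tip_height >= goal_height else 0.0

    print(acrobot_objective_reward(0.0, 0.0))      # hanging straight down -> 0.0
    print(acrobot_objective_reward(math.pi, 0.0))  # pointing straight up  -> 1.0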

95 The height feature has been used in previous work as an alternative definition of objective reward [23]. [sent-298, score-0.569]

96 Agents that use the fixed (α = 0) objective reward function with bounded-depth planning perform according to the bottom two curves. [sent-300, score-0.784]

97 Finally, the PGRD d = 6 agent outperforms the standard OLPOMDP agent (PGRD with d = 0), further demonstrating the advantage of PGRD's model-based planning over the model-free OLPOMDP baseline. [sent-302, score-0.644]

98 Overall Conclusion: We developed PGRD, a new method for approximately solving the optimal reward problem in bounded planning agents that can be applied in an online setting. [sent-303, score-0.895]

99 We showed that PGRD is a generalization of OLPOMDP and demonstrated that it both improves reward functions in limited agents and outperforms the model-free OLPOMDP approach. [sent-304, score-0.669]

100 Andrew Y. Ng, Stuart J. Russell, and D. Harada. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the 16th International Conference on Machine Learning, pages 278–287, 1999. [sent-354, score-1.002]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('pgrd', 0.606), ('reward', 0.501), ('agent', 0.322), ('olpomdp', 0.229), ('planning', 0.215), ('agents', 0.143), ('designer', 0.112), ('policy', 0.105), ('worm', 0.099), ('parameterization', 0.095), ('qt', 0.093), ('return', 0.073), ('recency', 0.07), ('foraging', 0.07), ('objective', 0.068), ('ro', 0.064), ('ot', 0.063), ('gradient', 0.059), ('environment', 0.059), ('depth', 0.056), ('qd', 0.05), ('internal', 0.048), ('satinder', 0.048), ('action', 0.047), ('baxter', 0.045), ('richard', 0.041), ('jonathan', 0.04), ('observability', 0.04), ('neu', 0.04), ('depths', 0.036), ('goals', 0.036), ('reinforcement', 0.035), ('autonomous', 0.034), ('observable', 0.034), ('boltzmann', 0.033), ('policies', 0.03), ('sorg', 0.03), ('experience', 0.029), ('singh', 0.028), ('szepesv', 0.028), ('state', 0.028), ('poor', 0.027), ('zt', 0.027), ('mental', 0.026), ('barto', 0.026), ('improves', 0.025), ('bartlett', 0.025), ('rewards', 0.025), ('rl', 0.025), ('acrobot', 0.024), ('andrew', 0.022), ('observation', 0.022), ('erroneous', 0.021), ('experiment', 0.021), ('st', 0.02), ('online', 0.02), ('colocated', 0.02), ('lex', 0.02), ('mericli', 0.02), ('updateinternalstate', 0.02), ('worms', 0.02), ('michigan', 0.02), ('mitigate', 0.02), ('lewis', 0.019), ('equilibrium', 0.018), ('curve', 0.018), ('dropped', 0.018), ('actions', 0.018), ('consumes', 0.017), ('tip', 0.017), ('aberdeen', 0.017), ('genetic', 0.017), ('temperature', 0.017), ('transition', 0.017), ('curves', 0.017), ('differentiable', 0.017), ('incorrect', 0.017), ('plan', 0.017), ('optimal', 0.016), ('limitations', 0.016), ('steps', 0.016), ('lifetime', 0.016), ('demonstrates', 0.015), ('theorem', 0.015), ('shaping', 0.015), ('representable', 0.015), ('history', 0.015), ('designing', 0.014), ('calculation', 0.014), ('generalizes', 0.014), ('food', 0.014), ('intrinsically', 0.014), ('bene', 0.014), ('ri', 0.014), ('ability', 0.014), ('exploration', 0.014), ('def', 0.014), ('sutton', 0.014), ('improving', 0.013), ('inaccurate', 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999958 229 nips-2010-Reward Design via Online Gradient Ascent

Author: Jonathan Sorg, Richard L. Lewis, Satinder P. Singh

Abstract: Recent work has demonstrated that when artificial agents are limited in their ability to achieve their goals, the agent designer can benefit by making the agent’s goals different from the designer’s. This gives rise to the optimization problem of designing the artificial agent’s goals—in the RL framework, designing the agent’s reward function. Existing attempts at solving this optimal reward problem do not leverage experience gained online during the agent’s lifetime nor do they take advantage of knowledge about the agent’s structure. In this work, we develop a gradient ascent approach with formal convergence guarantees for approximately solving the optimal reward problem online during an agent’s lifetime. We show that our method generalizes a standard policy gradient approach, and we demonstrate its ability to improve reward functions in agents with various forms of limitations. 1 The Optimal Reward Problem In this work, we consider the scenario of an agent designer building an autonomous agent. The designer has his or her own goals which must be translated into goals for the autonomous agent. We represent goals using the Reinforcement Learning (RL) formalism of the reward function. This leads to the optimal reward problem of designing the agent’s reward function so as to maximize the objective reward received by the agent designer. Typically, the designer assigns his or her own reward to the agent. However, there is ample work which demonstrates the benefit of assigning reward which does not match the designer’s. For example, work on reward shaping [11] has shown how to modify rewards to accelerate learning without altering the optimal policy, and PAC-MDP methods [5, 20] including approximate Bayesian methods [7, 19] add bonuses to the objective reward to achieve optimism under uncertainty. These approaches explicitly or implicitly assume that the asymptotic behavior of the agent should be the same as that which would occur using the objective reward function. These methods do not explicitly consider the optimal reward problem; however, they do show improved performance through reward modification. In our recent work that does explicitly consider the optimal reward problem [18], we analyzed an explicit hypothesis about the benefit of reward design—that it helps mitigate the performance loss caused by computational constraints (bounds) on agent architectures. We considered various types of agent limitations—limits on planning depth, failure to account for partial observability, and other erroneous modeling assumptions—and demonstrated the benefits of good reward functions in each case empirically. Crucially, in bounded agents, the optimal reward function often leads to behavior that is different from the asymptotic behavior achieved with the objective reward function. In this work, we develop an algorithm, Policy Gradient for Reward Design (PGRD), for improving reward functions for a family of bounded agents that behave according to repeated local (from the current state) model-based planning. We show that this algorithm is capable of improving the reward functions in agents with computational limitations necessitating small bounds on the depth of planning, and also from the use of an inaccurate model (which may be inaccurate due to computationally-motivated approximations). 
PGRD has few parameters, improves the reward 1 function online during an agent’s lifetime, takes advantage of knowledge about the agent’s structure (through the gradient computation), and is linear in the number of reward function parameters. Notation. Formally, we consider discrete-time partially-observable environments with a finite number of hidden states s ∈ S, actions a ∈ A, and observations o ∈ O; these finite set assumptions are useful for our theorems, but our algorithm can handle infinite sets in practice. Its dynamics are governed by a state-transition function P (s |s, a) that defines a distribution over next-states s conditioned on current state s and action a, and an observation function Ω(o|s) that defines a distribution over observations o conditioned on current state s. The agent designer’s goals are specified via the objective reward function RO . At each time step, the designer receives reward RO (st ) ∈ [0, 1] based on the current state st of the environment, where the subscript denotes time. The designer’s objective return is the expected mean objective reward N 1 obtained over an infinite horizon, i.e., limN →∞ E N t=0 RO (st ) . In the standard view of RL, the agent uses the same reward function as the designer to align the interests of the agent and the designer. Here we allow for a separate agent reward function R(· ). An agent’s reward function can in general be defined in terms of the history of actions and observations, but is often more pragmatically defined in terms of some abstraction of history. We define the agent’s reward function precisely in Section 2. Optimal Reward Problem. An RL agent attempts to act so as to maximize its own cumulative reward, or return. Crucially, as a result, the sequence of environment-states {st }∞ is affected by t=0 the choice of reward function; therefore, the agent designer’s return is affected as well. The optimal reward problem arises from the fact that while the objective reward function is fixed as part of the problem description, the reward function is a choice to be made by the designer. We capture this choice abstractly by letting the reward be parameterized by some vector of parameters θ chosen from space of parameters Θ. Each θ ∈ Θ specifies a reward function R(· ; θ) which in turn produces a distribution over environment state sequences via whatever RL method the agent uses. The expected N 1 return obtained by the designer for choice θ is U(θ) = limN →∞ E N t=0 RO (st ) R(·; θ) . The optimal reward parameters are given by the solution to the optimal reward problem [16, 17, 18]: θ∗ = arg max U(θ) = arg max lim E θ∈Θ θ∈Θ N →∞ 1 N N RO (st ) R(·; θ) . (1) t=0 Our previous research on solving the optimal reward problem has focused primarily on the properties of the optimal reward function and its correspondence to the agent architecture and the environment [16, 17, 18]. This work has used inefficient exhaustive search methods for finding good approximations to θ∗ (though there is recent work on using genetic algorithms to do this [6, 9, 12]). Our primary contribution in this paper is a new convergent online stochastic gradient method for finding approximately optimal reward functions. To our knowledge, this is the first algorithm that improves reward functions in an online setting—during a single agent’s lifetime. In Section 2, we present the PGRD algorithm, prove its convergence, and relate it to OLPOMDP [2], a policy gradient algorithm. 
In Section 3, we present experiments demonstrating PGRD’s ability to approximately solve the optimal reward problem online. 2 PGRD: Policy Gradient for Reward Design PGRD builds on the following insight: the agent’s planning algorithm procedurally converts the reward function into behavior; thus, the reward function can be viewed as a specific parameterization of the agent’s policy. Using this insight, PGRD updates the reward parameters by estimating the gradient of the objective return with respect to the reward parameters, θ U(θ), from experience, using standard policy gradient techniques. In fact, we show that PGRD can be viewed as an (independently interesting) generalization of the policy gradient method OLPOMDP [2]. Specifically, we show that OLPOMDP is special case of PGRD when the planning depth d is zero. In this section, we first present the family of local planning agents for which PGRD improves the reward function. Next, we develop PGRD and prove its convergence. Finally, we show that PGRD generalizes OLPOMDP and discuss how adding planning to OLPOMDP affects the space of policies available to the optimization method. 2 1 2 3 4 5 Input: T , θ0 , {αt }∞ , β, γ t=0 o0 , i0 = initializeStart(); for t = 0, 1, 2, 3, . . . do ∀a Qt (a; θt ) = plan(it , ot , T, R(it , ·, ·; θt ), d,γ); at ∼ µ(a|it ; Qt ); rt+1 , ot+1 = takeAction(at ); µ(a |i ;Q ) 6 7 8 9 t zt+1 = βzt + θt t |itt ;Qt ) t ; µ(a θt+1 = θt + αt (rt+1 zt+1 − λθt ) ; it+1 = updateInternalState(it , at , ot+1 ); end Figure 1: PGRD (Policy Gradient for Reward Design) Algorithm A Family of Limited Agents with Internal State. Given a Markov model T defined over the observation space O and action space A, denote T (o |o, a) the probability of next observation o given that the agent takes action a after observing o. Our agents use the model T to plan. We do not assume that the model T is an accurate model of the environment. The use of an incorrect model is one type of agent limitation we examine in our experiments. In general, agents can use non-Markov models defined in terms of the history of observations and actions; we leave this for future work. The agent maintains an internal state feature vector it that is updated at each time step using it+1 = updateInternalState(it , at , ot+1 ). The internal state allows the agent to use reward functions T that depend on the agent’s history. We consider rewards of the form R(it , o, a; θt ) = θt φ(it , o, a), where θt is the reward parameter vector at time t, and φ(it , o, a) is a vector of features based on internal state it , planning state o, and action a. Note that if φ is a vector of binary indicator features, this representation allows for arbitrary reward functions and thus the representation is completely general. Many existing methods use reward functions that depend on history. Reward functions based on empirical counts of observations, as in PAC-MDP approaches [5, 20], provide some examples; see [14, 15, 13] for others. We present a concrete example in our empirical section. At each time step t, the agent’s planning algorithm, plan, performs depth-d planning using the model T and reward function R(it , o, a; θt ) with current internal state it and reward parameters θt . Specifically, the agent computes a d-step Q-value function Qd (it , ot , a; θt ) ∀a ∈ A, where Qd (it , o, a; θt ) = R(it , o, a; θt ) + γ o ∈O T (o |o, a) maxb∈A Qd−1 (it , o , b; θt ) and Q0 (it , o, a; θt ) = R(it , o, a; θt ). 
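The depth-d backup above is straightforward to sketch for a finite observation set; the tabular, recursive implementation below is illustrative only (it recomputes subproblems and assumes array-shaped T and R), not the paper's planner.

    import numpy as np

    def plan(o, T, R, d, gamma):
        # Q^d(o, a) = R(o, a) + gamma * sum_o' T(o'|o, a) * max_b Q^(d-1)(o', b),
        # with Q^0(o, a) = R(o, a). Internal state and reward parameters are
        # held fixed during planning, so R is passed in as an |O| x |A| table.
        if d == 0:
            return R[o].copy()              # depth 0 ignores the model entirely
        q = np.empty(R.shape[1])
        for a in range(R.shape[1]):
            backup = sum(T[o, a, o2] * plan(o2, T, R, d - 1, gamma).max()
                         for o2 in range(R.shape[0]))
            q[a] = R[o, a] + gamma * backup
        return q

    # Tiny example with 2 observations and 2 actions.
    T = np.array([[[0.9, 0.1], [0.1, 0.9]],
                  [[0.5, 0.5], [0.2, 0.8]]])
    R = np.array([[0.0, 1.0], [0.5, 0.0]])
    print(plan(o=0, T=T, R=R, d=2, gamma=0.95))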
We emphasize that the internal state it and reward parameters θt are held invariant while planning. Note that the d-step Q-values are only computed for the current observation ot , in effect by building a depth-d tree rooted at ot . In the d = 0 special case, the planning procedure completely ignores the model T and returns Q0 (it , ot , a; θt ) = R(it , ot , a; θt ). Regardless of the value of d, we treat the end result of planning as providing a scoring function Qt (a; θt ) where the dependence on d, it and ot is dropped from the notation. To allow for gradient calculations, our agents act according to the τ Qt (a;θt ) def Boltzmann (soft-max) stochastic policy parameterized by Q: µ(a|it ; Qt ) = e eτ Qt (b;θt ) , where τ b is a temperature parameter that determines how stochastically the agent selects the action with the highest score. When the planning depth d is small due to computational limitations, the agent cannot account for events beyond the planning depth. We examine this limitation in our experiments. Gradient Ascent. To develop a gradient algorithm for improving the reward function, we need to compute the gradient of the objective return with respect to θ: θ U(θ). The main insight is to break the gradient calculation into the calculation of two gradients. The first is the gradient of the objective return with respect to the policy µ, and the second is the gradient of the policy with respect to the reward function parameters θ. The first gradient is exactly what is computed in standard policy gradient approaches [2]. The second gradient is challenging because the transformation from reward parameters to policy involves a model-based planning procedure. We draw from the work of Neu and Szepesv´ ri [10] which shows that this gradient computation resembles planning itself. We a develop PGRD, presented in Figure 1, explicitly as a generalization of OLPOMDP, a policy gradient algorithm developed by Bartlett and Baxter [2], because of its foundational simplicity relative to other policy-gradient algorithms such as those based on actor-critic methods (e.g., [4]). Notably, the reward parameters are the only parameters being learned in PGRD. 3 PGRD follows the form of OLPOMDP (Algorithm 1 in Bartlett and Baxter [2]) but generalizes it in three places. In Figure 1 line 3, the agent plans to compute the policy, rather than storing the policy directly. In line 6, the gradient of the policy with respect to the parameters accounts for the planning procedure. In line 8, the agent maintains a general notion of internal state that allows for richer parameterization of policies than typically considered (similar to Aberdeen and Baxter [1]). The algorithm takes as parameters a sequence of learning rates {αk }, a decaying-average parameter β, and regularization parameter λ > 0 which keeps the the reward parameters θ bounded throughout learning. Given a sequence of calculations of the gradient of the policy with respect to the parameters, θt µ(at |it ; Qt ), the remainder of the algorithm climbs the gradient of objective return θ U(θ) using OLPOMDP machinery. In the next subsection, we discuss how to compute θt µ(at |it ; Qt ). Computing the Gradient of the Policy with respect to Reward. For the Boltzmann distribution, the gradient of the policy with respect to the reward parameters is given by the equation θt µ(a|it ; Qt ) = τ · µ(a|Qt )[ θt Qt (a|it ; θt ) − θt Qt (b; θt )], where τ is the Boltzmann b∈A temperature (see [10]). Thus, computing θt µ(a|it ; Qt ) reduces to computing θt Qt (a; θt ). 
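A sketch of the Boltzmann action scores and of this gradient identity, assuming the per-action Q-gradients ∇θ Q(a; θ) have already been computed (for example by the Theorem 1 recursion discussed next); the array shapes are assumptions.

    import numpy as np

    def boltzmann_policy(q, tau):
        # mu(a) proportional to exp(tau * Q(a)); subtracting the max is only
        # for numerical stability and does not change the distribution.
        z = np.exp(tau * (q - q.max()))
        return z / z.sum()

    def policy_gradient_wrt_theta(q, grad_q, tau):
        # grad_q[a] is the vector dQ(a; theta)/dtheta (one row per action).
        # Returns d mu(a)/d theta for every action via
        #   tau * mu(a) * (grad_q[a] - sum_b mu(b) * grad_q[b]).
        mu = boltzmann_policy(q, tau)
        baseline = mu @ grad_q              # expected Q-gradient under mu
        return tau * mu[:, None] * (grad_q - baseline)

    q = np.array([0.2, 0.5, 0.1])
    grad_q = np.eye(3)                      # toy per-action gradients
    print(policy_gradient_wrt_theta(q, grad_q, tau=100.0))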
The value of Qt depends on the reward parameters θt , the model, and the planning depth. However, as we present below, the process of computing the gradient closely resembles the process of planning itself, and the two computations can be interleaved. Theorem 1 presented below is an adaptation of Proposition 4 from Neu and Szepesv´ ri [10]. It presents the gradient computation for depth-d a planning as well as for infinite-depth discounted planning. We assume that the gradient of the reward function with respect to the parameters is bounded: supθ,o,i,a θ R(i, o, a, θ) < ∞. The proof of the theorem follows directly from Proposition 4 of Neu and Szepesv´ ri [10]. a Theorem 1. Except on a set of measure zero, for any depth d, the gradient θ Qd (o, a; θ) exists and is given by the recursion (where we have dropped the dependence on i for simplicity) d θ Q (o, a; θ) = θ R(o, a; θ) π d−1 (b|o ) T (o |o, a) +γ o ∈O d−1 (o θQ , b; θ), (2) b∈A where θ Q0 (o, a; θ) = θ R(o, a; θ) and π d (a|o) ∈ arg maxa Qd (o, a; θ) is any policy that is greedy with respect to Qd . The result also holds for θ Q∗ (o, a; θ) = θ limd→∞ Qd (o, a; θ). The Q-function will not be differentiable when there are multiple optimal policies. This is reflected in the arbitrary choice of π in the gradient calculation. However, it was shown by Neu and Szepesv´ ri [10] that even for values of θ which are not differentiable, the above computation produces a a valid calculation of a subgradient; we discuss this below in our proof of convergence of PGRD. Convergence of PGRD (Figure 1). Given a particular fixed reward function R(·; θ), transition model T , and planning depth, there is a corresponding fixed randomized policy µ(a|i; θ)—where we have explicitly represented the reward’s dependence on the internal state vector i in the policy parameterization and dropped Q from the notation as it is redundant given that everything else is fixed. Denote the agent’s internal-state update as a (usually deterministic) distribution ψ(i |i, a, o). Given a fixed reward parameter vector θ, the joint environment-state–internal-state transitions can be modeled as a Markov chain with a |S||I| × |S||I| transition matrix M (θ) whose entries are given by M s,i , s ,i (θ) = p( s , i | s, i ; θ) = o,a ψ(i |i, a, o)Ω(o|s )P (s |s, a)µ(a|i; θ). We make the following assumptions about the agent and the environment: Assumption 1. The transition matrix M (θ) of the joint environment-state–internal-state Markov chain has a unique stationary distribution π(θ) = [πs1 ,i1 (θ), πs2 ,i2 (θ), . . . , πs|S| ,i|I| (θ)] satisfying the balance equations π(θ)M (θ) = π(θ), for all θ ∈ Θ. Assumption 2. During its execution, PGRD (Figure 1) does not reach a value of it , and θt at which µ(at |it , Qt ) is not differentiable with respect to θt . It follows from Assumption 1 that the objective return, U(θ), is independent of the start state. The original OLPOMDP convergence proof [2] has a similar condition that only considers environment states. Intuitively, this condition allows PGRD to handle history-dependence of a reward function in the same manner that it handles partial observability in an environment. Assumption 2 accounts for the fact that a planning algorithm may not be fully differentiable everywhere. However, Theorem 1 showed that infinite and bounded-depth planning is differentiable almost everywhere (in a measure theoretic sense). 
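A sketch of the Theorem 1 recursion for the linear reward R(o, a; θ) = θᵀφ(o, a), for which ∇θ R(o, a; θ) = φ(o, a); like the planning sketch above it is tabular and recomputes subproblems, and greedy tie-breaking by argmax is one valid choice of π^(d−1).

    import numpy as np

    def q_values(o, T, R, d, gamma):
        # Same depth-d backup used for planning.
        if d == 0:
            return R[o].copy()
        return np.array([R[o, a] + gamma * sum(
            T[o, a, o2] * q_values(o2, T, R, d - 1, gamma).max()
            for o2 in range(R.shape[0])) for a in range(R.shape[1])])

    def grad_q(o, a, T, R, phi, d, gamma):
        # grad Q^d(o, a) = phi(o, a)
        #   + gamma * sum_o' T(o'|o, a) * grad Q^(d-1)(o', b*),
        # where b* is greedy with respect to Q^(d-1)(o', .).
        if d == 0:
            return phi[o, a].copy()
        g = phi[o, a].copy()
        for o2 in range(R.shape[0]):
            b_star = int(np.argmax(q_values(o2, T, R, d - 1, gamma)))
            g += gamma * T[o, a, o2] * grad_q(o2, b_star, T, R, phi, d - 1, gamma)
        return g

    T = np.array([[[0.9, 0.1], [0.1, 0.9]],
                  [[0.5, 0.5], [0.2, 0.8]]])
    phi = np.random.rand(2, 2, 3)           # phi(o, a) feature vectors
    theta = np.array([0.3, -0.1, 0.2])
    R = phi @ theta                         # R(o, a; theta) = theta . phi(o, a)
    print(grad_q(o=0, a=1, T=T, R=R, phi=phi, d=2, gamma=0.95))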
Furthermore, this assumption is perhaps stronger than necessary, as stochastic approximation algorithms, which provide the theory upon which OLPOMDP is based, have been shown to converge using subgradients [8]. 4 In order to state the convergence theorem, we must define the approximate gradient which OLPOMDP def T calculates. Let the approximate gradient estimate be β U(θ) = limT →∞ t=1 rt zt for a fixed θ and θ PGRD parameter β, where zt (in Figure 1) represents a time-decaying average of the θt µ(at |it , Qt ) calculations. It was shown by Bartlett and Baxter [2] that β U(θ) is close to the true value θ U(θ) θ for large values of β. Theorem 2 proves that PGRD converges to a stable equilibrium point based on this approximate gradient measure. This equilibrium point will typically correspond to some local optimum in the return function U(θ). Given our development and assumptions, the theorem is a straightforward extension of Theorem 6 from Bartlett and Baxter [2] (proof omitted). ∞ Theorem 2. Given β ∈ [0, 1), λ > 0, and a sequence of step sizes αt satisfying t=0 αt = ∞ and ∞ 2 t=0 (αt ) < ∞, PGRD produces a sequence of reward parameters θt such that θt → L as t → ∞ a.s., where L is the set of stable equilibrium points of the differential equation ∂θ = β U(θ) − λθ. θ ∂t PGRD generalizes OLPOMDP. As stated above, OLPOMDP, when it uses a Boltzmann distribution in its policy representation (a common case), is a special case of PGRD when the planning depth is zero. First, notice that in the case of depth-0 planning, Q0 (i, o, a; θ) = R(i, o, a, θ), regardless of the transition model and reward parameterization. We can also see from Theorem 1 that 0 θ Q (i, o, a; θ) = θ R(i, o, a; θ). Because R(i, o, a; θ) can be parameterized arbitrarily, PGRD can be configured to match standard OLPOMDP with any policy parameterization that also computes a score function for the Boltzmann distribution. In our experiments, we demonstrate that choosing a planning depth d > 0 can be beneficial over using OLPOMDP (d = 0). In the remainder of this section, we show theoretically that choosing d > 0 does not hurt in the sense that it does not reduce the space of policies available to the policy gradient method. Specifically, we show that when using an expressive enough reward parameterization, PGRD’s space of policies is not restricted relative to OLPOMDP’s space of policies. We prove the result for infinite planning, but the extension to depth-limited planning is straightforward. Theorem 3. There exists a reward parameterization such that, for an arbitrary transition model T , the space of policies representable by PGRD with infinite planning is identical to the space of policies representable by PGRD with depth 0 planning. Proof. Ignoring internal state for now (holding it constant), let C(o, a) be an arbitrary reward function used by PGRD with depth 0 planning. Let R(o, a; θ) be a reward function for PGRD with infinite (d = ∞) planning. The depth-∞ agent uses the planning result Q∗ (o, a; θ) to act, while the depth-0 agent uses the function C(o, a) to act. Therefore, it suffices to show that one can always choose θ such that the planning solution Q∗ (o, a; θ) equals C(o, a). For all o ∈ O, a ∈ A, set R(o, a; θ) = C(o, a) − γ o T (o |o, a) maxa C(o , a ). Substituting Q∗ for C, this is the Bellman optimality equation [22] for infinite-horizon planning. Setting R(o, a; θ) as above is possible if it is parameterized by a table with an entry for each observation–action pair. 
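The construction in the proof is easy to check numerically: given an arbitrary score table C(o, a) and an arbitrary (possibly wrong) model T, the reward below makes C the fixed point of the infinite-horizon Bellman backup under T, so planning with it reproduces the depth-0 policy. This is a sketch of the proof idea, not the paper's code.

    import numpy as np

    def reward_matching_scores(C, T, gamma):
        # R(o, a) = C(o, a) - gamma * sum_o' T(o'|o, a) * max_a' C(o', a').
        v = C.max(axis=1)                   # max_a' C(o', a') for every o'
        return C - gamma * (T @ v)          # (T @ v)[o, a] = sum_o' T[o, a, o'] v[o']

    C = np.array([[0.0, 1.0], [0.5, 0.2]])  # arbitrary depth-0 scores over (o, a)
    T = np.array([[[0.9, 0.1], [0.1, 0.9]],
                  [[0.5, 0.5], [0.2, 0.8]]])
    gamma = 0.95
    R = reward_matching_scores(C, T, gamma)

    # Value iteration on R under T converges back to C.
    Q = np.zeros_like(C)
    for _ in range(500):
        Q = R + gamma * (T @ Q.max(axis=1))
    print(np.allclose(Q, C, atol=1e-8))     # -> True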
Theorem 3 also shows that the effect of an arbitrarily poor model can be overcome with a good choice of reward function. This is because a Boltzmann distribution can, allowing for an arbitrary scoring function C, represent any policy. We demonstrate this ability of PGRD in our experiments. 3 Experiments The primary objective of our experiments is to demonstrate that PGRD is able to use experience online to improve the reward function parameters, thereby improving the agent’s obtained objective return. Specifically, we compare the objective return achieved by PGRD to the objective return achieved by PGRD with the reward adaptation turned off. In both cases, the reward function is initialized to the objective reward function. A secondary objective is to demonstrate that when a good model is available, adding the ability to plan—even for small depths—improves performance relative to the baseline algorithm of OLPOMDP (or equivalently PGRD with depth d = 0). Foraging Domain for Experiments 1 to 3: The foraging environment illustrated in Figure 2(a) is a 3 × 3 grid world with 3 dead-end corridors (rows) separated by impassable walls. The agent (bird) has four available actions corresponding to each cardinal direction. Movement in the intended direction fails with probability 0.1, resulting in movement in a random direction. If the resulting direction is 5 Objective Return 0.15 D=6, α=0 & D=6, α=5×10 −5 D=4, α=2×10 −4 D=0, α=5×10 −4 0.1 0.05 0 D=4, α=0 D=0, α=0 1000 2000 3000 4000 5000 Time Steps C) Objective Return B) A) 0.15 D=6, α=0 & D=6, α=5×10 −5 D=3, α=3×10 −3 D=1, α=3×10 −4 0.1 D=3, α=0 0.05 D=0, α=0.01 & D=1, α=0 0 1000 2000 3000 4000 5000 D=0, α=0 Time Steps Figure 2: A) Foraging Domain, B) Performance of PGRD with observation-action reward features, C) Performance of PGRD with recency reward features blocked by a wall or the boundary, the action results in no movement. There is a food source (worm) located in one of the three right-most locations at the end of each corridor. The agent has an eat action, which consumes the worm when the agent is at the worm’s location. After the agent consumes the worm, a new worm appears randomly in one of the other two potential worm locations. Objective Reward for the Foraging Domain: The designer’s goal is to maximize the average number of worms eaten per time step. Thus, the objective reward function RO provides a reward of 1.0 when the agent eats a worm, and a reward of 0 otherwise. The objective return is defined as in Equation (1). Experimental Methodology: We tested PGRD for depth-limited planning agents of depths 0–6. Recall that PGRD for the agent with planning depth 0 is the OLPOMDP algorithm. For each depth, we jointly optimized over the PGRD algorithm parameters, α and β (we use a fixed α throughout learning). We tested values for α on an approximate logarithmic scale in the range (10−6 , 10−2 ) as well as the special value of α = 0, which corresponds to an agent that does not adapt its reward function. We tested β values in the set 0, 0.4, 0.7, 0.9, 0.95, 0.99. Following common practice [3], we set the λ parameter to 0. We explicitly bound the reward parameters and capped the reward function output both to the range [−1, 1]. We used a Boltzmann temperature parameter of τ = 100 and planning discount factor γ = 0.95. Because we initialized θ so that the initial reward function was the objective reward function, PGRD with α = 0 was equivalent to a standard depth-limited planning agent. 
Experiment 1: A fully observable environment with a correct model learned online. In this experiment, we improve the reward function in an agent whose only limitation is planning depth, using (1) a general reward parameterization based on the current observation and (2) a more compact reward parameterization which also depends on the history of observations. Observation: The agent observes the full state, which is given by the pair o = (l, w), where l is the agent’s location and w is the worm’s location. Learning a Correct Model: Although the theorem of convergence of PGRD relies on the agent having a fixed model, the algorithm itself is readily applied to the case of learning a model online. In this experiment, the agent’s model T is learned online based on empirical transition probabilities between observations (recall this is a fully observable environment). Let no,a,o be the number of times that o was reached after taking action a after observing o. The agent models the probability of seeing o as no,a,o T (o |o, a) = . n o o,a,o Reward Parameterizations: Recall that R(i, o, a; θ) = θT φ(i, o, a), for some φ(i, o, a). (1) In the observation-action parameterization, φ(i, o, a) is a binary feature vector with one binary feature for each observation-action pair—internal state is ignored. This is effectively a table representation over all reward functions indexed by (o, a). As shown in Theorem 3, the observation-action feature representation is capable of producing arbitrary policies over the observations. In large problems, such a parameterization would not be feasible. (2) The recency parameterization is a more compact representation which uses features that rely on the history of observations. The feature vector is φ(i, o, a) = [RO (o, a), 1, φcl (l, i), φcl,a (l, a, i)], where RO (o, a) is the objective reward function defined as above. The feature φcl (l) = 1 − 1/c(l, i), where c(l, i) is the number of time steps since the agent has visited location l, as represented in the agent’s internal state i. Its value is normalized to the range [0, 1) and is high when the agent has not been to location l recently. The feature φcl,a (l, a, i) = 1 − 1/c(l, a, i) is similarly defined with respect to the time since the agent has taken action a in location l. Features based on recency counts encourage persistent exploration [21, 18]. 6 Results & Discussion: Figure 2(b) and Figure 2(c) present results for agents that use the observationaction parameterization and the recency parameterization of the reward function respectively. The horizontal axis is the number of time steps of experience. The vertical axis is the objective return, i.e., the average objective reward per time step. Each curve is an average over 130 trials. The values of d and the associated optimal algorithm parameters for each curve are noted in the figures. First, note that with d = 6, the agent is unbounded, because food is never more than 6 steps away. Therefore, the agent does not benefit from adapting the reward function parameters (given that we initialize to the objective reward function). Indeed, the d = 6, α = 0 agent performs as well as the best reward-optimizing agent. The performance for d = 6 improves with experience because the model improves with experience (and thus from the curves it is seen that the model gets quite accurate in about 1500 time steps). The largest objective return obtained for d = 6 is also the best objective return that can be obtained for any value of d. 
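A sketch of the empirical transition-count model described above, T(o'|o, a) = n_{o,a,o'} / sum_{o''} n_{o,a,o''}; the uniform fallback for (o, a) pairs that have never been tried is an assumption, since the text does not say how such pairs are handled.

    import numpy as np

    class CountModel:
        # Empirical model of next-observation probabilities from transition counts.
        def __init__(self, n_obs, n_actions):
            self.counts = np.zeros((n_obs, n_actions, n_obs))

        def update(self, o, a, o_next):
            # Record one observed transition o --a--> o_next.
            self.counts[o, a, o_next] += 1

        def prob(self, o, a):
            total = self.counts[o, a].sum()
            if total == 0:                  # unseen (o, a): assumed uniform
                return np.full(self.counts.shape[2], 1.0 / self.counts.shape[2])
            return self.counts[o, a] / total

    model = CountModel(n_obs=27, n_actions=5)    # 27 = 9 cells x 3 worm spots
    for o_next in (3, 3, 0):
        model.update(o=0, a=1, o_next=o_next)
    print(model.prob(o=0, a=1)[[0, 3]])          # -> [0.333..., 0.666...]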
Several results can be observed in both Figures 2(b) and (c). 1) Each curve that uses α > 0 (solid lines) improves with experience. This is a demonstration of our primary contribution, that PGRD is able to effectively improve the reward function with experience. That the improvement over time is not just due to model learning is seen in the fact that for each value of d < 6 the curve for α > 0 (solid-line) which adapts the reward parameters does significantly better than the corresponding curve for α = 0 (dashed-line); the α = 0 agents still learn the model. 2) For both α = 0 and α > 0 agents, the objective return obtained by agents with equivalent amounts of experience increases monotonically as d is increased (though to maintain readability we only show selected values of d in each figure). This demonstrates our secondary contribution, that the ability to plan in PGRD significantly improves performance over standard OLPOMDP (PGRD with d = 0). There are also some interesting differences between the results for the two different reward function parameterizations. With the observation-action parameterization, we noted that there always exists a setting of θ for all d that will yield optimal objective return. This is seen in Figure 2(b) in that all solid-line curves approach optimal objective return. In contrast, the more compact recency reward parameterization does not afford this guarantee and indeed for small values of d (< 3), the solid-line curves in Figure 2(c) converge to less than optimal objective return. Notably, OLPOMDP (d = 0) does not perform well with this feature set. On the other hand, for planning depths 3 ≤ d < 6, the PGRD agents with the recency parameterization achieve optimal objective return faster than the corresponding PGRD agent with the observation-action parameterization. Finally, we note that this experiment validates our claim that PGRD can improve reward functions that depend on history. Experiment 2: A fully observable environment and poor given model. Our theoretical analysis showed that PGRD with an incorrect model and the observation–action reward parameterization should (modulo local maxima issues) do just as well asymptotically as it would with a correct model. Here we illustrate this theoretical result empirically on the same foraging domain and objective reward function used in Experiment 1. We also test our hypothesis that a poor model should slow down the rate of learning relative to a correct model. Poor Model: We gave the agents a fixed incorrect model of the foraging environment that assumes there are no internal walls separating the 3 corridors. Reward Parameterization: We used the observation–action reward parameterization. With a poor model it is no longer interesting to initialize θ so that the initial reward function is the objective reward function because even for d = 6 such an agent would do poorly. Furthermore, we found that this initialization leads to excessively bad exploration and therefore poor learning of how to modify the reward. Thus, we initialize θ to uniform random values near 0, in the range (−10−3 , 10−3 ). Results: Figure 3(a) plots the objective return as a function of number of steps of experience. Each curve is an average over 36 trials. As hypothesized, the bad model slows learning by a factor of more than 10 (notice the difference in the x-axis scales from those in Figure 2). Here, deeper planning results in slower learning and indeed the d = 0 agent that does not use the model at all learns the fastest. 
However, also as hypothesized, because they used the expressive observation–action parameterization, agents of all planning depths mitigate the damage caused by the poor model and eventually converge to the optimal objective return. Experiment 3: Partially observable foraging world. Here we evaluate PGRD’s ability to learn in a partially observable version of the foraging domain. In addition, the agents learn a model under the erroneous (and computationally convenient) assumption that the domain is fully observable. 7 0.1 −4 D = 0, α = 2 ×10 D = 2, α = 3 ×10 −5 −5 D = 6, α = 2 ×10 0.05 D = 0&2&6, α = 0 0 1 2 3 Time Steps 4 5 x 10 4 0.06 D = 6, α = 7 ×10 D = 2, α = 7 ×10 −4 0.04 D = 1, α = 7 ×10 −4 D = 0, α = 5 ×10 −4 D = 0, α = 0 D = 1&2&6, α = 0 0.02 0 C) −4 1000 2000 3000 4000 5000 Time Steps Objective Return B) 0.08 0.15 Objective Return Objective Return A) 2.5 2 x 10 −3 D=6, α=3×10 −6 D=0, α=1×10 −5 1.5 D=0&6, α=0 1 0.5 1 2 3 Time Steps 4 5 x 10 4 Figure 3: A) Performance of PGRD with a poor model, B) Performance of PGRD in a partially observable world with recency reward features, C) Performance of PGRD in Acrobot Partial Observation: Instead of viewing the location of the worm at all times, the agent can now only see the worm when it is colocated with it: its observation is o = (l, f ), where f indicates whether the agent is colocated with the food. Learning an Incorrect Model: The model is learned just as in Experiment 1. Because of the erroneous full observability assumption, the model will hallucinate about worms at all the corridor ends based on the empirical frequency of having encountered them there. Reward Parameterization: We used the recency parameterization; due to the partial observability, agents with the observation–action feature set perform poorly in this environment. The parameters θ are initialized such that the initial reward function equals the objective reward function. Results & Discussion: Figure 3(b) plots the mean of 260 trials. As seen in the solid-line curves, PGRD improves the objective return at all depths (only a small amount for d = 0 and significantly more for d > 0). In fact, agents which don’t adapt the reward are hurt by planning (relative to d = 0). This experiment demonstrates that the combination of planning and reward improvement can be beneficial even when the model is erroneous. Because of the partial observability, optimal behavior in this environment achieves less objective return than in Experiment 1. Experiment 4: Acrobot. In this experiment we test PGRD in the Acrobot environment [22], a common benchmark task in the RL literature and one that has previously been used in the testing of policy gradient approaches [23]. This experiment demonstrates PGRD in an environment in which an agent must be limited due to the size of the state space and further demonstrates that adding model-based planning to policy gradient approaches can improve performance. Domain: The version of Acrobot we use is as specified by Sutton and Barto [22]. It is a two-link robot arm in which the position of one shoulder-joint is fixed and the agent’s control is limited to 3 actions which apply torque to the elbow-joint. Observation: The fully-observable state space is 4 dimensional, with two joint angles ψ1 and ψ2 , and ˙ ˙ two joint velocities ψ1 and ψ2 . Objective Reward: The designer receives an objective reward of 1.0 when the tip is one arm’s length above the fixed shoulder-joint, after which the bot is reset to its initial resting position. 
Model: We provide the agent with a perfect model of the environment. Because the environment is continuous, value iteration is intractable, and computational limitations prevent planning deep enough to compute the optimal action in any state. The feature vector contains 13 entries. One feature corresponds to the objective reward signal. For each action, there are 5 features corresponding to each of the state features plus an additional feature representing the height of the tip: φ(i, o, a) = ˙ ˙ [RO (o), {ψ1 (o), ψ2 (o), ψ1 (o), ψ2 (o), h(o)}a ]. The height feature has been used in previous work as an alternative definition of objective reward [23]. Results & Discussion: We plot the mean of 80 trials in Figure 3(c). Agents that use the fixed (α = 0) objective reward function with bounded-depth planning perform according to the bottom two curves. Allowing PGRD and OLPOMDP to adapt the parameters θ leads to improved objective return, as seen in the top two curves in Figure 3(c). Finally, the PGRD d = 6 agent outperforms the standard OLPOMDP agent (PGRD with d = 0), further demonstrating that PGRD outperforms OLPOMDP. Overall Conclusion: We developed PGRD, a new method for approximately solving the optimal reward problem in bounded planning agents that can be applied in an online setting. We showed that PGRD is a generalization of OLPOMDP and demonstrated that it both improves reward functions in limited agents and outperforms the model-free OLPOMDP approach. 8 References [1] Douglas Aberdeen and Jonathan Baxter. Scalable Internal-State Policy-Gradient Methods for POMDPs. Proceedings of the Nineteenth International Conference on Machine Learning, 2002. [2] Peter L. Bartlett and Jonathan Baxter. Stochastic optimization of controlled partially observable Markov decision processes. In Proceedings of the 39th IEEE Conference on Decision and Control, 2000. [3] Jonathan Baxter, Peter L. Bartlett, and Lex Weaver. Experiments with Infinite-Horizon, Policy-Gradient Estimation, 2001. [4] Shalabh Bhatnagar, Richard S. Sutton, M Ghavamzadeh, and Mark Lee. Natural actor-critic algorithms. Automatica, 2009. [5] Ronen I. Brafman and Moshe Tennenholtz. R-MAX - A General Polynomial Time Algorithm for NearOptimal Reinforcement Learning. Journal of Machine Learning Research, 3:213–231, 2001. [6] S. Elfwing, Eiji Uchibe, K. Doya, and H. I. Christensen. Co-evolution of Shaping Rewards and MetaParameters in Reinforcement Learning. Adaptive Behavior, 16(6):400–412, 2008. [7] J. Zico Kolter and Andrew Y. Ng. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th International Conference on Machine Learning, pages 513–520, 2009. [8] Harold J. Kushner and G. George Yin. Stochastic Approximation and Recursive Algorithms and Applications. Springer, 2nd edition, 2010. [9] Cetin Mericli, Tekin Mericli, and H. Levent Akin. A Reward Function Generation Method Using Genetic ¸ ¸ ¸ Algorithms : A Robot Soccer Case Study (Extended Abstract). In Proc. of the 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010), number 2, pages 1513–1514, 2010. [10] Gergely Neu and Csaba Szepesv´ ri. Apprenticeship learning using inverse reinforcement learning and a gradient methods. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, pages 295–302, 2007. [11] Andrew Y. Ng, Stuart J. Russell, and D. Harada. Policy invariance under reward transformations: Theory and application to reward shaping. 
In Proceedings of the 16th International Conference on Machine Learning, pages 278–287, 1999. [12] Scott Niekum, Andrew G. Barto, and Lee Spector. Genetic Programming for Reward Function Search. IEEE Transactions on Autonomous Mental Development, 2(2):83–90, 2010. [13] Pierre-Yves Oudeyer, Frederic Kaplan, and Verena V. Hafner. Intrinsic Motivation Systems for Autonomous Mental Development. IEEE Transactions on Evolutionary Computation, 11(2):265–286, April 2007. [14] J¨ rgen Schmidhuber. Curious model-building control systems. In IEEE International Joint Conference on u Neural Networks, pages 1458–1463, 1991. [15] Satinder Singh, Andrew G. Barto, and Nuttapong Chentanez. Intrinsically Motivated Reinforcement Learning. In Proceedings of Advances in Neural Information Processing Systems 17 (NIPS), pages 1281–1288, 2005. [16] Satinder Singh, Richard L. Lewis, and Andrew G. Barto. Where Do Rewards Come From? In Proceedings of the Annual Conference of the Cognitive Science Society, pages 2601–2606, 2009. [17] Satinder Singh, Richard L. Lewis, Andrew G. Barto, and Jonathan Sorg. Intrinsically Motivated Reinforcement Learning: An Evolutionary Perspective. IEEE Transations on Autonomous Mental Development, 2(2):70–82, 2010. [18] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Internal Rewards Mitigate Agent Boundedness. In Proceedings of the 27th International Conference on Machine Learning, 2010. [19] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Variance-Based Rewards for Approximate Bayesian Reinforcement Learning. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, 2010. [20] Alexander L. Strehl and Michael L. Littman. An analysis of model-based Interval Estimation for Markov Decision Processes. Journal of Computer and System Sciences, 74(8):1309–1331, 2008. [21] Richard S. Sutton. Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming. In The Seventh International Conference on Machine Learning, pages 216–224. 1990. [22] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, 1998. [23] Lex Weaver and Nigel Tao. The Optimal Reward Baseline for Gradient-Based Reinforcement Learning. In Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, pages 538–545. 2001. 9

2 0.33130792 196 nips-2010-Online Markov Decision Processes under Bandit Feedback

Author: Gergely Neu, Andras Antos, András György, Csaba Szepesvári

Abstract: We consider online learning in finite stochastic Markovian environments where in each time step a new reward function is chosen by an oblivious adversary. The goal of the learning agent is to compete with the best stationary policy in terms of the total reward received. In each time step the agent observes the current state and the reward associated with the last transition, however, the agent does not observe the rewards associated with other state-action pairs. The agent is assumed to know the transition probabilities. The state of the art result for this setting is a no-regret algorithm. In this paper we propose a new learning algorithm and, assuming that stationary policies mix uniformly fast, we show that after T time steps, the expected regret of the new algorithm is O(T^{2/3} (ln T)^{1/3}), giving the first rigorously proved regret bound for the problem.

3 0.22407426 93 nips-2010-Feature Construction for Inverse Reinforcement Learning

Author: Sergey Levine, Zoran Popovic, Vladlen Koltun

Abstract: The goal of inverse reinforcement learning is to find a reward function for a Markov decision process, given example traces from its optimal policy. Current IRL techniques generally rely on user-supplied features that form a concise basis for the reward. We present an algorithm that instead constructs reward features from a large collection of component features, by building logical conjunctions of those component features that are relevant to the example policy. Given example traces, the algorithm returns a reward function as well as the constructed features. The reward function can be used to recover a full, deterministic, stationary policy, and the features can be used to transplant the reward function into any novel environment on which the component features are well defined. 1

4 0.21461812 4 nips-2010-A Computational Decision Theory for Interactive Assistants

Author: Alan Fern, Prasad Tadepalli

Abstract: We study several classes of interactive assistants from the points of view of decision theory and computational complexity. We first introduce a class of POMDPs called hidden-goal MDPs (HGMDPs), which formalize the problem of interactively assisting an agent whose goal is hidden and whose actions are observable. In spite of its restricted nature, we show that optimal action selection in finite horizon HGMDPs is PSPACE-complete even in domains with deterministic dynamics. We then introduce a more restricted model called helper action MDPs (HAMDPs), where the assistant’s action is accepted by the agent when it is helpful, and can be easily ignored by the agent otherwise. We show classes of HAMDPs that are complete for PSPACE and NP along with a polynomial time class. Furthermore, we show that for general HAMDPs a simple myopic policy achieves a regret, compared to an omniscient assistant, that is bounded by the entropy of the initial goal distribution. A variation of this policy is shown to achieve worst-case regret that is logarithmic in the number of goals for any goal distribution. 1

5 0.19935307 130 nips-2010-Interval Estimation for Reinforcement-Learning Algorithms in Continuous-State Domains

Author: Martha White, Adam White

Abstract: The reinforcement learning community has explored many approaches to obtaining value estimates and models to guide decision making; these approaches, however, do not usually provide a measure of confidence in the estimate. Accurate estimates of an agent’s confidence are useful for many applications, such as biasing exploration and automatically adjusting parameters to reduce dependence on parameter-tuning. Computing confidence intervals on reinforcement learning value estimates, however, is challenging because data generated by the agentenvironment interaction rarely satisfies traditional assumptions. Samples of valueestimates are dependent, likely non-normally distributed and often limited, particularly in early learning when confidence estimates are pivotal. In this work, we investigate how to compute robust confidences for value estimates in continuous Markov decision processes. We illustrate how to use bootstrapping to compute confidence intervals online under a changing policy (previously not possible) and prove validity under a few reasonable assumptions. We demonstrate the applicability of our confidence estimation algorithms with experiments on exploration, parameter estimation and tracking. 1

6 0.19860351 43 nips-2010-Bootstrapping Apprenticeship Learning

7 0.19525822 184 nips-2010-Nonparametric Bayesian Policy Priors for Reinforcement Learning

8 0.19159956 192 nips-2010-Online Classification with Specificity Constraints

9 0.18681204 11 nips-2010-A POMDP Extension with Belief-dependent Rewards

10 0.15547125 179 nips-2010-Natural Policy Gradient Methods with Parameter-based Exploration for Control Tasks

11 0.13782787 189 nips-2010-On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient

12 0.13661259 168 nips-2010-Monte-Carlo Planning in Large POMDPs

13 0.12727557 14 nips-2010-A Reduction from Apprenticeship Learning to Classification

14 0.12164579 203 nips-2010-Parametric Bandits: The Generalized Linear Case

15 0.12156672 152 nips-2010-Learning from Logged Implicit Exploration Data

16 0.10593498 201 nips-2010-PAC-Bayesian Model Selection for Reinforcement Learning

17 0.1009026 208 nips-2010-Policy gradients in linearly-solvable MDPs

18 0.099463947 37 nips-2010-Basis Construction from Power Series Expansions of Value Functions

19 0.092834346 50 nips-2010-Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories

20 0.087924376 160 nips-2010-Linear Complementarity for Regularized Policy Evaluation and Improvement


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.16), (1, -0.342), (2, -0.052), (3, -0.035), (4, -0.0), (5, 0.048), (6, -0.067), (7, -0.008), (8, 0.008), (9, 0.24), (10, -0.044), (11, 0.107), (12, 0.012), (13, -0.069), (14, 0.047), (15, -0.089), (16, -0.196), (17, 0.061), (18, 0.047), (19, 0.013), (20, 0.035), (21, -0.125), (22, -0.106), (23, -0.079), (24, 0.107), (25, 0.014), (26, -0.054), (27, -0.055), (28, 0.146), (29, -0.11), (30, -0.072), (31, 0.019), (32, -0.03), (33, 0.041), (34, -0.074), (35, -0.057), (36, 0.055), (37, 0.06), (38, 0.061), (39, -0.053), (40, 0.003), (41, 0.055), (42, 0.036), (43, -0.051), (44, -0.004), (45, 0.067), (46, -0.059), (47, -0.014), (48, -0.071), (49, 0.074)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97553098 229 nips-2010-Reward Design via Online Gradient Ascent

Author: Jonathan Sorg, Richard L. Lewis, Satinder P. Singh

Abstract: Recent work has demonstrated that when artificial agents are limited in their ability to achieve their goals, the agent designer can benefit by making the agent’s goals different from the designer’s. This gives rise to the optimization problem of designing the artificial agent’s goals—in the RL framework, designing the agent’s reward function. Existing attempts at solving this optimal reward problem do not leverage experience gained online during the agent’s lifetime nor do they take advantage of knowledge about the agent’s structure. In this work, we develop a gradient ascent approach with formal convergence guarantees for approximately solving the optimal reward problem online during an agent’s lifetime. We show that our method generalizes a standard policy gradient approach, and we demonstrate its ability to improve reward functions in agents with various forms of limitations. 1 The Optimal Reward Problem In this work, we consider the scenario of an agent designer building an autonomous agent. The designer has his or her own goals which must be translated into goals for the autonomous agent. We represent goals using the Reinforcement Learning (RL) formalism of the reward function. This leads to the optimal reward problem of designing the agent’s reward function so as to maximize the objective reward received by the agent designer. Typically, the designer assigns his or her own reward to the agent. However, there is ample work which demonstrates the benefit of assigning reward which does not match the designer’s. For example, work on reward shaping [11] has shown how to modify rewards to accelerate learning without altering the optimal policy, and PAC-MDP methods [5, 20] including approximate Bayesian methods [7, 19] add bonuses to the objective reward to achieve optimism under uncertainty. These approaches explicitly or implicitly assume that the asymptotic behavior of the agent should be the same as that which would occur using the objective reward function. These methods do not explicitly consider the optimal reward problem; however, they do show improved performance through reward modification. In our recent work that does explicitly consider the optimal reward problem [18], we analyzed an explicit hypothesis about the benefit of reward design—that it helps mitigate the performance loss caused by computational constraints (bounds) on agent architectures. We considered various types of agent limitations—limits on planning depth, failure to account for partial observability, and other erroneous modeling assumptions—and demonstrated the benefits of good reward functions in each case empirically. Crucially, in bounded agents, the optimal reward function often leads to behavior that is different from the asymptotic behavior achieved with the objective reward function. In this work, we develop an algorithm, Policy Gradient for Reward Design (PGRD), for improving reward functions for a family of bounded agents that behave according to repeated local (from the current state) model-based planning. We show that this algorithm is capable of improving the reward functions in agents with computational limitations necessitating small bounds on the depth of planning, and also from the use of an inaccurate model (which may be inaccurate due to computationally-motivated approximations). 
PGRD has few parameters, improves the reward 1 function online during an agent’s lifetime, takes advantage of knowledge about the agent’s structure (through the gradient computation), and is linear in the number of reward function parameters. Notation. Formally, we consider discrete-time partially-observable environments with a finite number of hidden states s ∈ S, actions a ∈ A, and observations o ∈ O; these finite set assumptions are useful for our theorems, but our algorithm can handle infinite sets in practice. Its dynamics are governed by a state-transition function P (s |s, a) that defines a distribution over next-states s conditioned on current state s and action a, and an observation function Ω(o|s) that defines a distribution over observations o conditioned on current state s. The agent designer’s goals are specified via the objective reward function RO . At each time step, the designer receives reward RO (st ) ∈ [0, 1] based on the current state st of the environment, where the subscript denotes time. The designer’s objective return is the expected mean objective reward N 1 obtained over an infinite horizon, i.e., limN →∞ E N t=0 RO (st ) . In the standard view of RL, the agent uses the same reward function as the designer to align the interests of the agent and the designer. Here we allow for a separate agent reward function R(· ). An agent’s reward function can in general be defined in terms of the history of actions and observations, but is often more pragmatically defined in terms of some abstraction of history. We define the agent’s reward function precisely in Section 2. Optimal Reward Problem. An RL agent attempts to act so as to maximize its own cumulative reward, or return. Crucially, as a result, the sequence of environment-states {st }∞ is affected by t=0 the choice of reward function; therefore, the agent designer’s return is affected as well. The optimal reward problem arises from the fact that while the objective reward function is fixed as part of the problem description, the reward function is a choice to be made by the designer. We capture this choice abstractly by letting the reward be parameterized by some vector of parameters θ chosen from space of parameters Θ. Each θ ∈ Θ specifies a reward function R(· ; θ) which in turn produces a distribution over environment state sequences via whatever RL method the agent uses. The expected N 1 return obtained by the designer for choice θ is U(θ) = limN →∞ E N t=0 RO (st ) R(·; θ) . The optimal reward parameters are given by the solution to the optimal reward problem [16, 17, 18]: θ∗ = arg max U(θ) = arg max lim E θ∈Θ θ∈Θ N →∞ 1 N N RO (st ) R(·; θ) . (1) t=0 Our previous research on solving the optimal reward problem has focused primarily on the properties of the optimal reward function and its correspondence to the agent architecture and the environment [16, 17, 18]. This work has used inefficient exhaustive search methods for finding good approximations to θ∗ (though there is recent work on using genetic algorithms to do this [6, 9, 12]). Our primary contribution in this paper is a new convergent online stochastic gradient method for finding approximately optimal reward functions. To our knowledge, this is the first algorithm that improves reward functions in an online setting—during a single agent’s lifetime. In Section 2, we present the PGRD algorithm, prove its convergence, and relate it to OLPOMDP [2], a policy gradient algorithm. 
In Section 3, we present experiments demonstrating PGRD's ability to approximately solve the optimal reward problem online.

2 PGRD: Policy Gradient for Reward Design

PGRD builds on the following insight: the agent's planning algorithm procedurally converts the reward function into behavior; thus, the reward function can be viewed as a specific parameterization of the agent's policy. Using this insight, PGRD updates the reward parameters by estimating the gradient of the objective return with respect to the reward parameters, ∇_θ U(θ), from experience, using standard policy gradient techniques. In fact, we show that PGRD can be viewed as an (independently interesting) generalization of the policy gradient method OLPOMDP [2]. Specifically, we show that OLPOMDP is a special case of PGRD when the planning depth d is zero. In this section, we first present the family of local planning agents for which PGRD improves the reward function. Next, we develop PGRD and prove its convergence. Finally, we show that PGRD generalizes OLPOMDP and discuss how adding planning to OLPOMDP affects the space of policies available to the optimization method.

Input: T, θ_0, {α_t}_{t=0}^∞, β, γ
1: o_0, i_0 = initializeStart();
2: for t = 0, 1, 2, 3, . . . do
3:   ∀a: Q_t(a; θ_t) = plan(i_t, o_t, T, R(i_t, ·, ·; θ_t), d, γ);
4:   a_t ∼ µ(a|i_t; Q_t);
5:   r_{t+1}, o_{t+1} = takeAction(a_t);
6:   z_{t+1} = β z_t + ∇_{θ_t} µ(a_t|i_t; Q_t) / µ(a_t|i_t; Q_t);
7:   θ_{t+1} = θ_t + α_t (r_{t+1} z_{t+1} − λθ_t);
8:   i_{t+1} = updateInternalState(i_t, a_t, o_{t+1});
9: end
Figure 1: PGRD (Policy Gradient for Reward Design) Algorithm

A Family of Limited Agents with Internal State. Given a Markov model T defined over the observation space O and action space A, denote by T(o'|o, a) the probability of next observation o' given that the agent takes action a after observing o. Our agents use the model T to plan. We do not assume that the model T is an accurate model of the environment. The use of an incorrect model is one type of agent limitation we examine in our experiments. In general, agents can use non-Markov models defined in terms of the history of observations and actions; we leave this for future work. The agent maintains an internal state feature vector i_t that is updated at each time step using i_{t+1} = updateInternalState(i_t, a_t, o_{t+1}). The internal state allows the agent to use reward functions that depend on the agent's history. We consider rewards of the form R(i_t, o, a; θ_t) = θ_t^T φ(i_t, o, a), where θ_t is the reward parameter vector at time t, and φ(i_t, o, a) is a vector of features based on internal state i_t, planning state o, and action a. Note that if φ is a vector of binary indicator features, this representation allows for arbitrary reward functions and thus is completely general. Many existing methods use reward functions that depend on history. Reward functions based on empirical counts of observations, as in PAC-MDP approaches [5, 20], provide some examples; see [14, 15, 13] for others. We present a concrete example in our empirical section.

At each time step t, the agent's planning algorithm, plan, performs depth-d planning using the model T and reward function R(i_t, o, a; θ_t) with current internal state i_t and reward parameters θ_t. Specifically, the agent computes a d-step Q-value function Q^d(i_t, o_t, a; θ_t) for all a ∈ A, where Q^d(i_t, o, a; θ_t) = R(i_t, o, a; θ_t) + γ Σ_{o'∈O} T(o'|o, a) max_{b∈A} Q^{d−1}(i_t, o', b; θ_t) and Q^0(i_t, o, a; θ_t) = R(i_t, o, a; θ_t).
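To make the planning step concrete, the following is a minimal Python sketch of the depth-d recursion above together with the Boltzmann action distribution used below. The dictionary-based model, the function names, and the reward signature are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def plan_q(o, d, actions, T, R, gamma):
    """Depth-d Q-values rooted at the current observation o.

    Assumptions: T[(o, a)] is a dict mapping next observations to
    probabilities, and R(o, a) returns the agent's (parameterized) reward;
    these stand in for the paper's model T and reward R(i_t, o, a; theta_t).
    """
    if d == 0:
        return {a: R(o, a) for a in actions}
    q = {}
    for a in actions:
        backup = 0.0
        for o_next, p in T[(o, a)].items():
            q_next = plan_q(o_next, d - 1, actions, T, R, gamma)
            backup += p * max(q_next.values())
        q[a] = R(o, a) + gamma * backup
    return q

def boltzmann(q, tau):
    """Soft-max action distribution mu(a | Q) with temperature tau."""
    acts = sorted(q)
    scores = np.array([tau * q[a] for a in acts])
    scores -= scores.max()                    # for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(acts, probs))
```

This tree-structured recursion recomputes shared subtrees; memoizing on (observation, depth) would reduce the cost without changing the values.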
We emphasize that the internal state i_t and reward parameters θ_t are held invariant while planning. Note that the d-step Q-values are only computed for the current observation o_t, in effect by building a depth-d tree rooted at o_t. In the d = 0 special case, the planning procedure completely ignores the model T and returns Q^0(i_t, o_t, a; θ_t) = R(i_t, o_t, a; θ_t). Regardless of the value of d, we treat the end result of planning as providing a scoring function Q_t(a; θ_t), where the dependence on d, i_t, and o_t is dropped from the notation. To allow for gradient calculations, our agents act according to the Boltzmann (soft-max) stochastic policy parameterized by Q: µ(a|i_t; Q_t) = e^{τ Q_t(a; θ_t)} / Σ_{b∈A} e^{τ Q_t(b; θ_t)}, where τ is a temperature parameter that determines how stochastically the agent selects the action with the highest score. When the planning depth d is small due to computational limitations, the agent cannot account for events beyond the planning depth. We examine this limitation in our experiments.

Gradient Ascent. To develop a gradient algorithm for improving the reward function, we need to compute the gradient of the objective return with respect to θ: ∇_θ U(θ). The main insight is to break the gradient calculation into the calculation of two gradients. The first is the gradient of the objective return with respect to the policy µ, and the second is the gradient of the policy with respect to the reward function parameters θ. The first gradient is exactly what is computed in standard policy gradient approaches [2]. The second gradient is challenging because the transformation from reward parameters to policy involves a model-based planning procedure. We draw from the work of Neu and Szepesvári [10], which shows that this gradient computation resembles planning itself. We develop PGRD, presented in Figure 1, explicitly as a generalization of OLPOMDP, a policy gradient algorithm developed by Bartlett and Baxter [2], because of its foundational simplicity relative to other policy-gradient algorithms such as those based on actor-critic methods (e.g., [4]). Notably, the reward parameters are the only parameters being learned in PGRD.

PGRD follows the form of OLPOMDP (Algorithm 1 in Bartlett and Baxter [2]) but generalizes it in three places. In Figure 1 line 3, the agent plans to compute the policy, rather than storing the policy directly. In line 6, the gradient of the policy with respect to the parameters accounts for the planning procedure. In line 8, the agent maintains a general notion of internal state that allows for richer parameterization of policies than typically considered (similar to Aberdeen and Baxter [1]). The algorithm takes as parameters a sequence of learning rates {α_t}, a decaying-average parameter β, and a regularization parameter λ > 0 which keeps the reward parameters θ bounded throughout learning. Given a sequence of calculations of the gradient of the policy with respect to the parameters, ∇_{θ_t} µ(a_t|i_t; Q_t), the remainder of the algorithm climbs the gradient of objective return ∇_θ U(θ) using OLPOMDP machinery. In the next subsection, we discuss how to compute ∇_{θ_t} µ(a_t|i_t; Q_t).

Computing the Gradient of the Policy with respect to Reward. For the Boltzmann distribution, the gradient of the policy with respect to the reward parameters is given by ∇_{θ_t} µ(a|i_t; Q_t) = τ · µ(a|Q_t) [∇_{θ_t} Q_t(a; θ_t) − Σ_{b∈A} µ(b|Q_t) ∇_{θ_t} Q_t(b; θ_t)], where τ is the Boltzmann temperature (see [10]). Thus, computing ∇_{θ_t} µ(a|i_t; Q_t) reduces to computing ∇_{θ_t} Q_t(a; θ_t).
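As a small sketch of this identity, assuming the per-action Q-values and their gradients with respect to θ are already available (for instance from the recursion of Theorem 1 below); the array shapes and names are illustrative.

```python
import numpy as np

def policy_gradient_wrt_reward(q, grad_q, tau):
    """Gradient of the Boltzmann policy with respect to the reward parameters.

    q      : shape (num_actions,)      -- Q_t(a; theta)
    grad_q : shape (num_actions, dim)  -- grad_theta Q_t(a; theta)
    Returns (mu, grad_mu) with
        grad_mu[a] = tau * mu[a] * (grad_q[a] - sum_b mu[b] * grad_q[b]).
    """
    scores = tau * (q - q.max())            # subtract max for stability
    mu = np.exp(scores) / np.exp(scores).sum()
    baseline = mu @ grad_q                  # sum_b mu(b) grad_theta Q(b)
    grad_mu = tau * mu[:, None] * (grad_q - baseline[None, :])
    return mu, grad_mu
```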
The value of Q_t depends on the reward parameters θ_t, the model, and the planning depth. However, as we present below, the process of computing the gradient closely resembles the process of planning itself, and the two computations can be interleaved. Theorem 1, presented below, is an adaptation of Proposition 4 from Neu and Szepesvári [10]. It presents the gradient computation for depth-d planning as well as for infinite-depth discounted planning. We assume that the gradient of the reward function with respect to the parameters is bounded: sup_{θ,o,i,a} ‖∇_θ R(i, o, a; θ)‖ < ∞. The proof of the theorem follows directly from Proposition 4 of Neu and Szepesvári [10].

Theorem 1. Except on a set of measure zero, for any depth d, the gradient ∇_θ Q^d(o, a; θ) exists and is given by the recursion (where we have dropped the dependence on i for simplicity)

∇_θ Q^d(o, a; θ) = ∇_θ R(o, a; θ) + γ Σ_{o'∈O} T(o'|o, a) Σ_{b∈A} π^{d−1}(b|o') ∇_θ Q^{d−1}(o', b; θ),   (2)

where ∇_θ Q^0(o, a; θ) = ∇_θ R(o, a; θ) and π^d(·|o) is any policy that is greedy with respect to Q^d, i.e., any policy supported on arg max_a Q^d(o, a; θ). The result also holds for ∇_θ Q^*(o, a; θ) = ∇_θ lim_{d→∞} Q^d(o, a; θ).

The Q-function will not be differentiable when there are multiple optimal policies. This is reflected in the arbitrary choice of π in the gradient calculation. However, it was shown by Neu and Szepesvári [10] that even at values of θ where Q is not differentiable, the above computation produces a valid calculation of a subgradient; we discuss this below in our proof of convergence of PGRD.

Convergence of PGRD (Figure 1). Given a particular fixed reward function R(·; θ), transition model T, and planning depth, there is a corresponding fixed randomized policy µ(a|i; θ), where we have explicitly represented the reward's dependence on the internal state vector i in the policy parameterization and dropped Q from the notation as it is redundant given that everything else is fixed. Denote the agent's internal-state update as a (usually deterministic) distribution ψ(i'|i, a, o). Given a fixed reward parameter vector θ, the joint environment-state–internal-state transitions can be modeled as a Markov chain with a |S||I| × |S||I| transition matrix M(θ) whose entries are given by M_{⟨s,i⟩,⟨s',i'⟩}(θ) = p(⟨s', i'⟩ | ⟨s, i⟩; θ) = Σ_{o,a} ψ(i'|i, a, o) Ω(o|s') P(s'|s, a) µ(a|i; θ). We make the following assumptions about the agent and the environment:

Assumption 1. The transition matrix M(θ) of the joint environment-state–internal-state Markov chain has a unique stationary distribution π(θ) = [π_{s_1,i_1}(θ), π_{s_2,i_2}(θ), . . . , π_{s_|S|,i_|I|}(θ)] satisfying the balance equations π(θ)M(θ) = π(θ), for all θ ∈ Θ.

Assumption 2. During its execution, PGRD (Figure 1) does not reach a value of i_t and θ_t at which µ(a_t|i_t; Q_t) is not differentiable with respect to θ_t.

It follows from Assumption 1 that the objective return, U(θ), is independent of the start state. The original OLPOMDP convergence proof [2] has a similar condition that only considers environment states. Intuitively, this condition allows PGRD to handle history-dependence of a reward function in the same manner that it handles partial observability in an environment. Assumption 2 accounts for the fact that a planning algorithm may not be fully differentiable everywhere. However, Theorem 1 showed that infinite- and bounded-depth planning is differentiable almost everywhere (in a measure-theoretic sense).
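Returning to the recursion in Equation (2): because it mirrors the planning recursion, the two can be computed together. Below is a sketch under the same illustrative tabular-model assumption as before, with linear rewards R(o, a; θ) = θ^T φ(o, a) so that ∇_θ R = φ, and ties in the greedy policy broken arbitrarily, as the theorem permits.

```python
import numpy as np

def plan_q_and_grad(o, d, actions, T, phi, theta, gamma):
    """Jointly compute Q^d(o, a; theta) and grad_theta Q^d(o, a; theta).

    Assumptions: T[(o, a)] maps next observations to probabilities and
    phi(o, a) returns a feature vector, so R(o, a; theta) = theta . phi
    and grad_theta R(o, a; theta) = phi(o, a).
    Returns two dicts keyed by action: scalar Q-values and gradient vectors.
    """
    if d == 0:
        return ({a: float(theta @ phi(o, a)) for a in actions},
                {a: np.asarray(phi(o, a), dtype=float) for a in actions})
    q, grad_q = {}, {}
    for a in actions:
        value = float(theta @ phi(o, a))
        grad = np.asarray(phi(o, a), dtype=float).copy()
        for o_next, p in T[(o, a)].items():
            q_next, g_next = plan_q_and_grad(o_next, d - 1, actions,
                                             T, phi, theta, gamma)
            b_star = max(q_next, key=q_next.get)  # greedy pi^{d-1}(.|o_next)
            value += gamma * p * q_next[b_star]
            grad += gamma * p * g_next[b_star]
        q[a], grad_q[a] = value, grad
    return q, grad_q
```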
Furthermore, this assumption is perhaps stronger than necessary, as stochastic approximation algorithms, which provide the theory upon which OLPOMDP is based, have been shown to converge using subgradients [8].

In order to state the convergence theorem, we must define the approximate gradient which OLPOMDP calculates. Let the approximate gradient estimate be ∇_θ^β U(θ) = lim_{T→∞} (1/T) Σ_{t=1}^T r_t z_t for a fixed θ and PGRD parameter β, where z_t (in Figure 1) represents a time-decaying average of the ∇_{θ_t} µ(a_t|i_t; Q_t) calculations. It was shown by Bartlett and Baxter [2] that ∇_θ^β U(θ) is close to the true gradient ∇_θ U(θ) for large values of β. Theorem 2 proves that PGRD converges to a stable equilibrium point based on this approximate gradient measure. This equilibrium point will typically correspond to some local optimum in the return function U(θ). Given our development and assumptions, the theorem is a straightforward extension of Theorem 6 from Bartlett and Baxter [2] (proof omitted).

Theorem 2. Given β ∈ [0, 1), λ > 0, and a sequence of step sizes α_t satisfying Σ_{t=0}^∞ α_t = ∞ and Σ_{t=0}^∞ (α_t)^2 < ∞, PGRD produces a sequence of reward parameters θ_t such that θ_t → L as t → ∞ a.s., where L is the set of stable equilibrium points of the differential equation ∂θ/∂t = ∇_θ^β U(θ) − λθ.

PGRD generalizes OLPOMDP. As stated above, OLPOMDP, when it uses a Boltzmann distribution in its policy representation (a common case), is a special case of PGRD when the planning depth is zero. First, notice that in the case of depth-0 planning, Q^0(i, o, a; θ) = R(i, o, a; θ), regardless of the transition model and reward parameterization. We can also see from Theorem 1 that ∇_θ Q^0(i, o, a; θ) = ∇_θ R(i, o, a; θ). Because R(i, o, a; θ) can be parameterized arbitrarily, PGRD can be configured to match standard OLPOMDP with any policy parameterization that also computes a score function for the Boltzmann distribution.

In our experiments, we demonstrate that choosing a planning depth d > 0 can be beneficial over using OLPOMDP (d = 0). In the remainder of this section, we show theoretically that choosing d > 0 does not hurt in the sense that it does not reduce the space of policies available to the policy gradient method. Specifically, we show that when using an expressive enough reward parameterization, PGRD's space of policies is not restricted relative to OLPOMDP's space of policies. We prove the result for infinite planning, but the extension to depth-limited planning is straightforward.

Theorem 3. There exists a reward parameterization such that, for an arbitrary transition model T, the space of policies representable by PGRD with infinite planning is identical to the space of policies representable by PGRD with depth-0 planning.

Proof. Ignoring internal state for now (holding it constant), let C(o, a) be an arbitrary reward function used by PGRD with depth-0 planning. Let R(o, a; θ) be a reward function for PGRD with infinite (d = ∞) planning. The depth-∞ agent uses the planning result Q^*(o, a; θ) to act, while the depth-0 agent uses the function C(o, a) to act. Therefore, it suffices to show that one can always choose θ such that the planning solution Q^*(o, a; θ) equals C(o, a). For all o ∈ O, a ∈ A, set R(o, a; θ) = C(o, a) − γ Σ_{o'∈O} T(o'|o, a) max_{a'∈A} C(o', a'). Substituting Q^* for C, this is the Bellman optimality equation [22] for infinite-horizon planning. Setting R(o, a; θ) as above is possible if it is parameterized by a table with an entry for each observation-action pair.
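A sketch of the construction used in this proof: given any scoring function C and any model T, build a tabular reward whose infinite-horizon planning solution recovers C. The tabular containers and names are illustrative assumptions.

```python
def reward_from_scorer(C, T, observations, actions, gamma):
    """R(o, a) = C(o, a) - gamma * sum_{o'} T(o'|o, a) * max_{a'} C(o', a').

    With this reward, C satisfies the Bellman optimality equation, so the
    infinite-depth planner's Q* equals C and the induced Boltzmann policy
    matches that of a depth-0 agent scoring actions with C directly.
    Assumptions: C[(o, a)] is a number and T[(o, a)] maps next
    observations to probabilities.
    """
    R = {}
    for o in observations:
        for a in actions:
            expected_best = sum(
                p * max(C[(o_next, a_next)] for a_next in actions)
                for o_next, p in T[(o, a)].items())
            R[(o, a)] = C[(o, a)] - gamma * expected_best
    return R
```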
Theorem 3 also shows that the effect of an arbitrarily poor model can be overcome with a good choice of reward function. This is because a Boltzmann distribution can, allowing for an arbitrary scoring function C, represent any policy. We demonstrate this ability of PGRD in our experiments.

3 Experiments

The primary objective of our experiments is to demonstrate that PGRD is able to use experience online to improve the reward function parameters, thereby improving the agent's obtained objective return. Specifically, we compare the objective return achieved by PGRD to the objective return achieved by PGRD with the reward adaptation turned off. In both cases, the reward function is initialized to the objective reward function. A secondary objective is to demonstrate that when a good model is available, adding the ability to plan, even for small depths, improves performance relative to the baseline algorithm of OLPOMDP (or equivalently PGRD with depth d = 0).

Foraging Domain for Experiments 1 to 3: The foraging environment illustrated in Figure 2(a) is a 3 × 3 grid world with 3 dead-end corridors (rows) separated by impassable walls. The agent (bird) has four available actions corresponding to each cardinal direction. Movement in the intended direction fails with probability 0.1, resulting in movement in a random direction. If the resulting direction is blocked by a wall or the boundary, the action results in no movement. There is a food source (worm) located in one of the three right-most locations at the end of each corridor. The agent has an eat action, which consumes the worm when the agent is at the worm's location. After the agent consumes the worm, a new worm appears randomly in one of the other two potential worm locations.

Figure 2: A) Foraging Domain, B) Performance of PGRD with observation-action reward features, C) Performance of PGRD with recency reward features

Objective Reward for the Foraging Domain: The designer's goal is to maximize the average number of worms eaten per time step. Thus, the objective reward function R_O provides a reward of 1.0 when the agent eats a worm, and a reward of 0 otherwise. The objective return is defined as in Equation (1).

Experimental Methodology: We tested PGRD for depth-limited planning agents of depths 0–6. Recall that PGRD for the agent with planning depth 0 is the OLPOMDP algorithm. For each depth, we jointly optimized over the PGRD algorithm parameters α and β (we use a fixed α throughout learning). We tested values for α on an approximate logarithmic scale in the range (10^−6, 10^−2), as well as the special value α = 0, which corresponds to an agent that does not adapt its reward function. We tested β values in the set {0, 0.4, 0.7, 0.9, 0.95, 0.99}. Following common practice [3], we set the λ parameter to 0. We explicitly bounded the reward parameters and capped the reward function output, both to the range [−1, 1]. We used a Boltzmann temperature parameter of τ = 100 and planning discount factor γ = 0.95. Because we initialized θ so that the initial reward function was the objective reward function, PGRD with α = 0 was equivalent to a standard depth-limited planning agent.
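For concreteness, here is a minimal sketch of the foraging dynamics described above. The class name, the state encoding, and the exact wall layout (vertical movement allowed only in the leftmost column) are our own illustrative assumptions rather than the paper's implementation.

```python
import random

class ForagingWorld:
    """Illustrative 3x3 foraging grid: each row is a corridor whose right
    end can hold the worm; vertical moves are only allowed in the leftmost
    column (our stand-in for the paper's wall layout).
    Actions: N, S, E, W, and eat."""

    MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

    def __init__(self, slip=0.1, seed=0):
        self.rng = random.Random(seed)
        self.slip = slip
        self.agent = (0, 0)
        self.worm = (self.rng.randrange(3), 2)    # a right-most cell

    def step(self, action):
        reward = 0.0
        if action == "eat":
            if self.agent == self.worm:
                reward = 1.0                       # objective reward
                rows = [r for r in range(3) if (r, 2) != self.worm]
                self.worm = (self.rng.choice(rows), 2)
        else:
            if self.rng.random() < self.slip:      # slip to a random direction
                action = self.rng.choice(list(self.MOVES))
            dr, dc = self.MOVES[action]
            r, c = self.agent[0] + dr, self.agent[1] + dc
            blocked = not (0 <= r < 3 and 0 <= c < 3)
            # corridor walls: vertical movement only from column 0
            if dr != 0 and self.agent[1] != 0:
                blocked = True
            if not blocked:
                self.agent = (r, c)
        return (self.agent, self.worm), reward
```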
Experiment 1: A fully observable environment with a correct model learned online. In this experiment, we improve the reward function in an agent whose only limitation is planning depth, using (1) a general reward parameterization based on the current observation and (2) a more compact reward parameterization which also depends on the history of observations.

Observation: The agent observes the full state, which is given by the pair o = (l, w), where l is the agent's location and w is the worm's location.

Learning a Correct Model: Although the convergence theorem for PGRD relies on the agent having a fixed model, the algorithm itself is readily applied to the case of learning a model online. In this experiment, the agent's model T is learned online based on empirical transition probabilities between observations (recall this is a fully observable environment). Let n_{o,a,o'} be the number of times that o' was reached after taking action a after observing o. The agent models the probability of seeing o' as T(o'|o, a) = n_{o,a,o'} / Σ_{o''} n_{o,a,o''}.

Reward Parameterizations: Recall that R(i, o, a; θ) = θ^T φ(i, o, a), for some φ(i, o, a). (1) In the observation-action parameterization, φ(i, o, a) is a binary feature vector with one binary feature for each observation-action pair (internal state is ignored). This is effectively a table representation over all reward functions indexed by (o, a). As shown in Theorem 3, the observation-action feature representation is capable of producing arbitrary policies over the observations. In large problems, such a parameterization would not be feasible. (2) The recency parameterization is a more compact representation which uses features that rely on the history of observations. The feature vector is φ(i, o, a) = [R_O(o, a), 1, φ_cl(l, i), φ_{cl,a}(l, a, i)], where R_O(o, a) is the objective reward function defined as above. The feature φ_cl(l, i) = 1 − 1/c(l, i), where c(l, i) is the number of time steps since the agent has visited location l, as represented in the agent's internal state i. Its value is normalized to the range [0, 1) and is high when the agent has not been to location l recently. The feature φ_{cl,a}(l, a, i) = 1 − 1/c(l, a, i) is similarly defined with respect to the time since the agent has taken action a in location l. Features based on recency counts encourage persistent exploration [21, 18].

Results & Discussion: Figure 2(b) and Figure 2(c) present results for agents that use the observation-action parameterization and the recency parameterization of the reward function, respectively. The horizontal axis is the number of time steps of experience. The vertical axis is the objective return, i.e., the average objective reward per time step. Each curve is an average over 130 trials. The values of d and the associated optimal algorithm parameters for each curve are noted in the figures. First, note that with d = 6, the agent is unbounded, because food is never more than 6 steps away. Therefore, the agent does not benefit from adapting the reward function parameters (given that we initialize to the objective reward function). Indeed, the d = 6, α = 0 agent performs as well as the best reward-optimizing agent. The performance for d = 6 improves with experience because the model improves with experience (from the curves it is seen that the model becomes quite accurate in about 1500 time steps). The largest objective return obtained for d = 6 is also the best objective return that can be obtained for any value of d.
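Before continuing with the results, here is a sketch of two ingredients defined above for Experiment 1: the count-based model estimate and the recency features. The dictionary-based counters, the internal-state encoding, and the feature ordering are illustrative assumptions.

```python
from collections import defaultdict
import numpy as np

class EmpiricalModel:
    """T(o'|o, a) = n(o, a, o') / sum_{o''} n(o, a, o'')."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, o, a, o_next):
        self.counts[(o, a)][o_next] += 1

    def T(self, o, a):
        c = self.counts[(o, a)]
        total = sum(c.values())
        return {o2: n / total for o2, n in c.items()} if total else {}

def recency_features(i, o, a, objective_reward):
    """phi(i, o, a) = [R_O(o, a), 1, 1 - 1/c(l, i), 1 - 1/c(l, a, i)].

    The internal state i is assumed to be a dict of counters (values >= 1)
    giving the number of steps since location l was visited and since
    action a was last taken there; never-visited entries default to a
    large count (an assumption).
    """
    l = o[0]                                   # agent's location
    c_l = i.get(("since_visit", l), 10**6)
    c_la = i.get(("since_action", l, a), 10**6)
    return np.array([objective_reward(o, a), 1.0,
                     1.0 - 1.0 / c_l, 1.0 - 1.0 / c_la])
```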
Several results can be observed in both Figures 2(b) and (c). 1) Each curve that uses α > 0 (solid lines) improves with experience. This is a demonstration of our primary contribution, that PGRD is able to effectively improve the reward function with experience. That the improvement over time is not just due to model learning is seen in the fact that, for each value of d < 6, the curve for α > 0 (solid line), which adapts the reward parameters, does significantly better than the corresponding curve for α = 0 (dashed line); the α = 0 agents still learn the model. 2) For both α = 0 and α > 0 agents, the objective return obtained by agents with equivalent amounts of experience increases monotonically as d is increased (though to maintain readability we only show selected values of d in each figure). This demonstrates our secondary contribution, that the ability to plan in PGRD significantly improves performance over standard OLPOMDP (PGRD with d = 0).

There are also some interesting differences between the results for the two reward function parameterizations. With the observation-action parameterization, we noted that there always exists a setting of θ for all d that will yield optimal objective return. This is seen in Figure 2(b) in that all solid-line curves approach optimal objective return. In contrast, the more compact recency reward parameterization does not afford this guarantee, and indeed for small values of d (< 3) the solid-line curves in Figure 2(c) converge to less than optimal objective return. Notably, OLPOMDP (d = 0) does not perform well with this feature set. On the other hand, for planning depths 3 ≤ d < 6, the PGRD agents with the recency parameterization achieve optimal objective return faster than the corresponding PGRD agents with the observation-action parameterization. Finally, we note that this experiment validates our claim that PGRD can improve reward functions that depend on history.

Experiment 2: A fully observable environment and poor given model. Our theoretical analysis showed that PGRD with an incorrect model and the observation-action reward parameterization should (modulo local maxima issues) do just as well asymptotically as it would with a correct model. Here we illustrate this theoretical result empirically on the same foraging domain and objective reward function used in Experiment 1. We also test our hypothesis that a poor model should slow down the rate of learning relative to a correct model.

Poor Model: We gave the agents a fixed incorrect model of the foraging environment that assumes there are no internal walls separating the 3 corridors.

Reward Parameterization: We used the observation-action reward parameterization. With a poor model it is no longer interesting to initialize θ so that the initial reward function is the objective reward function, because even for d = 6 such an agent would do poorly. Furthermore, we found that this initialization leads to excessively bad exploration and therefore poor learning of how to modify the reward. Thus, we initialize θ to uniform random values near 0, in the range (−10^−3, 10^−3).

Results: Figure 3(a) plots the objective return as a function of the number of steps of experience. Each curve is an average over 36 trials. As hypothesized, the bad model slows learning by a factor of more than 10 (notice the difference in the x-axis scales from those in Figure 2). Here, deeper planning results in slower learning, and indeed the d = 0 agent, which does not use the model at all, learns the fastest.
However, also as hypothesized, because they used the expressive observation-action parameterization, agents of all planning depths mitigate the damage caused by the poor model and eventually converge to the optimal objective return.

Experiment 3: Partially observable foraging world. Here we evaluate PGRD's ability to learn in a partially observable version of the foraging domain. In addition, the agents learn a model under the erroneous (and computationally convenient) assumption that the domain is fully observable.

Figure 3: A) Performance of PGRD with a poor model, B) Performance of PGRD in a partially observable world with recency reward features, C) Performance of PGRD in Acrobot

Partial Observation: Instead of viewing the location of the worm at all times, the agent can now only see the worm when it is colocated with it: its observation is o = (l, f), where f indicates whether the agent is colocated with the food.

Learning an Incorrect Model: The model is learned just as in Experiment 1. Because of the erroneous full-observability assumption, the model will hallucinate worms at all the corridor ends based on the empirical frequency of having encountered them there.

Reward Parameterization: We used the recency parameterization; due to the partial observability, agents with the observation-action feature set perform poorly in this environment. The parameters θ are initialized such that the initial reward function equals the objective reward function.

Results & Discussion: Figure 3(b) plots the mean of 260 trials. As seen in the solid-line curves, PGRD improves the objective return at all depths (only a small amount for d = 0 and significantly more for d > 0). In fact, agents which do not adapt the reward are hurt by planning (relative to d = 0). This experiment demonstrates that the combination of planning and reward improvement can be beneficial even when the model is erroneous. Because of the partial observability, optimal behavior in this environment achieves less objective return than in Experiment 1.

Experiment 4: Acrobot. In this experiment we test PGRD in the Acrobot environment [22], a common benchmark task in the RL literature and one that has previously been used in the testing of policy gradient approaches [23]. This experiment demonstrates PGRD in an environment in which an agent must be limited due to the size of the state space, and further demonstrates that adding model-based planning to policy gradient approaches can improve performance.

Domain: The version of Acrobot we use is as specified by Sutton and Barto [22]. It is a two-link robot arm in which the position of one shoulder-joint is fixed and the agent's control is limited to 3 actions which apply torque to the elbow-joint.

Observation: The fully-observable state space is 4-dimensional, with two joint angles ψ1 and ψ2 and two joint velocities ψ̇1 and ψ̇2.

Objective Reward: The designer receives an objective reward of 1.0 when the tip is one arm's length above the fixed shoulder-joint, after which the bot is reset to its initial resting position.
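As a small worked example of this objective reward (and of the tip-height quantity that also appears as a feature in the next paragraph), assuming unit-length links and angles measured from the downward vertical as in Sutton and Barto; the exact threshold and conventions here are our own assumptions.

```python
import math

def acrobot_tip_height(psi1, psi2, link_len=1.0):
    """Height of the tip above the fixed shoulder joint, with angles
    measured from the downward vertical (an assumed convention)."""
    return -link_len * math.cos(psi1) - link_len * math.cos(psi1 + psi2)

def acrobot_objective_reward(psi1, psi2, link_len=1.0):
    """1.0 when the tip is one link's length above the fixed joint,
    0 otherwise -- a sketch of the objective reward described above."""
    return 1.0 if acrobot_tip_height(psi1, psi2, link_len) >= link_len else 0.0

# Hanging straight down, the tip sits two link lengths below the joint:
assert acrobot_tip_height(0.0, 0.0) == -2.0
```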
Model: We provide the agent with a perfect model of the environment. Because the environment is continuous, value iteration is intractable, and computational limitations prevent planning deep enough to compute the optimal action in any state. The feature vector contains 13 entries. One feature corresponds to the objective reward signal. For each action, there are 5 features, corresponding to each of the state features plus an additional feature representing the height of the tip: φ(i, o, a) = [R_O(o), {ψ1(o), ψ2(o), ψ̇1(o), ψ̇2(o), h(o)}_a]. The height feature has been used in previous work as an alternative definition of objective reward [23].

Results & Discussion: We plot the mean of 80 trials in Figure 3(c). Agents that use the fixed (α = 0) objective reward function with bounded-depth planning perform according to the bottom two curves. Allowing PGRD and OLPOMDP to adapt the parameters θ leads to improved objective return, as seen in the top two curves in Figure 3(c). Finally, the PGRD d = 6 agent outperforms the standard OLPOMDP agent (PGRD with d = 0), further demonstrating the advantage of PGRD over OLPOMDP.

Overall Conclusion: We developed PGRD, a new method for approximately solving the optimal reward problem in bounded planning agents that can be applied in an online setting. We showed that PGRD is a generalization of OLPOMDP and demonstrated that it both improves reward functions in limited agents and outperforms the model-free OLPOMDP approach.

References
[1] Douglas Aberdeen and Jonathan Baxter. Scalable Internal-State Policy-Gradient Methods for POMDPs. In Proceedings of the Nineteenth International Conference on Machine Learning, 2002.
[2] Peter L. Bartlett and Jonathan Baxter. Stochastic optimization of controlled partially observable Markov decision processes. In Proceedings of the 39th IEEE Conference on Decision and Control, 2000.
[3] Jonathan Baxter, Peter L. Bartlett, and Lex Weaver. Experiments with Infinite-Horizon, Policy-Gradient Estimation, 2001.
[4] Shalabh Bhatnagar, Richard S. Sutton, M. Ghavamzadeh, and Mark Lee. Natural actor-critic algorithms. Automatica, 2009.
[5] Ronen I. Brafman and Moshe Tennenholtz. R-MAX - A General Polynomial Time Algorithm for Near-Optimal Reinforcement Learning. Journal of Machine Learning Research, 3:213–231, 2001.
[6] S. Elfwing, Eiji Uchibe, K. Doya, and H. I. Christensen. Co-evolution of Shaping Rewards and Meta-Parameters in Reinforcement Learning. Adaptive Behavior, 16(6):400–412, 2008.
[7] J. Zico Kolter and Andrew Y. Ng. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th International Conference on Machine Learning, pages 513–520, 2009.
[8] Harold J. Kushner and G. George Yin. Stochastic Approximation and Recursive Algorithms and Applications. Springer, 2nd edition, 2010.
[9] Çetin Meriçli, Tekin Meriçli, and H. Levent Akın. A Reward Function Generation Method Using Genetic Algorithms: A Robot Soccer Case Study (Extended Abstract). In Proc. of the 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010), number 2, pages 1513–1514, 2010.
[10] Gergely Neu and Csaba Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, pages 295–302, 2007.
[11] Andrew Y. Ng, Stuart J. Russell, and D. Harada. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the 16th International Conference on Machine Learning, pages 278–287, 1999.
[12] Scott Niekum, Andrew G. Barto, and Lee Spector. Genetic Programming for Reward Function Search. IEEE Transactions on Autonomous Mental Development, 2(2):83–90, 2010.
[13] Pierre-Yves Oudeyer, Frédéric Kaplan, and Verena V. Hafner. Intrinsic Motivation Systems for Autonomous Mental Development. IEEE Transactions on Evolutionary Computation, 11(2):265–286, April 2007.
[14] Jürgen Schmidhuber. Curious model-building control systems. In IEEE International Joint Conference on Neural Networks, pages 1458–1463, 1991.
[15] Satinder Singh, Andrew G. Barto, and Nuttapong Chentanez. Intrinsically Motivated Reinforcement Learning. In Proceedings of Advances in Neural Information Processing Systems 17 (NIPS), pages 1281–1288, 2005.
[16] Satinder Singh, Richard L. Lewis, and Andrew G. Barto. Where Do Rewards Come From? In Proceedings of the Annual Conference of the Cognitive Science Society, pages 2601–2606, 2009.
[17] Satinder Singh, Richard L. Lewis, Andrew G. Barto, and Jonathan Sorg. Intrinsically Motivated Reinforcement Learning: An Evolutionary Perspective. IEEE Transactions on Autonomous Mental Development, 2(2):70–82, 2010.
[18] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Internal Rewards Mitigate Agent Boundedness. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[19] Jonathan Sorg, Satinder Singh, and Richard L. Lewis. Variance-Based Rewards for Approximate Bayesian Reinforcement Learning. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, 2010.
[20] Alexander L. Strehl and Michael L. Littman. An analysis of model-based Interval Estimation for Markov Decision Processes. Journal of Computer and System Sciences, 74(8):1309–1331, 2008.
[21] Richard S. Sutton. Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming. In The Seventh International Conference on Machine Learning, pages 216–224, 1990.
[22] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, 1998.
[23] Lex Weaver and Nigel Tao. The Optimal Reward Baseline for Gradient-Based Reinforcement Learning. In Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, pages 538–545, 2001.

2 0.81913698 4 nips-2010-A Computational Decision Theory for Interactive Assistants

Author: Alan Fern, Prasad Tadepalli

Abstract: We study several classes of interactive assistants from the points of view of decision theory and computational complexity. We first introduce a class of POMDPs called hidden-goal MDPs (HGMDPs), which formalize the problem of interactively assisting an agent whose goal is hidden and whose actions are observable. In spite of its restricted nature, we show that optimal action selection in finite horizon HGMDPs is PSPACE-complete even in domains with deterministic dynamics. We then introduce a more restricted model called helper action MDPs (HAMDPs), where the assistant’s action is accepted by the agent when it is helpful, and can be easily ignored by the agent otherwise. We show classes of HAMDPs that are complete for PSPACE and NP along with a polynomial time class. Furthermore, we show that for general HAMDPs a simple myopic policy achieves a regret, compared to an omniscient assistant, that is bounded by the entropy of the initial goal distribution. A variation of this policy is shown to achieve worst-case regret that is logarithmic in the number of goals for any goal distribution. 1

3 0.74301273 93 nips-2010-Feature Construction for Inverse Reinforcement Learning

Author: Sergey Levine, Zoran Popovic, Vladlen Koltun

Abstract: The goal of inverse reinforcement learning is to find a reward function for a Markov decision process, given example traces from its optimal policy. Current IRL techniques generally rely on user-supplied features that form a concise basis for the reward. We present an algorithm that instead constructs reward features from a large collection of component features, by building logical conjunctions of those component features that are relevant to the example policy. Given example traces, the algorithm returns a reward function as well as the constructed features. The reward function can be used to recover a full, deterministic, stationary policy, and the features can be used to transplant the reward function into any novel environment on which the component features are well defined. 1

4 0.7423752 11 nips-2010-A POMDP Extension with Belief-dependent Rewards

Author: Mauricio Araya, Olivier Buffet, Vincent Thomas, François Charpillet

Abstract: Partially Observable Markov Decision Processes (POMDPs) model sequential decision-making problems under uncertainty and partial observability. Unfortunately, some problems cannot be modeled with state-dependent reward functions, e.g., problems whose objective explicitly implies reducing the uncertainty on the state. To that end, we introduce ρPOMDPs, an extension of POMDPs where the reward function ρ depends on the belief state. We show that, under the common assumption that ρ is convex, the value function is also convex, which makes it possible to (1) approximate ρ arbitrarily well with a piecewise linear and convex (PWLC) function, and (2) use state-of-the-art exact or approximate solving algorithms with limited changes. 1

5 0.67812341 196 nips-2010-Online Markov Decision Processes under Bandit Feedback

Author: Gergely Neu, Andras Antos, András György, Csaba Szepesvári

Abstract: We consider online learning in finite stochastic Markovian environments where in each time step a new reward function is chosen by an oblivious adversary. The goal of the learning agent is to compete with the best stationary policy in terms of the total reward received. In each time step the agent observes the current state and the reward associated with the last transition, however, the agent does not observe the rewards associated with other state-action pairs. The agent is assumed to know the transition probabilities. The state of the art result for this setting is a no-regret algorithm. In this paper we propose a new learning algorithm and, assuming that stationary policies mix uniformly fast, we show that after T time steps, the expected regret of the new algorithm is O(T^{2/3} (ln T)^{1/3}), giving the first rigorously proved regret bound for the problem. 1

6 0.66742337 130 nips-2010-Interval Estimation for Reinforcement-Learning Algorithms in Continuous-State Domains

7 0.66739702 43 nips-2010-Bootstrapping Apprenticeship Learning

8 0.6600768 168 nips-2010-Monte-Carlo Planning in Large POMDPs

9 0.62314254 37 nips-2010-Basis Construction from Power Series Expansions of Value Functions

10 0.53557187 184 nips-2010-Nonparametric Bayesian Policy Priors for Reinforcement Learning

11 0.52408224 203 nips-2010-Parametric Bandits: The Generalized Linear Case

12 0.52363366 192 nips-2010-Online Classification with Specificity Constraints

13 0.51448619 50 nips-2010-Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories

14 0.42814264 201 nips-2010-PAC-Bayesian Model Selection for Reinforcement Learning

15 0.40103656 183 nips-2010-Non-Stochastic Bandit Slate Problems

16 0.39404929 14 nips-2010-A Reduction from Apprenticeship Learning to Classification

17 0.37098664 68 nips-2010-Effects of Synaptic Weight Diffusion on Learning in Decision Making Networks

18 0.37058014 64 nips-2010-Distributionally Robust Markov Decision Processes

19 0.34913814 252 nips-2010-SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

20 0.3486236 212 nips-2010-Predictive State Temporal Difference Learning


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.026), (27, 0.048), (30, 0.039), (39, 0.014), (45, 0.222), (50, 0.028), (52, 0.021), (60, 0.385), (77, 0.029), (78, 0.011), (80, 0.014), (90, 0.03), (99, 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.95962238 223 nips-2010-Rates of convergence for the cluster tree

Author: Kamalika Chaudhuri, Sanjoy Dasgupta

Abstract: For a density f on R^d, a high-density cluster is any connected component of {x : f(x) ≥ λ}, for some λ > 0. The set of all high-density clusters form a hierarchy called the cluster tree of f. We present a procedure for estimating the cluster tree given samples from f. We give finite-sample convergence rates for our algorithm, as well as lower bounds on the sample complexity of this estimation problem. 1

2 0.93779743 165 nips-2010-MAP estimation in Binary MRFs via Bipartite Multi-cuts

Author: Sashank J. Reddi, Sunita Sarawagi, Sundar Vishwanathan

Abstract: We propose a new LP relaxation for obtaining the MAP assignment of a binary MRF with pairwise potentials. Our relaxation is derived from reducing the MAP assignment problem to an instance of a recently proposed Bipartite Multi-cut problem where the LP relaxation is guaranteed to provide an O(log k) approximation where k is the number of vertices adjacent to non-submodular edges in the MRF. We then propose a combinatorial algorithm to efficiently solve the LP and also provide a lower bound by concurrently solving its dual to within an approximation. The algorithm is up to an order of magnitude faster and provides better MAP scores and bounds than the state of the art message passing algorithm of [1] that tightens the local marginal polytope with third-order marginal constraints. 1

3 0.93172467 278 nips-2010-Universal Consistency of Multi-Class Support Vector Classification

Author: Tobias Glasmachers

Abstract: Steinwart was the first to prove universal consistency of support vector machine classification. His proof analyzed the ‘standard’ support vector machine classifier, which is restricted to binary classification problems. In contrast, recent analysis has resulted in the common belief that several extensions of SVM classification to more than two classes are inconsistent. Countering this belief, we prove the universal consistency of the multi-class support vector machine by Crammer and Singer. Our proof extends Steinwart’s techniques to the multi-class case. 1

4 0.90485495 104 nips-2010-Generative Local Metric Learning for Nearest Neighbor Classification

Author: Yung-kyun Noh, Byoung-tak Zhang, Daniel D. Lee

Abstract: We consider the problem of learning a local metric to enhance the performance of nearest neighbor classification. Conventional metric learning methods attempt to separate data distributions in a purely discriminative manner; here we show how to take advantage of information from parametric generative models. We focus on the bias in the information-theoretic error arising from finite sampling effects, and find an appropriate local metric that maximally reduces the bias based upon knowledge from generative models. As a byproduct, the asymptotic theoretical analysis in this work relates metric learning with dimensionality reduction, which was not understood from previous discriminative approaches. Empirical experiments show that this learned local metric enhances the discriminative nearest neighbor performance on various datasets using simple class conditional generative models. 1

5 0.86234581 62 nips-2010-Discriminative Clustering by Regularized Information Maximization

Author: Andreas Krause, Pietro Perona, Ryan G. Gomes

Abstract: Is there a principled way to learn a probabilistic discriminative classifier from an unlabeled data set? We present a framework that simultaneously clusters the data and trains a discriminative classifier. We call it Regularized Information Maximization (RIM). RIM optimizes an intuitive information-theoretic objective function which balances class separation, class balance and classifier complexity. The approach can flexibly incorporate different likelihood functions, express prior assumptions about the relative size of different classes and incorporate partial labels for semi-supervised learning. In particular, we instantiate the framework to unsupervised, multi-class kernelized logistic regression. Our empirical evaluation indicates that RIM outperforms existing methods on several real data sets, and demonstrates that RIM is an effective model selection method. 1

same-paper 6 0.807464 229 nips-2010-Reward Design via Online Gradient Ascent

7 0.70009714 164 nips-2010-MAP Estimation for Graphical Models by Likelihood Maximization

8 0.67748708 80 nips-2010-Estimation of Renyi Entropy and Mutual Information Based on Generalized Nearest-Neighbor Graphs

9 0.66846263 196 nips-2010-Online Markov Decision Processes under Bandit Feedback

10 0.66428572 138 nips-2010-Large Margin Multi-Task Metric Learning

11 0.66338289 193 nips-2010-Online Learning: Random Averages, Combinatorial Parameters, and Learnability

12 0.65684426 4 nips-2010-A Computational Decision Theory for Interactive Assistants

13 0.65623486 263 nips-2010-Switching state space model for simultaneously estimating state transitions and nonstationary firing rates

14 0.65551698 31 nips-2010-An analysis on negative curvature induced by singularity in multi-layer neural-network learning

15 0.65477157 102 nips-2010-Generalized roof duality and bisubmodular functions

16 0.65368193 287 nips-2010-Worst-Case Linear Discriminant Analysis

17 0.65334535 87 nips-2010-Extended Bayesian Information Criteria for Gaussian Graphical Models

18 0.64841092 75 nips-2010-Empirical Risk Minimization with Approximations of Probabilistic Grammars

19 0.6468302 163 nips-2010-Lower Bounds on Rate of Convergence of Cutting Plane Methods

20 0.64225364 70 nips-2010-Efficient Optimization for Discriminative Latent Class Models