Framework and methods of diverse exploration for fast and safe policy improvement

11568236 · 2023-01-31

Abstract

The present technology addresses the problem of quickly and safely improving policies in online reinforcement learning domains. As its solution, an exploration strategy comprising diverse exploration (DE) is employed, which learns and deploys a diverse set of safe policies to explore the environment. DE theory explains why diversity in behavior policies enables effective exploration without sacrificing exploitation. An empirical study shows that an online policy improvement algorithm framework implementing the DE strategy can achieve both fast policy improvement and safe online performance.

Claims

1. A method of learning and deploying a set of behavior policies for an artificial agent selected from a space of stochastic behavior policies, the method comprising: iteratively improving the set of behavior policies by: selecting a diverse set comprising a plurality of behavior policies from the set of behavior policies for evaluation in each iteration, each respective behavior policy being ensured safe and having a statistically expected return no worse than a lower bound of policy performance, which excludes a portion of the set of behavior policies that is at least one of not ensured safe and not having an expected return no worse than a lower bound of policy performance, employing a diverse exploration strategy for the selecting which strives for behavior diversity, and assessing policy performance of each behavior policy of the diverse set with respect to the artificial agent.

2. The method according to claim 1, wherein each behavior policy has a variance associated with the estimate of its policy performance by importance sampling, and each diverse set has a common average variance, in each of a plurality of policy improvement iterations.

3. The method according to claim 1, wherein each of the policy performance and policy behavior diversity is quantified according to a common objective function, and the common objective function is employed to assess the policy performance of the diverse set.

4. The method according to claim 1, wherein the diverse set comprises the plurality of behavior policies which are adaptively defined based on assessed policy performance within a respective single iteration.

5. The method according to claim 4, wherein the adaptation is based on a change in the lower bound of policy performance as a selection criterion for a subsequent diverse set within a respective single iteration.

6. The method according to claim 4, wherein the adaptation is based on feedback of a system state of the artificial agent received after deploying a prior behavior policy within a respective single iteration.

7. The method according to claim 1, wherein the diverse set within a respective single iteration is selected as the plurality of behavior policies generated based on prior feedback, having maximum differences from each other according to a Kullback-Leibler (KL) divergence measure.

8. The method according to claim 1, wherein the plurality of behavior policies of the diverse set within a respective iteration are selected according to an aggregate group statistic.

9. The method according to claim 1, wherein each respective behavior policy represents a trained first artificial neural network, and each respective behavior policy controls an artificial agent comprising a second artificial neural network.

10. The method according to claim 1, wherein the plurality of behavior policies within the diverse set for each iteration is selected by importance sampling within a confidence interval.

11. The method according to claim 1, wherein in each iteration, a data set is collected representing an environment in a first number of dimensions, and the set of behavior policies have a second number of dimensions less than the first number of dimensions.

12. The method according to claim 1, wherein the statistically expected return no worse than a lower bound of policy performance is updated between iterations.

13. The method according to claim 1, wherein, in each iteration, feedback is obtained from a system controlled by the artificial agent in accordance with the respective behavior policy, and the feedback is used to improve a computational model of the system which is predictive of future behavior of the system over a range of environmental conditions.

14. The method according to claim 13, further comprising providing a computational model of the system which is predictive of future behavior of the system over a multidimensional range of environmental conditions, based on a plurality of observations under different environmental conditions having a distribution, and the diverse exploration strategy is biased to select respective behavior policies within the set of behavior policies which selectively explore portions of the multidimensional range of environmental conditions.

15. The method according to claim 1, wherein the diverse set is selected based on a predicted state of a system controlled by the artificial agent according to the respective behavior policy during deployment of the respective behavior policy.

16. The method according to claim 1, further comprising selecting the diverse set for assessment within each iteration to generate a maximum predicted statistical improvement in policy performance.

17. The method according to claim 1, wherein the diverse set in each policy improvement iteration i is selected by deploying a most recently confirmed set of policies 𝒫 to collect n trajectories uniformly distributed over the respective policies π.sub.i within the set of policies π.sub.i∈𝒫; further comprising, for each set of trajectories 𝒟.sub.i collected from a respective policy π.sub.i, partitioning 𝒟.sub.i and appending to a training set of trajectories 𝒟.sub.train and a testing set of trajectories 𝒟.sub.test; said assessing policy performance comprises, from 𝒟.sub.train, generating a set of candidate policies and evaluating the set of candidate policies using 𝒟.sub.test; further comprising confirming a subset of policies as meeting predetermined criteria, and if no new policies π.sub.i are confirmed, redeploying the current set of policies 𝒫.

18. The method according to claim 17, further comprising, for each iteration: defining the lower policy performance bound ρ.sub.−; and performing a t-test on normalized returns of 𝒟.sub.test without importance sampling, treating the set of deployed policies 𝒫 as a mixture policy that generated 𝒟.sub.test.

19. The method according to claim 17, further comprising employing a set of conjugate policies generated as a byproduct of conjugate gradient descent.

20. An apparatus for performing the method according to claim 1, comprising: an input configured to receive data from operation of a system within an environment controlled by the artificial agent according to a respective behavior policy; at least one automated processor configured to perform said iteratively improving; and at least one output configured to control the system within the environment controlled by the artificial agent in accordance with a respective behavior policy of the set of behavior policies.

21. The method according to claim 1, further comprising adapting the diverse set for each iteration based on: a change in the lower bound of policy performance as a selection criterion for a subsequent diverse set within a respective single iteration; and feedback of a system state of a system controlled by the artificial agent received after deploying a prior behavior policy within a respective single iteration.

22. The apparatus according to claim 20, wherein each respective behavior policy represents a trained first artificial neural network and each respective behavior policy controls a second artificial neural network.

23. The method according to claim 1, further comprising controlling a physical dynamic system whose dynamics and operating statistics change over a period of policy improvement with the artificial agent.

24. A method for controlling a system within an environment, comprising: providing an artificial agent configured to control the system, the artificial agent being controlled according to a behavioral policy; iteratively improving a set of behavioral policies comprising the behavioral policy, by, for each iteration: selecting a diverse set of behavior policies from a space of stochastic behavior policies for evaluation in each iteration, each respective behavior policy being ensured safe and having a statistically expected return no worse than a lower threshold of policy performance, the diverse set maximizing behavior diversity according to a diversity metric; assessing policy performance of a plurality of behavioral policies of the diverse set; and updating a selection criterion; and controlling the system within the environment with the artificial agent in accordance with a respective behavioral policy from the iteratively improved diverse set.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1A shows a 4×4 grid-world with five possible actions (↗, ↑, →, ↓, ←); optimal actions for each state labeled in the upper left corner; an example of two policies (red and blue) of similar quality but different actions at 9 states.

(2) FIG. 1B shows a partial view of the distribution of pair-wise diversity (i.e., the number of states at which two policies differ) across a range of policy quality (i.e., total extra steps to Goal, compared to optimal, over all states).

(3) FIG. 2A shows average normalized returns over 50 runs of policy improvement.

(4) FIG. 2B shows diversity in experienced (s; a) pairs.

(5) FIGS. 3A-3F show comparisons between TRPO, RP (TRPO with Random Perturbations), and DE (TRPO with Diverse Exploration) on average performance of all behavior policies and trace of the covariance matrix of perturbed gradient estimates, across iterations of learning on (FIGS. 3A, 3D) Hopper, (FIGS. 3B, 3E) Walker and (FIGS. 3C, 3F) HalfCheetah. Reported values are the average and interquartile range over 10 runs.

(6) FIG. 4 shows a graph of average performance of all behavior policies for DE on Hopper with a decreasing number of perturbed policies and TRPO.

(7) FIG. 5 shows an exemplary prior art computing platform.

DETAILED DESCRIPTION OF THE INVENTION

(8) Rationale for Diverse Exploration

(9) Problem Formulation

(10) Definition 1. Consider an RL problem with an initial policy, π.sub.0, a lower bound, ρ.sub.−, on policy performance, and a confidence level, δ(0<δ<½), all specified by a user. Let π.sub.1, . . . , π.sub.d be d(d≥1) iterations of behavior policies and ρ(π.sub.i) be the performance (expected return) of π.sub.i. Fast and Safe Improvement (FSI) aims at: max(ρ(π.sub.d)−ρ(π.sub.0)) subject to ∀i=1, . . . d, ρ(π.sub.i)≥ρ.sub.−, with probability at least (1−δ) per iteration.

(11) FSI requires that in each iteration of policy improvement, a behavior policy (the policy that gets deployed) π.sub.i's expected return is no worse than a bound ρ.sub.−, with probability at least 1−δ. Policy π.sub.i is called a safe policy. Both δ and ρ.sub.− can be adjusted by the user to specify how much risk is reasonable for the application at hand. ρ.sub.− can be the performance of π.sub.0 or π.sub.i−1. Furthermore, FSI aims at maximally improving the behavior policy within a limited number of policy improvement iterations. This objective is what distinguishes FSI from the safe policy improvement (SPI) problem, which enforces only the safety constraint on behavior policies (Petrik, Ghavamzadeh, and Chow 2016; Thomas, Theocharous, and Ghavamzadeh 2015b).

(12) To achieve exploration within the safety constraint, one could resort to a stochastic safe policy. However, this is often ineffective for fast improvement because the randomness of the policy and hence the exploratory capacity must be limited in order to achieve good performance. Alternatively, DE is proposed, which strives for behavior diversity and performs exploration in the space of stochastic policies.

(13) Advantage of DE Over SPI Solutions

(14) DE can be thought of as a generalized version of any solution to the SPI problem. DE learns and deploys a diverse set of safe policies instead of a single safe policy (as is typical in SPI) during each policy improvement iteration. The high confidence policy improvement method in (Thomas, Theocharous, and Ghavamzadeh 2015b) is an SPI method that applies HCOPE (reviewed earlier) to provide lower bounds on policy performance. For simplicity, SPI is used to refer to a solution to the SPI problem that uses this safety model. The safety guarantees in HCOPE are the result of importance sampling based estimates. A problem with SPI, which has not been previously discussed in the literature, stems from a property of importance sampling: data from a single behavior policy can result in very different variances in the estimates for different candidate policies that SPI evaluates for safety. Specifically, variance will be low for policies that are similar to the behavior policy. Thus, deploying a single behavior policy results in an implicit bias (in the form of a lower variance estimate, and hence a better chance of confirming as a safe policy) towards a particular region of policy space with policies similar to the deployed policy. This does not allow SPI to fully explore the space of policies which may obstruct fast policy improvement.

(15) To overcome this limitation of SPI and address the FSI challenge, sufficient exploration must be generated while maintaining safety. DE achieves this, and DE theory explains why deploying a population of safe policies achieves better exploration than a single safe policy. Informally, in the context of HCOPE by importance sampling, when diverse behavior policies are deployed (i.e., by multiple importance sampling) DE leads to uniformity among the variances of estimators, which gives an equal chance of passing the safety test to different candidate policies/target distributions. Such uniformity in turn promotes diversity in the behavior policies in subsequent iterations. While iteratively doing so, DE also maintains the average of the variances of estimators (i.e., maintaining utility of the current data for confirming the next round of candidates). In contrast, SPI deploys only one safe policy among available ones (i.e., by single importance sampling), and gives a heavily biased chance towards the policy that is most similar to the behavior policy, which leads to a limited update to the data. These theoretical insights are consistent with the intuition that for a population, diversity promotes diversity, while homogeneity tends to stay homogeneous.

(16) Environments with Diverse Safe Policies

(17) The behavior diversity needed to realize the synergistic circle of diversity to diversity naturally exists. Consider a 4×4 grid-world environment in FIG. 1A. The goal of an agent is to move from the initial (bottom left) state to the terminal (top right) state in the fewest steps. Immediate rewards are always −1. Compared to the standard grid-world problem, an additional diagonal upright action is introduced to each state, which significantly increases the size of the policy search space and also serves to expand and thicken the spectrum of policy quality. From a deterministic point of view, in the standard grid-world there are a total of 2.sup.9 optimal policies (which take either up or right in the 9 states outside of the topmost row and rightmost column). All of these policies become sub-optimal at different levels of quality in this extension.

(18) As shown in FIG. 1A, two policies of similar quality can differ greatly in action choices because: (1) they take different but equally good actions at the same state; and (2) they take sub-optimal actions at different states. As a result, there exists significant diversity among policies of similar quality within any small window in the spectrum of policy quality. This effect is demonstrated by FIG. 1B. To manage the space of enumeration, the policies considered in this illustration are limited to the 5.sup.9 policies that take the diagonal, up, left, down, or right action in the 9 states outside of the topmost row and rightmost column and take the optimal action in other states. The quality of a policy is measured in terms of the total extra steps to Goal starting from each state, compared to the total steps to Goal by an optimal policy. Besides the existence of significant diversity, another interesting observation from FIG. 1B is that as policy quality approaches optimal (extra steps approaches 0), both the total number of policies at a given quality level and the diversity among them decrease.
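By way of non-limiting illustration, the pair-wise diversity measure used in FIG. 1B (the number of states at which two policies choose different actions) can be computed as in the following minimal Python sketch; the dictionary representation of a tabular policy is a hypothetical choice made only for this example.

# Pairwise diversity: number of states at which two tabular policies differ.
def pairwise_diversity(policy_a, policy_b):
    # policy_a, policy_b: dict mapping state -> action (hypothetical representation)
    assert policy_a.keys() == policy_b.keys()
    return sum(1 for s in policy_a if policy_a[s] != policy_b[s])

# Example: two 4x4 grid-world policies that differ at 2 of the 16 states.
p1 = {(row, col): "up" for row in range(4) for col in range(4)}
p2 = dict(p1)
p2[(0, 0)] = "diag"
p2[(1, 2)] = "right"
print(pairwise_diversity(p1, p2))  # -> 2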

(19) In domains with large state and action spaces and complex dynamics, it is reasonable to expect some degree of diversity among safe policies at various levels of quality and the existence of multiple exploratory paths for policy improvement. It is worth noting that in simple domains where there is significant homogeneity in the solution paths of better policies towards an optimal solution, DE will not be very effective due to limited diversity in sub-optimal policies. (For example, a Markov chain domain with two actions (left or right), and the goal state is at one end of the chain.) In complex domains, the advantage from exploiting diversity among safe policies can also diminish as the quality of safe policies approaches near optimal. Nevertheless, DE will not lose to a safe policy improvement algorithm when there is little diversity to explore, since it will follow the safe algorithm by default. When there is substantial diversity to exploit, DE theory formally explains why it is beneficial to do so.

(20) Theory on Diverse Exploration

(21) This section provides justification for how deploying a diverse set of behavior policies, when available, improves uniformity among the variances of policy performance estimates, while maintaining the average of the variances of estimators. This theory section does not address how to effectively identify diverse safe policies.

(22) Importance sampling aims to approximate the expectation of a random variable X with a target density p(x) on D by sampling from a proposal density q(x).

(23) $\mu = \mathbb{E}_p[X] = \int_D f(x)\,\frac{p(x)}{q(x)}\,q(x)\,dx = \mathbb{E}_q\!\left[\frac{f(x)\,p(x)}{q(x)}\right]$  (2)

(24) Let {p.sub.1, p.sub.2, . . . , p.sub.r} be a set of r≥2 target distributions and {q.sub.1, q.sub.2, . . . , q.sub.m} a set of m≥2 proposal distributions (which correspond to candidate policies, π.sub.p, and behavior policies, π.sub.q, in the RL setting, respectively). Note this problem setting is different from traditional single or multiple importance sampling because multiple target distributions (r≥2) are considered. All target and proposal distributions are assumed distinct. For 1≤j≤r, 1≤t≤m,

(25) $X_{j,t,i} = \dfrac{p_j(x_i)\, f(x_i)}{q_t(x_i)}$
is the importance sampling estimator for the j-th target distribution using the i-th sample generated by the t-th proposal distribution.

(26) The sample mean of X.sub.j,t,i is

(27) $\mu_{j,t} = \frac{1}{n}\sum_{i=1}^{n} X_{j,t,i}$  (3)

(28) Then, the variance of μ.sub.j,t is

(29) $\mathrm{var}(\mu_{j,t}) = \mathrm{var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_{j,t,i}\right) = \frac{\mathrm{var}(X_{j,t,i})}{n}$  (4), where $\mathrm{var}(X_{j,t,i}) < \infty$.

(30) In the context of multiple importance sampling, the sample mean of X.sub.j,t,i; 1≤t≤m is defined as

(31) $\mu_{j,k} = \frac{1}{n}\sum_{t=1}^{m}\sum_{i=1}^{k_t} X_{j,t,i}$, where $k = (k_1, k_2, \ldots, k_m)$, $k_t \ge 0$, $\sum_{t=1}^{m} k_t = n$.  (5)

(32) The vector k describes how a total of n samples are selected from the m proposal distributions. k.sub.t is the number of samples drawn from proposal distribution q.sub.t(x). The second subscript of the estimator μ has been overloaded with the vector k to indicate that the collection of n samples has been distributed over the m proposal distributions. There are special vectors of the form k=(0, . . . , n, . . . , 0), where k.sub.t=n and k.sub.l=0 ∀l≠t, which correspond to single importance sampling. These special vectors are denoted k.sup.(t). When k=k.sup.(t), μ.sub.j,k reduces to μ.sub.j,t because all n samples are collected from the t.sup.th proposal distribution.

(33) μ.sub.j,k has variance

(34) $\mathrm{var}(\mu_{j,k}) = \mathrm{var}\!\left(\frac{1}{n}\sum_{t=1}^{m}\sum_{i=1}^{k_t} X_{j,t,i}\right) = \frac{1}{n^2}\sum_{t=1}^{m} k_t\, \mathrm{var}(X_{j,t,i})$  (6)

(35) When k=k.sup.(t), var(μ.sub.j,k) reduces to var(μ.sub.j,t).
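By way of illustration only, the following minimal Python sketch (NumPy; the Gaussian target and proposal densities are hypothetical stand-ins) forms the multiple importance sampling estimate of Equation (5); passing k=(n, 0) recovers the single importance sampling estimate of Equation (3).

import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

f = lambda x: x ** 2                      # integrand f(x)
p = lambda x: gauss_pdf(x, 1.0, 1.0)      # target density p_j (hypothetical)
proposals = [(0.0, 1.5), (2.0, 1.5)]      # proposal densities q_1, q_2 as (mean, std)

def multiple_is(k):
    # Equation (5): k = (k_1, ..., k_m) samples per proposal, averaged over n = sum(k).
    weighted = []
    for t, k_t in enumerate(k):
        mu_t, sigma_t = proposals[t]
        x = rng.normal(mu_t, sigma_t, k_t)
        weighted.append(p(x) * f(x) / gauss_pdf(x, mu_t, sigma_t))
    return np.concatenate(weighted).mean()

n = 1000
print(multiple_is((n, 0)))            # single importance sampling, k = k^(1)
print(multiple_is((n // 2, n // 2)))  # equal allocation k^DE = (n/m, ..., n/m)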

(36) Given the FSI problem, a goal is to promote uniformity of variances (i.e., reducing variance of variances) across estimators for an unknown set of target distributions (candidate policies). This leads to the following constrained optimization problem:

(37) $k^* = \arg\min_{k} \frac{1}{r}\sum_{j=1}^{r}\left|\,\mathrm{var}(\mu_{j,k}) - \frac{1}{r}\sum_{l=1}^{r}\mathrm{var}(\mu_{l,k})\,\right|$  (7)
subject to $k^* = (k_1^*, k_2^*, \ldots, k_m^*),\ k_t^* \ge 0,\ \sum_{t=1}^{m} k_t^* = n$

(38) where k* is an optimal way to distribute n samples over m proposal distributions such that the variances of the estimates are most similar (i.e., the average distance between var(μ.sub.j,k) and their mean be minimized). If the set of target distributions and the set of proposal distributions are both known in advance, computing k* can be solved analytically. However, in the FSI context, the set of promising candidate target distributions to be estimated and evaluated by a safety test are unknown before the collection of a total of n samples from the set of available proposal distributions which are already confirmed by the safety test in the past policy improvement iteration. Under such uncertainty, it is infeasible to make an optimal decision on the sample size for each available proposal distribution according to the objective function in Equation (7). Given the objective is convex, the quality of a solution vector k depends on its distance to an unknown optimal vector k*. The closer the distance, the better uniformity of variances it produces. Lemma 1 below provides a tight upper bound on the distance from a given vector k to any possible solution to the objective in Equation (7).

(39) Lemma 1. Given any vector k=(k.sub.1, k.sub.2, . . . , k.sub.m) such that k.sub.t≥0, Σ.sub.t=1.sup.mk.sub.t=n. Let k.sub.min=k.sub.t where k.sub.t≤k.sub.i∀i≠t. Then

(40) $\max_{y} \lVert y - k \rVert_{L_1} \le 2n - 2k_{\min}$, where $y = (y_1, y_2, \ldots, y_m)$, $y_t \ge 0$, $\sum_{t=1}^{m} y_t = n$.  (8)

(41) In any given iteration of policy improvement, the SPI approach (Thomas, Theocharous, and Ghavamzadeh 2015b) simply picks one of the available proposal distributions and uses it to generate the entire set of n samples. That is, SPI selects with equal probability from the set of special vectors k.sup.(t). The effectiveness of SPI with respect to the objective in Equation (7) depends on the expectation E[∥k.sup.(t)−k*∥], where the expectation is taken over the set of special vectors k.sup.(t) with equal probability. DE, a better approach that is optimal under uncertainty of the target distributions and is based on multiple importance sampling, samples according to the vector

(42) $k^{DE} = \left(\tfrac{n}{m}, \tfrac{n}{m}, \ldots, \tfrac{n}{m}\right)$.
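The comparison that follows can be made concrete with a small numeric sketch; the per-sample variances var(X.sub.j,t,i) below are hypothetical values chosen only to illustrate how the equal allocation k.sup.DE evaluates under the uniformity objective of Equation (7) relative to single importance sampling allocations.

import numpy as np

# Hypothetical variances var(X_{j,t,i}) for r = 3 target and m = 2 proposal distributions.
V = np.array([[1.0, 9.0],
              [9.0, 1.0],
              [4.0, 4.0]])
n = 100

def estimator_variances(k):
    # Equation (6): var(mu_{j,k}) = (1/n^2) * sum_t k_t * var(X_{j,t,i})
    return (V @ np.asarray(k, dtype=float)) / n ** 2

def uniformity_objective(k):
    # Equation (7): average distance of the estimator variances from their mean.
    v = estimator_variances(k)
    return np.mean(np.abs(v - v.mean()))

print(uniformity_objective([n // 2, n // 2]))  # k^DE: equal allocation (smallest here)
print(uniformity_objective([n, 0]))            # k^(1): single importance sampling
print(uniformity_objective([0, n]))            # k^(2): single importance sampling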

(43) Theorem 1.

(44) With respect to the objective in Equation (7),

(45) (i) the solution vector k.sup.DE is worst case optimal; and

(46) (ii)
0≤∥k.sup.DE−k*∥.sub.L.sub.1≤E[∥k.sup.(t)−k*∥.sub.L.sub.1]=2n−2n/m  (9)

(47) where the expectation is over all special vectors k.sup.(t).

(48) Proof. (Sketch):

(49) (i)

(50) 0≤k.sub.min≤n/m
can be shown by a straightforward pigeonhole argument. In addition, from Lemma 1, a smaller k.sub.min gives a larger upper bound. Since k.sup.DE has the largest value of

(51) k.sub.min=n/m,
k.sup.DE is worst case optimal and
0≤∥k.sup.DE−k*∥.sub.L.sub.1≤2n−2n/m  (10)

(52) (ii) Follows by evaluating

(53) $\mathbb{E}\!\left[\lVert k^{(t)} - k^* \rVert_{L_1}\right] = \frac{1}{m}\sum_{t=1}^{m} \lVert k^{(t)} - k^* \rVert_{L_1} = 2n - \frac{2n}{m}$  (11)

(54) Theorem 1 part (i) states that the particular multiple importance sampling solution k.sup.DE, which equally allocates samples to the m proposal distributions, has the best worst-case performance (i.e., the smallest tight upper bound on the distance to an optimal solution). Additionally, any single importance sampling solution k.sup.(t) has the worst upper bound. Any multiple importance sampling solution vector k with k.sub.t>0 ∀t has better worst-case performance than k.sup.(t). Part (ii) states that the expectation of the distance between single importance sampling solutions and an optimal k* upper bounds the distance between k.sup.DE and k*. Together, Theorem 1 shows that k.sup.DE achieves in the worst case optimal uniformity among variances across estimators for a set of r target distributions, and greater or equal uniformity with respect to the average case of k.sup.(t).

(55) Theorem 2.

(56) The average variance across estimators for the r target distributions produced by k.sup.DE equals the expected average variance produced by the SPI approach. That is,

(57) $\frac{1}{r}\sum_{j=1}^{r} \mathrm{var}\!\left(\mu_{j,k^{DE}}\right) = \mathbb{E}\!\left[\frac{1}{r}\sum_{j=1}^{r} \mathrm{var}\!\left(\mu_{j,k^{(t)}}\right)\right]$  (12)

(58) where the expectation is over special vectors k.sup.(t).

(59) Proof. (Sketch): It follows from rearranging the following equation:

(60) $\frac{1}{r}\sum_{j=1}^{r} \mathrm{var}\!\left(\mu_{j,k^{DE}}\right) = \frac{1}{r}\sum_{j=1}^{r} \frac{1}{n^2}\sum_{t=1}^{m} \frac{n}{m}\, \mathrm{var}(X_{j,t,i})$  (13)

(61) In combination, Theorems 1 and 2 show that DE achieves better uniformity among the variances of the r estimators than SPI while maintaining the average variance of the system. Although DE may not provide an optimal solution, it is a robust approach. Its particular choice of equal allocation of samples is guaranteed to outperform the expected performance of SPI.

(62) Diverse Exploration Algorithm Framework

(63) Algorithm 1 provides the overall DE framework. In each policy improvement iteration, it deploys the most recently confirmed set of policies 𝒫 to collect n trajectories as uniformly distributed over the π.sub.i∈𝒫 as possible. That is, if |𝒫|=m, it samples according to

(64) $k^{DE} = \left(\tfrac{n}{m}, \tfrac{n}{m}, \ldots, \tfrac{n}{m}\right)$.
For each trajectory, it maintains a label with the π.sub.i which generated it, in order to track which policy is the behavior policy for importance sampling later on. For each set of trajectories 𝒟.sub.i collected from π.sub.i, it partitions 𝒟.sub.i and appends to 𝒟.sub.train and 𝒟.sub.test accordingly. Then, from 𝒟.sub.train a set of candidate policies is generated in line 8, after which each is evaluated in line 9 using 𝒟.sub.test. If any subset of policies is confirmed, they become the new set of policies to deploy in the next iteration. If no new policies are confirmed, the current set of policies is redeployed.
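By way of non-limiting illustration, the control flow just described may be sketched in Python as follows; all domain-specific steps (trajectory collection, the t-test lower bound, candidate generation per Algorithm 2, and the safety test per Algorithm 3) are passed in as hypothetical callables.

# Sketch of the Algorithm 1 loop; helper callables are hypothetical placeholders.
def diverse_exploration(pi0, r, d, n, delta, collect_trajectories,
                        t_test_lower_bound, gen_candidate_policies, eval_policies):
    deployed = [pi0]                           # most recently confirmed policy set
    D_train, D_test = [], []
    for _ in range(d):
        for pi in deployed:                    # spread n trajectories uniformly over deployed policies
            trajs = collect_trajectories(pi, n // len(deployed))
            cut = len(trajs) // 5              # fixed 1/5 train, 4/5 test partition
            D_train += trajs[:cut]
            D_test += trajs[cut:]
        rho_minus = t_test_lower_bound(D_test, delta)
        candidates = gen_candidate_policies(D_train, r)                 # Algorithm 2
        passed = eval_policies(candidates, D_test, delta, rho_minus)    # Algorithm 3
        if passed:                             # otherwise redeploy the current set
            deployed = passed
    return deployed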

(65) In choosing a lower bound ρ.sub.− for each iteration, the EvalPolicies function performs a t-test on the normalized returns of 𝒟.sub.test without importance sampling. It treats the set of deployed policies 𝒫 as a mixture policy that generated 𝒟.sub.test. In this way, ρ.sub.− reflects the performance of the past policies, and naturally increases per iteration as deployed policies improve and |𝒟.sub.test| increases.

(66) A set of trajectories 𝒟.sub.train is assumed to have been collected by deploying an initial policy π.sub.0. The question remains how to learn a set of diverse and good policies, which requires a good balance between the diversity and quality of the resulting policies. Inspired by ensemble learning (Dietterich 2001), the present approach learns an ensemble of policy or value functions from 𝒟.sub.train. The function GenCandidatePolicies can employ any batch RL algorithm, such as a direct policy search algorithm as in (Thomas, Theocharous, and Ghavamzadeh 2015b) or a fitted value iteration algorithm like Fitted Q-Iteration (FQI) (Ernst et al. 2005). A general procedure for GenCandidatePolicies is given in Algorithm 2.

(67) Algorithm 1 DIVERSEEXPLORATION(π.sub.0, r, d, n, δ)
Input: π.sub.0: starting policy, r: number of candidates to generate, d: number of iterations of policy improvement, n: number of trajectories to collect per iteration, δ: confidence
 1: 𝒫 ← {π.sub.0}
 2: 𝒟.sub.train, 𝒟.sub.test = ∅
 3: for j = 1 to d do
 4:   for π.sub.i ∈ 𝒫 do
 5:     generate n/|𝒫| trajectories from π.sub.i and append a fixed portion to 𝒟.sub.train and the rest to 𝒟.sub.test
 6:   end for
 7:   ρ.sub.− = t-test(𝒟.sub.test, δ, |𝒟.sub.test|)
 8:   {π.sub.1, . . . , π.sub.r} = GenCandidatePolicies(𝒟.sub.train, r)
 9:   passed = EvalPolicies({π.sub.1, . . . , π.sub.r}, 𝒟.sub.test, δ, ρ.sub.−)
10:   if |passed| > 0 then
11:     𝒫 = passed
12:   end if
13: end for

(68) Algorithm 2 GENCANDIDATEPOLICIES(𝒟.sub.train, r)
Input: 𝒟.sub.train: set of training trajectories, r: number of candidates to generate
Output: set of r candidate policies
1: 𝒫 = ∅
2: π.sub.1 = LearnPolicy(𝒟.sub.train)
3: 𝒫 ← append(𝒫, π.sub.1)
4: for i = 2 to r do
5:   𝒟′ = bootstrap(𝒟.sub.train)
6:   π.sub.i = LearnPolicy(𝒟′)
7:   𝒫 ← append(𝒫, π.sub.i)
8: end for
9: return 𝒫

(69) A bootstrapping (sampling with replacement) method is preferably employed, with an additional subtlety which fits naturally with the fact that trajectories are collected incrementally from different policies. The intention is to maintain the diversity in the resulting trajectories in each bootstrapped subset of data. With traditional bootstrapping over the entire training set, it is possible to get unlucky and select a batch of trajectories that do not represent policies from each iteration of policy improvement. To avoid this, bootstrapping within trajectories collected per iteration is performed. Training on a subset of trajectories from the original training set 𝒟.sub.train may sacrifice the quality of the candidate policies for diversity, when the size of 𝒟.sub.train is small, as at the beginning of policy improvement iterations. Thus, the first policy added to the set of candidate policies is trained on the full 𝒟.sub.train, and the rest are trained on bootstrapped data.
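A minimal sketch of this per-iteration bootstrap, assuming trajectories are kept grouped by the policy improvement iteration in which they were collected (the grouping and the learn_policy call are hypothetical placeholders), is given below.

import random

def bootstrap_within_iterations(trajs_by_iteration, rng=random.Random(0)):
    # Resample with replacement separately within the trajectories of each
    # policy improvement iteration, so every iteration stays represented.
    sample = []
    for trajs in trajs_by_iteration:           # list of lists of trajectories
        sample += [rng.choice(trajs) for _ in range(len(trajs))]
    return sample

# Candidate 1 would be trained on the full training set, the remaining r - 1 on
# bootstrapped subsets, e.g.:
# policies = [learn_policy(sum(trajs_by_iteration, []))] + [
#     learn_policy(bootstrap_within_iterations(trajs_by_iteration)) for _ in range(r - 1)]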

(70) There is potential for the application of more sophisticated ensemble ideas. For example, one could perform an ensemble selection procedure to maximize diversity in a subset of member policies based on some diversity measure (e.g., pairwise KL divergence between member policies).
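One possible instantiation of such an ensemble selection step, sketched below under the assumption that a pairwise_kl(π_a, π_b) divergence estimate is available (a hypothetical helper), is a greedy selection of the subset with the largest summed pairwise divergence.

def select_diverse_subset(policies, k, pairwise_kl):
    # Greedy selection: start from an arbitrary policy and repeatedly add the
    # candidate with the largest summed KL divergence to those already chosen.
    chosen = [policies[0]]
    remaining = list(policies[1:])
    while len(chosen) < k and remaining:
        best = max(remaining, key=lambda p: sum(pairwise_kl(p, c) for c in chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen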

(71) Although the proposed procedure has some similarity to ensemble learning, it is distinct in how the individual models are used. Ensemble learning aggregates the ensemble of models into one, while the present procedure will validate each derived policy and deploy the confirmed ones independently to explore the environment. As a result, only the experience data from these diverse behavior policies are assembled for the next round of policy learning.

(72) To validate candidate policies, a set of trajectories independent from the trajectories used to generate candidate policies is needed. So, separate training and test sets 𝒟.sub.train, 𝒟.sub.test are maintained by partitioning the trajectories collected from each behavior policy π.sub.i based on a predetermined ratio (1/5, 4/5) and appending to 𝒟.sub.train and 𝒟.sub.test. GenCandidatePolicies uses only 𝒟.sub.train, whereas validation in EvalPolicies uses only 𝒟.sub.test.

(73) Specifically, EvalPolicies uses the HCOPE method (described earlier) to obtain a lower bound ρ.sub.− on policy performance with confidence 1−δ. However, since it performs testing on multiple candidate policies, it also applies the Benjamini-Hochberg procedure (Benjamini and Hochberg 1995) to control the false discovery rate in multiple testing. A general procedure for EvalPolicies is outlined in Algorithm 3.

(74) Algorithm 3 EVALPOLICIES(𝒫, 𝒟.sub.test, δ, ρ.sub.−)
Input: 𝒫: set of candidate policies, 𝒟.sub.test: set of test trajectories, δ: confidence, ρ.sub.−: lower bound
Output: passed: candidates that pass
1: Apply HCOPE t-test ∀ π.sub.i ∈ 𝒫 with 𝒟.sub.test, δ, |𝒟.sub.test|
2: passed = {π.sub.i | π.sub.i deemed safe following FDR control}
3: return passed
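As a rough illustration of the FDR control step in line 2 of Algorithm 3, the Benjamini-Hochberg procedure may be sketched as follows, assuming one p-value per candidate policy from the HCOPE t-test (the example p-values are hypothetical).

def benjamini_hochberg(p_values, delta):
    # Return the indices of candidates declared safe at FDR level delta.
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * delta / m:    # BH step-up criterion
            k_max = rank
    return set(order[:k_max])

print(benjamini_hochberg([0.001, 0.30, 0.012, 0.04], 0.05))  # -> {0, 2}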

(75) Algorithm 4 DIVERSEPOLICYGRADIENT(π.sub.1, r, β, β.sub.C)
Input: π.sub.1: starting policy, r: number of conjugate policies to generate, β: number of steps to sample from the main policy, β.sub.C: number of steps to sample per conjugate policy
1: Initialize conjugate policies 𝒞.sub.1 as r copies of the starting policy
2: for i = 1, 2, . . . do
3:   S.sub.i ← sample β steps from π.sub.i and β.sub.C steps from each conjugate policy π ∈ 𝒞.sub.i (sample main and diverse policies)
4:   π.sub.i+1 ← policy_improvement(S.sub.i, π.sub.i)
5:   𝒞.sub.i+1 ← conjugate_policies(S.sub.i, π.sub.i+1) (generate diverse policies)
6: end for

(76) The Diverse Policy Gradient (DPG) algorithm is a policy gradient (PG) method for reinforcement learning (RL) that generalizes certain aspects of traditional PG methods and falls under the Diverse Exploration framework of iteratively learning and deploying diverse and safe policies to explore an environment. Traditional methods iteratively sample a single policy and use those samples to make gradient based improvements to that policy. DPG also makes gradient based improvements to a single policy but employs the novel idea of sampling from multiple conjugate policies. This novelty addresses a recognized deficiency in PG methods: a lack of exploration, which causes PG methods to suffer from high sample complexity. Conjugate policies are optimally diverse with respect to a KL-divergence based diversity measure and can be safe if their distances in terms of KL divergence to a main policy are constrained.

(77) Algorithm 4 is a general algorithmic framework for DPG. In line 3, the main policy along with each of the conjugate policies is sampled for β and β.sub.C steps, respectively. In line 4, any policy gradient improvement step can be applied, e.g., Natural Gradient Descent (see, e.g., Amari, Shun-ichi, Andrzej Cichocki, and Howard Hua Yang. "A new learning algorithm for blind signal separation." In Advances in neural information processing systems, pp. 757-763. 1996, expressly incorporated herein by reference in its entirety) or Trust Region Policy Optimization (see, Schulman, John, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. "Trust region policy optimization." In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1889-1897. 2015, expressly incorporated herein by reference in its entirety), to generate a new policy π.sub.i+1.

(78) In line 5, conjugate policies with respect to π.sub.i+1 are generated to be deployed in the next iteration of sampling. Generation of conjugate policies is discussed in the following section.
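By way of non-limiting illustration, the loop of Algorithm 4 may be sketched in Python as follows; sample_steps, policy_improvement, and conjugate_policies are hypothetical callables standing in for the sampling, gradient improvement (e.g., a TRPO step), and conjugate policy generation steps.

# Sketch of the Algorithm 4 loop with hypothetical helper callables.
def diverse_policy_gradient(pi1, r, beta, beta_c, iterations,
                            sample_steps, policy_improvement, conjugate_policies):
    pi = pi1
    conjugates = [pi1] * r                     # line 1: r copies of the starting policy
    for _ in range(iterations):
        samples = sample_steps(pi, beta)       # beta steps from the main policy
        for c in conjugates:                   # beta_C steps from each conjugate policy
            samples += sample_steps(c, beta_c) # sample_steps assumed to return a list of transitions
        pi = policy_improvement(samples, pi)             # line 4: gradient improvement step
        conjugates = conjugate_policies(samples, pi)     # line 5: perturbations for the next iteration
    return pi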

(79) Conjugate Policies

(80) In the context of PG methods, a policy π is a distribution over the action space conditioned by the current state and parameterized by a vector θ. That is, an action a is drawn from the distribution a ˜π(⋅|s, θ), given state s and parameters θ.

(81) Conjugacy between two vectors μ.sub.i and μ.sub.j with respect to an inner product is defined as μ.sub.i.sup.TAμ.sub.j=0 if i≠j.

(82) where A is a positive definite matrix. In the current setting, A is the Fisher Information Matrix (FIM) where

(83) $A_{ij} = \frac{\partial}{\partial \theta_i}\frac{\partial}{\partial \theta_j} \log\big(\pi(\cdot \mid s, \theta)\big).$

(84) We define two policies π.sub.1 and π.sub.2 as conjugate if their parameterizations are translations of an original set of parameters θ by conjugate vectors. Concretely, π.sub.1 and π.sub.2 are conjugate policies if their parameterizations can be written as θ+μ.sub.1 and θ+μ.sub.2 for two conjugate vectors μ.sub.1 and μ.sub.2.

(85) There are a number of ways to generate conjugate vectors and, for simplicity, we use the vectors generated as a byproduct of the conjugate gradient descent algorithm (see, Gilbert, Jean Charles, and Jorge Nocedal. "Global convergence properties of conjugate gradient methods for optimization." SIAM Journal on optimization 2, no. 1 (1992): 21-42; Nocedal, Jorge, and Stephen Wright. Numerical optimization. Springer Science & Business Media, 2006, expressly incorporated herein by reference in their entirety), which is used to compute the natural gradient descent direction in the PG algorithms. A more sophisticated but computationally expensive method is to take an eigenvector decomposition of the FIM (see, Vallisneri, Michele. "Use and abuse of the Fisher information matrix in the assessment of gravitational-wave parameter-estimation prospects." Physical Review D 77, no. 4 (2008): 042001; Louis, Thomas A. "Finding the observed information matrix when using the EM algorithm." Journal of the Royal Statistical Society. Series B (Methodological) (1982): 226-233; Yu, Hua, and Jie Yang. "A direct LDA algorithm for high-dimensional data—with application to face recognition." Pattern recognition 34, no. 10 (2001): 2067-2070; Kammer, Daniel C. "Sensor placement for on-orbit modal identification and correlation of large space structures." Journal of Guidance, Control, and Dynamics 14, no. 2 (1991): 251-259; Stoica, Petre, and Thomas L. Marzetta. "Parameter estimation problems with singular information matrices." IEEE Transactions on Signal Processing 49, no. 1 (2001): 87-90, expressly incorporated herein by reference in their entirety).
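A minimal sketch of this byproduct is given below: the search directions produced while solving Fx=g by conjugate gradient (the natural gradient computation) are mutually conjugate with respect to F and can be reused as perturbation directions. The small explicit matrix F here is a hypothetical stand-in for an FIM estimate, which in practice is typically accessed only through matrix-vector products.

import numpy as np

def conjugate_gradient(F, g, iters):
    # Solve F x = g; the search directions d collected along the way are
    # mutually F-conjugate and can be reused as parameter perturbations.
    x = np.zeros_like(g)
    r = g - F @ x
    d = r.copy()
    directions = []
    for _ in range(iters):
        alpha = (r @ r) / (d @ F @ d)
        x = x + alpha * d
        r_new = r - alpha * (F @ d)
        directions.append(d.copy())
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    return x, directions

F = np.array([[4.0, 1.0], [1.0, 3.0]])   # stand-in for the FIM (symmetric positive definite)
g = np.array([1.0, 2.0])                 # stand-in for the vanilla policy gradient
x, dirs = conjugate_gradient(F, g, 2)
print(dirs[0] @ F @ dirs[1])             # ~0: the collected directions are F-conjugate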

(86) Relationship to Baseline Algorithm

(87) Algorithm 1 reduces to the baseline algorithm SPI when the number of candidate policies to generate, r, is set to 1. In this case, GenCandidatePolicies simply returns one policy π.sub.1 trained on the full trajectory set. The multiple comparison procedure in EvalPolicies degenerates to a single t-test on importance weighted returns. The trajectory collection phase in DiverseExploration becomes a collection of n trajectories from one policy.

(88) In implementation, this baseline algorithm is most similar to the Daedalus2 algorithm proposed in (Thomas, Theocharous, and Ghavamzadeh 2015b) (reviewed earlier), with some technical differences. For example, in that algorithm the lower bound ρ.sub.− is fixed for each iteration of policy improvement, whereas in the present algorithm ρ.sub.− increases over iterations.

(89) Empirical Study

(90) As a baseline, SPI is used, which, like DE, provides a feasible solution to the FSI problem, making it a more suitable candidate for comparison than either ε-greedy or R-MAX like approaches. Comparing DE with SPI allows us to directly contrast multiple importance sampling vs. single importance sampling.

(91) Three RL benchmark domains are used for analysis: an extended Grid World as described earlier and the classic control domains of Mountain Car and Acrobot (Sutton and Barto 1998). To demonstrate the generality of the DE framework, two markedly different RL algorithms are used for learning policies. In Grid World, Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen 2006) is used, a gradient-free policy search algorithm that directly maximizes the importance sampled estimate as the objective, as in (Thomas, Theocharous, and Ghavamzadeh 2015b). In Mountain Car and Acrobot, FQI, an off-policy value approximation algorithm, is used with Fourier basis functions of order 3 (Konidaris, Osentoski, and Thomas 2011) for function approximation. Following (Thomas, Theocharous, and Ghavamzadeh 2015b), δ=0.05 is set for all experiments.

(92) Candidate policies are generated as mixed policies, as in (Thomas, Theocharous, and Ghavamzadeh 2015b) and (Jiang and Li 2016), to control how different a candidate policy can be from a prior behavior policy. A mixed policy μ.sub.α,π.sub.0.sub.,π is defined as a mixture of policies π.sub.0 and π by mixing parameter α∈[0, 1]: μ.sub.α,π.sub.0.sub.,π(a|s):=(1−α)π(a|s)+απ.sub.0(a|s). A larger α tends to make policy confirmation easier, at the cost of yielding a more conservative candidate policy and reducing the diversity in the confirmed policies. In experiments, α=0.3 is used for Gridworld and α=0.9 for Mountain Car/Acrobot. For Mountain Car and Acrobot, a high value of α is needed because FQI does not directly maximize the importance sampled estimate objective function as with CMA-ES used for Gridworld. With smaller values of α, DE still outperforms SPI but requires significantly more iterations.
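The mixed policy can be sampled directly, as in the short sketch below: choosing π.sub.0 with probability α and π otherwise is equivalent to drawing an action from the stated convex combination of the two action distributions (the policy callables are hypothetical).

import random

def mixed_policy_action(state, pi, pi0, alpha, rng=random.Random(0)):
    # mu_{alpha, pi0, pi}(a|s) = (1 - alpha) * pi(a|s) + alpha * pi0(a|s):
    # sampling the mixture is equivalent to acting with pi0 with probability alpha.
    behavior = pi0 if rng.random() < alpha else pi
    return behavior(state)

# Usage (hypothetical policies mapping a state to an action):
# a = mixed_policy_action(s, candidate_policy, prior_policy, alpha=0.3)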

(93) To measure how DE contributes to the diversity of the experiences collected, the joint entropy measure is used, which is calculated over the joint distribution over states and actions. Higher entropy (uncertainty) means higher diversity in experienced (s,a) pairs, which reflects more effective exploration to reduce the uncertainty in the environment.
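A minimal sketch of this measure, computed from empirical visit counts of (s, a) pairs (the example pairs are hypothetical), is as follows.

import math
from collections import Counter

def joint_entropy(state_action_pairs):
    # Empirical joint entropy (in nats) of the visited (s, a) distribution;
    # higher values indicate more diverse experience.
    counts = Counter(state_action_pairs)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

print(joint_entropy([("s0", "up"), ("s0", "up"), ("s1", "right"), ("s2", "up")]))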

(94) FIGS. 2A and 2B show the results comparing DE with SPI on Grid World. DE succeeds in the FSI objective of learning more quickly and reliably than SPI does. FIG. 2A shows average normalized returns over 50 runs of policy improvement. FIG. 2B shows diversity in experienced (s, a) pairs.

(95) TABLE 1. Average aggregate normalized returns
Domain        SPI       DE
Grid World    604.970   675.562
Mountain Car  362.038   381.333
Acrobot       417.145   430.146
DE results are significant improvements (p ≤ .001).

(96) DE's deployed policies obtain a higher average return from iteration 7 onward and ultimately achieve a higher return of 0.73, compared to 0.65 from SPI. To be clear, each point in the two curves shown in FIG. 2A represents the average (over 50 runs) of the average normalized return of a total of n=40 trajectories collected during a policy improvement iteration. To test the significance of the results, a two-sided paired t-test is run at each iteration, which finds p<0.001. Further, FIG. 2B clearly shows that DE is superior in terms of the joint entropy of the collected sample distribution, meaning DE collects more diverse samples. DE's advantage in overall performance is attributed to the significant increase in sample diversity.

(97) Ideally, an FSI solution will derive and confirm an optimal policy π* in as few iterations as possible, although determining if a given policy is optimal can be difficult in complex domains. In Grid World, this is not a difficulty as there are 64 distinct optimal policies π*. For these experiments the average number of iterations required to confirm at least one π* is computed. DE achieved this after 16 iterations, whereas SPI achieved this after 22 iterations. This translates to a 240 trajectory difference on average in favor of DE. Additionally, DE was able to confirm an optimal policy in all 50 runs whereas SPI was unsuccessful in 5 runs.

(98) For conciseness of presentation, Table 1 shows the performance results of the two methods over all three domains in the form of average aggregate normalized return. This statistic corresponds to the area under the curve for performance curves as shown in FIG. 2A. Higher values indicate faster policy improvement and more effective learning. The results show that DE succeeds in learning and deploying better performing policies more quickly than SPI.

(99) Finally, to evaluate the safety of deployed policies, the empirical error rates (the probability that a policy was incorrectly declared safe) were computed. In all experiments, the empirical error for DE is well below the 5% threshold. Combined, these results demonstrate that DE can learn faster and more effectively than SPI without sacrificing safety.

(100) A novel exploration strategy is proposed as a solution to the FSI problem, together with the DE theory explaining the advantage of DE over SPI. The DE algorithm framework is shown to achieve both safe and fast policy improvement, and to significantly outperform the baseline SPI algorithm.

(101) Other importance sampling estimators may be employed in the framework, such as (Jiang and Li 2016; Thomas and Brunskill 2016; Wang, Agarwal, and Dudik 2017). DE can also be integrated with other safe policy improvement algorithms (Petrik, Ghavamzadeh, and Chow 2016). Diverse policies may be optimally generated to fully capitalize on the benefit of DE.

(102) The technology can be applied to autonomous systems in various domains such as smart manufacturing, industrial robots, financial trading and portfolio management, cyber system management, autonomous vehicles, and autonomous controls in power plants and smart buildings.

(103) DE for Additive Manufacturing Design

(104) Additive manufacturing (e.g., cold spray and powder bed manufacturing) commonly involves the deployment of a robotic agent and complex trajectory traversal by the agent to meet multifaceted objectives such as surface quality, material properties, etc. In high-precision domains (e.g., aerospace), it is very costly and time consuming to manually design an effective control policy for every specific design, manufacturing or repair task.

(105) However, according to the present technology, the manual design effort may be replaced by a safe diverse exploration and feedback effort, which permits humans or automated agents to assess the performance of the result. For example, where surface texture serves a visual appearance function, a human observer may be used to rate the texture. The rating is then fed back, and the system will develop over various tasks and improve performance. Other criteria, such as mechanical performance, may be assessed through objective measures, and both objective and subjective inputs may be considered. While this presupposes an iterative development of the policy, human developers will also require iterations in many cases, and even an excess of automated trials is often cost efficient as compared to human implementation.

(106) DE for Cybersecurity

(107) In the domain of cyber infrastructure and security management, an autonomous agent is tasked with managing the services, maintenance, and cyber defense operations of an organization's network. The agent must continuously improve and adapt a control policy that provides the necessary availability of services and achieves high efficiency of its resources as well as strong security. In this highly uncertain and dynamic domain, generic fixed policies cannot provide high efficiency or security, but the system must provide an adequately guaranteed baseline of performance.

(108) In this case, a human operator may oversee the intelligent agent, but often, an immediate automated response to a risk is required, and therefore the human intervention is used to assess the machine performance. In this case, the safety criteria include basic rules and norms of behavior that at least meet stated or mandated policies and practices.

(109) The adoption of the technology of this invention in these applications can enable an autonomous agent to start with a baseline control policy developed by domain experts or learned by the agent from a simulated environment, and quickly improve the performance of the current control policy online in a real-world environment while ensuring safe operations.

(110) DE Via Conjugate Policies

(111) We address the challenge of effective exploration while maintaining good performance in policy gradient methods. As a solution, we propose diverse exploration (DE) via conjugate policies. DE learns and deploys a set of conjugate policies which can be conveniently generated as a byproduct of conjugate gradient descent. We provide both theoretical and empirical results showing the effectiveness of DE at achieving exploration, improving policy performance, and the advantage of DE over exploration by random policy perturbations.

(112) Variance Reduction of Parameter Perturbation Gradient Estimator

(113) Increasing the KL divergence between perturbed policies reduces the variance of the perturbed gradient estimate. Conjugate vectors maximize pairwise KL divergence among a constrained number of perturbations.

(114) Consider the general case where ϵ∼P, where P is the perturbation distribution. When P=𝒩(0,Σ), we recover the gradient in Equation (1). To simplify notation in the variance analysis of the perturbed gradient estimate, ϵ is written as shorthand for ϕ+ϵ, and let π.sub.ϵ be the policy with parameters ϕ perturbed by ϵ. Moreover,

(115) $G_{\epsilon} := \mathbb{E}_{\tau \sim \pi_{\epsilon}}\!\left[\sum_{t=0}^{T} \nabla_{\phi} \log\big(\pi_{\epsilon}(a_t \mid s_t)\big)\, R_t(\tau)\right]$
is the gradient with respect to ϕ with perturbation ϵ. The final estimate to the true gradient in Equation (1) is the Monte Carlo estimate of G.sub.ϵ.sub.i(1≤i≤k) over k perturbations. For any ϵ.sub.i, G.sub.ϵ.sub.i is an unbiased estimate of the gradient so the averaged estimator is too. Therefore, by reducing the variance, we reduce the estimate's mean squared error. The variance of the estimate over k perturbations ϵ.sub.i is

(116) $\mathbb{V}\!\left(\frac{1}{k}\sum_{i=1}^{k} G_{\epsilon_i}\right) = \frac{1}{k^2}\sum_{i=1}^{k} \mathbb{V}_{\epsilon_i}\!\left(G_{\epsilon_i}\right) + \frac{2}{k^2}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k} \mathrm{Cov}_{\epsilon_i,\epsilon_j}\!\left(G_{\epsilon_i}, G_{\epsilon_j}\right)$  (14)

(117) where V.sub.ϵ.sub.i(G.sub.ϵ.sub.i) is the variance of the gradient estimate G.sub.ϵ.sub.i and Cov.sub.ϵ.sub.i.sub.,ϵ.sub.j(G.sub.ϵ.sub.i, G.sub.ϵ.sub.j) is the covariance between the gradients G.sub.ϵ.sub.i and G.sub.ϵ.sub.j.

(118) V(G.sub.ϵ.sub.i) is equal to a constant for all i because the G.sub.ϵ.sub.i are identically distributed. So, the first term in Equation (14) approaches zero as k increases and does not contribute to the asymptotic variance. The covariance term determines whether the overall variance can be reduced. To see this, consider the extreme case when G.sub.ϵ.sub.i=G.sub.ϵ.sub.j for i≠j. Equation (14) becomes

(119) $\mathbb{V}\!\left(\frac{1}{k}\sum_{i=1}^{k} G_{\epsilon_i}\right) = \mathbb{V}_{\epsilon_i}\!\left(G_{\epsilon_i}\right)$
because all Cov.sub.ϵ.sub.i.sub.,ϵ.sub.j(G.sub.ϵ.sub.i,G.sub.ϵ.sub.j)=V(G.sub.ϵ.sub.i). The standard PG estimation (i.e., TRPO) falls into this extreme as a special case of the perturbed gradient estimate where all perturbations are the zero vector.

(120) Next consider the special case where Cov.sub.ϵ.sub.i.sub.,ϵ.sub.j(G.sub.ϵ.sub.i,G.sub.ϵ.sub.j)=0 for i≠j. Then, the second term vanishes and

(121) $\mathbb{V}\!\left(\frac{1}{k}\sum_{i=1}^{k} G_{\epsilon_i}\right) = O(k^{-1}).$
The RP approach strives for this case by i.i.d. sampling of perturbations ϵ. This explains why RP was shown to outperform TRPO in some experiments (Plappert et al. 2018). However, it is important to note that i.i.d. ϵ do not necessarily produce uncorrelated gradients G.sub.ϵ, as this depends on the local curvature of the objective function. For example, perturbations in a flat portion of parameter space will produce equal gradient estimates that are perfectly positively correlated. Thus, the G.sub.ϵ.sub.i are identically distributed but not necessarily independent. This suggests that using a perturbation distribution such as 𝒩(0, Σ) may suffer from potentially high variance if further care is not taken. This work develops a principled way to select perturbations in order to reduce the covariance.
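A small numeric illustration of Equation (14), using synthetic, jointly Gaussian scalar gradient estimates (an assumption made only for this example), shows that the variance of the averaged estimator shrinks with k only when the per-perturbation estimates are not strongly positively correlated.

import numpy as np

rng = np.random.default_rng(0)
k, trials = 8, 20000

def var_of_mean(correlation):
    # Draw k identically distributed estimates with the given pairwise correlation
    # and return the empirical variance of their average.
    cov = np.full((k, k), correlation) + (1.0 - correlation) * np.eye(k)
    samples = rng.multivariate_normal(np.zeros(k), cov, size=trials)
    return samples.mean(axis=1).var()

print(var_of_mean(0.0))    # ~1/k = 0.125: uncorrelated estimates, O(1/k) reduction
print(var_of_mean(0.95))   # ~0.96: strongly correlated estimates, little reduction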

(122) There are two major sources of variance in the covariance terms: the correlations among ∇.sub.ϕ log(π.sub.ϵ.sub.i) and ∇.sub.ϕ log(π.sub.ϵ.sub.j), and correlations related to R.sub.t(τ). The difference in performance of two policies (as measured by R.sub.t(τ)) can be bounded by a function of the average KL divergence between them (Schulman et al. 2015). So, the contribution to the covariance from R.sub.t(τ) will be relatively fixed, since all perturbations have a bounded KL divergence to the main policy. In view of this, we focus on controlling the correlation between ∇.sub.ϕ log(π.sub.ϵ.sub.i) and ∇.sub.ϕ log(π.sub.ϵ.sub.j).

(123) This brings us to Theorem 3, which shows that maximizing the diversity, in terms of KL divergence, between two policies π.sub.ϵ.sub.i and π.sub.ϵ.sub.j minimizes the trace of the covariance between ∇.sub.ϕ log(π.sub.ϵ.sub.i) and ∇.sub.ϕ log(π.sub.ϵ.sub.j).

(124) Theorem 3.

(125) Let ϵ.sub.i and ϵ.sub.j be two perturbations such that ∥ϵ.sub.i∥.sub.2=∥ϵ.sub.j∥.sub.2=δ.sub.ϵ. Then, (1) the trace of Cov(∇.sub.ϕ log(π.sub.ϵ.sub.j), ∇.sub.ϕ log(π.sub.ϵ.sub.i)) is minimized and

(126) (2) ½(ϵ.sub.j−ϵ.sub.i).sup.T{circumflex over (F)}(ϵ.sub.i)(ϵ.sub.j−ϵ.sub.i), the estimated KL divergence D.sub.KL(π.sub.ϵ.sub.i∥π.sub.ϵ.sub.j), is maximized, when ϵ.sub.i=−ϵ.sub.j and they are along the direction of the eigenvector of F(ϵ.sub.i) with the largest eigenvalue.

(127) This theorem shows that, when two perturbations ϵ.sub.i and ϵ.sub.j have a fixed L2 norm δ.sub.ϵ, the perturbations that maximize the KL divergence D.sub.KL(π.sub.ϵ.sub.i∥π.sub.ϵ.sub.j) and also minimize the trace of the covariance Cov(∇.sub.ϕ log(π.sub.ϵ.sub.i), ∇.sub.ϕ log(π.sub.ϵ.sub.j)) are uniquely defined by the positive and negative directions along the eigenvector with the largest eigenvalue. This provides a principled way to select two perturbations to minimize the covariance.
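A minimal sketch of the construction suggested by Theorem 3 is given below, assuming an explicit (small) FIM estimate F is available as a stand-in: the two perturbations are the positive and negative directions along the leading eigenvector, scaled to the norm budget.

import numpy as np

def symmetric_top_eigen_perturbations(F, delta_eps):
    # Perturbations of fixed L2 norm delta_eps along +/- the leading eigenvector of F,
    # which maximize the approximate KL divergence between the two perturbed policies.
    eigvals, eigvecs = np.linalg.eigh(F)       # F assumed symmetric positive definite
    v = eigvecs[:, np.argmax(eigvals)]
    eps = delta_eps * v / np.linalg.norm(v)
    return eps, -eps

F = np.array([[2.0, 0.3], [0.3, 1.0]])          # hypothetical stand-in FIM estimate
e1, e2 = symmetric_top_eigen_perturbations(F, delta_eps=0.1)
print(e1, e2)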

(128) Conjugate Vectors Maximize KL Divergence

(129) In domains with high sample cost, there is likely a limit on the number of policies which can be deployed per iteration. Therefore, it is important to generate a small number of perturbations which yield maximum variance reduction. Theorem 3 shows that the reduction of the covariance can be done by maximizing the KL divergence, which can be achieved using eigenvectors. Eigenvectors are a special case of what are known as conjugate vectors. Theorem 4 shows that when there is a fixed set of k perturbations, conjugate vectors maximize the sum of the pairwise KL divergences.

(130) Since the FIM F.sub.ϕ is symmetric positive definite, there exist n conjugate vectors 𝒰={μ.sub.1, μ.sub.2, . . . , μ.sub.n} with respect to F.sub.ϕ, where n is the length of the parameter vector ϕ. Formally, μ.sub.i and μ.sub.j, i≠j, are conjugate if μ.sub.i.sup.TF.sub.ϕμ.sub.j=0. π.sub.i and π.sub.j are defined as conjugate policies if their parameterizations can be written as ϕ+μ.sub.i and ϕ+μ.sub.j for two conjugate vectors μ.sub.i and μ.sub.j. 𝒰 forms a basis for ℝ.sup.n, so any local perturbation ϵ to ϕ, after scaling, can be written as a linear combination of 𝒰,
ϵ=η.sub.1μ.sub.1+η.sub.2μ.sub.2+ . . . +η.sub.nμ.sub.n where ∥η∥≤1  (15)

(131) For convenience, we assume that η.sub.i≥0. Since the negative of a conjugate vector is also conjugate, if there is a negative η.sub.i, we may flip the sign of the corresponding μ.sub.i to make it positive.

(132) The approximation of the KL divergence provided above is
{tilde over (D)}.sub.KL(ϕ∥ϕ+ϵ)=½ϵ.sup.TF.sub.ϕϵ

(133) The measure of KL divergence of concern is the total divergence between all pairs of perturbed policies:

(134) $\sum_{i=1}^{k-1}\sum_{j=i+1}^{k} \tilde{D}_{KL}(\phi+\epsilon_j \,\|\, \phi+\epsilon_i) = \sum_{i=1}^{k-1}\sum_{j=i+1}^{k} \tfrac{1}{2}\left(\epsilon_i - \epsilon_j\right)^{T} F_{\phi} \left(\epsilon_i - \epsilon_j\right)$  (16)

(135) where k is the number of perturbations. Note that ϕ, and not ϕ+ϵ, appears in the subscript of the FIM; the latter would be more precise with respect to the local approximation. The use of the former is a practical choice which allows estimation of a single FIM and avoids estimating the FIM of each perturbation. Estimating the FIM is already a computational burden and, since perturbations are small and bounded, using F.sub.ϕ instead of F.sub.ϕ+ϵ has little effect and performs well in practice, as demonstrated in experiments. For the remainder of this discussion, we omit ϕ in the subscript of F for convenience. The constraint on the number of perturbations presents the following optimization problem, which optimizes a set of perturbations 𝒫 to maximize (16) while constraining |𝒫|.

(136) $\mathcal{P}^* = \arg\max_{\mathcal{P}} \sum_{i=1}^{k-1}\sum_{j=i+1}^{k} \tilde{D}_{KL}(\phi+\epsilon_j \,\|\, \phi+\epsilon_i)$  (17)

(137) subject to |custom character|=k≤n
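As an illustration of the objective being maximized, the sum in Equation (16) can be evaluated directly for a candidate set of perturbations under the single-FIM approximation. This sketch uses hypothetical names and a dense FIM estimate F_hat:

import numpy as np

def total_pairwise_kl(perturbations, F_hat):
    """Evaluate the objective of Equations (16)-(17): the sum over all pairs
    of the approximate KL divergences 1/2 (eps_i - eps_j)^T F_hat (eps_i - eps_j),
    using a single FIM estimate F_hat."""
    total = 0.0
    k = len(perturbations)
    for i in range(k - 1):
        for j in range(i + 1, k):
            d = perturbations[i] - perturbations[j]
            total += 0.5 * d @ F_hat @ d
    return total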

(138) We define ∥⋅∥.sub.F as the norm induced by F, that is, ∥x∥.sub.F=x.sup.TFx.

(139) Without loss of generality, assume the conjugate vectors are ordered with respect to the F-norm,
∥μ.sub.1∥.sub.F≥∥μ.sub.2∥.sub.F≥ . . . ≥∥μ.sub.n∥.sub.F.

(140) The following theorem gives an optimal solution to Equation (17).

(141) Theorem 4.

(142) The set of conjugate vectors {μ.sub.1, μ.sub.2, . . . , μ.sub.k} maximizes Equation (17) among any k perturbations.

(143) If the assumption that η.sub.i≥0 is relaxed, then the set of vectors that maximizes Equation (17) simply includes the negative of each conjugate vector as well, i.e.,
𝒫={μ.sub.1,−μ.sub.1,μ.sub.2,−μ.sub.2, . . . ,μ.sub.k/2,−μ.sub.k/2}.

(144) Including the negatives of perturbations is known as symmetric sampling (Sehnke et al. 2010) which is discussed below.

(145) Theorem 4 makes clear that randomly generated perturbations will, with high probability, be sub-optimal with respect to Equation (17) because the optimal solution is uniquely the top k conjugate vectors. Identifying the top k conjugate vectors in each iteration of policy improvement would require significant computation when the FIM is large. Fortunately, there exist computationally efficient methods of generating sequences of conjugate vectors, such as conjugate gradient descent (Wright and Nocedal 1999) (to be discussed), although they may not provide the top k. From Theorem 2, it is observed that when all conjugate vectors have the same F-norm, any set of k conjugate vectors maximizes Equation (17). If the perturbation radius (the maximum KL divergence a perturbation may have from the main policy) is bounded as in (Plappert et al. 2018), DE achieves a computationally efficient, optimal solution to Equation (17).

(146) Method

(147) Generating Conjugate Policies

(148) Generating conjugate policies by finding the top k conjugate vectors is feasible but computationally expensive. It would require estimating the full empirical FIM of a large neural network (for which efficient approximate methods exist (Grosse and Martens 2016)) and a decomposition into conjugate vectors. This additional computational burden is avoided altogether, and conjugate policies are instead generated by taking advantage of the runoff from the conjugate gradient descent (CGD) algorithm (Wright and Nocedal 1999). CGD is often used to efficiently approximate the natural gradient descent direction, as in (Schulman et al. 2015).

(149) CGD iteratively minimizes the error in the estimate of the natural gradient descent direction along a vector conjugate to all directions minimized in previous iterations. These conjugate vectors are utilized in DE as perturbations. Although they are not necessarily the top k conjugate vectors, they are computed essentially for free because they are generated from one application of CGD when estimating the natural gradient descent direction. To account for the suboptimality, a perturbation radius δ.sub.P is introduced such that for any perturbation ϵ
{tilde over (D)}.sub.KL(ϕ∥ϕ+ϵ)≤δ.sub.P.  (18)

(150) We can perform a line search along each perturbation direction such that {tilde over (D)}.sub.KL(ϕ∥ϕ+ϵ)=δ.sub.P. With this constraint, any k vectors are optimal as long as they are conjugate, and the benefit comes from achieving the optimal pairwise divergence.
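A sketch of how conjugate perturbations might be harvested from conjugate gradient iterations and rescaled to the perturbation radius is given below. It assumes a dense FIM estimate F_hat and gradient g for simplicity (in practice FIM-vector products would be used) and omits convergence checks:

import numpy as np

def cg_conjugate_directions(F_hat, g, k, delta_p):
    """Run k iterations of conjugate gradient on F_hat x = g (as when
    approximating the natural gradient direction) and collect the search
    directions, which are mutually conjugate with respect to F_hat.
    Each direction is rescaled so that the approximate KL divergence
    1/2 eps^T F_hat eps equals the perturbation radius delta_p."""
    x = np.zeros_like(g)
    r = g - F_hat @ x            # residual
    p = r.copy()                 # first search direction
    directions = []
    for _ in range(k):
        Fp = F_hat @ p
        alpha = (r @ r) / (p @ Fp)
        x = x + alpha * p
        r_new = r - alpha * Fp
        # scale the conjugate direction so that 1/2 eps^T F_hat eps = delta_p
        eps = p * np.sqrt(2.0 * delta_p / (p @ Fp))
        directions.append(eps)
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, directions   # natural-gradient estimate and conjugate perturbations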

(151) For each conjugate vector, its negative (i.e., symmetric sampling) is also included as motivated by the more general form of Theorem 4 with relaxed assumptions (without η.sub.i>0). In methods following different gradient frameworks, symmetric sampling was used to improve gradient estimations by alleviating a possible bias due to a skewed reward distribution (Sehnke et al. 2010). Finally, δ.sub.P is linearly reduced, motivated by the observation in (Cohen, Yu, and Wright 2018) that as a policy approaches optimal there exist fewer policies with similar performance.
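For illustration, symmetric sampling and a linear reduction of the perturbation radius might be sketched as follows; the exact schedule (here decaying toward zero over a fixed number of iterations) is an assumption, as the text only states that δ.sub.P is linearly reduced:

def symmetric_perturbation_set(directions):
    """Include the negative of each conjugate perturbation (symmetric sampling)."""
    return [s * eps for eps in directions for s in (+1.0, -1.0)]

def linear_radius_schedule(delta_p_init, iteration, total_iterations):
    """Linearly reduce the perturbation radius over policy improvement iterations.
    The endpoint of the schedule is an assumption made for this sketch."""
    return delta_p_init * (1.0 - iteration / float(total_iterations))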

(152) Algorithm Framework

(153) TABLE-US-00006
Algorithm 5 DIVERSE_EXPLORATION (π.sub.1, k, β, β.sub.k, δ.sub.P)
Input: π.sub.1: starting policy, k: number of conjugate policies to generate, β: number of steps to sample from the main policy, β.sub.k: number of steps to sample per conjugate policy, δ.sub.P: perturbation radius
1: Initialize conjugate policies 𝒫.sub.1 as k copies of π.sub.1
2: for i = 1, 2, . . . do
3:   S.sub.i ← sample β steps from π.sub.i and β.sub.k steps from each conjugate policy π ∈ 𝒫.sub.i     // sample main and diverse policies
4:   π.sub.i+1, 𝒫.sub.i+1 ← policy_improvement(S.sub.i, π.sub.i, k, δ.sub.P)
5: end for

(154) A general framework for DE is sketched in Algorithm 5. In line 1, DE assumes a starting policy π.sub.1 (e.g., one generated randomly) which is used to initialize conjugate policies as exact copies. The initial parameterization of π.sub.1 is the mean vector ϕ.sub.1. The number of conjugate policies to be generated is user defined by an argument k. The numbers of samples to collect from the main and conjugate policies are specified by β and β.sub.k, respectively. The relative values of k, β and β.sub.k control how much exploration will be performed by conjugate policies. DE reduces to the standard PG algorithm when k=0 or β.sub.k=0.

(155) In the ith iteration, after sampling the main and conjugate policies in line 3, line 4 updates ϕ.sub.i via natural gradient descent using the perturbed gradient estimate and returns the updated policy π.sub.i+1 parameterized by ϕ.sub.i+1 and the set of conjugate policies 𝒫.sub.i+1 parameterized by ϕ.sub.i+1 perturbed by conjugate vectors; policy_improvement is a placeholder for any RL algorithm that accomplishes this. Computing perturbations could be done in a separate subroutine (e.g., estimating the FIM and taking an eigendecomposition). When computing the natural gradient by CGD as discussed above, the intermediate conjugate vectors are saved to be used as perturbations.
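The following sketch mirrors Algorithm 5 in Python-like form; sample_steps and policy_improvement are placeholders for the environment-sampling routine and the underlying RL update (e.g., a TRPO/natural-gradient step) and are not defined here:

def diverse_exploration(pi_1, k, beta, beta_k, delta_p, num_iterations,
                        sample_steps, policy_improvement):
    """Sketch of Algorithm 5. sample_steps(policy, n) is assumed to return a
    list of transitions; policy_improvement(samples, policy, k, delta_p) is
    assumed to return the updated main policy and k new conjugate policies."""
    pi = pi_1
    conjugate_policies = [pi_1 for _ in range(k)]   # line 1: k copies of the starting policy
    for i in range(num_iterations):                 # line 2
        samples = sample_steps(pi, beta)            # line 3: sample the main policy
        for pi_c in conjugate_policies:             #          and each conjugate policy
            samples += sample_steps(pi_c, beta_k)
        # line 4: update the main policy and regenerate conjugate policies
        pi, conjugate_policies = policy_improvement(samples, pi, k, delta_p)
    return pi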

(156) Empirical Study

(157) The impact of DE via conjugate policies is evaluated on TRPO (Schulman et al. 2015). TRPO is state-of-the-art in its ability to train large neural networks as policies for complex problems. In its standard form, TRPO only uses on-policy data, so its capacity for exploration is inherently limited.

(158) In experiments, three aspects of DE were investigated in comparison with baseline methods. First, the performance of all deployed policies through iterations of policy improvement. It is worth noting the importance of examining the performance of not only the main policy but also the perturbed policies in order to take the cost of exploration into account. Second, the pairwise KL divergence achieved by the perturbed policies of DE and RP, which measures the diversity of the perturbed policies. Third, the trace of the covariance matrix of perturbed gradient estimates. High KL divergence correlates with a low trace of covariance in support of the theoretical analysis. Additionally, the diminishing benefit of exploration when decreasing the number of perturbed policies is demonstrated.

(159) Methods in Comparison

(160) We use two different versions of TRPO as baselines: standard TRPO, and TRPO with random perturbations (RP) and symmetric sampling. The RP baseline follows the same framework as DE but with random perturbations instead of conjugate perturbations. When implementing RP, we replace learning the covariance Σ in the perturbed gradient estimate with a fixed σ.sup.2I as in (Plappert et al. 2018), in which it was noted that the computation for learning Σ was prohibitively costly. A simple scheme is proposed to adjust σ to control for parameter sensitivity to perturbations. The adjustment ensures perturbed policies maintain a bounded distance to the main policy. This is achieved by, for both conjugate and random perturbations, searching along the perturbation direction to find the parameterization furthest from the main policy but still within the perturbation radius δ.sub.P. In light of the theoretical results, the use of symmetric sampling in RP serves as a more competitive baseline.

(161) Policies are represented by feedforward neural networks with two hidden layers containing 32 nodes and tanh activation functions. Increasing the complexity of the networks did not significantly impact performance and only increased computation cost. Additionally, layer normalization (Ba, Kiros, and Hinton 2016) is used as in (Plappert et al. 2018) to ensure that networks are sensitive to perturbations. Policies map a state to the mean of a Gaussian distribution, with a separate variance for each action dimension that is independent of the state, as in (Schulman et al. 2015). The values of these variance parameters are significantly constrained to align with the motivation for parameter perturbation approaches discussed in the Introduction. This also limits the degree of exploration resulting from noisy action selection. The TD(1) (Sutton and Barto 1998) algorithm is used to estimate a value function V over all trajectories collected by both the main and perturbed policies. To estimate the advantage function, the empirical return of the trajectory is used as the Q component and V as a baseline. TRPO hyperparameters are taken from (Schulman et al. 2015; Duan et al. 2016).
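An illustrative policy network consistent with this description might be sketched as follows (PyTorch is used here for convenience; the placement of layer normalization and the initialization are assumptions):

import torch
import torch.nn as nn

class GaussianMLPPolicy(nn.Module):
    """Illustrative policy: two hidden layers of 32 tanh units mapping the
    state to the mean of a Gaussian, with a state-independent
    log-standard-deviation per action dimension."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, 32), nn.LayerNorm(32), nn.Tanh(),
            nn.Linear(32, 32), nn.LayerNorm(32), nn.Tanh(),
            nn.Linear(32, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))  # independent of the state

    def forward(self, state):
        mean = self.body(state)
        return torch.distributions.Normal(mean, self.log_std.exp())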

(162) The results are displayed on three difficult continuous control tasks, Hopper, Walker and HalfCheetah, implemented in OpenAI Gym (Brockman et al. 2016) and using the MuJoCo physics simulator (Todorov, Erez, and Tassa 2012). As mentioned in the discussion of Algorithm 5, the values of k, β and β.sub.k determine the exploration performed by perturbed policies. TRPO is at the extreme of minimal exploration since all samples come from the main policy. To promote exploration, in DE and RP we collect samples equally from all policies. More specifically, we use k=20 perturbations for Hopper and k=40 perturbations for Walker and HalfCheetah for both DE and RP. Walker and HalfCheetah each have 3 more action dimensions than Hopper and so require more exploration and hence more agents. For a total of N (N=21000 for Hopper and N=41000 for Walker and HalfCheetah in the reported results) samples collected in each policy improvement iteration, TRPO collects β=N samples per iteration while DE and RP collect

(163) β = β.sub.k = N/(k+1)
samples from the main policy and each perturbed policy. The experiments show a trend of diminishing effect of exploration on policy performance when the total number of samples is held constant and k decreases. The initial perturbation radius used in experiments is δ.sub.P=0.2 for Hopper and HalfCheetah and δ.sub.P=0.1 for Walker. Larger perturbation radii led to performance similar to the reported results but suffered from greater instability.
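For example, on Hopper with N=21000 and k=20, the main policy and each of the 20 perturbed policies collects β=β.sub.k=21000/(20+1)=1000 steps per iteration; on Walker and HalfCheetah with N=41000 and k=40, each of the 41 policies likewise collects 1000 steps.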

(164) TABLE-US-00007
TABLE 2: Total pairwise KL divergence averaged over iterations of DE vs. RP. Reported values are the average over 10 runs with all p-values < 0.001.
Domain    Hopper    Walker    HalfCheetah
DE        53.5      82.7      192.5
RP        38.1      77.6      156.1

(165) Results

(166) The results in Table 2 and FIGS. 3A-3F address the three points of investigation raised at the beginning of this section. The goal is to show that perturbations with larger pairwise KL divergence are key to both strong online performance and enhanced exploration.

(167) In the first column of Table 2, results are reported on the Hopper domain. FIG. 3A contains curves of the average performance (sum of all rewards per episode) attained by TRPO, RP and DE. For RP and DE, this average includes the main and perturbed policies. RP has a slight performance advantage over TRPO throughout all iterations and converges to a superior policy. DE shows a statistically significant advantage in performance over RP and TRPO; a two-sided paired t-test of the average performance at each iteration yields p<0.05. Additionally, DE converges to a stronger policy and shows a larger rate of increase over both RP and TRPO. DE also results in the smallest variance in policy performance as shown by the interquartile range (IQR) which indicates that DE escapes local optima more consistently than the baselines. These results demonstrate the effect of enhanced exploration by DE over TRPO and RP.

(168) The traces of covariance of the perturbed gradient estimates are shown in FIG. 3D. Note that the covariance of TRPO gradient estimates can be computed by treating TRPO as RP but with policies perturbed by the zero vector. Interestingly, FIG. 3D shows an increasing trend for all approaches. Two possible explanations are posited for this: policies tend to become more deterministic across learning iterations as they improve and, for DE and RP, the perturbation radius decreases. Ultimately, both limit the variance of action selection and so yield more similar gradient estimates. Nevertheless, at any iteration, DE significantly reduces the trace of the covariance matrix due to its diversity.

(169) Column 1 of Table 2 reports the average total pairwise KL divergence over all perturbed policies for the Hopper domain. DE's conjugate policies have significantly larger pairwise KL divergence than RP's. This significant advantage in pairwise KL divergence yields lower-variance gradient estimates, which explains the observed superiority in performance, rate of improvement and lower IQR as discussed.

(170) Similar trends are observed in FIGS. 3B and 3E and column 2 in Table 2 on the Walker domain. The performance of DE is clearly superior to both baselines but, due to the higher variance of the performance of the baselines, does not yield a statistically significant advantage. Despite this, DE maintains a significantly higher KL divergence between perturbed policies and significantly lower trace covariance estimates across iterations. Additionally, the same trends are observed in FIGS. 3C and 3F and column 3 in Table 2 in the HalfCheetah domain. DE shows a statistically significant advantage in terms of performance and pairwise KL divergence (p<0.05) over RP and TRPO despite their more similar covariance estimates.

(171) Finally, the impact of decreasing the number of perturbed policies while keeping the samples collected constant on the Hopper domain is investigated. In FIG. 4, the average performance of DE for k=20, 10, 4, 2 as well as TRPO (k=0) is shown. Decreasing k leads to decreasing average performance and rate of improvement. Additionally, decreasing k leads to increasing performance variance. Both of these observations demonstrate that increasing diversity among behavior policies is key to strong online performance and exploration.

(172) Computational Platform

(173) The present invention may be implemented on various platforms, which may include cloud computing clusters, general purpose computers, general purpose graphics processing units (GPGPUs, typically SIMD parallel processors), embedded controllers, application specific integrated circuits (ASICs), programmable gate arrays (PGAs), and other types of platforms. For exemplary and non-limiting description, such a platform may be (see, U.S. Pat. No. 9,858,592, expressly incorporated herein by reference in its entirety):

(174) FIG. 3 illustrates an exemplary hardware configuration; see U.S. Pat. No. 7,702,660, expressly incorporated herein by reference, which shows a block diagram of a computer system 400. Computer system 400 includes a bus 402 or other communication mechanism for communicating information, and a processor 404 (e.g., ARM, x86, i3, i5, i7, i9, Ryzen, etc.) coupled with bus 402 for processing information. Computer system 400 also includes a main memory 406, such as a random access memory (RAM, DDR, DDR2, DDR3, DDR4, DDR5) or other dynamic storage device, coupled to bus 402 for storing information and instructions to be executed by processor 404. Main memory 406 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 404. Computer system 400 further includes a read only memory (ROM) 408 or other static storage device coupled to bus 402 for storing static information and instructions for processor 404. A storage device 410, such as a magnetic disk, optical disk, magneto-optical disk or solid state disk device, is provided and coupled to bus 402 for storing information and instructions. The computer system may also employ non-volatile memory, such as FRAM and/or MRAM.

(175) The computer system may include a graphics processing unit (GPU), which, for example, provides a parallel processing system which is architected, for example, as a single instruction-multiple data (SIMD) processor. Such a GPU may be used to efficiently compute transforms and other readily parallelized operations that are processed according to mainly consecutive, unbranched instruction codes.

(176) Computer system 400 may be coupled via bus 402 to a display 412, such as a liquid crystal display (LCD), for displaying information to a computer user. An input device 414, including alphanumeric and other keys, is coupled to bus 402 for communicating information and command selections to processor 404. Another type of user input device is cursor control 416, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 404 and for controlling cursor movement on display 412. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.

(177) According to one embodiment of the invention, those techniques are performed by computer system 400 in response to processor 404 executing one or more sequences of one or more instructions contained in main memory 406. Such instructions may be read into main memory 406 from another machine-readable medium, such as storage device 410. Execution of the sequences of instructions contained in main memory 406 causes processor 404 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.

(178) The computing architecture may also encompass so-called cloud computing, compute clusters, field programmable gate arrays, and other computational platforms.

(179) The term “machine-readable medium” as used herein refers to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using computer system 400, various machine-readable media are involved, for example, in providing instructions to processor 404 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, semiconductor devices and optical or magnetic disks, such as storage device 410. Volatile media includes dynamic memory, such as main memory 406. All such media are tangible to enable the instructions carried by the media to be detected by a physical mechanism that reads the instructions into a machine. Common forms of machine-readable media include, for example, hard disks (or other magnetic media), CD-ROM, DVD-ROM (or other optical or magneto-optical media), DVD-RW, Blu-ray, semiconductor memory such as RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read. Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to processor 404 for execution.

(180) For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over the Internet through an automated computer communication network. An interface local to computer system 400, such as an Internet router, can receive the data and communicate using an Ethernet protocol (e.g., IEEE-802.X) or wireless network interface (e.g., IEEE-802.11, or Bluetooth compatible, 3G cellular, 4G cellular, 5G cellular, WiMax, etc.) to a compatible receiver, and place the data on bus 402. Bus 402 carries the data to main memory 406, from which processor 404 retrieves and executes the instructions. The instructions received by main memory 406 may optionally be stored on storage device 410 either before or after execution by processor 404.

(181) Computer system 400 also includes a communication interface 418 coupled to bus 402. Communication interface 418 provides a two-way data communication coupling to a network link 420 that is connected to a local network 422. For example, communication interface 418 may be a local area network (LAN) interface to provide a data communication connection to a compatible LAN, such as 1 GBit Ethernet. In any such implementation, communication interface 418 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link 420 typically provides data communication through one or more networks to other data devices. For example, network link 420 may provide a connection through local network 422 to a host computer 424 or to data equipment operated by an Internet Service Provider (ISP) 426. ISP 426 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 428. Local network 422 and Internet 428 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 420 and through communication interface 418, which carry the digital data to and from computer system 400, are exemplary forms of carrier waves transporting the information.

(182) Computer system 400 can send messages and receive data, including memory pages, memory sub-pages, and program code, through the network(s), network link 420 and communication interface 418. In the Internet example, a server 430 might transmit a requested code for an application program through Internet 428, ISP 426, local network 422 and communication interface 418. The received code may be executed by processor 404 as it is received, and/or stored in storage device 410, or other non-volatile storage for later execution.

(183) The CPU may be a multicore CISC processor, and may be loosely or tightly coupled with a parallel processing unit suitable for graphics processing, such as a GPU which employs SIMD technology. Advantageously, the graphics processing unit may be programmed to assist in handling parallel tasks, such as matrix transformations, linear algebra, and other tasks, especially if concurrent demand for graphics processing is low, or alternate facilities are available to produce an output display.

(184) A GPU processor, e.g., a GPGPU such as within the nVidia CUDA architecture, may effectively be used for deep learning and generation of neural networks or deep neural networks, e.g., representing the respective policies or sets of policies, to implement the diverse exploration and the safety confidence testing, and, in some cases, may itself represent a target system for action by the policy. In other cases, a standard CISC architecture processor may be used, and/or other types of parallel processing or sequential processing. In some cases, the implementation of the algorithm for generating the diverse set of safe policies may be performed using cloud computing technology, such as using virtual machines in server racks of a data center.

(185) The order in which operations, procedures, steps, stages, etc., are executed in processing in the apparatuses, the system, the programs and the methods described in the appended claims, the specification and the drawings is not indicated explicitly by “before”, “prior to” or the like. Also, it is to be noted that such process steps can be realized in a freely selected sequence except where an output from a preceding stage is used in a subsequent stage. Even if descriptions are made by using “first”, “next”, etc., for convenience sake with respect to operation flows in the appended claims, the specification and the drawings, they are not intended to show the necessity to execute in the order specified thereby.

(186) What has been described above includes examples of the disclosed and claimed subject matter. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations, subcombinations, and permutations are possible, and are expressly contemplated by the various disclosures herein, including those incorporated by reference herein. Accordingly, the claimed subject matter is intended to embrace all such alterations, hybrids, modifications and variations that fall within the spirit and scope of the appended claims.

(187) Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim. Unless inconsistent with the context, the word “or” shall be interpreted to include both the conjunction and the disjunction of the options.