Global overview of Imitation Learning
Abstract
Imitation Learning is a sequential decision-making task in which the learner tries to mimic an expert's actions in order to achieve the best performance. Several algorithms have been proposed for this task in recent years. In this project, we propose a broad review of these algorithms, presenting their main features and comparing them in terms of performance and regret bounds.
1 Introduction
The principle behind Imitation Learning is to act and exhibit human-like behavior by implicitly giving a learner prior information about the world. In Imitation Learning tasks, the agent seeks the best way to use a training set of input/output pairs demonstrated by an expert in order to learn a policy whose actions are as similar as possible to the expert's. Imitation is often needed to automate actions when the agent is human and it is too expensive to run its actions in real time. Apprenticeship learning [1], on the contrary, executes purely greedy/exploitative policies and uses all (state, action) trajectories to learn a near-optimal policy with Reinforcement Learning approaches. It requires difficult maneuvers, and it is nearly impossible to recover from unobserved states. Imitation Learning can often deal with such unexplored states, so it offers a more reliable framework for many tasks such as self-driving cars. We will first set up the notation and the general framework, then present some of the main Imitation Learning algorithms and their convergence guarantees. Finally, we will focus on experimental results of the DAgger approach on a real-world application.
2 Problem setup
Let us introduce imitation learning in the framework of Markov Decision Processes (MDP).
An MDP is defined by a tuple $(\mathcal{S}, \mathcal{A}, P, r, \rho_0)$ with $\mathcal{S}$ the set of states, $\mathcal{A}$ the finite set of actions, $P$ the transition function, $r(s,a) \in [0,1]$ the reward of performing action $a$ in state $s$, and $\rho_0$ the initial state distribution. We denote by $N$ the number of epochs.
The policy $\pi$ we make use of can either be stationary (Markovian) or non-stationary, with $T$ the time horizon. It indicates the action to take in state $s$ at time $t$. We denote the deterministic expert policy by $\pi^\star$ and use the following notation:
$J(\pi)$: the expected total reward of trajectories starting from the initial state distribution;
$d_\pi$: the empirical mean of the state distributions induced over the $T$ time steps;
$R(\tau)$: the total reward of a $T$-step trajectory $\tau$;
$\ell(s,\pi)$: the observed surrogate loss.
With the previous notation, the quantity we aim at maximizing is
$$J(\pi) = T \, \mathbb{E}_{s \sim d_\pi}\left[ r(s, \pi(s)) \right].$$
There exist two settings: the passive setting, where the learner is provided with a training set of full trajectories executed by the optimal policy, and the active setting, where the learner is allowed to pose action queries to an expert who returns the desired action for a specific time step.
3 State-of-the-art algorithms and their convergence guarantees
Learning from Demonstration (LfD) is a practical framework for learning complex behaviour policies from demonstration trajectories produced by an expert, even when they are very few or inaccurate. We list and compare here some of the most used algorithms for imitation learning, illustrating each model on a self-driving-car example to highlight the differences between them. Some of the theoretical proofs and intuitions behind the theorems below are given in the Appendix.
3.1 Supervised learning
The first approach to imitation learning is supervised learning by classification. We have a set of training trajectories (stationary policy) produced by an expert, where a single trajectory consists of a sequence of observations and the sequence of actions the expert executed. The idea is to train a classifier that mimics the expert's action given the observations at that time.
It is a passive approach whose objective is to learn a target policy by passively observing full execution trajectories. The expert acts only before the learning objective is solved, which is to train a policy over the states encountered by the expert. We also need to assume that the actions in the expert trajectories are independent and identically distributed (i.i.d.).
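The i.i.d. supervised reduction above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a per-state majority-vote table stands in for whatever classifier one would actually train, and the toy `demos` data is invented for the example.

```python
from collections import Counter, defaultdict

def behavioral_cloning(demonstrations):
    """Treat expert (state, action) pairs as an i.i.d. classification
    dataset: for each state, predict the action the expert chose most
    often (a lookup table stands in for a real classifier)."""
    votes = defaultdict(Counter)
    for trajectory in demonstrations:
        for state, action in trajectory:
            votes[state][action] += 1
    return {s: counts.most_common(1)[0][0] for s, counts in votes.items()}

# Toy expert: always moves "right", except in state 3 where it moves "up".
demos = [[(0, "right"), (1, "right"), (2, "right"), (3, "up")],
         [(1, "right"), (3, "up"), (0, "right")]]
policy = behavioral_cloning(demos)
```

Note that `policy` has no entry at all for states outside the demonstrations, which is precisely the cascade-of-errors failure mode discussed in the text.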
There exists an upper bound on the loss suffered by the supervised imitation learning algorithm as a function of the quality of the expert and the error rate of the learned classifier. Let $\epsilon$ be the error rate of the underlying classifier, $T$ the horizon and $\pi$ the learned policy; the regret grows quadratically in $T$.
Theorem 1
Let $\epsilon = \mathbb{E}_{s \sim d_{\pi^\star}}[\ell(s, \pi)]$ denote the expected loss of $\pi$ under the expert's state distribution. Then
$$J(\pi) \ge J(\pi^\star) - T^2 \epsilon.$$
The main issue with this supervised approach is that it cannot learn to recover from failures. Once the model has deviated from the optimal trajectory at one time step, it cannot get back to states seen by the expert and hence generates a cascade of errors. We conclude that this naive algorithm fails to generalize to unseen situations. The next approaches rectify this behaviour.
3.2 Forward Training
The forward training algorithm was introduced by Ross and Bagnell (2010) [2]. It trains one policy per time step (a non-stationary policy): at each step $t$, it learns a policy $\pi_t$ that mimics the expert on the state distribution induced by the previously learned policies $\pi_1, \dots, \pi_{t-1}$. This iterative training is described in Algorithm 1.
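A minimal sketch of this per-step loop, under illustrative assumptions: a lookup table stands in for the per-step learner, and `env_step` is a hypothetical deterministic transition function.

```python
def forward_training(expert, env_step, initial_states, T):
    """At step t, roll out the already-trained policies for steps
    0..t-1, then fit pi_t to the expert's action on the states
    reached at step t (fitting here is just recording a table)."""
    policies = []                       # policies[t]: state -> action
    for t in range(T):
        pi_t = {}
        for s in initial_states:
            for u in range(t):          # reach step t using learned policies
                s = env_step(s, policies[u][s])
            pi_t[s] = expert(s)         # query the expert only at step t
        policies.append(pi_t)
    return policies

# Toy chain MDP: action "go" advances one state; the expert always goes.
policies = forward_training(expert=lambda s: "go",
                            env_step=lambda s, a: s + 1 if a == "go" else s,
                            initial_states=[0], T=3)
```

The loop makes the weakness discussed below concrete: it must run once per time step, so it cannot terminate when $T$ is very large or undefined.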
Let $u$ be the maximal increase in expected total cost from any probable state when changing the policy at a single step. For this algorithm, we have a guaranteed performance with near-linear regret.
Theorem 2
Let $\epsilon = \frac{1}{T} \sum_{t=1}^{T} \epsilon_t$ denote the average per-step error of the learned policies $\pi_1, \dots, \pi_T$. Then
$$J(\pi) \ge J(\pi^\star) - u T \epsilon.$$
In the worst case, convergence is the same as for classical supervised learning, but in general it is sublinear and the expert succeeds in correcting the mistakes of the model's policy. Thus, the forward training algorithm should perform better than the previous one.
However, one major weakness of this approach is that it needs to iterate over all $T$ periods, and the time horizon $T$ can be quite large or even undefined. Since the policy is non-stationary, the algorithm becomes impracticable in most real-world applications. Some of the next algorithms overcome this issue.
3.3 Searchbased Structured Prediction (SEARN)
The idea behind SEARN, introduced by Daumé III et al. (2009) [3], is that instead of learning some sort of global model and then searching (as is standard), it simply learns a classifier to make each decision of the search optimally. The algorithm starts by following the expert's action at every step. Iteratively, it collects demonstrations and uses them to train a new policy. It generates new episodes by taking actions according to a mixture of all previously trained policies as well as the expert's actions. Over time, it learns to follow its own mixture of policies and stops relying on the expert to decide which actions to take.
In short, this algorithm attempts to learn a classifier that will walk us through the search space. It operates by maintaining a current policy and attempts to use it in order to generate new training data on which to learn a new policy (new classifier). When a new classifier is learned, we interpolate it with the old classifier. This iterative scheme is described in Algorithm 2.
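The interpolation step can be sketched as follows; this is a schematic stand-in, assuming policies are plain functions from states to actions and that the interpolation is realized stochastically (act with the new classifier with probability `beta`).

```python
import random

def searn_interpolate(current, new_classifier, beta, rng=random):
    """One SEARN update: the next policy acts with the freshly trained
    classifier with probability beta, and with the previous mixture
    otherwise."""
    def mixed(state):
        return new_classifier(state) if rng.random() < beta else current(state)
    return mixed

# Starting from the expert, each iteration moves probability mass away from it.
expert = lambda s: "expert_action"
pi = searn_interpolate(expert, lambda s: "learned_action", beta=0.3)
```

Repeatedly applying `searn_interpolate` is what lets the final mixture stop relying on the expert.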
We can bound the cost as explained below in Theorem 3.
Theorem 3
Choosing a mixing parameter $\beta$ of order $1/T^3$ and running $O(T^3 \ln T)$ iterations, SEARN guarantees near-linear regret in $T$.
However, this search-based structured prediction can be overly optimistic and is challenging in practice, mainly because of its initialization, which differs from the optimal policy. Below, we detail other approaches that overcome this issue.
3.4 Stochastic Mixing Iterative Learning (SMILe)
The SMILe algorithm was also introduced by Ross and Bagnell (2010) [2] to correct some of the shortcomings of the forward training algorithm. It is a stochastic mixing algorithm based on SEARN that retains its benefits with a substantially simpler implementation and less demanding interaction with the expert. It trains a stochastic stationary policy over several iterations and then makes use of a "geometric" stochastic mixing of the trained policies.
Concretely, we start with a policy $\pi_0$ that follows exactly the expert's actions. At each iteration $n$, we train a policy $\hat\pi_n$ to mimic the expert under the trajectories induced by the previous policy $\pi_{n-1}$. Then, we add the newly trained policy to the previous mixture of policies with a geometric discount factor $\alpha$, so the new policy $\pi_n$ is a mixture of $n+1$ policies that uses the expert's action with probability $(1-\alpha)^n$. The SMILe algorithm is described in Algorithm 3.
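The geometric mixture above can be sketched directly; this is an illustrative sampler (policies as plain functions, an explicit random draw over the mixture weights), not the paper's implementation.

```python
import random

def smile_policy(expert, learned, alpha, rng):
    """SMILe's mixture after n iterations: act with the expert with
    probability (1 - alpha)^n, otherwise with learned policy
    pi_{j+1}, drawn with geometric weight alpha * (1 - alpha)^j."""
    n = len(learned)
    def policy(state):
        r = rng.random()
        acc = (1 - alpha) ** n          # probability mass on the expert
        if r < acc:
            return expert(state)
        for j, pi in enumerate(learned):
            acc += alpha * (1 - alpha) ** j
            if r < acc:
                return pi(state)
        return learned[-1](state)       # guard against float rounding
    return policy

pi = smile_policy(lambda s: "expert", [lambda s: "learned"],
                  alpha=0.5, rng=random.Random(0))
```

The weights sum to one: $(1-\alpha)^n + \alpha \sum_{j=0}^{n-1} (1-\alpha)^j = 1$, so the sampler is a valid distribution over the $n+1$ policies.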
Selecting $\alpha$ and the number of iterations $N$ appropriately guarantees near-linear regret (in terms of the policy disadvantage), as for the forward training algorithm.
Theorem 4
Let $\hat\epsilon$ denote a quantity depending on the errors of the trained policies, and let $\mathbb{A}$ be the $k$th-order policy disadvantage of the mixture with respect to $\pi^\star$; SMILe's regret is then near-linear in $T$ in terms of these quantities.
The main advantage of this approach is that the process can be interrupted at any time, which avoids depending on a too large or undefined time horizon. Unfortunately, because the policy is stochastic, the model is not stable.
3.5 Reductionbased Active Imitation Learning (RAIL)
The principle behind RAIL, introduced by Judah et al. (2012) [4], is to perform a sequence of calls to an independent and identically distributed (i.i.d.) active learner. It is likely to find a useful stationary policy well before all $T$ calls are issued, which mitigates the drawbacks of forward training. Indeed, the active learner is able to ask queries across a range of time points, and we might expect policies learned in earlier iterations to achieve non-trivial performance throughout the entire horizon.
Concretely, RAIL iterates for $T$ iterations, with the notable difference that on each iteration it learns a new stationary policy that can be applied across all time steps. Iteration $t$ learns a new policy that achieves a low error rate at predicting the expert's actions with respect to the state distribution of the previous policy. The RAIL algorithm is described in Algorithm 4.
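A schematic version of this loop, under strong simplifications: the fit is a lookup table with a hypothetical default action, states are supplied by a stand-in sampler, and every sampled state is labeled (a real i.i.d. active learner would itself choose which states to query).

```python
def table_fit(data, default="noop"):
    """Stand-in learner: memorize labeled pairs, default elsewhere."""
    table = dict(data)
    return lambda s: table.get(s, default)

def rail(fit, expert, sample_states, T):
    """RAIL sketch: T iterations, each producing a *stationary* policy.
    Iteration t draws unlabeled states from the distribution induced by
    the previous policy and queries the expert for their labels."""
    policy = expert                     # iteration 0 starts from the expert
    for t in range(T):
        states = sample_states(policy, t)
        policy = fit([(s, expert(s)) for s in states])
    return policy

pi = rail(table_fit, expert=lambda s: "go",
          sample_states=lambda pol, t: [0, 1, 2], T=3)
```

Unlike forward training, every iteration here yields a usable stationary policy, so the loop can be cut short.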
RAIL is an idealized algorithm intended for analysis, and it achieves the theoretical goals. However, it has a number of inefficiencies from a practical perspective, mainly because the unlabeled state distributions used at early iterations can be quite different from the expert's state distribution.
3.6 Dataset Aggregation (DAgger)
3.6.1 DAgger
Ross, Gordon and Bagnell proposed the DAgger [5] algorithm in 2010, also to solve Learning from Demonstration problems. DAgger is an iterative policy training algorithm based on a reduction to online learning. At each iteration, we retrain the main classifier on all states ever encountered by the learner. The main advantage of DAgger is that the expert teaches the learner how to recover from past mistakes. It is an active method (we need access to the expert themselves) based on the Follow-The-Leader algorithm, where each iteration provides one online-learning example.
We start with a first policy $\pi_1$ fully taught by the expert; then we run $\pi_1$ and observe which configurations the learner visits. We generate a new dataset that contains information about how to recover from the errors of $\pi_1$. Because we want to keep information from both iterations, we train $\pi_2$ on the union of the initial expert-only trajectories and the newly generated trajectories. We repeat this at each iteration and finally choose the best policy on a validation set.
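The aggregation loop can be sketched as follows. It is a minimal illustration assuming a table classifier with a hypothetical default action and a stand-in rollout that returns the visited states; the key DAgger step is that retraining always uses the aggregate dataset, never only the latest batch.

```python
def table_fit(data, default="noop"):
    """Stand-in classifier: memorize labeled pairs, default elsewhere."""
    table = dict(data)
    return lambda s: table.get(s, default)

def dagger(expert, fit, rollout, n_iters):
    """DAgger sketch: roll out the current policy, let the expert label
    every visited state, aggregate with all earlier data, and retrain."""
    dataset = []                          # aggregated (state, expert action)
    policy = expert                       # pi_1 is fully taught by the expert
    policies = []
    for _ in range(n_iters):
        visited = rollout(policy)         # states the learner actually reaches
        dataset += [(s, expert(s)) for s in visited]
        policy = fit(dataset)             # train on the aggregate, not just new data
        policies.append(policy)
    return policies                       # the best one is picked on validation

policies = dagger(expert=lambda s: "go", fit=table_fit,
                  rollout=lambda pol: [0, 1, 2], n_iters=3)
```

Because the expert labels states the *learner* visits, the dataset contains exactly the recovery information that plain supervised imitation lacks.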
Theorem 5
Let $\epsilon_N = \min_{\pi \in \Pi} \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{s \sim d_{\pi_i}}[\ell(s, \pi)]$ be the true loss of the best policy in hindsight. Then, if $N$ is $\tilde{O}(uT)$, there exists a policy $\hat\pi$ among the $N$ returned policies such that
$$J(\hat\pi) \ge J(\pi^\star) - u T \epsilon_N - O(1).$$
The main algorithmic difference between SEARN [3] and DAgger is in how the classifiers are learned at each iteration and combined into a policy. DAgger can combine the training signal obtained from all iterations, whereas SEARN only trains on data from the current iteration, i.e. with no aggregated dataset. SEARN was the first practical method, followed by DAgger. DAgger works for both complex and simple problems; it improves as more data is collected but needs only a few iterations to work. It is therefore useful for many applications such as handwriting recognition or autonomous driving.
3.6.2 DAgger by coaching
With DAgger, the expert's policy space can be far from the learner's policy space, which limits the learning ability, and the required information might not be inferable from the state. To prevent this, He et al. proposed the DAgger by coaching algorithm [6] in 2012. With this algorithm, we execute easy-to-learn actions, i.e. actions within the learner's ability. When the task is too hard, the coach lowers the goal and teaches gradually.
We define a hope action, easier to achieve than the oracle action and not much worse. A parameter $\lambda$ measures how close the coach is to the oracle. DAgger by coaching guarantees linear regret.
Theorem 6
Let $\tilde\epsilon_N$ denote the expected surrogate loss with respect to the coach and $\epsilon_N$ the true loss of the best policy in hindsight with respect to hope actions; then there exists a policy among the returned policies for which the bound of Theorem 5 holds with these quantities.
The DAgger algorithm and its equivalent with coaching are described in Algorithm 5.
3.7 Approximate Policy Iteration with Demonstration (APID)
For the previous algorithms, we assumed that the expert exhibits optimal behaviour and that demonstrations are abundant. These assumptions are not always valid in the real world, so to address this issue we combine expert and interaction data (i.e., mix LfD and RL). APID (2013) [7] is thus particularly interesting when the expert demonstrations are few or suboptimal. It is an LfD method built on regularized Approximate Policy Iteration (API); the key idea is that the expert's suggestions are used to define linear constraints which guide the optimization performed by API.
Formally, we place ourselves in the API setting and use the added information furnished by the expert (even if scarce or inaccurate). Let $V^\pi$ and $Q^\pi$ denote the value and action-value functions of a policy $\pi$, and $V^\star$ and $Q^\star$ their counterparts for the optimal policy $\pi^\star$. We have a set of interaction data (a sample of $n$ (state, action) pairs) and a set of expert examples (a sample of $m$ (state, demonstrated action) pairs). To encode the suboptimality of the expert, we add slack variables to the large-margin constraints on the action-value function, allowing occasional violations. We thereby obtain a constrained optimization problem. In this approach, we do not have access to the exact Bellman operator but only to samples, so we use the projected Bellman error.
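The large-margin constraints with slack can be made concrete as a hinge computation; the `Q` function, action set, and margin of 1 below are illustrative stand-ins, not the paper's actual parametrization or optimizer.

```python
def margin_slacks(Q, expert_pairs, actions):
    """For each expert pair (s, a_E), ask the learned Q-function to
    rank the demonstrated action above every other action by a margin
    of 1; the slack xi_i >= 0 records the violation, so a suboptimal
    expert only pays a penalty instead of making the problem infeasible."""
    slacks = []
    for s, a_expert in expert_pairs:
        best_rival = max(Q(s, a) for a in actions if a != a_expert)
        slacks.append(max(0.0, 1.0 + best_rival - Q(s, a_expert)))
    return slacks

# A Q-function that already ranks "brake" far above the rest has zero slack.
Q = lambda s, a: 2.0 if a == "brake" else 0.0
```

In APID these slack terms enter the API objective as a penalty, alongside the (projected) Bellman error computed from samples.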
3.8 Aggregate Values to Imitate (AggreVaTe)
AggreVaTe, introduced by Ross and Bagnell (2014) [8], is an extension of DAgger that learns to choose actions minimizing the cost-to-go (total cost) of the expert rather than the zero-one classification loss of mimicking its actions. In the first iteration, we passively collect data by observing the expert performing the task. In each trajectory, at a uniformly random time $t$, we explore an action $a$ in state $s$ and observe the cost-to-go of the expert after performing this action.
We use $Q^{\pi}_{t}(s, a)$ to denote the expected future cost-to-go of executing action $a$ in state $s$, followed by executing policy $\pi$ for $t-1$ steps.
Exactly like the DAgger algorithm, AggreVaTe collects data through interaction with the learner as follows:

At each iteration, we use the current learner policy to perform the task, interrupt it at a uniformly random time $t$, explore an action $a$ in the current state $s$, after which control is handed back to the expert up to the time horizon $T$.

This results in new examples of the cost-to-go of the expert, under the distribution of states visited by the current policy.

Then we aggregate the datasets and train on the concatenation.
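The data-collection steps above can be sketched for a single trajectory; the toy chain environment, the unit-cost expert cost-to-go, and the function signatures are all illustrative assumptions.

```python
import random

def aggrevate_sample(policy, expert_cost_to_go, env_step, s0, T, actions, rng):
    """Collect one AggreVaTe example: run the learner up to a uniformly
    random switch time t, explore one action there, and record the
    expert's cost-to-go for finishing the remaining T - t steps."""
    t = rng.randrange(T)
    s = s0
    for _ in range(t):                   # learner controls the first t steps
        s = env_step(s, policy(s))
    a = rng.choice(actions)              # exploratory action at time t
    return (s, t, a, expert_cost_to_go(s, a, T - t))

# Toy chain MDP where every remaining step costs 1.
sample = aggrevate_sample(policy=lambda s: "go",
                          expert_cost_to_go=lambda s, a, h: float(h),
                          env_step=lambda s, a: s + 1, s0=0, T=5,
                          actions=["go", "stop"], rng=random.Random(0))
```

Each such tuple is one cost-sensitive training example; aggregating them over iterations gives the dataset on which the next policy is trained.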
The full algorithm is described in Algorithm 6.
Theorem 7
Let $\epsilon_{\text{regret}}$ denote the online learning average regret and $\epsilon_{\text{class}}$ the minimum expected cost-sensitive classification regret. Then there exists a policy $\hat\pi$ among the returned policies whose cost exceeds the expert's by at most a term linear in $T$, $\epsilon_{\text{class}}$ and $\epsilon_{\text{regret}}$.
AggreVaTe can be interpreted as a regret reduction of imitation learning to noregret online learning.
3.9 Extensions
We leave for future work a literature review of some of the most recent and promising work in Imitation Learning. OpenAI recently proposed a meta-learning framework [9] to achieve imitation learning with very little expert data. Their goal was to teach a physical robot to stack small colored blocks as a child would. The expert data was provided using VR and computer vision, and the robot learned the stacking task from a single demonstration starting in an arbitrary situation. The videos are available on their website. They achieved this by pre-training their meta-framework on a large set of tasks using neural networks; to train the policy, they mainly used the DAgger algorithm. The robot then received one demonstration of an unobserved task and mimicked it.
Ho and Ermon also proposed an imitation learning approach [10] based on Generative Adversarial Networks (GANs) which aims at learning to mimic the expert's demonstrations without an explicit reward. A key property is that their method is model-free and does not query the expert during learning. Their approach explores randomly to determine which actions lead to a policy that best mimics the expert's behaviour.
4 Experiments
To experiment with imitation learning, and especially with the DAgger algorithm, we followed the instructions of a Deep Reinforcement Learning assignment from UC Berkeley [11], where the expert demonstrations have already been trained using the OpenAI Gym toolkit and the classifier is a neural network trained with TensorFlow.
The goal is to teach a virtual half-cheetah to run and leap in a straightforward way. The learner (our virtual cheetah) sequentially queries the expert (the input, i.e. the expert data, is an environment-specific pixel array representing an observation of the environment), then retrains and queries again when needed.
We trained a first policy on the expert data (Figure (a)), then ran this first learned policy (Figure (b)) to get a first dataset. Afterwards, we queried the expert to label the dataset with actions and aggregated the datasets. At the first iteration the learner runs forward but its leap landings are approximate. However, it keeps improving (Figure (c)) and, in the end, we keep the best policy chosen on the validation set.
The more rollouts as training data, the better the results and we notice that the loss is converging after less than 30 iterations as depicted in Figure 5.
5 Conclusion
To conclude, let us recall that the goal of the project was to understand Imitation Learning tasks in depth, review the main algorithms and their regret bounds, and compare them. The performance of the different algorithms depends on the task and the nature of the input data: the expert policy can be restricted or inaccurate, it might be too expensive to produce full trajectories, and so on and so forth. However, DAgger is nowadays the most common algorithm because it generally outperforms the other models. Hence, we applied it using the OpenAI Gym toolkit to measure its performance and analyze the learner's progression during training. We leave for future work the comparison with other Imitation Learning approaches using this toolkit.
In short, this project was both enriching and exciting. We had the chance to review the different approaches to perform imitation learning and explore the new applications of this wide field.
References
[1] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-first International Conference on Machine Learning, ICML '04, New York, NY, USA, 2004. ACM.
[2] Stephane Ross and Drew Bagnell. Efficient reductions for imitation learning. In Yee Whye Teh and Mike Titterington, editors, Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 661–668, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. PMLR.
[3] Hal Daumé III, John Langford, and Daniel Marcu. Search-based structured prediction. CoRR, abs/0907.0786, 2009.
[4] Kshitij Judah, Alan Fern, and Thomas G. Dietterich. Active imitation learning via reduction to I.I.D. active learning. CoRR, abs/1210.4876, 2012.
[5] Stéphane Ross, Geoffrey J. Gordon, and J. Andrew Bagnell. No-regret reductions for imitation learning and structured prediction. CoRR, abs/1011.0686, 2010.
[6] He He, Hal Daumé III, and Jason Eisner. Imitation learning by coaching. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 2, NIPS'12, pages 3149–3157, USA, 2012. Curran Associates Inc.
[7] Beomjoon Kim, Amir-massoud Farahmand, Joelle Pineau, and Doina Precup. Learning from limited demonstrations. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 2859–2867, USA, 2013. Curran Associates Inc.
[8] Stéphane Ross and J. Andrew Bagnell. Reinforcement and imitation learning via interactive no-regret learning. CoRR, abs/1406.5979, 2014.
[9] Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. CoRR, abs/1703.07326, 2017.
[10] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. CoRR, abs/1606.03476, 2016.
[11] UC Berkeley CS 294: Deep RL assignment 1: Imitation learning. http://rll.berkeley.edu/deeprlcoursesp17/docs/hw1.pdf. Accessed: 20180114.
Appendix A Appendix
Please find, in the following section, proofs (full proofs, extracts or intuitions) of the theorems detailed above; they were proposed in their respective papers.
a.1 Proof Theorem 1 (Supervised Learning)
Let $\epsilon_t$ denote the expected 0–1 loss at time $t$ of $\pi$, such that $\epsilon = \frac{1}{T}\sum_{t=1}^{T}\epsilon_t$. Note that $\epsilon_t$ corresponds to the probability that $\pi$ makes a mistake under the state distribution at time $t$. Let $p_t$ represent the probability that $\pi$ has not made a mistake (w.r.t. $\pi^\star$) in the first $t$ steps, and $d_t$ the distribution of states at time $t$ conditioned on the fact that it has made no mistake so far.
Let $d'_t$ represent the distribution of states at time $t$ obtained by following $\pi$ but conditioned on the fact that $\pi$ made at least one mistake in the first $t-1$ visited states. Then,
Now at time , the expected cost of is at most 1 if it has made a mistake so far, or if it hasn’t made a mistake yet. So
Let and represent the probability of mistake of in distribution and . Then
and since , then .
Additionally since , , i.e. . Finally note that
so that
Using these facts we obtain:
a.2 Proof Theorem 2 (Forward Training)
We follow here a proof similar to the previous one. We denote the step cost of executing $\pi$ in initial state $s$ and then following policy $\pi^\star$. Let $u$ be as above and assume it is an upper bound on the loss. At iteration $t$, we are only changing the policy at step $t$, so
Solving this recurrence proves the result:
a.3 Proof Theorem 3 (SEARN)
Let us define the expected step cost of executing the new policy once and the current policy at all other steps. The SEARN algorithm seeks to minimize directly the bound:
by choosing to minimize . Using , and denoting , SEARN guarantees :
For each state, the costtogo under the current policy must be estimated for each action during a costsensitive classification problem.
a.4 Proof Theorem 4 (SMILe)
Since for SMILe, will be close to , we can derive bounds on the policy disadvantages. Let denote the expected step cost of executing at steps and the expected step cost of executing times and policy at all other steps. The bound follows from the fact when acts like at time step :
Moreover, if , then (Lemma 4.1 in Ross and Bagnell [2]):
and if (Lemma 4.2 in Ross and Bagnell [2]):
By denoting and
it follows that with in and in
:
a.5 Proof Theorem 5 (DAgger)
Let $\epsilon_N$ be the true loss of the best policy in hindsight; then, if $N$ is $\tilde{O}(uT)$, there exists a policy $\hat\pi$ among the returned policies such that
For an arbitrary task cost function $C$, if $\ell$ is an upper bound on the loss with respect to $C$, combining this result with Theorem 2 (Forward Training) yields that, if $N$ is $\tilde{O}(uT)$, there exists a policy $\hat\pi$ such that
a.6 Proof Theorem 6 (DAgger by coaching)
The proof is similar to that of Theorem 5, by first deriving a regret bound for coaching.