Project: Train a Quadcopter How to Fly

Design an agent to fly a quadcopter, and then train it using a reinforcement learning algorithm of your choice!

Try to apply the techniques you have learnt, but also feel free to come up with innovative ideas and test them.

Instructions

Take a look at the files in the directory to better understand the structure of the project.

  • task.py: Define your task (environment) in this file.
  • agents/: Folder containing reinforcement learning agents.
    • policy_search.py: A sample agent has been provided here.
    • agent.py: Develop your agent here.
  • physics_sim.py: This file contains the simulator for the quadcopter. DO NOT MODIFY THIS FILE.

For this project, you will define your own task in task.py. Although we have provided an example task to get you started, you are encouraged to change it. Later in this notebook, you will learn more about how to amend this file.

You will also design a reinforcement learning agent in agent.py to complete your chosen task.

You are welcome to create any additional files to help you organize your code. For instance, you may find it useful to define a model.py file that holds any needed neural network architectures.

Controlling the Quadcopter

We provide a sample agent in the code cell below to show you how to use the sim to control the quadcopter. This agent is even simpler than the sample agent that you'll examine (in agents/policy_search.py) later in this notebook!

The agent controls the quadcopter by setting the revolutions per second on each of its four rotors. The provided agent in the Basic_Agent class below always selects a random action for each of the four rotors. These four speeds are returned by the act method as a list of four floating-point numbers.

For this project, the agent that you will implement in agents/agent.py will have a far more intelligent method for selecting actions!

In [1]:
import random

class Basic_Agent():
    def __init__(self, task):
        self.task = task
    
    def act(self):
        # Pick one base thrust at random, then perturb each of the four rotors slightly
        new_thrust = random.gauss(450., 25.)
        return [new_thrust + random.gauss(0., 1.) for _ in range(4)]

Run the code cell below to have the agent select actions to control the quadcopter.

Feel free to change the provided values of runtime, init_pose, init_velocities, and init_angle_velocities below to change the starting conditions of the quadcopter.

The labels list below annotates statistics that are saved while running the simulation. All of this information is saved in the CSV file data.csv and stored in the dictionary results.

In [2]:
%load_ext autoreload
%autoreload 2

import csv
import numpy as np
from task import Task

# Modify the values below to give the quadcopter a different starting position.
runtime = 5.                                     # time limit of the episode
init_pose = np.array([0., 0., 10., 0., 0., 0.])  # initial pose
init_velocities = np.array([0., 0., 0.])         # initial velocities
init_angle_velocities = np.array([0., 0., 0.])   # initial angle velocities
file_output = 'data.csv'                         # file name for saved results

# Setup
task = Task(init_pose, init_velocities, init_angle_velocities, runtime)
agent = Basic_Agent(task)
done = False
labels = ['time', 'x', 'y', 'z', 'phi', 'theta', 'psi', 'x_velocity',
          'y_velocity', 'z_velocity', 'phi_velocity', 'theta_velocity',
          'psi_velocity', 'rotor_speed1', 'rotor_speed2', 'rotor_speed3', 'rotor_speed4']
results = {x : [] for x in labels}

# Run the simulation, and save the results.
with open(file_output, 'w') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(labels)
    while True:
        rotor_speeds = agent.act()
        _, _, done = task.step(rotor_speeds)
        to_write = [task.sim.time] + list(task.sim.pose) + list(task.sim.v) + list(task.sim.angular_v) + list(rotor_speeds)
        for ii in range(len(labels)):
            results[labels[ii]].append(to_write[ii])
        writer.writerow(to_write)
        if done:
            break

Run the code cell below to visualize how the position of the quadcopter evolved during the simulation.

In [3]:
import matplotlib.pyplot as plt
%matplotlib inline

plt.plot(results['time'], results['x'], label='x')
plt.plot(results['time'], results['y'], label='y')
plt.plot(results['time'], results['z'], label='z')
plt.legend()
_ = plt.ylim()

The next code cell visualizes the velocity of the quadcopter.

In [4]:
plt.plot(results['time'], results['x_velocity'], label='x_hat')
plt.plot(results['time'], results['y_velocity'], label='y_hat')
plt.plot(results['time'], results['z_velocity'], label='z_hat')
plt.legend()
_ = plt.ylim()

Next, you can plot the Euler angles (the rotation of the quadcopter about the $x$-, $y$-, and $z$-axes),

In [5]:
plt.plot(results['time'], results['phi'], label='phi')
plt.plot(results['time'], results['theta'], label='theta')
plt.plot(results['time'], results['psi'], label='psi')
plt.legend()
_ = plt.ylim()

before plotting the velocities (in radians per second) corresponding to each of the Euler angles.

In [6]:
plt.plot(results['time'], results['phi_velocity'], label='phi_velocity')
plt.plot(results['time'], results['theta_velocity'], label='theta_velocity')
plt.plot(results['time'], results['psi_velocity'], label='psi_velocity')
plt.legend()
_ = plt.ylim()

Finally, you can use the code cell below to print the agent's choice of actions.

In [7]:
plt.plot(results['time'], results['rotor_speed1'], label='Rotor 1 revolutions / second')
plt.plot(results['time'], results['rotor_speed2'], label='Rotor 2 revolutions / second')
plt.plot(results['time'], results['rotor_speed3'], label='Rotor 3 revolutions / second')
plt.plot(results['time'], results['rotor_speed4'], label='Rotor 4 revolutions / second')
plt.legend()
_ = plt.ylim()

When specifying a task, you will derive the environment state from the simulator. Run the code cell below to print the values of the following variables at the end of the simulation:

  • task.sim.pose (the position of the quadcopter in ($x,y,z$) dimensions and the Euler angles),
  • task.sim.v (the velocity of the quadcopter in ($x,y,z$) dimensions), and
  • task.sim.angular_v (radians/second for each of the three Euler angles).
In [8]:
# the pose, velocity, and angular velocity of the quadcopter at the end of the episode
print(task.sim.pose)
print(task.sim.v)
print(task.sim.angular_v)
[ 8.85753685 -0.53939342 30.28369968  0.1585529   5.84072981  0.        ]
[6.97912975 0.16448363 4.94089834]
[ 0.23911039 -0.13386613  0.        ]

In the sample task in task.py, we use the 6-dimensional pose of the quadcopter to construct the state of the environment at each timestep. However, when amending the task for your purposes, you are welcome to expand the size of the state vector by including velocity information. You can use any combination of the pose, velocity, and angular velocity; feel free to tinker here and construct the state to suit your task.
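
For instance, here is a minimal sketch (an illustration, assuming the sim attributes printed above) of a state that combines all three:

import numpy as np

def build_state(sim):
    # 6 pose values + 3 velocities + 3 angular velocities = 12-dimensional state
    return np.concatenate([sim.pose, sim.v, sim.angular_v])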

The Task

A sample task has been provided for you in task.py. Open this file in a new window now.

The __init__() method is used to initialize several variables that are needed to specify the task.

  • The simulator is initialized as an instance of the PhysicsSim class (from physics_sim.py).
  • Inspired by the methodology in the original DDPG paper, we make use of action repeats. For each timestep of the agent, we step the simulation action_repeats timesteps. If you are not familiar with action repeats, please read the Results section in the DDPG paper.
  • We set the number of elements in the state vector. For the sample task, we only work with the 6-dimensional pose information. To set the size of the state (state_size), we must take action repeats into account.
  • The environment will always have a 4-dimensional action space, with one entry for each rotor (action_size=4). You can set the minimum (action_low) and maximum (action_high) values of each entry here.
  • The sample task in this provided file is for the agent to reach a target position. We specify that target position as a variable.

The reset() method resets the simulator. The agent should call this method every time the episode ends. You can see an example of this in the code cell below.

The step() method is perhaps the most important. It accepts the agent's choice of action rotor_speeds, which is used to prepare the next state to pass on to the agent. Then, the reward is computed from get_reward(). The episode is considered done if the time limit has been exceeded, or the quadcopter has travelled outside of the bounds of the simulation.
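
To make those mechanics concrete, here is a hedged sketch of a step() method in the spirit of the provided sample; it assumes the simulator exposes a next_timestep() method that advances one timestep and returns whether the episode is over, and that action_repeat was set in __init__():

def step(self, rotor_speeds):
    reward = 0
    pose_all = []
    for _ in range(self.action_repeat):
        # Advance the simulation by one timestep; returns True when the episode ends
        done = self.sim.next_timestep(rotor_speeds)
        reward += self.get_reward()
        pose_all.append(self.sim.pose)
    next_state = np.concatenate(pose_all)  # repeated poses stacked into one state
    return next_state, reward, done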

In the next section, you will learn how to test the performance of an agent on this task.

The Agent

The sample agent given in agents/policy_search.py uses a very simplistic linear policy to directly compute the action vector as a dot product of the state vector and a matrix of weights. Then, it randomly perturbs the parameters by adding some Gaussian noise, to produce a different policy. Based on the average reward obtained in each episode (score), it keeps track of the best set of parameters found so far, how the score is changing, and accordingly tweaks a scaling factor to widen or tighten the noise.
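
In code, the core of that idea looks roughly like this (a simplified sketch, not the exact contents of policy_search.py):

import numpy as np

# Linear policy: each action is a dot product of the state and a weight matrix.
def act(state, w):
    return np.dot(state, w)

# Hill climbing: perturb the best weights found so far with Gaussian noise.
# noise_scale is widened when the score stalls and tightened when it improves.
def perturb(best_w, noise_scale):
    return best_w + noise_scale * np.random.normal(size=best_w.shape)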

Run the code cell below to see how the agent performs on the sample task.

In [9]:
import sys
import pandas as pd
from agents.policy_search import PolicySearch_Agent
from task import Task

num_episodes = 1000
target_pos = np.array([0., 0., 10.])
task = Task(target_pos=target_pos)
agent = PolicySearch_Agent(task)
scores = {}

for i_episode in range(1, num_episodes+1):
    state = agent.reset_episode() # start a new episode
    while True:
        action = agent.act(state) 
        next_state, reward, done = task.step(action)
        agent.step(reward, done)
        state = next_state
   
        if done:
            print("\rEpisode = {:4d}, score = {:7.3f} (best = {:7.3f}), noise_scale = {}".format(
                i_episode, agent.score, agent.best_score, agent.noise_scale), end="")  # [debug]
            scores[i_episode] = agent.score
            break

    sys.stdout.flush()
Episode = 1000, score =   2.148 (best =   2.245), noise_scale = 3.255

This agent should perform very poorly on this task. And that's where you come in!

Define the Task, Design the Agent, and Train Your Agent!

Amend task.py to specify a task of your choosing. If you're unsure what kind of task to specify, you may like to teach your quadcopter to take off, hover in place, land softly, or reach a target pose.

After specifying your task, use the sample agent in agents/policy_search.py as a template to define your own agent in agents/agent.py. You can borrow whatever you need from the sample agent, including ideas on how you might modularize your code (using helper methods like act(), learn(), reset_episode(), etc.).

Note that it is highly unlikely that the first agent and task that you specify will learn well. You will likely have to tweak various hyperparameters and the reward function for your task until you arrive at reasonably good behavior.

As you develop your agent, it's important to keep an eye on how it's performing. Use the code above as inspiration to build in a mechanism to log/save the total rewards obtained in each episode to file. If the episode rewards are gradually increasing, this is an indication that your agent is learning.
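
A minimal sketch of such a mechanism (the file name, and an episode_rewards list collected during training, are just illustrative assumptions):

import csv

# Write one row per episode so the learning curve can be plotted later
with open('episode_rewards.csv', 'w') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(['episode', 'total_reward'])
    for i_episode, total_reward in enumerate(episode_rewards, start=1):
        writer.writerow([i_episode, total_reward])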

In [5]:
## TODO: Train your agent here.
import sys
import numpy as np
from agents.agent import DDPG
from task import Task

num_episodes = 10000
target_pos = np.array([0., 0., 100.]) # Takeoff
task = Task(target_pos=target_pos)
agent = DDPG(task)

results = {'episode': [], 'reward': [], 'x': [], 'y': [], 'z': []}  # per-episode statistics

best_score = -np.inf

for i_episode in range(1, num_episodes+1):
    state = agent.reset_episode() # start a new episode
    e_reward = []
    
    while True:
        action = agent.act(state) 
        next_state, reward, done = task.step(action)
        agent.step(action, reward, next_state, done)
        state = next_state
        
        # Save all rewards for the episode
        e_reward.append(reward)
        
        if done:
            total_score = np.sum(e_reward)
            best_score = max(best_score, total_score)
            
            print("\rEpisode = {:4d}, score = {:7.3f} (best = {:7.3f})".format(i_episode, total_score,best_score), end="")  # [debug]
            
            results['episode'].append(i_episode)
            results['reward'].append(total_score)
            results['x'].append(task.sim.pose[0])
            results['y'].append(task.sim.pose[1])
            results['z'].append(task.sim.pose[2])
            
            break

    sys.stdout.flush()
Episode = 10000, score = 114.669 (best = 167.073)

Plot the Rewards

Once you are satisfied with your performance, plot the episode rewards, either from a single run, or averaged over multiple runs.

In [6]:
## TODO: Plot the rewards.
import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(16,9))
plt.subplot(211)
plt.plot(results['episode'], results['reward'], label='total reward')
plt.legend()
_ = plt.ylim()

plt.subplot(212)
plt.plot(results['episode'], results['x'], label='x')
plt.plot(results['episode'], results['y'], label='y')
plt.plot(results['episode'], results['z'], label='z')
plt.legend()
_ = plt.ylim()
In [7]:
# Setup
sim_results = {'episode': [], 'reward': [], 'x': [], 'y': [], 'z': []}


# Run the simulation 100 times
for i in range(100):
    state = agent.reset_episode()
    e_results = {'x':[], 'y':[], 'z':[]}
    e_reward = []
    
    while True:
        action = agent.act(state)
        next_state, reward, done = task.step(action)
        if np.isnan(reward):
            print("State: {}".format(next_state))
            print("Done: {}".format(done))
            print("Action: {}".format(action))
            print("Reward: {}".format(reward))

        e_reward.append(reward)
        e_results['x'].append(task.sim.pose[0])
        e_results['y'].append(task.sim.pose[1])
        e_results['z'].append(task.sim.pose[2])

        if done:
            sim_results['episode'].append(i)
            sim_results['reward'].append(np.sum(e_reward))
            sim_results['x'].append(np.mean(e_results['x']))
            sim_results['y'].append(np.mean(e_results['y']))
            sim_results['z'].append(np.mean(e_results['z']))
            break
            
print("Total reward: {:7.3f}".format(np.sum(sim_results['reward'])))
print("Mean reward: {:7.3f}".format(np.mean(sim_results['reward'])))
print("Best reward: {:7.3f}".format(max(sim_results['reward'])))
Total reward: 13967.660
Mean reward: 139.677
Best reward: 166.682
In [8]:
plt.figure(figsize=(16,9))

plt.subplot(211)
plt.plot(sim_results['episode'], sim_results['reward'], label='total reward')
plt.legend()
_ = plt.ylim()

plt.subplot(212)
plt.plot(sim_results['episode'], sim_results['x'], label='x')
plt.plot(sim_results['episode'], sim_results['y'], label='y')
plt.plot(sim_results['episode'], sim_results['z'], label='z')
plt.legend()
_ = plt.ylim()

Reflections

Question 1: Describe the task that you specified in task.py. How did you design the reward function?

Answer: I chose a takeoff task: fly from position [0, 0, 10] up to [0, 0, 100].

I tried to implement some tips from the link provided in the review, but the results were not as good as expected. I designed a function that added several rewards and penalties, and also involved the velocities, with unsatisfactory results.

So I simplified the function, adding reward only when the agent is moving from the start position toward the target position, with more reward the closer it gets. Perhaps by involving the z-velocity I could have gotten a smoother result than the current chart shows, but as I said before, I couldn't find a good balance.

The final function looks like:

z_dist_reward = 0.
if 0 < self.sim.pose[2] <= self.target_pos[2]:
    z_dist_reward += 1 - .005 * (self.target_pos[2] - self.sim.pose[2])

First, I check whether the agent is between the start position and the target position, to avoid rewarding it once it is above the target. Then I apply a reward based on the distance to the target: the closer the agent gets, the more reward I add.

Finally, I normalize the result using the tanh function, which keeps the per-step reward bounded in (-1, 1):

reward = np.tanh(z_dist_reward)

Question 2: Discuss your agent briefly, using the following questions as a guide:

  • What learning algorithm(s) did you try? What worked best for you?
  • What was your final choice of hyperparameters (such as $\alpha$, $\gamma$, $\epsilon$, etc.)?
  • What neural network architecture did you use (if any)? Specify layers, sizes, activation functions, etc.

Answer: I used the DDPG algorithm because it is the one suggested for this project. I tried several values for the hyperparameters but, in the end, I used the values from the DDPG paper, because I got better results when I combined all of them.

The hyperparameter values are:

  • Actor lr: 0.0001
  • Critic lr: 0.001
  • L2 decay: 0.01
  • Gamma: 0.99
  • Tau: 0.001
  • Minibatch size: 64
  • Replay buffer size: 1000000
  • Mu: 0
  • Theta: 0.15
  • Sigma: 0.2
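
The last three values (mu, theta, sigma) parameterize the Ornstein-Uhlenbeck process used for exploration noise. A minimal sketch of that process (an illustration, not necessarily my exact agent code):

import numpy as np

class OUNoise:
    # Ornstein-Uhlenbeck process: temporally correlated noise that drifts
    # back toward mu, added to the actor's actions during training.
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2):
        self.mu = mu * np.ones(size)
        self.theta = theta
        self.sigma = sigma
        self.reset()

    def reset(self):
        self.state = np.copy(self.mu)

    def sample(self):
        dx = self.theta * (self.mu - self.state) + self.sigma * np.random.randn(len(self.state))
        self.state = self.state + dx
        return self.state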

For the network architecture I did the same: I followed the paper after trying several combinations of hidden layers, units, learning rates, etc. It is slow to train, but I got better results than with the other combinations.

The Actor architecture:

  • (hidden) Dense layer with 400 units and L2 regularizer, BatchNormalization and ReLU activation.
  • (hidden) Dense layer with 300 units and L2 regularizer, BatchNormalization and ReLU activation.
  • (output) Dense layer with RandomUniform initialization and tanh activation.
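
A hedged Keras sketch of that Actor (layer names and the initializer range follow the DDPG paper; this is an illustration, not the verbatim contents of my agent.py):

from keras import layers, models, regularizers, initializers

def build_actor(state_size, action_size):
    states = layers.Input(shape=(state_size,), name='states')
    net = layers.Dense(400, kernel_regularizer=regularizers.l2(0.01))(states)
    net = layers.BatchNormalization()(net)
    net = layers.Activation('relu')(net)
    net = layers.Dense(300, kernel_regularizer=regularizers.l2(0.01))(net)
    net = layers.BatchNormalization()(net)
    net = layers.Activation('relu')(net)
    # Small uniform initialization keeps initial actions near zero; tanh bounds them
    raw_actions = layers.Dense(action_size, activation='tanh',
                               kernel_initializer=initializers.RandomUniform(-3e-3, 3e-3))(net)
    return models.Model(inputs=states, outputs=raw_actions)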

The Critic architecture:

  • The hidden layers of the state pathway are the same as in the Actor.
  • The action pathway has a single hidden layer with 300 units and the same configuration as the others.
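
And a matching sketch of the Critic, merging the state and action pathways before the Q-value head (same caveats as above):

from keras import layers, models, regularizers

def build_critic(state_size, action_size):
    states = layers.Input(shape=(state_size,), name='states')
    actions = layers.Input(shape=(action_size,), name='actions')

    # State pathway: same hidden sizes as the Actor
    net_states = layers.Dense(400, kernel_regularizer=regularizers.l2(0.01))(states)
    net_states = layers.BatchNormalization()(net_states)
    net_states = layers.Activation('relu')(net_states)
    net_states = layers.Dense(300, kernel_regularizer=regularizers.l2(0.01))(net_states)

    # Action pathway: a single 300-unit hidden layer
    net_actions = layers.Dense(300, kernel_regularizer=regularizers.l2(0.01))(actions)

    # Merge pathways, then output a single Q-value estimate
    net = layers.Add()([net_states, net_actions])
    net = layers.Activation('relu')(net)
    Q_values = layers.Dense(1, name='q_values')(net)
    return models.Model(inputs=[states, actions], outputs=Q_values)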

Question 3: Using the episode rewards plot, discuss how the agent learned over time.

  • Was it an easy task to learn or hard?
  • Was there a gradual learning curve, or an aha moment?
  • How good was the final performance of the agent? (e.g. mean rewards over the last 10 episodes)

Answer:

The task is hard for the agent to learn, in the sense that it takes a lot of episodes to see valid results.

Looking at the training charts, we can see that the agent takes a few thousand episodes to start learning where the target position is; after that, it stabilizes until the end of training.

There are a couple of drop-offs during the first half of training, but the agent seems to realize that it is taking bad actions and fixes the problem.

The results of the 100-episode simulation show that the performance of the agent is not bad, but not good either: the reward chart only varies between 120 and 160, and the position chart shows the z coordinate on top, but only around 60 instead of 100.

In general it looks good: the reward function works as intended, forcing the agent to go up while the other axes stay around their original values.

Question 4: Briefly summarize your experience working on this project. You can use the following prompts for ideas.

  • What was the hardest part of the project? (e.g. getting started, plotting, specifying the task, etc.)
  • Did you find anything interesting in how the quadcopter or your agent behaved?

Answer: One of the hardest parts was getting started. All these new RL concepts, such as policies, actions, rewards, etc., were overwhelming; I didn't know where to start: change the reward function, modify the agent, the task... After reading the provided code and following the suggestions, like using DDPG and reading the paper, everything got clearer.

After that, the most difficult part was the reward function. The idea was clear: give some reward when the agent moves up. But how much reward? Only a reward, or a penalty as well? Figuring out the balance and the implementation of the function was difficult.

In [ ]: