The agent must navigate a world, collecting yellow bananas while avoiding blue ones.
The movie above shows an agent I trained.
The environment is similar to Unity's Banana Collector environment
An agent must be trained to navigate (and collect bananas!) in a large, square world.
A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. Thus, the goal of the agent is to collect as many yellow bananas as possible while avoiding blue bananas.

The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. Given this information, the agent has to learn how best to select actions. Four discrete actions are available, corresponding to:
- `0` - move forward
- `1` - move backward
- `2` - turn left
- `3` - turn right
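To make this interface concrete, here is a minimal sketch of a single interaction step, assuming the `unityagents` package used throughout the DRLND course; the environment filename is a placeholder for the path discussed in the setup steps below.

```python
# Minimal interaction sketch, assuming the DRLND `unityagents` package.
# The file_name is a placeholder; use the path to your extracted environment.
import numpy as np
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="Banana.x86_64")
brain_name = env.brain_names[0]                    # the Banana env exposes one brain

env_info = env.reset(train_mode=True)[brain_name]
state = env_info.vector_observations[0]            # 37-dimensional state vector

action = np.random.randint(4)                      # one of the 4 discrete actions
env_info = env.step(action)[brain_name]
reward = env_info.rewards[0]                       # +1 yellow, -1 blue, 0 otherwise
next_state = env_info.vector_observations[0]
done = env_info.local_done[0]                      # True when the episode ends

env.close()
```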
The task is episodic, and in order to solve the environment, the agent must achieve an average score of +13 over 100 consecutive episodes. In other words, scoring 13 points in a single episode is not enough; the agent's scores, averaged over a window of 100 consecutive episodes, must reach at least +13.
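As an illustration of that criterion (a sketch only; the notebook may implement it differently), the rolling average can be tracked with a fixed-length deque:

```python
# Sketch of the solving criterion: mean of the last 100 episode scores >= +13.
from collections import deque
import numpy as np

scores_window = deque(maxlen=100)   # automatically keeps the 100 most recent scores

def environment_solved(episode_score):
    """Record an episode's score; return True once the 100-episode mean is >= +13."""
    scores_window.append(episode_score)
    return len(scores_window) == 100 and np.mean(scores_window) >= 13.0
```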
The code is based on the DRLND solution for a Deep Q-Network agent that solves the OpenAI Gym LunarLander-v2 environment.
1. Download this GitHub repository, which is based on the original `p1_navigation` project found in the Udacity deep-reinforcement-learning (DRLND) GitHub repository.
2. Download the Banana environment from one of the links below. You need only select the environment that matches your operating system:
   - Linux: click here
   - Mac OSX: click here
   - Windows (32-bit): click here
   - Windows (64-bit): click here

   (For Windows users) Check out this link if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.

   (For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the environment.
3. Place the file in the downloaded `DRLND_p1_navigation` GitHub repository, and unzip (or decompress) the file. It can go into a sub-directory, as long as the correct path to the Banana environment is specified in the `Navigation.ipynb` you will be running.
4. The Banana environment is based on OpenAI Gym, so it has a number of dependencies. These are described in the dependencies section of the Udacity DRLND readme.
To reproduce the training, follow the Jupyter notebook `Navigation.ipynb`. It calls the agent and model code in the files `dqn_agent.py` and `model.py`.
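For orientation, a condensed training loop looks roughly like the sketch below. It assumes a DRLND-template-style `Agent` interface (`Agent(state_size, action_size, seed)`, `act`, `step`); check `dqn_agent.py` for the exact signatures and hyperparameters.

```python
# Condensed training-loop sketch; the Agent interface below is assumed to
# follow the DRLND DQN template, so verify against dqn_agent.py.
from unityagents import UnityEnvironment
from dqn_agent import Agent

env = UnityEnvironment(file_name="Banana.x86_64")   # placeholder path
brain_name = env.brain_names[0]
agent = Agent(state_size=37, action_size=4, seed=0)

eps = 1.0                                           # epsilon-greedy exploration rate
for episode in range(1, 2001):
    env_info = env.reset(train_mode=True)[brain_name]
    state = env_info.vector_observations[0]
    score, done = 0, False
    while not done:
        action = agent.act(state, eps)              # epsilon-greedy action selection
        env_info = env.step(action)[brain_name]
        next_state = env_info.vector_observations[0]
        reward = env_info.rewards[0]
        done = env_info.local_done[0]
        agent.step(state, action, reward, next_state, done)  # store transition & learn
        state = next_state
        score += reward
    eps = max(0.01, eps * 0.995)                    # decay exploration each episode
```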
One thing that needs changing is the `bananapath` definition early in the notebook: it must contain the path to wherever you extracted the Banana environment in point 3 above.
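For example (a hypothetical Linux path; the actual folder and filename depend on your operating system and where you unzipped the file):

```python
from unityagents import UnityEnvironment

# Hypothetical path; adjust to match your OS and extraction location.
bananapath = "Banana_Linux/Banana.x86_64"
env = UnityEnvironment(file_name=bananapath)
```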
Please see `report.md` for a discussion of the algorithm and model, and the results of running the experiment.