Deep Reinforcement Learning Nano Degree: Project 2, Continuous Control
Environment
Reacher Environment
The goal of this problem is for the agent (a double-jointed arm) to track a goal location; the environment gives a reward of +0.1 for each timestep the agent's hand is in the goal location.
For the purposes of this project, the environment is considered solved when the agent achieves an average score (accumulated reward over an episode) of +30.0 over 100 episodes.
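As a sketch of this solve criterion, the 100-episode rolling average can be tracked with a fixed-size window. The function name and structure here are illustrative, not taken from the project code:

```python
from collections import deque

def make_solve_checker(window=100, target=30.0):
    """Return a closure that records episode scores and reports whether
    the environment counts as solved: the average score over the last
    `window` episodes reaches `target`."""
    scores = deque(maxlen=window)  # automatically drops scores older than `window`

    def record(score):
        scores.append(score)
        # Only a full window of episodes can satisfy the criterion.
        return len(scores) == window and sum(scores) / window >= target

    return record
```

For example, 99 episodes scoring 31.0 do not yet solve the environment, but the 100th does, since the 100-episode average is then 31.0 ≥ 30.0.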
State Space:
33-dimensional continuous observation space (includes the agent's position, orientation, and their respective velocities)
Action Space:
4-dimensional continuous space (torques applied to the two joints), with each element bounded within [-1, 1]
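For illustration only (this is a random baseline, not the project's trained policy), an action for this space can be sampled and clipped into the valid range like so:

```python
import numpy as np

ACTION_SIZE = 4  # dimensionality of the Reacher action vector

def random_clipped_action(rng=None):
    """Sample a Gaussian action and clip each element into [-1, 1],
    matching the environment's action bounds."""
    rng = np.random.default_rng() if rng is None else rng
    action = rng.standard_normal(ACTION_SIZE)
    return np.clip(action, -1.0, 1.0)
```

A trained agent would replace the Gaussian sample with its policy output, but the final clip to [-1, 1] is still needed before passing the action to the environment.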
Installation and Usage
This notebook can be used by simply opening it in the provided Udacity Project Workspace.
In the repository root directory, run mv <path/to/Reacher_Linux.zip> .
Then run conda activate drlnd
In your local directory, to see the trained agent in action: python main.py "./Reacher_Linux/Reacher.x86_64"
To train the agent yourself, open the ipynb notebook in the Udacity Project Workspace as stated above (so that you can take advantage of the GPU) and hit "Run All Cells".