
Having troubles with your submission? Check the FAQ and recent discussion for possible solutions.

Obstacle Tower Challenge Starter Kit

This repository provides instructions for how to submit to the Obstacle Tower Challenge.

Your goal in the Obstacle Tower is to have your agent traverse the floors of a procedurally generated tower and climb to the highest level possible. Each level is progressively more difficult, and you'll be tested against towers generated with random seeds your agent hasn't seen before, so it will need to generalize from the 100 provided tower seeds.

Local Setup for Training

Before submitting to the challenge, you will want to train an agent to advance through the Obstacle Tower.

The first step is to clone this repository:

git clone git@github.com:Unity-Technologies/obstacle-tower-challenge.git

Next, install the following dependencies:

  • Python dependencies
pip install -r requirements.txt
  • Obstacle Tower (Your OS) Download the build for your OS here and unzip it in the obstacle-tower-challenge folder from the cloned repository.

Finally, you can run the environment using the included agent (in run.py) with random actions:

python run.py

Note: Your Obstacle Tower build must be located at ./ObstacleTower/obstacletower.XYZ from the base of the cloned repository, where XYZ represents the appropriate file extension for your operating system's Obstacle Tower build.
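As a sketch of that layout rule, the expected path can be derived from the OS (the extension mapping below is an assumption based on typical Unity builds, not an official list; only the Linux name, obstacletower.x86_64, is confirmed elsewhere in this repository's logs):

```python
import platform

# Hypothetical helper: derive the expected build path for the current OS.
# The "x86_64"/"app"/"exe" extensions are assumptions, not an official list.
EXTENSIONS = {"Linux": "x86_64", "Darwin": "app", "Windows": "exe"}

build_path = "./ObstacleTower/obstacletower." + EXTENSIONS.get(platform.system(), "x86_64")
print(build_path)
```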

Next steps

Once you've set up your environment, you'll need to train your agent. We've provided a guide for using Google's Dopamine library to train an agent on Google Cloud Platform.

Testing Challenge Evaluation

Before making your challenge submission, you may want to test your agent using a similar environment to the one used for the official challenge evaluation. Your agent and the Obstacle Tower environment will be run in separate Docker containers which can communicate over the local network.

Dependencies

  • Docker See instructions here
  • aicrowd-repo2docker
pip install aicrowd-repo2docker
# or
pip install -r requirements.txt
  • Obstacle Tower (Linux) Download the Linux build for Docker evaluation here and unzip it in the obstacle-tower-challenge folder from the cloned repository.

Build the Docker image

We've provided a build script that uses aicrowd-repo2docker to build an image obstacle_tower_challenge:latest from your repository. Ensure Docker is running on your machine, then run:

./build.sh

Run Docker image

Now that you've built a Docker image with your agent script and the Obstacle Tower environment binary, you can run the agent and the environment, each in its own container:

# Start the container running your agent script.
docker run \
  --rm \
  --env OTC_EVALUATION_ENABLED=true \
  --network=host \
  -it obstacle_tower_challenge:latest ./run.sh

# In another terminal window, execute the environment.
docker run \
  --rm \
  --env OTC_EVALUATION_ENABLED=true \
  --env OTC_DEMO_EVALUATION=true \
  --network=host \
  -it obstacle_tower_challenge:latest ./env.sh

To use a GPU, add the --runtime=nvidia flag after docker run.

The environment script should output the evaluation state as it advances, recording overall state as well as the progress within each episode for seeds 101-105:

{"state":"PENDING","floor_number_avg":0.0,"reward_avg":-1.0,"episodes":[],"last_update":"2019-02-09T00:17:15Z"}
{"state":"IN_PROGRESS","floor_number_avg":0.0,"reward_avg":-1.0,"episodes":[{"state":"IN_PROGRESS","seed":101,"floor_number":0,"reward":0.0,"step_count":0}],"last_update":"2019-02-09T00:17:16Z"}
...
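The state stream is line-delimited JSON, so progress can be monitored with a few lines of Python (a sketch; the field names follow the sample output above):

```python
import json

# One line of the (sample) evaluation state stream emitted by env.sh.
line = ('{"state":"IN_PROGRESS","floor_number_avg":0.0,"reward_avg":-1.0,'
        '"episodes":[{"state":"IN_PROGRESS","seed":101,"floor_number":0,'
        '"reward":0.0,"step_count":0}],"last_update":"2019-02-09T00:17:16Z"}')

state = json.loads(line)
print(state["state"], state["floor_number_avg"])
for episode in state["episodes"]:
    print(episode["seed"], episode["floor_number"], episode["reward"])
```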

Submission

To submit to the challenge, you'll need to set up an appropriate repository structure, create a private git repository at https://gitlab.aicrowd.com with the contents of your submission, and push a git tag corresponding to the version of your repository you'd like to submit.

Repository Structure

aicrowd.json

Each repository should have an aicrowd.json file with the following fields:

{
    "challenge_id" : "unity-obstacle-tower-challenge-2019",
    "grader_id": "unity-obstacle-tower-challenge-2019",
    "authors" : ["aicrowd-user"],
    "description" : "Random Obstacle Tower agent",
    "gpu": false,
    "debug": false
}

This file is used to identify your submission as a part of the Obstacle Tower Challenge. You must use the challenge_id and grader_id specified above in the submission. The gpu field specifies whether or not your model will require a GPU for evaluation. You can set the debug field to true if you want to view logs of your submission for debugging purposes (more information here).
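Before pushing, a quick sanity check can catch a malformed file (a minimal sketch based only on the field list above, not an official schema or tool):

```python
import json

REQUIRED_FIELDS = {"challenge_id", "grader_id", "authors",
                   "description", "gpu", "debug"}
EXPECTED_ID = "unity-obstacle-tower-challenge-2019"

def check_aicrowd_json(path="aicrowd.json"):
    """Raise ValueError if aicrowd.json is missing fields or uses wrong ids."""
    with open(path) as f:
        config = json.load(f)
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        raise ValueError("aicrowd.json is missing fields: %s" % sorted(missing))
    if config["challenge_id"] != EXPECTED_ID or config["grader_id"] != EXPECTED_ID:
        raise ValueError("challenge_id and grader_id must be " + EXPECTED_ID)
    return config
```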

Submission environment configuration

You can specify your software environment by using all the available configuration options of repo2docker.

For example, to use Anaconda configuration files you can include an environment.yml file:

conda env export --no-build > environment.yml

It is important to include the --no-build flag, which allows your Anaconda config to be replicated cross-platform.

Code Entrypoint

The evaluator will use /home/aicrowd/run.sh as the entrypoint. Please remember to have a run.sh at the root which can instantiate any necessary environment variables and execute your code. This repository includes a sample run.sh file.

Submitting

To make a submission, you will have to create a private repository on https://gitlab.aicrowd.com.

You will have to add your SSH keys to your GitLab account by following the instructions here. If you do not have SSH keys, you will first need to generate a key pair.

Then you can create a submission by adding the AIcrowd git remote and pushing a tag to it:

cd obstacle-tower-challenge
# Add AICrowd git remote endpoint
git remote add aicrowd git@gitlab.aicrowd.com:<YOUR_AICROWD_USER_NAME>/obstacle-tower-challenge.git
git push aicrowd master

# Create a tag for your submission and push
git tag -am "submission-v0.1" submission-v0.1
git push aicrowd master
git push aicrowd submission-v0.1

# Note: If the contents of your repository (latest commit hash) do not change,
# pushing a new tag will not trigger a new evaluation.
# Note: Only tags that begin with "submission-" will trigger an evaluation.

You should now be able to see the details of your submission at: gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/obstacle-tower-challenge/issues


obstacle-tower-challenge's Issues

Enhancements of docker environment setup

I ran into some problems when launching docker, here are some of my suggestions:

  1. When launching the environment, the correct command should be the following:
docker run \
  --env OTC_EVALUATION_ENABLED=true \
  --env OTC_DEMO_EVALUATION=true \
  --network=host \
  -it obstacle_tower_challenge:latest ./env.sh 5005 ObstacleTower/obstacletower.x86_64

The last two arguments must be provided, since the default path for the environment is set to
/home/otc/ObstacleTower/obstacletower.x86_64 which does not exist.

  2. Please add the --rm flag when launching docker containers:
    Do docker run --rm (other parameters...) all the time; otherwise lots of exited docker containers are left behind, which consume disk space.

By the way, somehow the container that runs ./run.sh doesn't terminate after the evaluation is done; we need to kill it manually with ctrl+c. Here's the output after ctrl+c:

^CTraceback (most recent call last):
  File "run.py", line 35, in <module>
    env.close()
  File "/srv/conda/lib/python3.6/site-packages/obstacle_tower_env.py", line 231, in close
    time.sleep(10)
KeyboardInterrupt

It looks like it is stuck closing the environment. I'm guessing it is trying to close an environment that has already been closed (by the container running env.sh)?

run.py doesn't accumulate reward

I created a manual agent (action = 18, keep running forward) that sometimes passes a floor or two, using run.py as a base; however, the episode reward always shows 0.0.
Changing line 11

obs, reward, done, info = env.step(action)

to

obs, rew, done, info = env.step(action)
reward += rew

fixes the issue and shows the rewards correctly.
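The fix can be illustrated with a self-contained loop (the StubEnv below is hypothetical and only stands in for ObstacleTowerEnv so the snippet runs on its own; the step signature mirrors the Gym-style obs, reward, done, info tuple used in run.py):

```python
class StubEnv:
    """Hypothetical stand-in for ObstacleTowerEnv; not part of the starter kit."""
    def __init__(self, steps=5):
        self._steps = steps

    def reset(self):
        return None  # the real env returns an observation

    def step(self, action):
        self._steps -= 1
        done = self._steps <= 0
        rew = 1.0 if done else 0.0  # pretend a floor is passed on the last step
        return None, rew, done, {}

def run_episode(env):
    obs = env.reset()
    done = False
    reward = 0.0
    while not done:
        action = 18  # e.g. "keep running forward"
        obs, rew, done, info = env.step(action)
        reward += rew  # accumulate, instead of overwriting the total
    return reward

print(run_episode(StubEnv()))  # -> 1.0
```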

reset() before first run?

In the run.py code, the first reset() is done after the evaluation of run_episode. Would I get a penalty if I perform the reset before? E.g.:

def run_evaluation(env):
    while not env.done_grading():
        env.reset()
        run_episode(env)

In particular, would this consider the first run as a failure with a reward of 0?

I have made some modifications to the ObstacleTowerEnv, part of which involve some initial setup in the reset() method.

Code entrypoint in the evaluator

The documentation currently uses /home/otc/run.sh as the code entrypoint, while the evaluator uses /home/aicrowd/run.sh.

This will cause quite some confusion in cases where participants drop a Dockerfile at the root of the repository to override the image building of aicrowd-repo2docker. In that case, they would need to ensure that the entrypoint the evaluator uses actually exists.

Hence, I believe it's best to consistently use /home/aicrowd/run.sh as the entrypoint in the local build scripts and docs.

pip install -r requirements.txt on MacOS

I installed Python 3.7.2 and updated the python and pip aliases to point to version 3.7.2:

Could not find a version that satisfies the requirement mlagents_envs<0.7,>=0.6.2 (from obstacle-tower-env==1.2->-r requirements.txt (line 1)) (from versions: )
No matching distribution found for mlagents_envs<0.7,>=0.6.2 (from obstacle-tower-env==1.2->-r requirements.txt (line 1))

System
MacOS 10.14.3
Python 3.7.2 (Python 3.7.2 (v3.7.2:9a3ffc0492, Dec 24 2018, 02:44:43) [Clang 6.0 (clang-600.0.57)] on darwin)
pip 19.0.2 (pip 19.0.2 from /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip (python 3.7))

Problem running pip install -r requirements.txt on Windows

I'm on Windows and trying to pip install -r requirements.txt; my python and pip versions are:

PS D:\Project\Github\otc-unity\obstacle-tower-challenge> python
Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

PS D:\Project\Github\otc-unity\obstacle-tower-challenge> pip --version
pip 19.0.2 from c:\users\lgvic\appdata\local\programs\python\python37\lib\site-packages\pip (python 3.7)

When I tried to install dependency, here's the error:

PS D:\Project\Github\otc-unity\obstacle-tower-challenge> pip install -r requirements.txt
Collecting git+git://github.com/Unity-Technologies/obstacle-tower-env@v1.1 (from -r requirements.txt (line 1))
  Cloning git://github.com/Unity-Technologies/obstacle-tower-env (to revision v1.1) to c:\users\lgvic\appdata\local\temp\pip-req-build-_0hdw7g9
Collecting aicrowd-repo2docker (from -r requirements.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/9a/39/796d0d4700a8d1315e90176ffc9c7e9c36758a560779da39820187ed3753/aicrowd-repo2docker-0.7.0.tar.gz
Collecting mlagents_envs<0.7,>=0.6.1 (from obstacle-tower-env==1.1->-r requirements.txt (line 1))
  Could not find a version that satisfies the requirement mlagents_envs<0.7,>=0.6.1 (from obstacle-tower-env==1.1->-r requirements.txt (line 1)) (from versions: )
No matching distribution found for mlagents_envs<0.7,>=0.6.1 (from obstacle-tower-env==1.1->-r requirements.txt (line 1))

I have looked around but couldn't find a solution for this. Thanks in advance for any help 😄

Video generation + Event emitters

@harperj: Do you also have an example for video generation?
I'm also a bit confused right now about how to "inject" events into the env, so that we can track progress, cumulative rewards, etc.

I of course see the rewards here: https://github.com/Unity-Technologies/obstacle-tower-challenge/blob/master/run.py#L21

But this would be computed by the user-generated script, so we can't trust it. The ideal case would be for the env to be injected with certain event emitters which keep emitting the latest evaluation state to a particular place. Can you provide more details about how this particular binary was built, and where we can make changes to it?

Camera parameters?

Any chance we could get the results of a camera calibration for the in-game third-person camera? Specifically, the camera matrix (focal lengths and optical center) would be of interest for some approaches seen in robotics.

My 1st docker container exit(1) for 'run.sh' and 2nd one exit(127) for 'env.sh' on macOS

Hi~
I am following the "Testing Challenge Evaluation - Run Docker image" section of https://github.com/Unity-Technologies/obstacle-tower-challenge/README.md, as below...

  • OS: macOS Mojave 10.14.1
  • Python 3.6 virtual environment
  • Docker 2.0.0.3

After building the Docker image,
First, I start the container running the agent script:

docker run \
  --env OTC_EVALUATION_ENABLED=true \
  --network=host \
  -it obstacle_tower_challenge:latest ./run.sh

In another terminal window, execute the environment.

docker run \
  --env OTC_EVALUATION_ENABLED=true \
  --env OTC_DEMO_EVALUATION=true \
  --network=host \
  -it obstacle_tower_challenge:latest ./env.sh

But my docker containers exit as shown below... Please help me fix this problem. What did I miss?

<result of ‘docker ps -a’>
skcc-user:~ parksurk$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a60df393ce84 obstacle_tower_challenge:latest "./run.sh" 35 minutes ago Exited (1) 35 minutes ago xenodochial_swanson
5559cdf2d6ac obstacle_tower_challenge:latest "./env.sh" 36 minutes ago Exited (127) 32 minutes ago awesom

< 1st container result>
skcc-user:~ parksurk$ docker run --env OTC_EVALUATION_ENABLED=true --network=host -it obstacle_tower_challenge:latest ./run.sh
root
INFO:mlagents_envs:Start training by pressing the Play button in the Unity Editor.
Traceback (most recent call last):
File "run.py", line 26, in
env = ObstacleTowerEnv(args.environment_filename, docker_training=args.docker_training)
File "/srv/conda/lib/python3.6/site-packages/obstacle_tower_env.py", line 39, in init
self._env = UnityEnvironment(environment_filename, worker_id, docker_training=docker_training)
File "/srv/conda/lib/python3.6/site-packages/mlagents_envs/environment.py", line 69, in init
aca_params = self.send_academy_parameters(rl_init_parameters_in)
File "/srv/conda/lib/python3.6/site-packages/mlagents_envs/environment.py", line 491, in send_academy_parameters
return self.communicator.initialize(inputs).rl_initialization_output
File "/srv/conda/lib/python3.6/site-packages/mlagents_envs/rpc_communicator.py", line 80, in initialize
"The Unity environment took too long to respond. Make sure that :\n"
mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
The environment does not need user interaction to launch
The Academy and the External Brain(s) are attached to objects in the Scene
The environment and the Python interface have compatible versions.

<2nd container result>
skcc-user:~ parksurk$ docker run --env OTC_EVALUATION_ENABLED=true --env OTC_DEMO_EVALUATION=true --network=host -it obstacle_tower_challenge:latest ./env.sh
ENV_PORT=
ENV_FILENAME=
'[' -z '' ']'
ENV_PORT=5005
'[' -z '' ']'
ENV_FILENAME=/home/otc/ObstacleTower/obstacletower.x86_64
touch otc_out.json
APP_PID=7
xvfb-run --auto-servernum '--server-args=-screen 0 640x480x24' /home/otc/ObstacleTower/obstacletower.x86_64 --port 5005 2
TAIL_PID=8
wait 7
tail -f otc_out.json

Unity environment took too long to respond (Windows)

Output when running python run.py:

PS D:\Project\Github\otc-unity\obstacle-tower-challenge> python .\run.py
Traceback (most recent call last):
  File ".\run.py", line 26, in <module>
    env = ObstacleTowerEnv(args.environment_filename, docker_training=args.docker_training)
  File "C:\Users\lgvic\AppData\Local\Programs\Python\Python36\lib\site-packages\obstacle_tower_env.py", line 39, in __init__
    self._env = UnityEnvironment(environment_filename, worker_id, docker_training=docker_training)
  File "C:\Users\lgvic\AppData\Local\Programs\Python\Python36\lib\site-packages\mlagents_envs\environment.py", line 67, in __init__
    aca_params = self.send_academy_parameters(rl_init_parameters_in)
  File "C:\Users\lgvic\AppData\Local\Programs\Python\Python36\lib\site-packages\mlagents_envs\environment.py", line 493, in send_academy_parameters
    return self.communicator.initialize(inputs).rl_initialization_output
  File "C:\Users\lgvic\AppData\Local\Programs\Python\Python36\lib\site-packages\mlagents_envs\rpc_communicator.py", line 79, in initialize
    "The Unity environment took too long to respond. Make sure that :\n"
mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
         The environment does not need user interaction to launch
         The Academy and the External Brain(s) are attached to objects in the Scene
         The environment and the Python interface have compatible versions.

The Unity application actually opens, but the level doesn't appear; it looks like this:

[screenshot: Unity application window]

I have a feeling that the Unity app was trying to connect to the Python app over some port, but for some reason couldn't and crashed itself internally?

Error : The Unity environment took too long to respond

I have been consistently getting this error when trying to get the OTC env to run :

I have checked in the evaluation binary to the repository, and pushed it (with some other minor fixes) to this branch : https://github.com/Unity-Technologies/obstacle-tower-challenge/tree/aicrowd_debug

mohanty@aicrowd-node-083:~$ docker exec -it obstacle_tower ./run.sh
root
Found path: /home/crowdai/./ObstacleTower/obstacletower.x86_64
Mono path[0] = '/home/crowdai/./ObstacleTower/obstacletower_Data/Managed'
Mono config path = '/home/crowdai/./ObstacleTower/obstacletower_Data/MonoBleedingEdge/etc'
Preloaded 'ScreenSelector.so'
Preloaded 'libgrpc_csharp_ext.x64.so'
PlayerPrefs - Creating folder: /home/crowdai/.config/unity3d/Unity Technologies
PlayerPrefs - Creating folder: /home/crowdai/.config/unity3d/Unity Technologies/ObstacleTower
Logging to /home/crowdai/.config/unity3d/Unity Technologies/ObstacleTower/Player.log
Traceback (most recent call last):
  File "run.py", line 19, in <module>
    env = ObstacleTowerEnv(environment_filename)
  File "/srv/venv/lib/python3.6/site-packages/obstacle_tower_env.py", line 38, in __init__
    self._env = UnityEnvironment(environment_filename, worker_id, docker_training=docker_training)
  File "/srv/venv/lib/python3.6/site-packages/mlagents/envs/environment.py", line 67, in __init__
    aca_params = self.send_academy_parameters(rl_init_parameters_in)
  File "/srv/venv/lib/python3.6/site-packages/mlagents/envs/environment.py", line 493, in send_academy_parameters
    return self.communicator.initialize(inputs).rl_initialization_output
  File "/srv/venv/lib/python3.6/site-packages/mlagents/envs/rpc_communicator.py", line 77, in initialize
    "The Unity environment took too long to respond. Make sure that :\n"
mlagents.envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
	 The environment does not need user interaction to launch
	 The Academy and the External Brain(s) are attached to objects in the Scene
	 The environment and the Python interface have compatible versions.

The Unity environment took too long to respond (Ubuntu 18.04)

Hi,

I have followed the instructions at https://github.com/Unity-Technologies/obstacle-tower-challenge.
The output of "python run.py" is the following:

Found path: /home/theophile/Software/github/obstacle-tower-challenge/./ObstacleTower/obstacletower.x86_64
Mono path[0] = '/home/theophile/Software/github/obstacle-tower-challenge/./ObstacleTower/obstacletower_Data/Managed'
Mono config path = '/home/theophile/Software/github/obstacle-tower-challenge/./ObstacleTower/obstacletower_Data/MonoBleedingEdge/etc'
Preloaded 'ScreenSelector.so'
Preloaded 'libgrpc_csharp_ext.x64.so'
Logging to /home/theophile/.config/unity3d/Unity Technologies/ObstacleTower/Player.log
Traceback (most recent call last):
File "run.py", line 26, in
env = ObstacleTowerEnv(args.environment_filename, docker_training=args.docker_training)
File "/home/theophile/anaconda3/envs/obstacle-tower-challenge/lib/python3.6/site-packages/obstacle_tower_env.py", line 39, in init
self._env = UnityEnvironment(environment_filename, worker_id, docker_training=docker_training)
File "/home/theophile/anaconda3/envs/obstacle-tower-challenge/lib/python3.6/site-packages/mlagents_envs/environment.py", line 69, in init
aca_params = self.send_academy_parameters(rl_init_parameters_in)
File "/home/theophile/anaconda3/envs/obstacle-tower-challenge/lib/python3.6/site-packages/mlagents_envs/environment.py", line 491, in send_academy_parameters
return self.communicator.initialize(inputs).rl_initialization_output
File "/home/theophile/anaconda3/envs/obstacle-tower-challenge/lib/python3.6/site-packages/mlagents_envs/rpc_communicator.py", line 80, in initialize
"The Unity environment took too long to respond. Make sure that :\n"
mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
The environment does not need user interaction to launch
The Academy and the External Brain(s) are attached to objects in the Scene

I'm running the code in a conda environment with the following packages:

Name Version Build Channel
absl-py 0.7.0 pypi_0 pypi
aicrowd-repo2docker 0.7.0 pypi_0 pypi
astor 0.7.1 pypi_0 pypi
atomicwrites 1.3.0 pypi_0 pypi
attrs 18.2.0 pypi_0 pypi
backcall 0.1.0 pypi_0 pypi
bleach 1.5.0 pypi_0 pypi
ca-certificates 2019.1.23 0
certifi 2018.11.29 py36_0
chardet 3.0.4 pypi_0 pypi
cycler 0.10.0 pypi_0 pypi
dbus 1.13.6 h746ee38_0
decorator 4.3.2 py36_0
defusedxml 0.5.0 pypi_0 pypi
docker 3.7.0 pypi_0 pypi
docker-pycreds 0.4.0 pypi_0 pypi
docopt 0.6.2 pypi_0 pypi
entrypoints 0.3 pypi_0 pypi
escapism 1.0.0 pypi_0 pypi
expat 2.2.6 he6710b0_0
fontconfig 2.13.0 h9420a91_0
freetype 2.9.1 h8a8886c_1
future 0.17.1 pypi_0 pypi
gast 0.2.2 pypi_0 pypi
glib 2.56.2 hd408876_0
gmp 6.1.2 h6c8ec71_1
grpcio 1.11.1 pypi_0 pypi
gst-plugins-base 1.14.0 hbbd80ab_1
gstreamer 1.14.0 hb453b48_1
gym 0.11.0 pypi_0 pypi
html5lib 0.9999999 pypi_0 pypi
icu 58.2 h9c2bf20_1
idna 2.8 pypi_0 pypi
ipykernel 5.1.0 py36h39e3cac_0
ipython 7.2.0 py36h39e3cac_0
ipython-genutils 0.2.0 pypi_0 pypi
ipython_genutils 0.2.0 py36_0
ipywidgets 7.4.2 py36_0
jedi 0.13.2 pypi_0 pypi
jinja2 2.10 pypi_0 pypi
jpeg 9b h024ee3a_2
jsonschema 2.6.0 pypi_0 pypi
jupyter 1.0.0 py36_7
jupyter-console 6.0.0 pypi_0 pypi
jupyter_client 5.2.4 py36_0
jupyter_console 6.0.0 py36_0
jupyter_core 4.4.0 py36_0
kiwisolver 1.0.1 pypi_0 pypi
libedit 3.1.20181209 hc058e9b_0
libffi 3.2.1 hd88cf55_4
libgcc-ng 8.2.0 hdf63c60_1
libpng 1.6.36 hbc83047_0
libsodium 1.0.16 h1bed415_0
libstdcxx-ng 8.2.0 hdf63c60_1
libuuid 1.0.3 h1bed415_2
libxcb 1.13 h1bed415_1
libxml2 2.9.9 he19cac6_0
markdown 3.0.1 pypi_0 pypi
markupsafe 1.1.0 pypi_0 pypi
matplotlib 3.0.2 pypi_0 pypi
mistune 0.8.4 py36h7b6447c_0
mlagents-envs 0.6.2 pypi_0 pypi
more-itertools 6.0.0 pypi_0 pypi
nb_conda 2.2.1 py36_0
nb_conda_kernels 2.2.0 py36_1
nbconvert 5.4.1 pypi_0 pypi
nbformat 4.4.0 py36_0
ncurses 6.1 he6710b0_1
notebook 5.7.4 py36_0
numpy 1.14.5 pypi_0 pypi
obstacle-tower-env 1.1 pypi_0 pypi
openssl 1.1.1a h7b6447c_0
pandoc 2.2.3.2 0
pandocfilters 1.4.2 py36_1
parso 0.3.4 pypi_0 pypi
pcre 8.42 h439df22_0
pexpect 4.6.0 py36_0
pickleshare 0.7.5 pypi_0 pypi
pillow 5.4.1 pypi_0 pypi
pip 19.0.1 py36_0
pluggy 0.8.1 pypi_0 pypi
prometheus-client 0.5.0 pypi_0 pypi
prometheus_client 0.5.0 py36_0
prompt_toolkit 2.0.8 py_0
protobuf 3.6.1 pypi_0 pypi
ptyprocess 0.6.0 pypi_0 pypi
py 1.7.0 pypi_0 pypi
pyglet 1.3.2 pypi_0 pypi
pygments 2.3.1 pypi_0 pypi
pyparsing 2.3.1 pypi_0 pypi
pyqt 5.9.2 py36h05f1152_2
pytest 3.10.1 pypi_0 pypi
python 3.6.8 h0371630_0
python-dateutil 2.8.0 pypi_0 pypi
python-json-logger 0.1.10 pypi_0 pypi
pyyaml 3.13 pypi_0 pypi
pyzmq 17.1.2 pypi_0 pypi
qt 5.9.7 h5867ecd_1
qtconsole 4.4.3 pypi_0 pypi
readline 7.0 h7b6447c_5
requests 2.21.0 pypi_0 pypi
ruamel-yaml 0.15.88 pypi_0 pypi
scipy 1.2.1 pypi_0 pypi
send2trash 1.5.0 pypi_0 pypi
setuptools 40.7.3 py36_0
sip 4.19.8 py36hf484d3e_0
six 1.12.0 pypi_0 pypi
sqlite 3.26.0 h7b6447c_0
tensorboard 1.7.0 pypi_0 pypi
tensorflow 1.7.1 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
terminado 0.8.1 pypi_0 pypi
testpath 0.4.2 py36_0
tk 8.6.8 hbc83047_0
tornado 5.1.1 pypi_0 pypi
traitlets 4.3.2 py36_0
urllib3 1.24.1 pypi_0 pypi
wcwidth 0.1.7 pypi_0 pypi
webencodings 0.5.1 py36_1
websocket-client 0.54.0 pypi_0 pypi
werkzeug 0.14.1 pypi_0 pypi
wheel 0.32.3 py36_0
widgetsnbextension 3.4.2 pypi_0 pypi
xz 5.2.4 h14c3975_4
zeromq 4.3.1 he6710b0_3
zlib 1.2.11 h7b6447c_3

I don't know how to proceed from the error message, could you help me?
Thank you very much, very excited about this challenge!

The problem with build.sh script

Hi guys, does anybody know how to solve this problem?
I'm trying to build the Docker image but get the following:

Using PythonBuildPack builder
Step 1/33 : FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04
---> 17c50af840b7
Step 2/33 : ENV DEBIAN_FRONTEND=noninteractive
---> Using cache
---> 4cc3d68f1cfb
Step 3/33 : RUN apt-get update -qq && apt-get install -qq --yes --no-install-recommends locales wget bzip2 && apt-get purge -qq && apt-get clean -qq && rm -rf /var/lib/apt/lists/*
---> Using cache
---> e02598cfe940
Step 4/33 : RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen
---> Using cache
---> 78779c78f8fe
Step 5/33 : ENV LC_ALL en_US.UTF-8
---> Using cache
---> 3deae9a42bbd
Step 6/33 : ENV LANG en_US.UTF-8
---> Using cache
---> c4e0f5830699
Step 7/33 : ENV LANGUAGE en_US.UTF-8
---> Using cache
---> de53df7bf289
Step 8/33 : ENV SHELL /bin/bash
---> Using cache
---> 92f4d829dfee
Step 9/33 : ARG NB_USER
---> Using cache
---> 29372e79581b
Step 10/33 : ARG NB_UID
---> Using cache
---> fb1d01ee055c
Step 11/33 : ENV USER ${NB_USER}
---> Using cache
---> a21170deec3c
Step 12/33 : ENV HOME /home/${NB_USER}
---> Using cache
---> 7f25f615c066
Step 13/33 : RUN adduser --disabled-password --gecos "Default user" --uid ${NB_UID} ${NB_USER}
---> Running in 6d6a7f099ee6
adduser: The UID 0 is already in use.
Removing intermediate container 6d6a7f099ee6
The command '/bin/sh -c adduser --disabled-password --gecos "Default user" --uid ${NB_UID} ${NB_USER}' returned a non-zero code: 1

I used sudo rights to run the build; otherwise I cannot run it as a non-root user because of the following error:

docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))

Multiple issues with docker, python versions and build script

I am trying to test my environment with docker on a GCloud VM.

I noticed multiple issues while trying to build and run the docker:

1. The only version working with the README tutorial is python 3.6

This is kind of annoying:

  • 3.5 does not work due to aicrowd-repo2docker using f-strings.
  • 3.7 does not work because ml-agents cannot be installed with 3.7

Python 3.7 can be used, but aicrowd-repo2docker must be installed without using requirements.txt.

A note should be added to the README. I have Python 3.5 by default, and I compiled Python 3.7 from scratch thinking it would work, just to notice it does not work with ml-agents... Had I known, I would have built Python 3.6.

2. Small issue with build.sh

This:

./build.sh

...does not work if the shell is not bash-compliant (e.g. fish). A shebang should be added, or the line should be changed to bash build.sh.

3. Cannot run the docker containers if there are agents running

The docker containers cannot be launched if there are agents already running on the same host, due to --network=host. And the worker ID cannot be changed without modifying the source code of run.py.

4. Cannot run the docker containers

Even after modifying the worker ID, or trying to put the two containers on a docker network (--network=ot-network), the agent fails to launch with a Unity time-out exception:

mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
         The environment does not need user interaction to launch
         The Academy and the External Brain(s) are attached to objects in the Scene
         The environment and the Python interface have compatible versions.

I am using a GCloud VM created following the tutorial. I tried running sudo /usr/bin/X :0 and adding --env DISPLAY=:0 to the docker command line but it did not work.

Observation is only of shape (84, 84, 3)?

Hi,

I'm confused by the shape of the observation.

In the docs of the environment's repository, env.observation_space returns "Tuple(Box(168, 168, 3), Discrete(5), Box(1,))".

If I call this in run.py of the challenge's repository it outputs "Box(84,84,3)".

Is this the intended behavior?
(run on windows)

opencv requirement

I need to use opencv-python in my submission; however, this requires some dependencies to be installed via apt-get:

apt-get install libsm6
apt-get install libxrender1
apt-get install libxext-dev

Therefore, I need to be able to add them to the Dockerfile when building the image. How can I do this?
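One option, assuming aicrowd-repo2docker keeps upstream repo2docker's behaviour, is an apt.txt file at the repository root; repo2docker installs each listed Debian package with apt-get during the image build, so no custom Dockerfile is needed. For the packages above it would contain:

```
libsm6
libxrender1
libxext-dev
```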

What is the correct directory structure for submitting agent?

Hi, a very last-minute issue I'm having with my submission: I have no clue how to structure my directory so that it actually uses my trained agent. I see run.sh, which calls run.py, but how does this relate to the GCP example? Specifically, if I trained my agent with this command:

python -um dopamine.discrete_domains.train --base_dir=/tmp/dopamine --gin_files='dopamine/agents/rainbow/configs/rainbow_otc.gin'

Then what edits to run.sh/run.py do I need to make, and/or where do I copy the dopamine files, so that it works correctly in the docker setup? Thanks for any help.

expected fps for environment

What is the expected FPS for this Unity environment, especially when running many instances in parallel? If I run tens of them on the same GPU, will context switching impact performance? Thanks
