
ntnu-arl / aerial_gym_simulator

277 stars · 13 watchers · 43 forks · 218.45 MB

Aerial Gym Simulator - Isaac Gym Simulator for Aerial Robots

Home Page: https://ntnu-arl.github.io/aerial_gym_simulator/

License: BSD 3-Clause "New" or "Revised" License

Languages: Python 99.96%, Shell 0.04%

aerial_gym_simulator's Introduction


Welcome to the Aerial Gym Simulator repository. Please refer to our documentation for detailed information on how to get started with the simulator, and how to use it for your research.

The Aerial Gym Simulator is a high-fidelity physics-based simulator for training Micro Aerial Vehicle (MAV) platforms such as multirotors to learn to fly and navigate cluttered environments using learning-based methods. The environments are built upon the underlying NVIDIA Isaac Gym simulator. We offer aerial robot models for standard planar quadrotor platforms, as well as fully-actuated platforms and multirotors with arbitrary configurations. These configurations are supported with low-level and high-level geometric controllers that reside on the GPU and provide parallelization for the simultaneous control of thousands of multirotors.
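To make the scale of parallelization concrete, below is a minimal sketch of building and stepping a batch of simulated robots. The names used here (`SimBuilder`, `build_env`, and its arguments) are assumptions modeled loosely on the repository's example scripts rather than a verified API; please consult the documentation for the actual interface.

```python
# Hypothetical sketch: SimBuilder, build_env, and the argument names are assumptions
# modeled on the repository's examples, not a verified API.
import torch
from aerial_gym.sim.sim_builder import SimBuilder  # assumed import path

env_manager = SimBuilder().build_env(
    sim_name="base_sim",          # physics configuration (assumed name)
    env_name="empty_env",         # environment assets and obstacles (assumed name)
    robot_name="base_quadrotor",  # robot model with its onboard controller (assumed name)
    num_envs=1024,                # thousands of robots stepped in parallel on the GPU
)

env_manager.reset()
actions = torch.zeros((1024, 4), device="cuda")  # e.g. setpoints for the geometric controller
for _ in range(1000):
    env_manager.step(actions)  # a single call advances all environments on the GPU
```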

This is the second release of the simulator and includes a variety of new features and improvements. Task definitions and environment configurations allow fine-grained customization of all environment entities without having to deal with large monolithic environment files. A custom rendering framework allows obtaining depth and segmentation images at high speed and can be used to simulate custom sensors such as LiDARs with varying properties. The simulator is open-source and is released under the BSD 3-Clause License.

Aerial Gym Simulator allows you to train state-based control policies in under a minute:

[Animation: training a state-based control policy in the Aerial Gym Simulator]

And train vision-based navigation policies in under an hour:

[Animation: a vision-based navigation policy trained with RL]

Equipped with GPU-accelerated, customizable ray-casting-based LiDAR and camera sensors with depth and segmentation capabilities:

[Animations: depth frames from camera and LiDAR; segmentation frames from camera and LiDAR]
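Building on the hypothetical `env_manager` from the sketch above, reading these sensor outputs might look as follows. The accessor name and tensor keys are assumptions for illustration only:

```python
# Hypothetical sketch: the accessor and dictionary keys are assumptions, not a verified API.
obs = env_manager.get_obs()        # assumed accessor returning a dict of batched tensors
depth = obs["depth_range_pixels"]  # assumed key: (num_envs, num_sensors, H, W) depth images
seg = obs["segmentation_pixels"]   # assumed key: per-pixel segmentation IDs
```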

Features

Important

Support for Isaac Lab and Isaac Sim is currently under development. We anticipate releasing this feature in the near future.

Please refer to the paper detailing the previous version of our simulator for insights into the motivation and design principles behind the Aerial Gym Simulator: https://arxiv.org/abs/2305.16510 (the link will be updated to reflect the newer version soon!).

Why Aerial Gym Simulator?

The Aerial Gym Simulator is designed to simulate thousands of MAVs simultaneously and comes equipped with both low- and high-level controllers that are used on real-world systems. In addition, the new customized ray-casting framework allows extremely fast rendering of depth and segmentation images from the environment.

The optimized code in this newer version allows training motor-command policies for robot control in under a minute and vision-based navigation policies in under an hour. Extensive examples are provided so that users can quickly get started training policies for their own custom robots.

Citing

When referencing the Aerial Gym Simulator in your research, please cite the following paper:

@misc{kulkarni2023aerialgymisaac,
      title={Aerial Gym -- Isaac Gym Simulator for Aerial Robots}, 
      author={Mihir Kulkarni and Theodor J. L. Forgaard and Kostas Alexis},
      year={2023},
      eprint={2305.16510},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2305.16510}, 
}

If you use the reinforcement learning policy provided alongside this simulator for navigation tasks, please cite the following paper:

@misc{kulkarni2024reinforcementlearningcollisionfreeflight,
      title={Reinforcement Learning for Collision-free Flight Exploiting Deep Collision Encoding}, 
      author={Mihir Kulkarni and Kostas Alexis},
      year={2024},
      eprint={2402.03947},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2402.03947}, 
}

Quick Links

For your convenience, here are some quick links to the most important sections of the documentation:

Contact

Mihir Kulkarni     Email   GitHub   LinkedIn   X (formerly Twitter)

Welf Rehberg      Email   GitHub   LinkedIn

Theodor J. L. Forgaard     Email   GitHub   LinkedIn

Kostas Alexis      Email   GitHub   LinkedIn   X (formerly Twitter)

This work is done at the Autonomous Robots Lab, Norwegian University of Science and Technology (NTNU). For more information, visit our Website.

Acknowledgements

This material was supported by RESNAV (AFOSR Award No. FA8655-21-1-7033) and SPEAR (Horizon Europe Grant Agreement No. 101119774).

This repository utilizes some of the code and helper scripts from https://github.com/leggedrobotics/legged_gym and IsaacGymEnvs.

FAQs and Troubleshooting

Please refer to our website or to the Issues section in the GitHub repository for more information.

aerial_gym_simulator's People

Contributors: mihirk284

aerial_gym_simulator's Issues

RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)

Hello,

I am a researcher from IMEC involved in the SPEAR project. I'm eager to explore your simulation tool before the kickoff meeting so that I can formulate questions and concerns for WP2. However, I encountered an error while trying to install the aerial_gym_simulator, as shown in the screenshot below:

[Screenshot: nvrtc error about an invalid value for --gpu-architecture]

I searched the NVIDIA forums and found that others have experienced a similar issue related to the RTX 4090 GPU, as documented here. I've tried both conda and Docker, but the problem persists.

Could you please provide your suggestions on how to proceed with resolving this issue?
Maybe one solution is changing the driver version to 525?

Thanks

Issue with Importing verifiable_learning Module

Hi,

I am currently trying to run the position_control_example.py script from the aerial_gym package (latest version), but I am encountering an import error related to the verifiable_learning module. Below is the error message I receive:

    Traceback (most recent call last):
      File "position_control_example.py", line 1, in <module>
        from aerial_gym.utils.logging import CustomLogger
      File "/home/mychoi/Research/Quadrotor/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/__init__.py", line 6, in <module>
        from .task import *
      File "/home/mychoi/Research/Quadrotor/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/task/__init__.py", line 5, in <module>
        from aerial_gym.task.navigation_task.navigation_task import NavigationTask
      File "/home/mychoi/Research/Quadrotor/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/task/navigation_task/navigation_task.py", line 10, in <module>
        from aerial_gym.utils.vae.vae_image_encoder import VAEImageEncoder
      File "/home/mychoi/Research/Quadrotor/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/utils/vae/vae_image_encoder.py", line 3, in <module>
        from verifiable_learning.DepthToLatent.networks.VAE.vae import VAE
    ModuleNotFoundError: No module named 'verifiable_learning'

Could you please provide guidance on how to resolve this issue? Is there a specific repository or installation method I should follow to obtain the verifiable_learning module?

Thank you for your assistance.
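Until the missing package is available, one common stopgap is to guard the optional import so that tasks which do not use the VAE encoder can still be imported. This is a hypothetical workaround sketch, not the maintainers' fix:

```python
# Hypothetical workaround for aerial_gym/utils/vae/vae_image_encoder.py: guard the
# optional dependency so that unrelated tasks still import. Not the maintainers' fix.
try:
    from verifiable_learning.DepthToLatent.networks.VAE.vae import VAE
except ModuleNotFoundError:
    VAE = None  # tasks that actually need the encoder still require the real package
```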

Reward Size Mismatch running RL games play script for a trained checkpoint

I am trying to run the rl_games play script to test the policy I have trained, but I hit a series of errors in rl_games, which I resolved one by one until reaching the error below.

File "/home/user/roshdim1/anaconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/torch_runner.py", line 123, in run_play
player.run()
File "/home/user/roshdim1/anaconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/common/player.py", line 332, in run
cr += r
RuntimeError: The size of tensor a (8) must match the size of tensor b (13) at non-singleton dimension 2

There appears to be a mismatch between the sizes of cr and the reward returned from the env. Is there a simpler way to run a simulation from a trained checkpoint?

This is the command I ran:

    python3 runner.py --play --task="quad_with_obstacles" --num_envs=8 --checkpoint=runs/quad_ppo_22-19-01-34/nn/quad_ppo.pth

Error Running: bash run_trained_navigation_policy.sh

Hello!

I am currently working through the reinforcement learning documentation and encountered an issue while trying to check the performance of the trained model using the script bash run_trained_navigation_policy.sh.

I receive the following error message:
    dce_nn_navigation.py: error: unrecognized arguments: --train_dir=/path_to_ws/aerial_gym_simulator/aerial_gym/examples/dce_rl_navigation/selected_network --experiment=selected_network --env=test --obs_key=observations --load_checkpoint_kind=best

Upon inspecting the get_args() method, I noticed that the arguments being passed in the shell script are not recognized by the script.

Could you please let me know if there is a better example I should start with, or if I am running the script incorrectly? Any guidance would be greatly appreciated.

Thank you!

[Request] data collection example

Hello,
I was wondering if it's possible to have an example of how to collect depth images and segmentation images in the environment, without rolling out the drone. I would just like to randomly position the cameras in the environment and collect data.
Thank you.
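A loop of the kind requested might look like the sketch below: re-randomize poses, render, and save the sensor tensors without stepping the robot dynamics. Every environment-manager name and tensor key here is an assumption for illustration, not the simulator's verified API:

```python
# Hypothetical sketch of pose-randomized data collection; all names below are
# assumptions for illustration, not a verified API.
import torch

for i in range(100):
    env_manager.reset()   # assumed to re-randomize robot/camera poses
    env_manager.render()  # assumed to trigger the ray-casting render
    depth = env_manager.global_tensor_dict["depth_range_pixels"]  # assumed key
    seg = env_manager.global_tensor_dict["segmentation_pixels"]   # assumed key
    torch.save({"depth": depth.cpu(), "seg": seg.cpu()}, f"sample_{i:05d}.pt")
```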

multiple drone missions

Can this code base be used to train multi-drone missions, such as multi-drone navigation?

Cameras Extraction

[Screenshot of the saved camera images]

How can I extract both the RGB and depth images? I can only find one type of image saved in full_camera_array (probably the depth image, as it is 1D).

Test with action_mean in cleanrl

I want to use action_mean (instead of random sampling) to test an already-trained neural network model in CleanRL. The results are very poor, but testing with the original random-sampling method yields very good results. Why might this be happening?

The testing code is modified as shown in the uploaded image, using action_mean for testing.

[Screenshot: modified testing code]
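For context, deterministic evaluation usually means taking the mean of the policy's action distribution instead of sampling from it, as in the self-contained sketch below (stand-in network and observation, not CleanRL's exact code). If the mean action performs much worse than sampling, a common culprit is an evaluation-time mismatch, for example observation-normalization statistics or train/eval modes differing from how the policy was trained:

```python
# Generic sketch of deterministic vs. stochastic action selection; the network and
# observation are stand-ins, not CleanRL's exact code.
import torch
import torch.nn as nn
from torch.distributions import Normal

actor = nn.Linear(8, 4)                 # stand-in for the trained policy network
log_std = nn.Parameter(torch.zeros(4))  # state-independent log-std, CleanRL-style
obs = torch.randn(1, 8)                 # stand-in observation

mean = actor(obs)
dist = Normal(mean, log_std.exp())
action_stochastic = dist.sample()       # training-style rollout (random sampling)
action_deterministic = mean             # deterministic evaluation uses the mean
```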

Load policy and run inference

Is there a script to load a trained network (rl_games) and run inference, similar to the play script but using the inferred actions instead of the constant actions in the play script?
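This thread does not confirm a dedicated script, but a typical rl_games inference loop looks roughly like the template below. The config file name and checkpoint path are placeholders, and the player interface varies across rl_games versions, so treat this as a sketch rather than a verified recipe:

```python
# Template for rl_games checkpoint inference; file names are placeholders and the
# player interface differs between rl_games versions.
import yaml
from rl_games.torch_runner import Runner

with open("ppo_aerial_quad.yaml") as f:  # placeholder config file name
    config = yaml.safe_load(f)

runner = Runner()
runner.load(config)
player = runner.create_player()
player.restore("runs/quad_ppo/nn/quad_ppo.pth")  # placeholder checkpoint path

obs = player.env_reset(player.env)
for _ in range(1000):
    action = player.get_action(obs, True)  # second argument requests the deterministic action
    obs, reward, done, info = player.env_step(player.env, action)
```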

Isaac Gym - No Longer Supported

Hello,

Thank you so much for sharing this amazing work. I just have one question. The website mentions here that Isaac Gym is no longer supported. It says:

please consider using Isaac Lab, an open-source lightweight and performance optimized application for robot learning built on the Isaac Sim platform.

Do you have any plans to shift your simulation to the supported version? If not, could you please shed some light on it? Thank you.
