
mpnet's Introduction

Motion Planning Networks

Implementation of MPNet: Motion Planning Networks. [arXiv1] [arXiv2]

The code can easily be adapted for Informed Neural Sampling.

Contains

  • Data Generation
    • Any existing classical motion planner can be used to generate datasets. However, we provide the following implementations in C++:
  • MPNet algorithm
  • Naive Python visualization scripts

Data Description

  • Simple 2D has 7 blocks each of size 5x5 that are placed randomly.
  • Complex 3D contains 10 blocks with the following sizes:
    • shape=[[5.0,5.0,10.0],[5.0,10.0,5.0],[5.0,10.0,10.0], [10.0,5.0,5.0],[10.0,5.0,10.0],[10.0,10.0,5.0], [10.0,10.0,10.0],[5.0,5.0,5.0],[10.0,10.0,10.0],[5.0,5.0,5.0]]
  • e0-e109 contain the training and testing paths for 110 different environments.
    • Environments 0-100 with paths 0-4000 per environment are for training.
    • Seen test dataset: environments 0-100, paths 4000-4200 (200 paths/env).
    • Unseen test dataset: environments 100-110, paths 0-2000 per environment.
  • obs_cloud contains the point clouds of 30,000 randomly generated environments.
    • Clouds 0-110 correspond to the same environments for which path data is provided.
    • You may use the full dataset to train the encoder network via unsupervised learning.
  • obs.dat contains the center location (x,y) of each obstacle in the environments.
  • obs_perm2.dat contains the order in which the blocks should be placed at the preset locations given by obs.dat to set up the environments.
    • For instance, in complex 3D, the permutation 8342567901 indicates that obstacle #8 (size 10x10x10) should be placed at location #0 given by obs.dat (see the loading sketch below).
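
As a concrete illustration, below is a minimal Python sketch (not the repository's loader; MPNET/dataloader.py is authoritative) of how an environment could be assembled from the obs.dat locations and one obs_perm2.dat permutation. The array layouts and the build_environment helper are assumptions.

  # Minimal sketch: assemble a complex-3D environment from preset locations
  # (obs.dat) and one permutation row (obs_perm2.dat). Array layouts are
  # assumptions; see the repository's dataloader.py for the exact format.
  import numpy as np

  # Block sizes copied from the Data Description above.
  SHAPES = [[5.0, 5.0, 10.0], [5.0, 10.0, 5.0], [5.0, 10.0, 10.0],
            [10.0, 5.0, 5.0], [10.0, 5.0, 10.0], [10.0, 10.0, 5.0],
            [10.0, 10.0, 10.0], [5.0, 5.0, 5.0], [10.0, 10.0, 10.0],
            [5.0, 5.0, 5.0]]

  def build_environment(locations, permutation):
      # locations: (num_slots, 3) array of preset centers from obs.dat.
      # permutation[i] is the obstacle index placed at location #i, so the
      # example permutation 8,3,4,... puts the 10x10x10 block (#8) at slot #0.
      env = []
      for loc_idx, obs_idx in enumerate(permutation):
          center = np.asarray(locations[loc_idx])
          extents = np.asarray(SHAPES[obs_idx])
          env.append((center, extents))  # axis-aligned box: (center, extents)
      return env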

Generating your own data

  • Define a region of operation; for instance, in simple 2D it is 20x20.
  • Decide how many obstacles (r) you would like to place in the region. In the case of simple 2D, we have r=7 blocks of size 5x5.
  • Generate N random locations at which to place the r obstacles in the region. In the case of simple 2D, we generated N=20.
  • For N locations and r obstacles, apply combinatorics to generate NCr different environments, i.e., in simple 2D, NCr = 20C7 = 77520.
    • The obs_perm2 file contains these combinations; for instance, 6432150 indicates placing obstacle #6 at location #0.
  • Once obstacles are placed, randomly generate collision-free samples and use them in pairs as start-goal to generate paths with any classical planner for training (a minimal sketch of this recipe follows below). For classical planners, we recommend the OMPL implementations.
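
The following is a minimal sketch of that recipe for simple 2D. The coordinate range of the 20x20 region, the point-in-box collision test, and the helper names are assumptions; the classical-planner call (e.g., OMPL's RRT*) is left out.

  # Minimal sketch of the data-generation recipe for simple 2D.
  import itertools
  import random

  REGION = 20.0   # side length of the region of operation (range assumed centered)
  N = 20          # candidate obstacle locations
  R = 7           # obstacles placed per environment
  HALF = 2.5      # half side of a 5x5 block

  # N random candidate centers inside the region.
  locations = [(random.uniform(-REGION / 2, REGION / 2),
                random.uniform(-REGION / 2, REGION / 2)) for _ in range(N)]

  # Each choice of r locations is one environment: C(20, 7) = 77520.
  environments = list(itertools.combinations(range(N), R))

  def in_collision(p, env):
      # True if p lies inside any 5x5 block of the environment.
      return any(abs(p[0] - locations[i][0]) <= HALF and
                 abs(p[1] - locations[i][1]) <= HALF for i in env)

  def sample_free(env):
      # Rejection-sample a collision-free configuration.
      while True:
          p = (random.uniform(-REGION / 2, REGION / 2),
               random.uniform(-REGION / 2, REGION / 2))
          if not in_collision(p, env):
              return p

  # Start-goal pair for one environment; feed it to a classical planner
  # (e.g., OMPL's RRT*) to produce a demonstration path for training.
  start, goal = sample_free(environments[0]), sample_free(environments[0])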

Requirements

  • Data Generation

    1. Install libbot2

      • Make sure all dependencies of libbot2 (e.g., lcm) are installed.
      • Install libbot2 with the local installation procedure.
      • Run "make" in the data_generation folder where the README file is located.
    2. Use any IDE, such as NetBeans, to load the compiled code.

      • data_generation/src/rrts_main.cpp contains the main rrt/prrt code.

      • data_generation/viewer/src/viewer_main.cpp contains the visualization code.

        • Also check out the comments in data_generation/viewer/src/renderers/graph_renderer.cpp
      • Note: main_viewer and rrts_main should run in parallel (see the minimal LCM sketch after this list):

        • rrts_main sends the path solution as well as the search tree to main_viewer, which publishes them over the local network.
        • Data is transmitted through the LCM network protocol.
  • MPNet
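
To illustrate the viewer/planner note above, here is a minimal sketch of the publish/subscribe pattern using LCM's Python bindings. The channel name "PATH" and the raw-bytes payload are hypothetical placeholders; the actual rrts_main and main_viewer exchange typed LCM messages in C++.

  # Minimal LCM publish/subscribe sketch. The "PATH" channel and raw-bytes
  # payload are placeholders, not the repository's message definitions.
  import lcm

  def on_path(channel, data):
      # Viewer side: handle whatever the planner published on this channel.
      print("received %d bytes on %s" % (len(data), channel))

  lc = lcm.LCM()
  lc.subscribe("PATH", on_path)

  # Planner side: publish the path solution (raw bytes here for brevity).
  lc.publish("PATH", b"path-solution-placeholder")

  lc.handle()  # dispatch one incoming message to on_path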

Examples

  1. Assuming paths to the obstacle point clouds are declared, train the obstacle encoder: python MPNET/AE/CAE.py

  2. Assuming paths to the demonstration dataset and the obstacle encoder are declared, run the MPNet trainer:

    python MPNET/train.py

  3. Run tests by first loading the trained models:

    python MPNET/neuralplanner.py
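
For orientation, below is a highly simplified, forward-only sketch of the neural planning loop described in the MPNet paper; the actual MPNET/neuralplanner.py plans bidirectionally and adds lazy-states replanning, and pnet, z, and the tolerance value here are placeholders.

    # Simplified, forward-only sketch of the neural planning loop; the real
    # neuralplanner.py is bidirectional with replanning. pnet is the trained
    # planning network, z the CAE obstacle encoding (both placeholders).
    import torch

    def neural_plan(pnet, z, start, goal, max_steps=80, tol=1.0):
        path, current = [start], start
        for _ in range(max_steps):
            # PNet predicts the next configuration from (z, current, goal).
            current = pnet(torch.cat([z, current, goal]))
            path.append(current)
            if torch.norm(current - goal) < tol:  # close enough to connect
                path.append(goal)
                return path
        return path  # failed to connect; the real planner would replan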

References

@inproceedings{qureshi2019motion,
  title={Motion planning networks},
  author={Qureshi, Ahmed H and Simeonov, Anthony and Bency, Mayur J and Yip, Michael C},
  booktitle={2019 International Conference on Robotics and Automation (ICRA)},
  pages={2118--2124},
  year={2019},
  organization={IEEE}
}
@inproceedings{qureshi2018deeply,
  title={Deeply Informed Neural Sampling for Robot Motion Planning},
  author={Qureshi, Ahmed H and Yip, Michael C},
  booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={6582--6588},
  year={2018},
  organization={IEEE}
}
@article{qureshi2019mpnet,
  title={Motion Planning Networks: Bridging the Gap Between Learning-based and Classical Motion Planners},
  author={Qureshi, Ahmed H and Miao, Yinglong and Simeonov, Anthony and Yip, Michael C},
  journal={arXiv preprint arXiv:1907.06013},
  year={2019}
}


mpnet's Issues

How to obtain the computed path using the neural planner?

Hi, thank you for sharing your work.
I am trying to reproduce the results. I downloaded the 2D sample data and ran the following commands:

Assuming paths to the obstacle point clouds are declared, train the obstacle encoder: python MPNET/AE/CAE.py

Assuming paths to the demonstration dataset and the obstacle encoder are declared, run the MPNet trainer:

python MPNET/train.py

It trained correctly, but when I run the neural planner with the following command:

Run tests by first loading the trained models:
python MPNET/neuralplanner.py

[screenshot of the neuralplanner.py output]

It shows the above output. I don't know whether it is correct; how can I get the computed path so that I can visualize it using visualizer.py?

Thank you

MPNet works on ROS

Hello,
I read the MPNet paper and it's really amazing.
In the paper you said you implemented this project on ROS with MoveIt!.

Could you provide information or tutorials for running this project on ROS?
If you are willing to provide them, I would really appreciate it.

CAI

dataset format

@ahq1993
Thanks for sharing such great work.
It seems that libbot2 is not supported on Windows.
Could you please explain the format of the dataset produced after going through all the steps of data generation?
I will try to generate the dataset myself once I thoroughly understand the required format.

Hi, could you let me know what observations were used in the 7-DOF robot environment?

I think that the joints' angles and velocities and the links' positions and velocities (translational and rotational) are usually used in a manipulation task.

I want to know which information is used as the input of the neural network (apart from the feature Z from the environment's point cloud).

Thanks for your kind response in advance ^ ^!

Complex 2D dataset

Hi there,

I'm doing some work based on your project; thanks for sharing your great work. I really need the data for the complex 2D experiment because I do not know the specific details of how it was generated. Could you put it on Google Drive as well, if possible?

Any help or response will be highly appreciated. Thanks in advance!

What is the difference between obstacle point cloud data files in dataset/obs_cloud/ compared to the obs.dat and obs_perm2.dat files in dataset/ directory?

I am not sure what the difference is between some files in the provided S2D library.
I am struggling to figure out the difference between the obstacle point cloud data files in dataset/obs_cloud/ and the obs.dat and obs_perm2.dat files in the dataset/ directory.

For the visualizer.py example, the point cloud data from the obs_cloud/ directory gives a point cloud representation of seven obstacle boxes. However, it looks like neuralplanner.py (via dataloader.py) uses obstacles from obs.dat and obs_perm2.dat, both within the dataset/ directory.

If anybody could explain the difference it would be much appreciated! Thanks!

Upper bound or Lower bound

The size of the 2D environment you provided seems to be 40x40 (probably x: -20 to +20, y: -20 to +20).
However, when I run the code, the output of the neural net (on the x or y axis) goes above 20 or below -20. Is there any restriction, such as upper or lower bounds, on the neural network's output?
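
One common workaround, sketched below, is to clamp each predicted configuration to the workspace bounds before collision checking; the -20 to +20 range is taken from the question above, and clamp_to_workspace is a hypothetical helper, not part of the repository.

  # Minimal workaround sketch (hypothetical helper, not repository code):
  # clamp each predicted configuration to the workspace bounds before use.
  import torch

  LOW, HIGH = -20.0, 20.0   # bounds from the question above; adjust as needed

  def clamp_to_workspace(state):
      return torch.clamp(state, min=LOW, max=HIGH)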

What is the point cloud data given to the network?

Hi Ahmed,

I think you have done a great job in the field of path planning.
I read your paper and was wondering what point cloud data was fed to the ENet: was it point cloud data from a sensor at the end effector (or some other position), or point cloud data that you generated for a known environment? You don't discuss the details of the data in the paper.
What kind of point cloud data did you use?

MPNet installation problem (libbot2)

Hello! I am trying to build this package on Ubuntu 18.04.

As written in the instructions in the README, I installed the dependencies, including libbot2.
However, after I successfully built libbot2 and ran

$ make

in the /MPNet/data_generations folder, the error message below came up.

-- No package 'bot2-vis' found
CMake Error at /usr/share/cmake-3.10/Modules/FindPkgConfig.cmake:419 (message):
A required package was not found
Call Stack (most recent call first):
/usr/share/cmake-3.10/Modules/FindPkgConfig.cmake:597 (_pkg_check_modules_internal)
CMakeLists.txt:17 (pkg_check_modules)

I installed libbot with

$ sudo make BUILD_PREFIX=/usr/local

Is there any solution to this problem?

Thank you in advance!!

error: expected constructor, destructor, or type conversion before ‘(’ token 56 | mkdir(env_path.c_str(),ACCESSPERMS);

When compiling the data-generation code, I got the following error:

[ 25%] Building CXX object src/CMakeFiles/rrtstar.dir/rrts_main.cpp.o
/home/huyu/github/MPNet/data_generation/rrtstar/src/rrts_main.cpp:56:6: error: expected constructor, destructor, or type conversion before ‘(’ token
   56 | mkdir(env_path.c_str(),ACCESSPERMS); // create folder with env label to store generated trajectories
      |      ^
In file included from /home/huyu/github/MPNet/data_generation/rrtstar/src/rrts_main.cpp:13:
/home/huyu/github/MPNet/data_generation/rrtstar/src/rrts.hpp: In member function ‘int RRTstar::Planner<State, Trajectory, System>::iteration(double (&)[2], double, double) [with State = SingleIntegrator::State; Trajectory = SingleIntegrator::Trajectory; System = SingleIntegrator::System]’:
/home/huyu/github/MPNet/data_generation/rrtstar/src/rrts.hpp:506:11: warning: control reaches end of non-void function [-Wreturn-type]
  506 |     State stateRandom;
      |           ^~~~~~~~~~~
make[4]: *** [src/CMakeFiles/rrtstar.dir/build.make:76:src/CMakeFiles/rrtstar.dir/rrts_main.cpp.o] Error 1
make[3]: *** [CMakeFiles/Makefile2:125:src/CMakeFiles/rrtstar.dir/all] Error 2
make[2]: *** [Makefile:136:all] Error 2
make[1]: *** [Makefile:25:all] Error 2
make: *** [Makefile:15:all] Error 2

Cannot generate trajectories

Hi, Thanks for your code.

However, our trajectory generation does not seem to work well: we cannot see a complete trajectory in the viewer.
The viewer shows the trajectory generation procedure as follows:
[screenshot of the viewer during trajectory generation]
And the recorded result is in the file below.
2018-11-22-viewer.03.ppms.gz

Here is our code to generate trajectories.

Thanks for your help.

Question about data generation in gazebo

Hi! Thank you for your generous sharing and I am sure I will learn a lot from it!

I have some confusion and hope to get your answer:

(1) I am curious about how to efficiently build a random environment and collect PCL data.

(2) Is all of this work based on Gazebo?

Or maybe I missed something when reading your article and code.

Looking forward to your reply, thanks!

Bad loss when training CAE

Hey,

I don't know if this is an issue, but I'm looking into your paper and trying to reproduce your results. However, I'm stuck on the encoder network: I get a mean squared error of roughly 2. I plotted the reconstructed data, but I don't think it looks good enough. Did you also see roughly the same loss, or did you match your input data perfectly?

You're doing really cool work, keep it up! 👍

undefined reference to `g_thread_init'

When compiling the data-generation code I got the following error:

CMakeFiles/viewer.dir/main_viewer.cpp.o: In function `main':
main_viewer.cpp:(.text.startup+0x2c): undefined reference to `g_thread_init'

I think this is due to an updated dependency: g_thread_init has been removed from newer versions of GLib. If the reference to it in main_viewer.cpp is removed, the issue is fixed.

Running neural planner takes a very long time

When running neuralplanner.py on a previously unseen sample 2D environment (e.g., environment 150), GPU utilization is only about 10%, and the planner takes about 10 minutes to run.

The computer has an RTX 3060 GPU, so I would expect it to be much faster.

Does it sound like something is configured incorrectly, or is this expected?

Generating paths on sample 2D dataset

Hi,

I have trained MPNet by running python MPNET/train.py, and am pointing neuralplanner.py at my trained models.

Is neuralplanner.py supposed to be able to generate individual paths for a given environment? If so, it is not clear to me how to pass neuralplanner.py an environment to generate a path for. If there is a way to do this, please let me know!

Thanks

Spencer

Viewer cannot be opened

Hi, Thanks for your code.

Can you explain in more detail how to run main_viewer and rrts_main?

After successfully building data_generation, I have viewer in data_generation/viewer/build/bin/ and rrtstar in data_generation/build/bin/. However, when I try to run them, the terminal gives me:
./viewer: error while loading shared libraries: libbot2-vis.so.1: cannot open shared object file: No such file or directory
./rrtstar: error while loading shared libraries: libbot2-core.so.1: cannot open shared object file: No such file or directory

Could you help me with this? Is there any parameter I should pass to the script?

Thanks for your help.
