robot-motion / bench-mr

Motion Planning Benchmark

Home Page: https://robot-motion.github.io/bench-mr

License: MIT License

Topics: robotics, motion-planning, mobile-robots

bench-mr's Introduction


Motion Planning Benchmark

Benchmarking motion planners for wheeled mobile robots in cluttered environments on scenarios close to real-world autonomous driving settings.

Dependencies

The following Boost libraries (version 1.58+) need to be installed:

  • boost_serialization
  • boost_filesystem
  • boost_system
  • boost_program_options

The provided CHOMP implementation requires GLUT and other OpenGL libraries to be present, which can be installed through the freeglut3-dev package, as well as libpng via libpng-dev and Expat via libexpat1-dev.

Optionally, to support visual debugging, Qt5 with the Charts and Svg modules needs to be installed.

The Python front-end dependencies are defined in python/requirements.txt and can be installed through

pip install -r python/requirements.txt

Using Docker

  1. Build the Docker image

    docker build -t mpb .
  2. Run the image to access the JupyterLab instance on port 8888 in your browser, from which you can run and evaluate benchmarks:

    docker run -p 8888:8888 -it mpb

    Optionally, you can mount your local mpb copy to its respective folder inside the container via

    docker run -p 8888:8888 -v $(pwd):/root/code/mpb -it mpb
    # use %cd% in place of $(pwd) on Windows

    Now you can edit files from outside the container and use it to build and run the experiments.

    You can connect to the same running container multiple times, for example to access it from multiple shell instances, via

    docker exec -it $(docker ps -qf "ancestor=mpb") bash

    Alternatively, run the provided script ./docker_connect.sh that executes this command.

Build instructions

  1. Check out the submodules

    git submodule init && git submodule update
  2. Create the build folder

    mkdir build
  3. Build project

    cd build
    cmake ..
    cmake --build . -- -j4

    If you see an error during the cmake .. command that Qt or one of the Qt modules could not be found, you can ignore it, as this dependency is optional.

Getting started

This project contains several build targets in the experiments/ folder. The main application for benchmarking is the benchmark executable, which is built in the bin/ folder in the project directory.

Running a benchmark

It is recommended to run the benchmarks from the Jupyter front-end.

Run jupyter lab from the project folder and navigate to the python/ directory where you can find several notebooks that can execute experiments and allow you to plot and analyze the benchmark results.

Alternatively, you can run benchmarks manually via JSON configuration files that define which planners to execute, along with many other settings concerning environments, steer functions, etc.

In the bin/ folder, start a benchmark via

./benchmark configuration.json

where configuration.json is any of the JSON files in the benchmarks/ folder.

Optionally, if multiple CPUs are available, multiple benchmarks can be run in parallel using GNU Parallel, e.g., via

parallel -k ./benchmark ::: ../benchmarks/corridor_radius_*

This command will execute the experiments with varying corridor sizes in parallel. For more information, consult the GNU Parallel tutorial.

This will eventually output a line similar to

Info:    Saved path statistics log file <...>

The resulting JSON log file can be used for visualizing the planning results and plotting the statistics. To get started, check out the Jupyter notebooks inside the python/ folder where all the plotting tools are provided.

Third-party libraries

This project uses forks of several third-party repositories.

Besides these contributions, the authors thank Nathan Sturtevant's Moving AI Lab for providing the MovingAI 2D pathfinding datasets.

Developers

  • Eric Heiden (University of Southern California, Los Angeles, USA)
  • Luigi Palmieri (Robert Bosch GmbH, Corporate Research, Stuttgart, Germany)
  • Leonard Bruns (KTH Royal Institute of Technology, Stockholm, Sweden)
  • Ziang Liu (University of Southern California, Los Angeles, USA)

Citation

Please consider citing our corresponding article:

@article{heiden2021benchmr,
  author={Heiden, Eric and Palmieri, Luigi and Bruns, Leonard and Arras, Kai O. and Sukhatme, Gaurav S. and Koenig, Sven},
  journal={IEEE Robotics and Automation Letters}, 
  title={Bench-MR: A Motion Planning Benchmark for Wheeled Mobile Robots}, 
  year={2021},
  volume={6},
  number={3},
  pages={4536-4543},
  doi={10.1109/LRA.2021.3068913}}

bench-mr's People

Contributors

eric-heiden, palmieri, pkicki, realziangliu, roym899


bench-mr's Issues

Different termination conditions

We should check for each planner which termination conditions it uses to allow for a fairer comparison.

Depending on what we find, maybe we should add support for some other ones (like samples / sample attempts / etc.).
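
For reference, a minimal sketch (plain OMPL, not the bench-mr wrappers) of how a non-time-based termination criterion could be expressed; sample_counter is a hypothetical counter that a wrapped sampler would increment:

// Sketch: stop planning once a sample budget is exhausted rather than after a
// wall-clock timeout; the counter must outlive the returned condition.
#include <cstddef>

#include <ompl/base/PlannerTerminationCondition.h>

ompl::base::PlannerTerminationCondition makeSampleBudgetCondition(
    const std::size_t &sample_counter, std::size_t max_samples) {
  return ompl::base::PlannerTerminationCondition(
      [&sample_counter, max_samples] { return sample_counter >= max_samples; });
}

// Usage (sketch): planner->solve(makeSampleBudgetCondition(counter, 10000));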

Map from image resolution

Is it possible to load the environment map from an image with a certain resolution, i.e., 0.2 m/px?

build error

I am trying to build on a Mac and found CMake complaining:

meGrabber.cpp:37:10: fatal error:
'png.h' file not found
#include <png.h>

even though libpng is installed.

bench-mr ajayp$ ls /usr/local/include/png.h
/usr/local/include/png.h

Hence I added /usr/local/include to CMakeLists.txt:
include_directories(/usr/local/include)

But now I am getting a lot of errors while building:

bench-MR/bench-mr/third_party/params/include/params.hpp:124:23: error:
use 'template' keyword to treat 'get' as a dependent template name
value_ = j[name_].get();
^
template
bench-MR/bench-mr/third_party/params/include/params.hpp:117:16: error:
'serialize' overrides a member function but is not marked 'override'
[-Werror,-Winconsistent-missing-override]
virtual void serialize(std::ostream &stream) const {
^

Any ideas?
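
For what it's worth, here is a minimal, self-contained illustration of the two fixes the compiler itself suggests; this is not the actual params.hpp code, just the same pattern:

// Sketch only: the 'template' keyword for a dependent member call and the
// missing 'override' specifier reported above.
#include <ostream>
#include <string>

struct Serializable {
  virtual ~Serializable() = default;
  virtual void serialize(std::ostream &stream) const = 0;
};

template <typename Json, typename T>
struct Param : Serializable {
  Json j;
  std::string name_;
  T value_{};

  void load() {
    // Fix 1 (params.hpp:124): 'get' is a dependent template member, so it
    // needs the 'template' keyword to parse:
    value_ = j[name_].template get<T>();
  }

  // Fix 2 (params.hpp:117): mark the overriding function explicitly to satisfy
  // -Werror,-Winconsistent-missing-override:
  void serialize(std::ostream &stream) const override { stream << value_; }
};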

Fix SBPL

Need to debug/generate motion primitives (scaling)

Panda gripper upper bounds

Panda gripper fingers start at 0.065 inside sceneXX.yaml but the URDF file states a

causing a boundary limits review at the beginning of some planners.

Thanks!

CC_DUBINS and HC_REEDS_SHEPP crashing

I looked a bit more into the crashes that I had for these two steering functions, and they actually happen on the master branch too, so they should not be related to the sampler.

I get

benchmark: /home/leo/code/mpb/steering_functions/src/hc_cc_state_space/paths.cpp:259: steer::Control subtract_control(const steer::Control&, const steer::Control&): Assertion `sgn(control1.delta_s) * control1.sigma == sgn(control2.delta_s) * control2.sigma' failed.
Aborted (core dumped)

for both CC_DUBINS and HC_REEDS_SHEPP.

Do they actually work for you?
This is the benchmark JSON I am trying to run (it crashes for "steering_type": 5 and "steering_type": 6):

{
  "settings": {
    "auto_choose_distance_computation_method": true,
    "benchmark": {
      "planning": {
        "bfmt": false,
        "bit_star": false,
        "cforest": false,
        "est": false,
        "fmt": false,
        "informed_rrt_star": false,
        "kpiece": false,
        "pdst": false,
        "prm": true,
        "prm_star": false,
        "rrt": false,
        "rrt_sharp": false,
        "rrt_star": false,
        "sbl": false,
        "sbpl_adstar": false,
        "sbpl_anastar": false,
        "sbpl_arastar": false,
        "sbpl_lazy_ara": false,
        "sbpl_mha": false,
        "sorrt_star": false,
        "spars": false,
        "spars2": false,
        "sst": false,
        "stride": false,
        "theta_star": false
      },
      "runs": 1,
      "smoothing": {
        "chomp": false,
        "grips": false,
        "ompl_bspline": false,
        "ompl_shortcut": false,
        "ompl_simplify_max": false
      },
      "steer_functions": []
    },
    "cusp_angle_threshold": 1.0471975511965976,
    "distance_computation_method": 0,
    "env": {
      "collision": {
        "collision_model": 1,
        "robot_shape": [],
        "robot_shape_source": "polygon_mazes/car.svg"
      },
      "goal": {
        "theta": 0.0,
        "x": 0.0,
        "y": 0.0
      },
      "grid": {
        "corridor": {
          "branches": 50,
          "radius": 3.0
        },
        "generator": "corridor",
        "height": 50,
        "random": {
          "obstacle_ratio": 0.1
        },
        "seed": 3,
        "width": 50
      },
      "polygon": {
        "scaling": 0.045454545454545456,
        "source": "polygon_mazes/parking1.svg"
      },
      "start": {
        "theta": 0.0,
        "x": 0.0,
        "y": 0.0
      },
      "type": "grid"
    },
    "estimate_theta": false,
    "evaluate_clearing": true,
    "exact_goal_radius": 0.01,
    "fast_odf_threshold": 10000,
    "interpolation_limit": 500,
    "log_env_distances": false,
    "max_path_length": 10000.0,
    "max_planning_time": 15.0,
    "ompl": {
      "cost_threshold": 100.0,
      "rrt_star": {
        "goal_bias": 0.05,
        "max_distance": 0.0
      },
      "seed": 1,
      "state_equality_tolerance": 0.0001
    },
    "sbpl": {
      "forward_velocity": 0.2,
      "goal_tolerance_theta": 6.283185307179586,
      "goal_tolerance_x": 1.0,
      "goal_tolerance_y": 1.0,
      "initial_solution_eps": 3.0,
      "motion_primitive_filename": "./sbpl_mprim/unicycle_0.25.mprim",
      "num_theta_dirs": 16,
      "resolution": 0.25,
      "scaling": 6.0,
      "search_until_first_solution": false,
      "time_to_turn_45_degs_in_place": 0.6
    },
    "smoothing": {
      "chomp": {
        "alpha": 0.05,
        "epsilon": 4.0,
        "error_tolerance": 1e-06,
        "gamma": 0.8,
        "max_iterations": 1500,
        "nodes": 100,
        "objective_type": 0
      },
      "grips": {
        "eta": 0.9,
        "eta_discount": 0.8,
        "gradient_descent_rounds": 5,
        "max_pruning_rounds": 100,
        "min_node_distance": 3.0
      },
      "ompl": {
        "bspline_epsilon": 0.005,
        "bspline_max_steps": 5,
        "shortcut_max_empty_steps": 0,
        "shortcut_max_steps": 0,
        "shortcut_range_ratio": 0.33,
        "shortcut_snap_to_vertex": 0.005
      }
    },
    "steer": {
      "car_turning_radius": 4.0,
      "hc_cc": {
        "kappa": 0.2,
        "sigma": 0.2
      },
      "posq": {
        "alpha": 3.0,
        "axis_length": 0.54,
        "dt": 0.1,
        "phi": -1.0,
        "rho": 1.0,
        "rho_end_condition": 0.005,
        "v": 1.0,
        "v_max": 1.0
      },
      "sampling_resolution": 0.005,
      "steering_type": 5
    },
    "theta_star": {
      "number_edges": 10
    }
  }
}

CMake version too low for the docker environment

Hi. In the Docker environment, the default CMake version for Ubuntu 18.04 is too low and cannot parse the CMakeLists.txt file of the latest OMPL library. Can you update the Dockerfile so that it installs a newer CMake?

Applying post-smoothing to all planners

Why?
Smoothing techniques could be applied to all planners, not only non-AO planners. In particular, it may happen that, with smoothing, AO algorithms can be terminated earlier while yielding the same cost that would be achieved without smoothing.

What?
We can have some experiments that test the combination of post-smoothing with AO algorithms. Not sure if this is easily achievable without touching the OMPL code (see the sketch below).
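
A minimal sketch (plain OMPL, outside of Bench-MR) suggesting this is achievable without modifying OMPL: the planner's solution path can be post-processed with ompl::geometric::PathSimplifier after solve() returns. The toy 2D setup, validity checker, and time budget below are placeholders:

// Sketch: run an AO planner (RRT*) with a short time budget, then apply
// post-smoothing to its solution using OMPL's PathSimplifier.
#include <memory>

#include <ompl/base/spaces/RealVectorStateSpace.h>
#include <ompl/geometric/PathSimplifier.h>
#include <ompl/geometric/SimpleSetup.h>
#include <ompl/geometric/planners/rrt/RRTstar.h>

namespace ob = ompl::base;
namespace og = ompl::geometric;

int main() {
  auto space = std::make_shared<ob::RealVectorStateSpace>(2);
  space->setBounds(0.0, 50.0);

  og::SimpleSetup ss(space);
  ss.setStateValidityChecker([](const ob::State *) { return true; });  // placeholder
  ob::ScopedState<> start(space), goal(space);
  start[0] = 1.0;  start[1] = 1.0;
  goal[0] = 45.0;  goal[1] = 45.0;
  ss.setStartAndGoalStates(start, goal);
  ss.setPlanner(std::make_shared<og::RRTstar>(ss.getSpaceInformation()));

  if (ss.solve(1.0)) {  // terminate the AO planner early (1 s budget)
    og::PathGeometric &path = ss.getSolutionPath();
    og::PathSimplifier simplifier(ss.getSpaceInformation());
    simplifier.shortcutPath(path);   // post-smoothing on top of the AO result
    simplifier.smoothBSpline(path);
  }
  return 0;
}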

Integration with custom path planner

Hello! Is there any documentation on how to use Bench-MR with a custom planner, for comparison against different baselines and for computing metrics like AOL?

building issue

When trying to compile following the README instructions, I receive the following issue:

➜ build git:(master) ✗ cmake ..
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.1")
-- Checking for one of the modules 'eigen3>=3'
-- Checking for one of the modules 'libccd;ccd'
-- Checking for one of the modules 'cairo'
CMake Error at CMakeLists.txt:21 (add_subdirectory):
The source directory

/home/user/mpb/params

does not contain a CMakeLists.txt file.

-- Checking for module 'sbpl'
-- No package 'sbpl' found
CMake Error at /usr/share/cmake-3.5/Modules/FindPkgConfig.cmake:367 (message):
A required package was not found
Call Stack (most recent call first):
/usr/share/cmake-3.5/Modules/FindPkgConfig.cmake:532 (_pkg_check_modules_internal)
CMakeLists.txt:36 (pkg_check_modules)

Add new environments

Why?
To further test the planning capabilities, we can expand the environments on which we test the planners.

What?
Additional environments could include:


Update:

-- add procedural generation of polygon-based environments (i.e., place random obstacles, vary the corridor size)

Environment seed behavior

I think the environment seed behavior is a bit unusual right now.

It looks as if, when you generate 50 environments with seed = 0 and then 50 environments with seed = 1, 49 of the environments will be the same (since the seed is incremented by one for each environment and then passed to srand for each generated environment).

Maybe we could change the code to use a C++11 generator that is used only for environment generation and seeded once with the provided seed, as sketched below.
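
A minimal sketch of that idea, using a hypothetical environment generator (the struct and function names are illustrative, not the actual bench-mr API):

// Sketch only: one dedicated C++11 generator for environment generation,
// seeded exactly once with the user-provided seed.
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

struct GridEnvironment {
  int width{50}, height{50};
  std::vector<uint8_t> occupancy;  // 1 = obstacle, 0 = free
};

std::vector<GridEnvironment> generateEnvironments(std::size_t count,
                                                  unsigned int seed,
                                                  double obstacle_ratio) {
  std::mt19937 rng(seed);  // seeded once, reused across all environments
  std::bernoulli_distribution occupied(obstacle_ratio);

  std::vector<GridEnvironment> envs(count);
  for (auto &env : envs) {
    env.occupancy.resize(static_cast<std::size_t>(env.width) * env.height);
    for (auto &cell : env.occupancy) cell = occupied(rng) ? 1 : 0;
  }
  return envs;
}

This way, runs with seed = 0 and seed = 1 yield fully independent environment sets instead of sets that overlap in all but one environment.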

Optimizing planners and tracking progress

Hello,

Is there a way to track optimizing planners' progress? I see that intermediate solutions are stored. Are improvements to existing solutions considered intermediate solutions? If the OMPL callback is used, I guess this should be the case, but I just wanted to check anyway.

In the OMPL benchmark, there is a way to record the cost of the current solution every t seconds. I guess this is useful for plotting. Do you also have a way to plot the costs of the intermediate solutions?

I don't think I saw anything related to this in the documentation or the tutorials. If it's already implemented, please point me to the relevant files and I can contribute the tutorial / documentation. :-)

Comparisons between multi-query and single-query planners

Why?
Currently results include all the planners in a single view.

What?
Can we separate the two classes of multi-query and single-query planners?
We could have an experiment where we measure the metrics after an initial call to the multi-query planners.

Dynamic environment

Hi!
I am working on my own planner and trying to figure out how to test it in dynamic environments. I have read in the article that this functionality will be added later. However, if you can recommend something on this, I would really appreciate it.

-Thanks!

Experiments for the paper

5 types of experiments:

  • forward propagation vs. steering functions
  • varying environment complexity (corridor radii and obstacle ratio)
  • comparison of deterministic sampling vs. random sampling vs. state lattices
  • cost function to minimize, i.e. comparison between informed and uninformed planning
  • computation time in the planning phases

(Maybe also: asymptotically optimal planning vs. feasible planning + post-smoothing)

Add deterministic sampling

Why?
Together with uniform sampling, we could include a comparison with deterministic sampling.

What?
Add Halton deterministic sampling. This will work only with PRM* and FMT*. (See the sketch below.)
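
A minimal sketch of the standard radical-inverse construction of the Halton sequence, independent of the OMPL / bench-mr sampler interfaces (newer OMPL releases may already ship a deterministic Halton state sampler that could be reused):

// Sketch: Halton sequence via the radical inverse, one prime base per dimension.
#include <cstddef>
#include <vector>

// Radical inverse of index i in the given base, in [0, 1).
double radicalInverse(std::size_t i, unsigned int base) {
  double result = 0.0;
  double f = 1.0 / base;
  while (i > 0) {
    result += f * static_cast<double>(i % base);
    i /= base;
    f /= base;
  }
  return result;
}

// i-th Halton point in the unit square (bases 2 and 3), to be scaled to the
// workspace bounds, e.g. for deterministic (x, y) samples in PRM* / FMT*.
std::vector<double> halton2D(std::size_t i) {
  return {radicalInverse(i, 2), radicalInverse(i, 3)};
}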

Add more metrics

Why?
Currently we have metrics only for the final planning results. We could add additional metrics that test the individual components.

What?
We can measure:

  • collision checking time;
  • steering function call time;
    It would be useful to report what percentage of computation time is spent on collision detection and nearest-neighbor queries in each setting. This can inform and motivate work in algorithm design.

Update (see the sketch below):
-- add geometric mean
-- add mean curvature
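
Minimal sketches of the two proposed aggregates, assuming per-run costs are positive numbers and paths are polylines of (x, y) points (these are not the bench-mr data structures):

// Sketch: geometric mean of per-run costs and an approximate mean curvature of
// a 2D polyline, kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), using central
// differences at interior vertices (assumes roughly uniform vertex spacing).
#include <cmath>
#include <cstddef>
#include <vector>

double geometricMean(const std::vector<double> &values) {
  if (values.empty()) return 0.0;
  double log_sum = 0.0;
  for (double v : values) log_sum += std::log(v);  // assumes v > 0
  return std::exp(log_sum / values.size());
}

struct Point2D { double x, y; };

double meanCurvature(const std::vector<Point2D> &path) {
  if (path.size() < 3) return 0.0;
  double sum = 0.0;
  for (std::size_t i = 1; i + 1 < path.size(); ++i) {
    const double dx = (path[i + 1].x - path[i - 1].x) / 2.0;
    const double dy = (path[i + 1].y - path[i - 1].y) / 2.0;
    const double ddx = path[i + 1].x - 2.0 * path[i].x + path[i - 1].x;
    const double ddy = path[i + 1].y - 2.0 * path[i].y + path[i - 1].y;
    const double denom = std::pow(dx * dx + dy * dy, 1.5);
    if (denom > 1e-12) sum += std::fabs(dx * ddy - dy * ddx) / denom;
  }
  return sum / static_cast<double>(path.size() - 2);
}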
