

License: BSD 3-Clause "New" or "Revised" License


Predictive Multi-Agent-Based Planning and Landing Controller

Created by Riddhiman Laha, Marvin Becker, Jonathan Vorndamme, Juraj Vrabel, Luis F.C. Figueredo, Matthias A. Müller, and Sami Haddadin from the Technical University of Munich and Leibniz University Hannover.

This repository contains the code accompanying the paper:
"Predictive Multi-Agent based Planning and Landing Controller for Reactive Dual-Arm Manipulation", IEEE Transactions on Robotics, vol. 40, pp. 864-885, 2024.

The repository contains all the code necessary for running the predictive multi-agent-based planning and landing controller described in the paper.

Video

The accompanying video for the work can be found here.

Citation

If you find our work useful in your research, please consider citing:

@article{laha2023predictive,
  author={Laha, Riddhiman and Becker, Marvin and Vorndamme, Jonathan and Vrabel, Juraj and Figueredo, Luis F.C. and Müller, Matthias A. and Haddadin, Sami},
  journal={IEEE Transactions on Robotics}, 
  title={Predictive Multi-Agent based Planning and Landing Controller for Reactive Dual-Arm Manipulation}, 
  year={2024},
  volume={40},
  number={},
  pages={864-885},
  doi={10.1109/TRO.2023.3341689}}

Requirements

The code has only been tested on Ubuntu 18.04 and Ubuntu 20.04. The following packages need to be installed: ROS Noetic (the commands below assume a Noetic installation) and CoppeliaSim, which is used as the simulation environment.

Additional requirements

sudo apt install ros-noetic-rviz-visual-tools ros-noetic-franka-* ros-noetic-moveit ros-noetic-moveit-visual-tools ros-noetic-rosparam-shortcuts
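If the ROS environment is not already sourced in the terminal, the catkin and roslaunch commands used below will not be found. A minimal setup, assuming a standard ROS Noetic installation under /opt/ros/noetic (add the line to ~/.bashrc to make it permanent):

source /opt/ros/noetic/setup.bash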

Installation and Compilation

  1. Create a folder for the catkin workspace and clone the repository:
mkdir -p <CatkinWSRootFolder>/pmaf_ws/
cd <CatkinWSRootFolder>/pmaf_ws/
git clone https://github.com/riddhiman13/predictive-multi-agent-framework.git
  2. Initialize catkin workspace, adjust compile arguments and compile:
catkin init
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release
catkin build
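
Before running the launch files in the next section, the freshly built workspace typically has to be sourced in every new terminal so that ROS can locate the bimanual_planning_ros package. Assuming the default catkin devel space layout:

source <CatkinWSRootFolder>/pmaf_ws/devel/setup.bash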

Startup

  1. Start CoppeliaSim in headless mode with a loaded scene, e.g.:
cd <CoppeliaSimRootFolder>
./coppeliaSim.sh -h <CatkinWSRootFolder>/pmaf_ws/src/bimanual_planning_ros/vrep_scenes/dual_arms.ttt
  2. In a new terminal: start the CoppeliaSim interface and load the desired task sequence, e.g.:
roslaunch bimanual_planning_ros vrep_interface_dual_arms.launch task_sequence:=dual_arms_static1
  3. In a new terminal: start the planning and control nodes, e.g.:
roslaunch bimanual_planning_ros planning_moveit_dual_arms.launch
  4. In the same terminal (planning node): hit enter to start planning.

For the simulations with the Kobo robot, replace "dual_arms" with "kobo" in the launch files, load the kobo.ttt scene in CoppeliaSim, and use the corresponding task sequences.
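
A sketch of the equivalent Kobo startup, assuming the launch files follow the same naming pattern as the dual-arm ones (the exact launch file and task sequence names are not verified here):

./coppeliaSim.sh -h <CatkinWSRootFolder>/pmaf_ws/src/bimanual_planning_ros/vrep_scenes/kobo.ttt
roslaunch bimanual_planning_ros vrep_interface_kobo.launch task_sequence:=<kobo_task_sequence>
roslaunch bimanual_planning_ros planning_moveit_kobo.launch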

Additional Information

Note that the last obstacle defined in the task sequence .yaml file is only used for self-collision avoidance with repulsive force generation. Also note that touching spheres are not merged automatically; they have to be merged manually so that the resulting current vectors do not guide the robot in between them (cf. the trap-like obstacle in Fig. 9(c) or the sphere in Fig. 9(b) of the paper).

Detailed information on the scenarios and parameters for all simulations in the paper is available in the dataset "Predictive Multi-Agent based Planning and Landing Controller for Reactive Dual-Arm Manipulation".

Disclaimer

Only the code in src/bimanual_planning_ros is written and owned by us. All other code in this repository is from third parties and belongs to the ROS planning software stack and the MoveIt motion planning framework. For more details, see:
