lardemua / atom

Calibration tools for multi-sensor, multi-modal robotic systems

License: GNU General Public License v3.0

CMake 20.98% Python 76.13% MATLAB 0.14% C++ 0.64% Dockerfile 0.03% Jinja 2.08%
calibration calibration-toolbox robotics ros

atom's Introduction

ATOM Calibration

A Calibration Framework using the
Atomic Transformations Optimization Method

ATOM is a set of calibration tools for multi-sensor, multi-modal robotic systems, based on the optimization of atomic transformations as provided by a ROS-based robot description. Moreover, ATOM provides several scripts that facilitate all the steps of a calibration procedure.

For instructions on how to install and use, check the documentation:

https://lardemua.github.io/atom_documentation/

and check these examples.

Also, you can take a look at the ATOM YouTube playlist.

Support

If this work is helpful to you, please cite our paper:

  • Oliveira, M., E. Pedrosa, A. Aguiar, D. Rato, F. Santos, P. Dias, V. Santos, ATOM: A general calibration framework for multi-modal, multi-sensor systems, Expert Systems with Applications (2022), 118000, ISSN 0957-4174, https://doi.org/10.1016/j.eswa.2022.118000. Bibtex.

or any other that is more appropriate:

  • Gomes, M., M. Oliveira, V. Santos, ATOM Calibration Framework: Interaction and Visualization Functionalities, Sensors (2023), 23, 936. https://doi.org/10.3390/s23020936. Bibtex.

  • Rato, D., M. Oliveira, V. Santos, M. Gomes, A. Sappa, A sensor-to-pattern calibration framework for multi-modal industrial collaborative cells, Journal of Manufacturing Systems (2022), Volume 64, Pages 497-507, ISSN 0278-6125, https://doi.org/10.1016/j.jmsy.2022.07.006. Bibtex.

  • Pedrosa, E., M. Oliveira, N. Lau, V. Santos, A General Approach to Hand–Eye Calibration Through the Optimization of Atomic Transformations, IEEE Transactions on Robotics (2021), pp. 1-15, https://doi.org/10.1109/TRO.2021.3062306. Bibtex.

  • Aguiar, A., M. Oliveira, E. Pedrosa, F. Santos, A Camera to LiDAR calibration approach through the Optimization of Atomic Transformations, Expert Systems with Applications (2021), 114894, ISSN 0957-4174, https://doi.org/10.1016/j.eswa.2021.114894. Bibtex.

  • Oliveira, M., A. Castro, T. Madeira, E. Pedrosa, P. Dias, V. Santos, A ROS framework for the extrinsic calibration of intelligent vehicles: A multi-sensor, multi-modal approach, Robotics and Autonomous Systems (2020), 103558, ISSN 0921-8890, https://doi.org/10.1016/j.robot.2020.103558. Bibtex.

Contributors

  • Miguel Riem Oliveira - University of Aveiro
  • Afonso Castro - University of Aveiro
  • Eurico Pedrosa - University of Aveiro
  • Tiago Madeira - University of Aveiro
  • André Aguiar - INESC TEC
  • Daniela Rato - University of Aveiro
  • Manuel Gomes - University of Aveiro

Maintainers

  • Miguel Riem Oliveira - University of Aveiro
  • Manuel Gomes - University of Aveiro


atom's Issues

Create a tweaked robot_description param

The goal is to have our code read from a parameter (let's call it initial_robot_description).

Then, we tweak this description by adding the suffix "_initial" to all links which are listed in the sensors.

But I am not sure this is really needed, because of how the new tf handles the fixed transforms (probably as static transforms).
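
A minimal sketch of what this could look like, assuming urdf_parser_py; the parameter name comes from the discussion above, while sensor_links is a hypothetical placeholder for the links listed in the sensors:

import rospy
from urdf_parser_py.urdf import URDF

# Read the pristine description from the new parameter
xml_robot = URDF.from_xml_string(rospy.get_param('initial_robot_description'))

sensor_links = ['top_right_camera_link']  # placeholder: links listed in the sensors
for link in xml_robot.links:
    if link.name in sensor_links:
        link.name += '_initial'  # tweak: append the suffix to each sensor link

rospy.set_param('robot_description', xml_robot.to_xml_string())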

Detected and stored data is the same for different cameras

The collect_and_label_data.py script stores the images and the detected chessboard corner pixel indices of the top right camera for both camera sensors. The same information is stored twice, when there should be distinct data for each camera.
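
One way to avoid this, sketched below with assumed names (images keyed by sensor, plus the chess_numx/chess_numy sizes from the collector's constructor), is to key each detection by the sensor it came from:

import cv2

# Store one detection per sensor, keyed by its name, so the data stays distinct
detections = {}
for sensor_name, image in images.items():
    found, corners = cv2.findChessboardCorners(image, (chess_numx, chess_numy))
    if found:
        detections[sensor_name] = corners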

Decide which code formulation to use

After simultaneously doing #7, we need to decide which code we are going to use, or whether to adopt some hybrid. You make that decision. We must know this in order to continue the development in a single codebase.

Consider dynamic transforms for the pre and post transforms

Hi @afonsocastro ,

this is related to what Jorge Almeida said.

I realized that there is yet another problem. Some sensors are mounted on dynamic kinematic chains. For example, the cameras on the PR2 are on top of a moving head.

I think the Sensor class must be re-engineered to support this. The good thing is that it would also solve the problem Jorge mentioned.

Let's talk about it.
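
One possible direction, as a minimal sketch (world_link, sensor_frame and stamp are assumed variables): instead of caching the pre/post transforms once, look them up at each collection's timestamp.

import rospy
import tf

listener = tf.TransformListener()

# For a sensor on a moving chain, query the transform at the collection's
# timestamp instead of assuming it is fixed
listener.waitForTransform(world_link, sensor_frame, stamp, rospy.Duration(1.0))
(trans, rot) = listener.lookupTransform(world_link, sensor_frame, stamp)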

Transforms dict should be within the collections key value

This change implies an adaptation of collect_and_label_data.py, and also of main.py, objective_function.py, results_visualization.py and getter_and_setters.py from OptimizationUtils/test/sensor_pose_json_v2, as well as stereocalib_v2.py. Anywhere else?
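
For illustration, a minimal sketch of the proposed structure (the key names are assumptions based on this discussion):

# Proposed: nest the transforms under each collection instead of at the top level
dataset = {
    'collections': {
        '0': {
            'data': {},        # sensor messages for this collection
            'labels': {},      # detections for this collection
            'transforms': {},  # transforms captured for this collection
        },
    },
}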

Dangerous piece of code

class DataCollectorAndLabeler:
    def __init__(self, world_link, output_folder, server, menu_handler, marker_size, chess_numx, chess_numy):
        if os.path.exists(output_folder):
            shutil.rmtree(output_folder)  # Delete old folder

What would you do if a third party software did this on your computer without explicit permission? 🤔
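
A safer pattern, as a minimal sketch (the function name is illustrative): ask for explicit permission before destroying anything.

import os
import shutil

def prepare_output_folder(output_folder):
    # Never delete silently: ask the user for confirmation first
    if os.path.exists(output_folder):
        answer = input('Folder ' + output_folder + ' exists. Delete it? (y/n): ')
        if answer.lower() != 'y':
            raise SystemExit('Aborted by user.')
        shutil.rmtree(output_folder)
    os.makedirs(output_folder)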

Sensors Pose First Guess not loaded

If we launch roslaunch interactive_calibration atlascar2_calibration.launch and then rosrun interactive_calibration create_first_guess.py -w base_link, we can move the sensors freely and directly see the changes in the point clouds and the images (as expected).

Then we can save our best by-eye configuration for the sensor poses. The problem is that when I try to re-run the launch file with the read_first_guess argument, rviz opens and the sensor poses are exactly the same as in the original urdf robot description.
I found that the problem is related to the bag file that is running, because if I test this first-guess process without running any bag file it all goes well...
I think the bag must be publishing some tfs for the sensor poses, but I am not sure of this. @miguelriemoliveira, can you help me figure out what is wrong here?
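
One way to check this hypothesis, as a minimal sketch (the bag filename is a placeholder): print the frames the bag publishes on /tf.

import rosbag

# If the sensor frames show up here, the bag's transforms are
# overriding the saved first guess
bag = rosbag.Bag('mybag.bag')
for topic, msg, t in bag.read_messages(topics=['/tf']):
    for transform in msg.transforms:
        print(transform.header.frame_id, '->', transform.child_frame_id)
    break  # the first message is enough to see which frames are published
bag.close()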

rviz crashes when changing desktops

When running

roslaunch interactive_marker_test pr2_calibration.launch

whenever I change desktops, rviz crashes with the following error:

terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::lock_error> >'
  what():  boost: mutex lock failed in pthread_mutex_lock: Invalid argument
================================================================================REQUIRED process [rviz-3] has died!
process has died [pid 13431, exit code -6, cmd /opt/ros/melodic/lib/rviz/rviz -d /home/mike/catkin_ws/src/AtlasCarCalibration/interactive_marker_test/calibrations/pr2/config.rviz __name:=rviz __log:=/home/mike/.ros/log/c227165a-7bcb-11e9-ba2d-0028f80a63ec/rviz-3.log].
log file: /home/mike/.ros/log/c227165a-7bcb-11e9-ba2d-0028f80a63ec/rviz-3*.log
Initiating shutdown!
================================================================================

Lidar data labelling

Hi @miguelriemoliveira ,
I tried labelling the lidar data as you asked and it worked! For now, the labeller allows NaNs in the range measurements, but we have the problem of only recording some of the ranges among all the important points. More than that, the size of the collected cluster changes with every capture and for each laser...

Sensors' first-guess orientation isn't correctly saved

for joint in robot.joints:
    for sensor in robot.sensors:
        if sensor.parent == joint.child:
            optimization_parent_link = joint.parent
            for mp in marker_poses:
                (trans, rot) = listener2.lookupTransform(optimization_parent_link, mp.child_frame_id, rospy.Time(0))
                if joint.child + "_first_guess" == mp.child_frame_id:
                    joint.origin.xyz[0] = trans[0]
                    joint.origin.xyz[1] = trans[1]
                    joint.origin.xyz[2] = trans[2]
                    joint.origin.rpy[0] = rot[0]
                    joint.origin.rpy[1] = rot[1]
                    joint.origin.rpy[2] = rot[2]
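
A likely cause: lookupTransform returns the rotation as a quaternion (x, y, z, w), so assigning its components directly to joint.origin.rpy stores the wrong orientation. A minimal sketch of a possible fix (not necessarily the one applied), converting to Euler angles first:

from tf.transformations import euler_from_quaternion

(trans, rot) = listener2.lookupTransform(optimization_parent_link, mp.child_frame_id, rospy.Time(0))
# rot is a quaternion; convert it before writing into the URDF origin
(roll, pitch, yaw) = euler_from_quaternion(rot)
joint.origin.xyz = list(trans)
joint.origin.rpy = [roll, pitch, yaw]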

Reset marker poses

Hi @afonsocastro,

we should have an additional option in the rviz menu to reset all marker poses to their original positions. Can you implement it?
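
A minimal sketch of how this could look with interactive_markers (server and an initial_poses dict, mapping marker names to their original poses, are assumed to exist):

from interactive_markers.menu_handler import MenuHandler

menu_handler = MenuHandler()

def on_reset(feedback):
    # Restore every marker to its original pose and push the update to rviz
    for name, pose in initial_poses.items():
        server.setPose(name, pose)
    server.applyChanges()

menu_handler.insert('Reset marker poses', callback=on_reset)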

Error parsing PR2 description

Hi @afonsocastro ,

I have an error when I launch create_first_guess.py using the PR2.

Unknown attribute "type" in /robot[@name='pr2']/link[@name='base_laser_link']
Unknown attribute "name" in /robot[@name='pr2']/link[@name='torso_lift_link']/collision[1]
Unknown tag "simulated_actuated_joint" in /robot[@name='pr2']/transmission[@name='torso_lift_trans']
Unknown attribute "type" in /robot[@name='pr2']/link[@name='wide_stereo_optical_frame']
Unknown attribute "type" in /robot[@name='pr2']/link[@name='narrow_stereo_optical_frame']
Unknown attribute "type" in /robot[@name='pr2']/link[@name='laser_tilt_link']
Unknown tag "compensator" in /robot[@name='pr2']/transmission[@name='r_shoulder_pan_trans']
Unknown tag "compensator" in /robot[@name='pr2']/transmission[@name='r_shoulder_lift_trans']
Traceback (most recent call last):
  File "/home/mike/catkin_ws/src/AtlasCarCalibration/interactive_marker_test/src/create_first_guess.py", line 102, in <module>
    xml_robot = URDF.from_parameter_server()
  File "/home/mike/catkin_ws/src/urdf_parser_py/src/urdf_parser_py/urdf.py", line 667, in from_parameter_server
    return cls.from_xml_string(rospy.get_param(key))
  File "/home/mike/catkin_ws/src/urdf_parser_py/src/urdf_parser_py/xml_reflection/core.py", line 612, in from_xml_string

Do you have the same problem?

New command line argument to show images for each camera

Hi @afonsocastro @tiagomfmadeira @eupedrosa

for large optimizations (e.g. 30+ collections) it is not possible to show all camera images. For that reason, I added an argument -si or --show_images to the OptimizationUtils sensor_pose_v2/main.py, which must be given if you want to see the images.

Example:

test/sensor_pose_json_v2/main.py -json ~/datasets/calib_bad_first_guess/data_collected.json -cradius .5 -csize 0.101 -cnumx 9 -cnumy 6 -vo -si

Note: the default behavior now is not to show the images...
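
For reference, a minimal sketch of such a flag with argparse (the -si/--show_images names come from the issue; the rest is illustrative):

import argparse

ap = argparse.ArgumentParser()
ap.add_argument('-json', help='Json file containing the collected data', type=str, required=True)
ap.add_argument('-si', '--show_images', help='Show images of each camera', action='store_true')
args = vars(ap.parse_args())

if args['show_images']:
    pass  # only draw the per-camera detection images when requested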

Devise initial camera to camera optimization

Hi @afonsocastro ,

I think we are now very close to the point where we will start with the optimizations.

So you have a new task, which is to study the examples in

https://github.com/miguelriemoliveira/OptimizationUtils/tree/master/test

in particular these ones

simple optimization that changes image colors
https://github.com/miguelriemoliveira/OptimizationUtils/blob/master/test/global_color_balancing_oc_dataset.py

camera to N cameras optimization
https://github.com/miguelriemoliveira/OptimizationUtils/blob/master/test/camera_pose_oc_dataset.py

So start studying them and try to draw up an ordered list of the steps you must take to implement the optimization in your case.
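
To make the idea concrete, here is a minimal sketch of this kind of optimization (not ATOM's actual code; K, pts3d, pts2d and x0 are assumed inputs): minimize the chessboard reprojection error of a single camera pose.

import cv2
from scipy.optimize import least_squares

def residuals(x, K, pts3d, pts2d):
    # x packs the camera pose as a rotation vector and a translation
    rvec, tvec = x[:3], x[3:]
    proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
    return (proj.reshape(-1, 2) - pts2d).ravel()

# result = least_squares(residuals, x0, args=(K, pts3d, pts2d))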
