umich-bipedlab / extrinsic_lidar_camera_calibration

This is a package for extrinsic calibration between a 3D LiDAR and a camera, described in the paper Improvements to Target-Based 3D LiDAR to Camera Calibration. This package is used for Cassie Blue's 3D LiDAR semantic mapping and automation.

Home Page: https://ieeexplore.ieee.org/document/9145571

License: GNU Affero General Public License v3.0

Languages: MATLAB 99.79%, M 0.21%
Topics: lidar, 3d-lidar, lidar-points, camera-calibration, point-cloud, camera-image, transformation, lidar-image, calibration, calibration-toolbox

extrinsic_lidar_camera_calibration's Introduction

extrinsic_lidar_camera_calibration

[Release Note July 2020] This work has been accepted by IEEE Access and has been uploaded to arXiv.

[Release Note March 2020] This is the new master branch as of March 2020. The current master branch supports the revised version of the arXiv paper. The original master branch from Oct 2019 to March 2020 has been moved to the v1-2019 branch, and it supports the functions associated with the first version of the Extrinsic Calibration paper that we placed on arXiv. Please be aware that some functions in the older branch have been removed from the current master branch.

Overview

This is a package for extrinsic calibration between a 3D LiDAR and a camera, described in the paper Improvements to Target-Based 3D LiDAR to Camera Calibration (PDF). We evaluated our proposed methods and compared them with other approaches in a round-robin validation study, including qualitative and quantitative results, where we use image corners as ground truth to evaluate projection accuracy.

  • Authors: Bruce JK Huang and Jessy W. Grizzle
  • Maintainer: Bruce JK Huang, brucejkh[at]gmail.com
  • Affiliation: The Biped Lab, the University of Michigan

This package has been tested under MATLAB 2019a and Ubuntu 16.04.

[Issues] If you encounter any issues, I would be happy to help. If you cannot find a related one in the existing issues, please open a new one. I will try my best to help!

[Super Super Quick Start] Just to see the results, please clone this repo, download the processed/optimized data into the load_all_vertices folder, change path.load_dir to the load_all_vertices folder in justCalibrate.m, and then hit run!

[Super Quick Start] If you would like to see how the LiDAR vertices are optimized, please place the test datasets in the folders described in the Dataset section, change the two paths (path.bag_file_path and path.mat_file_path) in justCalibrate.m, and then hit run!

[Developers and Calibrators] Please follow the more detailed instructions below.

Abstract

The rigid-body transformation between a LiDAR and monocular camera is required for sensor fusion tasks, such as SLAM. While determining such a transformation is not considered glamorous in any sense of the word, it is nonetheless crucial for many modern autonomous systems. Indeed, an error of a few degrees in rotation or a few percent in translation can lead to 20 cm reprojection errors at a distance of 5 m when overlaying a LiDAR image on a camera image. The biggest impediments to determining the transformation accurately are the relative sparsity of LiDAR point clouds and systematic errors in their distance measurements. This paper proposes (1) the use of targets of known dimension and geometry to ameliorate target pose estimation in the face of the quantization and systematic errors inherent in a LiDAR image of a target, (2) a fitting method for the LiDAR to monocular camera transformation that avoids the tedious task of target edge extraction from the point cloud, and (3) a “cross-validation study” based on projection of the 3D LiDAR target vertices to the corresponding corners in the camera image. The end result is a 50% reduction in projection error and a 70% reduction in its variance.

Performance

This is a short summary from the paper; see the PDF for more detail. The table below compares the mean and standard deviation of the baseline and our proposed methods as a function of the number of targets used in training. Units are pixels per corner.

Number of targets                            2         4         6         8
Baseline (previous state-of-the-art), mean   10.3773   4.9645    4.3789    3.9940
Proposed method - PnP, mean                  3.8523    1.8939    1.6817    1.7547
Proposed method - IoU, mean                  4.9019    2.2442    1.7631    1.7837
Baseline (previous state-of-the-art), std    7.0887    1.9532    1.7771    2.0467
Proposed method - PnP, std                   2.4155    0.5609    0.5516    0.5419
Proposed method - IoU, std                   2.5060    0.7162    0.5070    0.4566

Application Videos

The 3D-LiDAR maps shown in the videos used this package to calibrate the LiDAR to the camera (i.e., to obtain the transformation between the LiDAR and the camera). Briefly, we project point clouds from the LiDAR onto the semantically labeled images using the obtained transformation and then associate each point with a label to build the 3D LiDAR semantic map.
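
As a rough illustration of this projection step, below is a minimal MATLAB sketch. It assumes a 4x4 homogeneous LiDAR-to-camera transformation H_LC, a 3x3 intrinsic matrix K, an N x 3 point cloud pc, and a semantic label image labels; these variable names are placeholders, not the ones used in this package.

    % Minimal sketch (placeholder variable names): project LiDAR points into a labeled image.
    % pc     : N x 3 LiDAR points in the LiDAR frame
    % H_LC   : 4 x 4 homogeneous LiDAR-to-camera transformation (estimated extrinsics)
    % K      : 3 x 3 camera intrinsic matrix
    % labels : H x W semantic label image
    pts_h   = [pc, ones(size(pc, 1), 1)]';          % 4 x N homogeneous points
    pts_cam = H_LC * pts_h;                         % points expressed in the camera frame
    front   = pts_cam(3, :) > 0;                    % keep points in front of the camera
    uv_h    = K * pts_cam(1:3, front);              % homogeneous pixel coordinates
    uv      = round(uv_h(1:2, :) ./ uv_h(3, :));    % pixel coordinates (u; v)
    valid   = uv(1, :) >= 1 & uv(1, :) <= size(labels, 2) & ...
              uv(2, :) >= 1 & uv(2, :) <= size(labels, 1);
    idx     = sub2ind(size(labels), uv(2, valid), uv(1, valid));
    point_labels = labels(idx);                     % semantic label for each projected point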

Halloween Edition: Cassie Autonomy

Autonomous Navigation and 3D Semantic Mapping on Bipedal Robot Cassie Blue (Shorter Version)

Autonomous Navigation and 3D Semantic Mapping on Bipedal Robot Cassie Blue (Longer Version)

Quick View

Using the obtained transformation, LiDAR points are mapped onto a semantically segmented image, and each point is associated with the label of the pixel it lands on. The road is marked in white; static objects such as buildings in orange; the grass in yellow-green; and dark green indicates trees.

Why important?

A calibration result is not usable if it has even a few degrees of rotation error or a few percent of translation error. The image below shows how a calibration result with only a small disturbance differs from the well-aligned image.

Presentation and Video

https://www.brucerobot.com/calibration

Calibration Targets

Any square target of known dimensions will work. We use fiducial tags that can be detected from both LiDARs and cameras. Physically, they are the same tags; however, when a tag is detected by a LiDAR we call it a LiDARTag, and when it is detected by a camera it is called an AprilTag. Please check out this link to download the target images. If you use these targets as your LiDAR targets, please cite

@article{huang2019lidartag,
  title={LiDARTag: A Real-Time Fiducial Tag using Point Clouds},
  author={Huang, Jiunn-Kai and Ghaffari, Maani and Hartley, Ross and Gan, Lu and Eustice, Ryan M and Grizzle, Jessy W},
  journal={arXiv preprint arXiv:1908.10349},
  year={2019}
}

note: You can place any number of targets of different sizes in different datasets.

Installation

  • Toolboxes used in this package:
    • MATLAB 2019a
    • optimization_toolbox
    • phased_array_system_toolbox
    • robotics_system_toolbox
    • signal_blocks
  • Dataset: download from here.

Dataset

Please download optimized LiDAR vertices from here and put them into ALL_LiDAR_vertices folder.

Please download point cloud mat files from here and put them into LiDARTag_data folder.

Please download bagfiles from here and put them into bagfiles folder.

Running

[Super Super Quick Start] Just to see the results, please clone this repo, download the processed/optimized data into the load_all_vertices folder, change path.load_dir to the load_all_vertices folder in justCalibrate.m, and then hit run!

[Super Quick Start] If you would like to see how the LiDAR vertices are optimized, please place the test datasets in the folders described in the Dataset section, change the two paths (path.bag_file_path and path.mat_file_path) in justCalibrate.m, and then hit run!
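
For reference, here is a hedged sketch of the relevant path settings in justCalibrate.m; the exact layout of the released script may differ, and the directory values shown are placeholders.

    % Sketch of the path settings in justCalibrate.m (placeholder values).
    % [Super Super Quick Start]: point load_dir at the downloaded processed/optimized data.
    path.load_dir      = "./load_all_vertices/";
    % [Super Quick Start]: point these at your local copies of the test datasets.
    path.bag_file_path = "./bagfiles/";        % downloaded bagfiles
    path.mat_file_path = "./LiDARTag_data/";   % point cloud mat files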

[Calibrators]

  • Please first try the [Super Super Quick Start] section to ensure you can run this code.
  • Use the justCalibrate.m file.
  • Find your camera intrinsic matrix and enter it in justCalibrate.m.
  • Provide an initial guess for the LiDAR-to-camera transformation.
  • Edit trained_ids and skip_indices (the ids come from getBagData.m); a configuration sketch follows this list.
  • If you have additional validation datasets (containing targets), set validation_flag to 1 and add the related information to getBagData.m.
  • Place several square boards with known dimensions. When placing the boards, make sure the left corner is taller than the right corner. We use fiducial tags that can be detected from both LiDARs and cameras. Physically, they are the same tags; however, when a tag is detected by a LiDAR we call it a LiDARTag, and when it is detected by a camera it is called an AprilTag. Please check out this link to download the target images. If you use these targets as your LiDAR targets, please cite
@article{huang2019lidartag,
  title={LiDARTag: A Real-Time Fiducial Tag using Point Clouds},
  author={Huang, Jiunn-Kai and Ghaffari, Maani and Hartley, Ross and Gan, Lu and Eustice, Ryan M and Grizzle, Jessy W},
  journal={arXiv preprint arXiv:1908.10349},
  year={2019}
}
  • Use your favorite method to extract the corners of the camera targets and then write them in getBagData.m. When writing the corners, please follow the top-left-right-bottom order.
  • Given the point patches of the LiDAR targets, save them into .mat files and also list them in getBagData.m. Please make sure you have correctly matched each lidar_target with its camera_target.
  • If you have trouble extracting patches of LiDAR targets, or converting bagfiles to mat files, I have also provided a Python script to convert a bagfile to a mat file and extract patches. Please check out bag2mat.py.
  • RUN justCalibrate.m! That's it!
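
To make the steps above concrete, here is a hedged configuration sketch for justCalibrate.m. The variable names follow the items above, but treat everything shown (the intrinsic values, the initial-guess variable names, the dataset ids) as placeholders rather than the exact code in the released script.

    % Hypothetical calibrator settings in justCalibrate.m (placeholder names and values).
    fx = 600; fy = 600; cx = 320; cy = 240;    % placeholder intrinsics; use your own camera calibration
    intrinsic_matrix = [fx,  0, cx;
                         0, fy, cy;
                         0,  0,  1];
    % Rough initial guess for the LiDAR-to-camera transformation
    initial_guess_rotation    = [90, 0, 90];   % Euler angles in degrees (placeholder)
    initial_guess_translation = [0, 0, 0];     % meters (placeholder)
    % Dataset ids (from getBagData.m) used for training, and ids to skip
    trained_ids  = [1];
    skip_indices = [];
    % Set to 1 if additional validation datasets (containing targets) are listed in getBagData.m
    validation_flag = 0;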

note: You can place any number of targets of different sizes in different datasets.

[Developers] Please download all datasets if you would like to play around.

[Dataset structure] Put ALL dataset information into getBagData.m. This function returns two data structures, TestData and BagData (a sketch of each follows the list below).

  • TestData contains bagfile and pc_file, where bagfile is the name of the bagfile and pc_file is the mat file of a FULL scan of the point cloud.
  • BagData contains:
    • bagfile: name of the bagfile
    • num_tag: how many tags are in this dataset
    • lidar_target
      • pc_file: the name of the mat file of this target's point cloud
      • tag_size: size of this target
    • camera_target
      • corners: corner coordinates of the camera targets
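
As an illustration, a minimal sketch of one entry of each structure is shown below; the file names, tag size, and corner coordinates are placeholders, and only the fields listed above are shown.

    % Hypothetical entries in getBagData.m (placeholder names and values).
    TestData(1).bagfile = "my_scene.bag";               % name of the bagfile
    TestData(1).pc_file = "my_scene_full_scan.mat";     % mat file of a FULL point cloud scan

    BagData(1).bagfile = "my_scene.bag";                % name of the bagfile
    BagData(1).num_tag = 1;                             % number of tags in this dataset
    BagData(1).lidar_target(1).pc_file  = 'my_scene_tag1.mat';  % point cloud patch of this target
    BagData(1).lidar_target(1).tag_size = 0.8;                  % target size in meters (placeholder)
    BagData(1).camera_target(1).corners = [120,  80, 160, 120;  % pixel u: top, left, right, bottom corners
                                           200, 250, 260, 310;  % pixel v
                                             1,   1,   1,   1]; % homogeneous coordinates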

Qualitative results

For the method GL_1-R trained on S_1, the LiDAR point cloud has been projected into the image plane for the other data sets and marked in green. The red circles highlight various poles, door edges, desk legs, monitors, and sidewalk curbs where the quality of the alignment can be best judged. The reader may find other areas of interest. Enlarge in your browser for best viewing.

Quantitative results

For the method GL_1-R, five sets of estimated LiDAR vertices for each target have been projected into the image plane and marked in green, while the target's point cloud has been marked in red. Blowing up the image allows the numbers reported in the table to be visualized. The vertices are key.

Citations

The details are described in: Jiunn-Kai Huang and J. Grizzle, "Improvements to Target-Based 3D LiDAR to Camera Calibration" (PDF) (arXiv)

@article{huang2020improvements,
  author={J. {Huang} and J. W. {Grizzle}},
  journal={IEEE Access},
  title={Improvements to Target-Based 3D LiDAR to Camera Calibration},
  year={2020},
  volume={8},
  pages={134101-134110}
}

If you use LiDARTag as your LiDAR targets, please cite

@article{huang2019lidartag,
  title={LiDARTag: A Real-Time Fiducial Tag using Point Clouds},
  author={Huang, Jiunn-Kai and Ghaffari, Maani and Hartley, Ross and Gan, Lu and Eustice, Ryan M and Grizzle, Jessy W},
  journal={arXiv preprint arXiv:1908.10349},
  year={2019}
}

extrinsic_lidar_camera_calibration's People

Contributors

brucejk

extrinsic_lidar_camera_calibration's Issues

What to put in "ALL_LiDAR_vertices"

I have tried to follow the instructions for calibrators closely, but I came across errors when running justCalibrate.m.
I used your bagfile lab3-closer-cleaner.bag, changed the name to lab3.bag, and:

  1. I used your pixel coordinates of corners from your original getBagData.m
  2. filtered out all the lidar points so that only the big target remained. Then using your Python script, I got the mat file of the big target, which I placed into the LiDARTag_data folder as lab3_big-2020-04-08-21-20.mat
  3. the same as above for the small target (lab3_small-2020-04-09-14-04.mat).
  4. without filtering, using your Python script I got the mat file of the velodyne_points (lab3_full.mat) and placed it into the LiDARTag_data folder

The getBagData.m looks like this:

     BagData(1).bagfile = "lab3.bag";
     BagData(1).num_tag = 2;
     BagData(1).lidar_target(1).pc_file = 'lab3_big-2020-04-08-21-20.mat';
     BagData(1).lidar_target(1).tag_size = 0.8051;
     BagData(1).lidar_target(2).pc_file = 'lab3_small-2020-04-09-14-04.mat';
     BagData(1).lidar_target(2).tag_size = 0.158;
     BagData(1).lidar_full_scan = "lab3_full.mat";
     BagData(1).camera_target(2).corners = [200, 157, 223, 180;
                                    251, 275, 292, 315;
                                    1, 1, 1, 1];
    BagData(1).camera_target(1).corners = [333, 248, 418, 328;
                                   239, 322, 328, 416;
                                   1, 1, 1, 1]; 


    TestData(1).bagfile = "EECS3.bag";
    TestData(1).pc_file = "velodyne_points-EECS3--2019-09-06-06-19.mat";

and I changed the justCalibrate.m:

trained_ids = [1];
skip_indices = [];

I get the following error:

Refining corners of camera targets ...
********************************************
 Chosen dataset
********************************************
-- Skipped: 
-- Training set: 
lab3.bag
-- Validation set: 
-- Chosen set: 
lab3.bag
********************************************
 Chosen parameters
********************************************
-- validation flag: 0 
-- number of training set: 1
-- number of validation set: 0
-- number of refinement: 5
-- number of LiDARTag's poses: 1
-- number of scan to optimize a LiDARTag pose: 5
********************************************
 Optimizing LiDAR Target Corners
********************************************
Working on lab3.bag -->----Tag 1/2Error using load
Unable to read file 'ALL_LiDAR_vertices/lab3_1__all_scan_corners.mat'. No such file or directory.

Error in getAll4CornersReturnHLT (line 129)
        load(path.load_all_vertices + extractBetween(bag_data.bagfile,"",".bag") + '_' + tag_num + '_' +
        '_all_scan_corners.mat');

Error in justCalibrate_my_system (line 396)
                [BagData(current_index), H_LT] = getAll4CornersReturnHLT(j, opt, ...

There is indeed no lab3_1__all_scan_corners.mat file in the ALL_LiDAR_vertices folder. Have I missed something in the instructions?

Using LiDARS without Rings

Hi!

I noticed that there is a .m file called "splitPointsBasedOnRing". If I am using a LiDAR that does not have rings, does this package still work? I can get the point cloud data from my LiDAR - is that all I will need?

Thanks!

pc_file data structure

Hello,
Firstly I appreciate your great work!

I wonder how to prepare pc_files for the LiDAR tags.

As I understand it, before calibrating I need to segment the square target from the LiDAR point cloud, so I did that with the MATLAB point cloud library and saved it as a mat file.

The pc_file I made with MATLAB has only X, Y, Z, and Intensity values, shaped as a 1x1 pointCloud struct.

But this code needs point cloud data shaped as "[scan, point, [X, Y, Z, I, R]]".

How did you make the LiDARTag pc_file.mat from the raw Velodyne point cloud?

Thank you in advance

How to collect data on my lidar_camera system?

hello,
A beautiful tool for calibration.

I want to calibrate my own system. I read in the paper:

For each scene, we collect approximately 10 s of synchronized data, resulting in approximately
100 pairs of scans and images.

Does that mean only 10 s of synchronized data is needed for calibration? Do I need to change the scene?

Thanks a lot!

Does the extrinsic calibration at short distance impact the results at larger distance?

Hello. Thanks for the great package once again!
I know that to get a better estimate of the camera-LiDAR extrinsic matrix I need to optimize over more targets. However, how does the distance to the targets impact the calibration for objects standing farther away than the calibration range? For example, if I used many targets only at 2-4 meters with your method, how would that affect the projection of LiDAR points of very distant objects onto the image? Would they still be as well aligned as objects at 2-4 meters?

calibration of fisheye and Lidar

The fisheye's FOV is 180 degrees, and we plan to replace it with a 197-degree camera later. I wonder if this method would still work with such a large FOV? Your prompt reply would be very much appreciated.

Compatibility with Octave

Hello,
Thanks for the code.

I'm wondering if it is also compatible with Octave.
I ran the code with the provided data in Octave and couldn't get any results.

Is LiDARTag used in it?

Hi, thanks for your share of this work.

I tried this repository before. The README of this repository and the LiDARTag paper say that LiDARTag is used in the calibration tool, but I didn't find a LiDARTag detection module in it. So is LiDARTag used here? If so, can you point out where it is used?

Thank you again.

Some questions about formulas

Hi, this work is really good.

I have been paying attention to it since the first version of this work was released.
When your second version was released, I carefully read your paper and code. I have some questions about the formulas that I would like to ask you.

  1. In formula 4 of the second version of your paper, I think the formula H_LT should be H_TL (i.e., the target-to-LiDAR transformation) to be consistent with your code.

  2. I wonder why the Refinement of the LiDAR Target Vertices was removed in the second version of your paper. I think that part is very reasonable and practical.

  3. In the code of the refinement process, I think the calculation of X_at_lidar_frame is not right; it should be X_at_lidar_frame = inv(H_LT) * inv(H_LC) * X, because L_X_transformed = H_LC * X, so X_at_lidar_frame = inv(H_LT) * inv(H_LC) * L_X_transformed, namely X_at_lidar_frame = inv(H_LT) * X.
    I don't think that's what you want in formula 16 of your v1 paper.

Regarding these questions, I want to know your opinions.

Is checkerboard valid target

Is a checkerboard a valid calibration target without any code changes? Does the size of the checkerboard matter, and are code changes needed if it differs from your paper?

Lidar camera calibration

Hi!

Thank you for the great calibration package. I collected a 50-second rosbag at a 5 Hz frame rate for both the camera and the LiDAR, and there are two LiDARTags in the scene. This is my calibration result, with an SNR_RMSE larger than 600. (Because the LiDAR I'm using does not have rings, I have commented out all the parts related to the baseline and NSNR.)

[image: result]

I tried the IoU calibration method. It seems to match the left tag in the image with the right tag in the point cloud.

[image: result2]

I also tried translating the point cloud with an initial guess to match the origin of the camera frame with the LiDAR frame (since the initial guess in justCalibrate.m only has the rotation part), and trained twice on the same bag I have. This is the result I got, with an SNR_RMSE of 579.52.

[image: result3]

It seems close but based on the RPY I'm getting, the z-axis of the camera is pointing backward (which should be pointing forward).

These are the results of the vertices in the image:

[image: image3]

[image: edge3]

  • I'm wondering, based on these results, what do you think could be the reason for this inaccurate result? Would training on more bags help improve it?
  • For the initial guess in line 43 of justCalibrate.m, is it the Euler angles from the LiDAR frame to the camera frame? For example, for the following LiDAR frame and camera frame (the one below is the LiDAR frame; RGB represents XYZ), should the initial guess be [90, 0, 90] degrees?

[image: Screenshot from 2020-08-29 19-58-01]

Different hardware

Hi! I would like to ask how to use different hardware with your code. Namely, I have a different LiDAR (VLP-16) and a different camera (Ximea). How can I adjust your code to use this hardware in the most efficient way?

LiDAR camera data collection

Hi,

I am a bit confused about these instructions in #5:

"To extract corners of camera targets in this repo, we manually click on the corners and write them down in the getBagData.m. This package will refine the clicked corners automatically. When you write down the corners, please follow the top-left-right-bottom order."

What do you mean by manually click on the corners? Also, what do the numbers for these corners in getBagData.m refer to? Do you just have an image and then find the pixel locations? Further information would be great. Also, regarding the order of writing down the corners: is that after they are rotated such that the top left is taller than the right corner? So after this rotation, would the first corner be top left, the second top right, the third bottom left, and the fourth bottom right?

Thanks!

Unable to read file 'ALL_LiDAR_vertices/lab4-closer-cleaner_1__all_scan_refined_corners.mat'. No such file or directory.

Hi!
I followed the instructions in "installation", "dataset", and then "Running [Super Super Quick Start]". When I click run in justCalibrate.m, it says:

Unable to read file 'ALL_LiDAR_vertices/lab4-closer-cleaner_1__all_scan_refined_corners.mat'. No such file or directory.

Error in getAll4CornersReturnHLT (line 249)
        load(path.load_all_vertices + extractBetween(bag_data.bagfile,"",".bag") + '_' + tag_num + '_' + path.event_name +
        '_all_scan_refined_corners.mat');

Error in justCalibrate (line 397)
                [BagData(current_index), H_LT] = getAll4CornersReturnHLT(j, opt, ...
 

And indeed there is no such file in the folder.
Any help would be appreciated.
