Comments (9)
Hi,
Thank you for using our repo! Regarding your title, which corners are you talking about? Are they camera corners or LiDAR vertices? For the camera corners, you need to click on the image and find the coordinates. For the LiDAR vertices, this package will optimize them for you; setting opts.optimizeAllCorners = 1 will do the job. Please also follow the instructions here when you calibrate your system.
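If it helps, enabling the LiDAR-vertex optimization is a one-line change in the options struct (a minimal sketch; only opts.optimizeAllCorners = 1 is confirmed above, the comments describe my understanding of its effect):

```matlab
% Set before running the calibration script.
opts.optimizeAllCorners = 1;  % 1: let the package optimize the LiDAR vertices for you
                              % 0: use the vertices as given
```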
Please let me know if you have other problems!
from extrinsic_lidar_camera_calibration.
Do we need the ALL_LiDAR_vertices folder when we use our own dataset? And how can I create a dataset like the one you provided?
Hi,
Please follow the instructions here to collect your datasets. You don't need the ALL_LiDAR_vertices folder for your own datasets at first; the software will create one and save the LiDAR vertices for you. Please let me know if you have other questions!
OK! Thanks again. I find that the .mat files have various types, and I would appreciate it if you could tell me how to create .mat files such as 'full-pc-.mat' and 'velodyne_points-EECS3--2019-09-06-06-19.mat', because I don't know what data is in those .mat files. I have already got the .mat files like 'big/med/small/-.mat' by using bag2mat.py.
Hi,
That's great that you already used bag2mat.py to convert the data! Please take a look at getBagData.m. There are two types of data:
I) TestData is for testing/visualization and does not contain calibration targets. Take TestData(1) for example:
TestData(1).bagfile = "EECS3.bag";
--> The bagfile you collected for a testing scene.
TestData(1).pc_file = "velodyne_points-EECS3--2019-09-06-06-19.mat";
--> The full point cloud extracted from the bagfile using bag2mat.py.
II) BagData is for training and validation and does contain calibration targets. You need to know how many calibration targets are in the scene and the size of each target. For each target, we need the LiDAR returns on the target, i.e., a patch of the LiDAR point cloud on the target; this package will use the patch to estimate the LiDAR vertices for you. We also need the corner coordinates on the image, in the order top-left-right-bottom.
P.S. If you only need to do the calibration once, using bag2mat.py might be faster, but if you plan to do it many times, it is recommended to use the LiDARTag package to extract the returns.
Take BagData(2) for example: it contains two calibration targets and the bagfile collected for the scene. You will have the following information:
Scene information
BagData(2).bagfile = "lab2-closer.bag";
--> The bagfile you collected for a calibration/validation scene.
BagData(2).num_tag = 2;
--> How many calibration targets are in the scene.
BagData(2).lidar_full_scan = "velodyne_points-lab2-full-pc--2019-09-05-23-20.mat";
--> The full scan of the scene's point cloud, extracted by bag2mat.py or the LiDARTag package.
First target
BagData(2).lidar_target(1).pc_file = 'velodyne_points-lab2-closer-big--2019-09-05-21-51.mat';
--> LiDAR returns on the first calibration target. You can use LiDARTag package to extract the LiDAR returns.
BagData(2).lidar_target(1).tag_size = 0.8051;
--> The size of the first calibration target
BagData(2).camera_target(1).corners = [340, 263, 406, 316; 236, 313, 341, 417; 1, 1, 1, 1];
--> Image (pixel) coordinates of the first calibration target, given as a 3x4 homogeneous matrix [u; v; 1] with columns in the order top, left, right, bottom.
Second target
BagData(2).lidar_target(2).pc_file = 'velodyne_points-lab2-closer-small--2019-09-05-21-53.mat';
--> LiDAR returns on the second calibration target. You can use the LiDARTag package to extract the LiDAR returns.
BagData(2).lidar_target(2).tag_size = 0.158;
--> The size of the second calibration target
BagData(2).camera_target(2).corners = [197, 153, 220, 176; 250, 273, 292, 315; 1, 1, 1, 1];
--> Image coordinates of the second calibration target, in the same 3x4 format as the first.
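Putting the pieces together, a new entry in getBagData.m for your own scene might look like the sketch below. All file names are placeholders you must replace with your own data; the tag size and pixel coordinates are copied from the example above only for illustration:

```matlab
% Hypothetical BagData entry for your own scene with one calibration target.
BagData(3).bagfile = "my-scene.bag";                  % bagfile of the calibration scene
BagData(3).num_tag = 1;                               % number of targets in the scene
BagData(3).lidar_full_scan = "my-scene-full-pc.mat";  % full scan from bag2mat.py or LiDARTag

BagData(3).lidar_target(1).pc_file = 'my-scene-target1.mat';  % LiDAR returns on the target
BagData(3).lidar_target(1).tag_size = 0.8051;         % size of the target (value of the 'big' target above)
BagData(3).camera_target(1).corners = [340, 263, 406, 316;    % u (pixel x): top, left, right, bottom
                                       236, 313, 341, 417;    % v (pixel y): top, left, right, bottom
                                         1,   1,   1,   1];   % homogeneous ones
```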
Hi,
I am going to close this issue. Please feel free to reopen it if you encounter related issues.
Sorry for the late reply; I was busy with other things over the past few days. I used the bag2mat.py you provided to extract the .mat files, but the files I extracted are not exactly the same as the ones you provided: most are the same, and only a few differ. I don't understand the reason; if you can help me, I would be very grateful.
Hi,
That depends on how you extract the point cloud. If you include more points when you crop the patch of the point cloud, the result will be different.