prbonn / overlapnet
OverlapNet - Loop Closing for 3D LiDAR-based SLAM (chen2020rss)
License: MIT License
Dear authors,
Thank you for your work!
I'm trying to reproduce the results reported in the paper using the provided pre-trained model.
I generated the preprocessed data and the ground truth using the demo1 and demo4 scripts, and I'm testing the network with testing.py.
Since the pre-trained model uses only depth and normals, I expected to obtain a mean rotation error of ~2.97° (as reported in Table V).
However, I'm getting a mean error of 8.90°.
PS: To generate the ground truth I'm using the poses provided with the SemanticKITTI dataset.
How do you generate the train_set for each sequence?
For example, sequence 00 contains 4541 scans, so the total number of pairwise overlap ratios is 4541 × 4541 = 20,620,681.
In your split_train_val.py, the training/validation split is 0.9/0.1, so the number of training samples would be 20,620,681 × 0.9 ≈ 18,558,613.
But in your train_set for sequence 00, the overlaps array has size 90738 × 3.
How is this data obtained? I only see the data structure, not the generation script. :(
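For what it's worth, the released ground-truth files are far smaller than N × N presumably because most of the 20.6M combinations have overlap close to zero and are discarded up front. A minimal sketch of such a pre-filter, assuming pairs are selected by pose distance (the `candidate_pairs` helper and the `max_dist` value are assumptions, not the repo's actual selection):

```python
import numpy as np

def candidate_pairs(positions, max_dist=30.0):
    """Return (query, reference) index pairs whose poses lie within
    max_dist metres of each other, each unordered pair once.

    Hypothetical pre-filter: the released train_set is far smaller than
    N*N presumably because distant pairs (overlap ~ 0) are discarded
    before overlaps are computed; max_dist here is an assumption.
    """
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    q, r = np.where(dists < max_dist)
    keep = q > r  # drop the diagonal and mirrored duplicates
    return np.stack([q[keep], r[keep]], axis=1)

# toy example: three poses 1 m apart plus one far away
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [50.0, 0.0]])
pairs = candidate_pairs(pos, max_dist=5.0)
print(len(pairs))  # 3 -> (1,0), (2,0), (2,1)
```

With a real KITTI poses file the surviving pairs would then be the ones whose overlaps are actually computed and stored.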
Dear Authors,
in your paper the time required to generate the geometric input features is reported to be around 10 ms.
However, with this code it takes more than 2 seconds on my computer to generate the normal image.
Did you use a different implementation to achieve the runtime reported in your paper, or am I doing something wrong?
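A runtime gap of that size usually comes from per-pixel Python loops; ~10 ms is plausible with a fully vectorised implementation. A sketch of vectorised normal estimation from a vertex map (assumed approach; the paper's actual implementation may differ):

```python
import numpy as np

def normals_from_vertex_map(vertex_map):
    """Vectorised normal image from an (H, W, 3) vertex map: the normal
    at each pixel is the cross product of the differences to its right
    and lower neighbours (border pixels wrap around; fine for a sketch).
    A per-pixel Python loop computing the same thing is typically orders
    of magnitude slower, which could explain the 10 ms vs 2 s gap.
    """
    dx = np.roll(vertex_map, -1, axis=1) - vertex_map  # right neighbour - self
    dy = np.roll(vertex_map, -1, axis=0) - vertex_map  # lower neighbour - self
    n = np.cross(dx, dy)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-8)

vm = np.random.rand(64, 900, 3).astype(np.float32)
normals = normals_from_vertex_map(vm)
print(normals.shape)  # (64, 900, 3)
```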
Dear Sir,
You mention that the covariance data are included in the odometry folder. How did you obtain the covariances from the KITTI dataset? Or could you point me to a reference?
Thank you for your attention.
Dear author,
when I try to train the model, I always encounter the following error:
Traceback (most recent call last):
  File "src/two_heads/training.py", line 351, in <module>
    model.save(weights_filename)
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/engine/topology.py", line 2580, in save
    save_model(self, filepath, overwrite, include_optimizer)
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/models.py", line 111, in save_model
    'config': model.get_config()
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/engine/topology.py", line 2353, in get_config
    layer_config = layer.get_config()
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/layers/convolutional.py", line 471, in get_config
    config = super(Conv2D, self).get_config()
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/layers/convolutional.py", line 231, in get_config
    'bias_initializer': initializers.serialize(self.bias_initializer),
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/initializers/__init__.py", line 132, in serialize
    return generic_utils.serialize_keras_object(initializer)
  File "XXX/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/keras/utils/generic_utils.py", line 131, in serialize_keras_object
    'config': instance.get_config()
TypeError: get_config() missing 1 required positional argument: 'self'
Although I have tried various solutions, I still can't fix it. Thank you very much for your advice.
Best wishes to you.
Hello, I was wondering whether there is a way to integrate this work with ROS. Say we provide the point cloud messages (scans) on a topic, each message is split into 3-4 sets of data (normal, range, intensity and semantic) as input (for testing) to the model, and the model publishes loop-closure candidates as a ROS message.
Do you have any idea about this, or do you know whether someone has already done something along these lines?
This is a great study.
I want to compare my method with yours on other datasets.
Thank you and great work!
Hi, thank you for your interesting work.
I am trying to train a model in order to reproduce the results from your paper on the KITTI odometry dataset. I followed the steps described in this repository and trained the model on multiple KITTI sequences, also exploiting intensity and semantic information. However, the performance I obtained is not good.
Therefore my questions are:
1. Are there network parameters that need to be changed with respect to the defaults?
2. I noticed that in demo4, when the ground truth is generated for sequence 07, a data-normalization step balances the data by overlap rate. However, the example you provide seems calibrated on that run, so how should I perform this step when multiple sequences are considered? For example, some KITTI sequences have only a few samples with an overlap rate > 0.5 (e.g. sequence 03 contains only one such sample); at the moment I perform the balancing using the overlap distribution across all the KITTI sequences.
3. Can you explain what the "use_class_probabilities_pca" parameter in network.yml does?
If you could give me any suggestions, that would be great! :)
Thank you and great work!
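Regarding the `use_class_probabilities_pca` parameter: judging only by its name, it plausibly compresses the per-class probability channels via PCA before feeding them to the network. A hedged sketch of that idea (`pca_compress` is illustrative; the repo's actual implementation may differ):

```python
import numpy as np

def pca_compress(probs, n_components=1):
    """Project per-pixel class probabilities onto their top principal
    component(s), turning an (H, W, C) semantic input into
    (H, W, n_components). Sketch of what a 'use_class_probabilities_pca'
    style option could do; the repo's implementation may differ.
    """
    h, w, c = probs.shape
    x = probs.reshape(-1, c)
    x = x - x.mean(axis=0)  # centre the C probability channels
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return (x @ vt[:n_components].T).reshape(h, w, n_components)

probs = np.random.default_rng(0).random((32, 100, 20)).astype(np.float32)
compressed = pca_compress(probs)
print(compressed.shape)  # (32, 100, 1)
```

The benefit of such a step would be a thinner semantic input head without discarding the probability information entirely.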
Dear author, when generating the training and validation sets there are three versions of normalize_data.py, as follows.
I want to know which version you used in the paper to keep the same number of samples in different bins.
It would be great if you could explain more. Thank you very much.
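One plausible reading of keeping "different bins the same amount of samples" is histogram balancing by subsampling each overlap bin down to the smallest non-empty bin. A sketch under that assumption (the repo's normalize_data.py variants may balance differently):

```python
import numpy as np

def balance_by_overlap(overlaps, n_bins=10, seed=0):
    """Subsample so every non-empty overlap bin keeps exactly as many
    samples as the smallest non-empty bin — one plausible reading of
    keeping 'different bins the same amount of samples'.
    Returns indices into `overlaps`.
    """
    rng = np.random.default_rng(seed)
    bins = np.minimum((np.asarray(overlaps) * n_bins).astype(int), n_bins - 1)
    counts = np.bincount(bins, minlength=n_bins)
    cap = counts[counts > 0].min()
    keep = [rng.choice(np.where(bins == b)[0], size=cap, replace=False)
            for b in range(n_bins) if counts[b] > 0]
    return np.concatenate(keep)

# skewed toy distribution: most pairs have low overlap, as in practice
overlaps = np.random.default_rng(1).beta(1, 5, size=1000)
idx = balance_by_overlap(overlaps)
```

After balancing, every retained bin contributes the same number of samples, which prevents the network from being dominated by near-zero-overlap pairs.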
Thanks for your amazing work. I have some questions about OverlapNet.
Hi,
Thanks for your awesome work!
I am wondering how I can get evaluation results for OverlapNet like Fig. 7? I am planning to compare several other methods with OverlapNet, and they use top-k recall and PR curves as metrics.
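In the meantime, a precision-recall curve can be computed directly from per-pair scores and ground-truth loop labels without extra tooling; a minimal sketch (`precision_recall` is illustrative, not part of the repo):

```python
import numpy as np

def precision_recall(scores, labels):
    """Precision and recall at every score threshold, without sklearn.

    scores: predicted overlap for each candidate pair (higher = more confident).
    labels: 1 if the pair is a true loop closure, else 0.
    """
    order = np.argsort(-np.asarray(scores))   # descending confidence
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                    # true positives at each cut-off
    fp = np.cumsum(1 - labels)                # false positives at each cut-off
    precision = tp / (tp + fp)
    recall = tp / labels.sum()
    return precision, recall

scores = np.array([0.9, 0.8, 0.6, 0.4, 0.2])
labels = np.array([1, 1, 0, 1, 0])
p, r = precision_recall(scores, labels)
print(p)  # [1.    1.    0.667 0.75  0.6 ] (approximately)
```

Plotting `r` against `p` gives the PR curve; top-k recall falls out of the same sorted ordering.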
Thank you for your very nice work.
This is not an issue about the code, but I have some general questions about OverlapNet, and I couldn't wait for RSS to officially start :)
In particular, the definition of the overlap and the underlying idea helped me.
The questions are:
thank you!
Thank you! But I have a question about the ground truth for loops. How do you get the ground truth for loop-closure detection on KITTI? The code in the project seems intended to obtain the ground-truth values for the network.
Thanks for your great work!
I want to compare my loop-closure method with OverlapNet, but I have a problem running the code. I use the PyTorch version, and I have already generated the depth, intensity and normal data. I want to compute a score for every pair, but I have no model weights. So I want to train a model, but I cannot find which file generates 'overlaps/train_set.npz'. Could you help me, or could you provide a pre-trained model? Thanks.
Hi Chen, I am really impressed by OverlapNet's excellent performance, even using only cheap geometric information from point clouds. One main concern of mine is how it works on sparser point clouds, e.g. those produced by a VLP-16. In industrial settings it is common that you cannot afford 64-beam LiDARs.
Thank you for your great work and looking forward to your reply.
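For reference, adapting the input pipeline to a sparser sensor mostly means changing the range-image geometry; a sketch of a spherical projection with a configurable beam count, assuming a VLP-16-style ±15° vertical field of view (the function and its defaults are illustrative, not from the repo):

```python
import numpy as np

def range_projection(points, n_beams=16, width=900, fov_up=15.0, fov_down=-15.0):
    """Spherical projection of an (N, 3) cloud into an (n_beams, width)
    range image; -1 marks empty pixels. The +/-15 degree vertical field
    of view and 16 beams match a VLP-16-style sensor (assumed geometry;
    adjust to the actual sensor).
    """
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / np.maximum(depth, 1e-8))
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = ((yaw + np.pi) / (2 * np.pi) * width).astype(int) % width
    v = np.clip(((fov_up_r - pitch) / (fov_up_r - fov_down_r)
                 * n_beams).astype(int), 0, n_beams - 1)
    img = np.full((n_beams, width), -1.0, dtype=np.float32)
    img[v, u] = depth  # later points overwrite earlier ones in the same pixel
    return img

pts = np.random.default_rng(0).normal(size=(10000, 3)).astype(np.float32)
img = range_projection(pts)
print(img.shape)  # (16, 900)
```

With only 16 rows, the network's vertical receptive field and pooling would likely also need retuning; the projection itself is the easy part.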
I used Ubuntu 18.04 and ran it in the terminal. But when I run demo2_infer.py, I meet the following problem:
2022-11-03 20:27:14.318140: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
  File "/home/ydragon/Downloads/OverlapNet-master/demo/demo2_infer.py", line 15, in <module>
    from infer import *
  File "/home/ydragon/Downloads/OverlapNet-master/demo/../src/two_heads/infer.py", line 11, in <module>
    import keras
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/__init__.py", line 3, in <module>
    from . import utils
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/utils/__init__.py", line 25, in <module>
    from .multi_gpu_utils import multi_gpu_model
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/utils/multi_gpu_utils.py", line 7, in <module>
    from ..layers.merge import concatenate
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/layers/__init__.py", line 4, in <module>
    from ..engine import Layer
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/engine/__init__.py", line 3, in <module>
    from .topology import InputSpec
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/engine/topology.py", line 18, in <module>
    from .. import initializers
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/initializers/__init__.py", line 124, in <module>
    populate_deserializable_objects()
  File "/home/ydragon/anaconda3/envs/tensorflow/lib/python3.9/site-packages/keras/initializers/__init__.py", line 82, in populate_deserializable_objects
    generic_utils.populate_dict_with_module_objects(
AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
Strangely, I didn't encounter this problem in demo1_gen_data.py with "from utils import *" and "import keras". What is wrong in demo2_infer.py and infer.py? Could you help me solve it? Thank you very much.
Thank you for sharing your pytorch code. @Chen-Xieyuanli @laebe
In your recommended Data Structure, how do I generate the loop_gt_seq00_0.3overlap_inactive.npz file for training?
I have used the tensorflow branch to generate the preprocessed data, but no such file is generated.
Hello, what are the requirements of the PyTorch version?
Hi!
Thanks for your awesome work!
I have some questions. When you use RangeNet++ to get semantic cues, are the parameters of RangeNet++ the same for the KITTI sequences and the Ford Campus sequences?
If they are different, how do I adjust the parameters?
Do I need to retrain RangeNet++ on the Ford Campus dataset?
Hi! Thanks for sharing your work! This work is quite interesting.
However, I have some confusion about the experiments.
Looking forward to your reply!
Best,
Xin
Hi,
I am referring to OverlapNet for my research, and I am using the Oxford Newer College dataset for my study.
The dataset comes as rosbag files; I successfully saved the point cloud messages into .bin files in the same format as the KITTI odometry dataset. I also created semantic_probs by inference with rangenet_lib. I have attached a few outputs of demo1 here.
Could you please tell me how I can verify the correctness of the cues?
Hi, thanks for your great work!
I have a question about how to judge whether a loop closure is a true positive. Some papers use the distance between the two frames and count a detection as a true positive if the distance is less than 3 or 4 meters. Does OverlapNet use this method, or does it count a detection as true if the overlap is larger than a threshold?
It would be great if you could explain more. Thanks!
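For comparison, both criteria are simple to state in code. The 0.3 overlap threshold below matches the naming of the released ground-truth files (e.g. loop_gt_seq00_0.3overlap_inactive.npz), but treat the exact values and helper names as assumptions:

```python
import numpy as np

def is_true_positive_overlap(overlap, threshold=0.3):
    """Overlap-based criterion: count a detected pair as a true positive
    when its ground-truth overlap exceeds the threshold. The 0.3 default
    mirrors the '0.3overlap' in the released ground-truth file names."""
    return overlap > threshold

def is_true_positive_distance(pose_a, pose_b, max_dist=3.0):
    """Distance-based criterion used by several other papers: true
    positive when the two poses lie within a few metres."""
    return float(np.linalg.norm(np.asarray(pose_a) - np.asarray(pose_b))) <= max_dist

print(is_true_positive_overlap(0.5))                    # True
print(is_true_positive_distance([0, 0, 0], [5, 0, 0]))  # False
```

The two criteria can disagree, e.g. two nearby poses facing opposite directions have small distance but low overlap.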
Hi, I have a question about the semantic_prob data.
I would like to test with other scan data from KITTI, but I got an error from the semantic probs: "cannot reshape array of size # into shape (20)".
So I have the questions below:
1. Are the label data in 'data/semantic_probs' the RangeNet++ results?
2. If the answer to question 1 is yes, what is the difference between the RangeNet++ label results and the SemanticKITTI label data?
Thank you.
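A reshape error like that usually means the file's element count is not a multiple of the class count, e.g. because it holds hard labels (one value per point) rather than an N × 20 probability array. A defensive loader sketch (`load_semantic_probs` is a hypothetical helper, not from the repo):

```python
import os
import tempfile

import numpy as np

def load_semantic_probs(path, n_classes=20):
    """Load a per-point class-probability dump and reshape to (N, n_classes).

    Hypothetical helper: the 'cannot reshape array of size # into shape (20)'
    error usually means the file does not hold an N x n_classes float32
    probability array (e.g. it holds hard labels, one value per point),
    so check divisibility before reshaping.
    """
    raw = np.fromfile(path, dtype=np.float32)
    if raw.size % n_classes != 0:
        raise ValueError(
            f"{raw.size} values is not divisible by {n_classes}; "
            "this file probably holds labels, not class probabilities")
    return raw.reshape(-1, n_classes)

# round-trip demo with a synthetic 5-point, 20-class probability file
probs = np.random.rand(5, 20).astype(np.float32)
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".bin")
tmp.close()
probs.tofile(tmp.name)
loaded = load_semantic_probs(tmp.name)
os.unlink(tmp.name)
print(loaded.shape)  # (5, 20)
```

Note also that SemanticKITTI ground-truth .label files store one integer label per point, not probabilities, so they will always trip the divisibility check above.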
Hi, thanks for your great work!
I have a question about how you generate the ground truth (demo4).
In the paper, you mention that you use P_1 and P_2 and compute the overlap value with Equation 3, and the result is Figure 2 (a).
However, I am still not quite sure how you compute the overlap ground truth shown in Figure 2 (c).
Looking into the com_overlap_yaw() function, you seem to compare the same point cloud under different poses.
What confuses me is that reference_range and current_range use the same point cloud, not P_1 and P_2.
It would be great if you could explain more. Thanks!
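As a concrete illustration of the comparison being asked about: once the clouds are rendered into range images from a common viewpoint, the overlap is the fraction of jointly valid pixels whose ranges agree within a small epsilon. A sketch under that reading (function name, epsilon, and the validity convention are illustrative, not the repo's exact definition):

```python
import numpy as np

def overlap_ratio(current_range, reference_range, eps=1.0):
    """Overlap between two range images rendered from the same viewpoint:
    the fraction of jointly valid pixels (range > 0 in both) whose ranges
    agree within eps metres. Illustrative reading of the comparison in
    com_overlap_yaw(); the repo's exact definition may differ.
    """
    valid = (current_range > 0) & (reference_range > 0)
    agree = np.abs(current_range - reference_range) <= eps
    return np.count_nonzero(valid & agree) / max(np.count_nonzero(valid), 1)

a = np.array([[5.0, 10.0, -1.0],
              [7.0,  7.5,  3.0]])  # -1 marks invalid pixels
b = np.array([[5.2, 12.0,  4.0],
              [7.0,  7.0, -1.0]])
print(overlap_ratio(a, b))  # 0.75: 3 of the 4 jointly valid pixels agree within 1 m
```

Re-rendering one cloud from the other scan's pose before this comparison would explain why both range images can come from the same point cloud.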
When I followed the authors' instructions to run demo1, I met the following error:
Traceback (most recent call last):
  File "demo1_gen_data.py", line 69, in <module>
    config = yaml.load(open(config_filename))
FileNotFoundError: [Errno 2] No such file or directory: 'config/demo.yml'
I solved it by changing the version of pyyaml:
pip install pyyaml==5.4.1
After doing this, there is still a warning:
demo/demo1_gen_data.py:69: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
config = yaml.load(open(config_filename))
But it can produce the range image correctly.
Recording this for anyone who may meet the same problem. :)
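A more future-proof fix for the warning is `yaml.safe_load`, which needs no Loader argument on any PyYAML version (on PyYAML ≥ 6, a bare `yaml.load()` without a Loader raises a TypeError). The config snippet below is invented for illustration:

```python
import yaml

# illustrative config fragment, not the repo's actual demo.yml
doc = """
range_image:
  height: 64
  width: 900
"""

# safe_load needs no Loader argument, emits no YAMLLoadWarning on
# PyYAML 5.x, and keeps working on PyYAML >= 6 where a bare
# yaml.load(stream) raises a TypeError.
config = yaml.safe_load(doc)
print(config["range_image"]["height"])  # 64
```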
Dear Author,
Thank you for sharing your work, and I enjoy your paper very much.
I found some problems when I tried to generate the train_set under the ground_truth folder: