alejocb / dpptam
DPPTAM: Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence
License: GNU General Public License v3.0
Hello, I am Sanghoon.
First of all, thank you for your effort on this great work.
I have a problem while running "catkin_make --pkg dpptam"
I am constantly getting the following error.
make[2]: *** No rule to make target `/usr/lib/x86_64-linux-gnu/libopencv_videostab.so.2.4.8', needed by `/home/sanghoon/catkin_ws/devel/lib/dpptam/dpptam'. Stop.
make[1]: *** [dpptam/CMakeFiles/dpptam.dir/all] Error 2
make: *** [all] Error 2
Invoking "make -j8 -l8" failed
I am using ros-indigo, opencv 2.4.11.
I don't have "libopencv_videostab.so.2.4.8" in "/usr/lib/x86_64-linux-gnu/".
I am not using opencv 2.4.8, and my opencv 2.4.11 is installed in "/usr/local/lib/".
I can't understand why catkin_make is trying to find opencv library in "/usr/lib/x86_64-linux-gnu/".
Is there any configuration file where I can change the settings?
PKG_CONFIG seems to work fine for me, but I keep getting the error.
Please take a look, and advise me if there's anything that can be helpful.
Thank you.
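(For anyone hitting the same thing: this usually means CMake cached a different OpenCV than the one you built from source. A minimal sketch of forcing the right one, assuming your OpenCV 2.4.11 install put OpenCVConfig.cmake under /usr/local/share/OpenCV; the path is a guess for your system:)

    # In dpptam's CMakeLists.txt, before any find_package(OpenCV):
    set(OpenCV_DIR /usr/local/share/OpenCV)   # directory containing OpenCVConfig.cmake
    find_package(OpenCV 2.4.11 REQUIRED)

After editing, delete build/ and devel/ (or at least CMakeCache.txt) so catkin_make reconfigures from scratch instead of reusing the cached 2.4.8 paths.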
I suspect that the following calculation in the distances_interval function in DenseMapping.cpp is wrong:
float distance = fabs(real_points_in_image.at<float>(ii,0)-C.at<float>(0,0))
+fabs(real_points_in_image.at<float>(ii,1)-C.at<float>(0,1))
+fabs(real_points_in_image.at<float>(ii,2)-C.at<float>(0,2));
The reason is that real_points_in_image is the position vector wrt world frame, and C is the translational vector from camera to world.
I think the correct computation should simply be
float distance = fabs(real_points_in_image.at<float>(ii,0)-t_.at<float>(0,0))
+fabs(real_points_in_image.at<float>(ii,1)-t_.at<float>(1,0))
+fabs(real_points_in_image.at<float>(ii,2)-t_.at<float>(2,0));
t_ has the same magnitude as C but just an opposite sign (and transpose).
The code runs without an error, but the dense mapping doesn't work so well...
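(For reference on the claim above, using the standard pinhole convention rather than anything specific to this code: with a world-to-camera pose

    p_{\text{cam}} = R\,p_{\text{world}} + t,

the camera center in world coordinates is

    C = -R^{\top} t,

so t and C differ by a sign alone only when R is the identity; in general the rotation matters too.)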
Hey,
I would suggest adding http://wiki.ros.org/visualization_msgs to your dependencies.
I use ROS Melodic on Ubuntu 18.04 LTS and had problems visualizing in rviz.
Your settings will work once you also install visualization_msgs.
For that, declare it as a dependency in your catkin workspace and build again; a sketch of the entry is below.
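(A minimal sketch of the dependency entry in dpptam's package.xml, assuming the format 2 manifest syntax; older format 1 packages use <build_depend> and <run_depend> instead:)

    <depend>visualization_msgs</depend>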
I hope that will help someone :)
And thanks for your awesome work.
Greetings 4styler
Hi,
I'm reading the code and have some questions about technical details in the paper "DPPTAM: Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence". I hope you don't mind. :)
A.
How is a keyframe selected? The paper says "The keyframes are selected from the sequence frames using certain heuristics", but I can't find that detail in the code.
Which heuristics do you use? A frame-count measure, a residual-ratio measure, or a pose-based measure?
Besides, could you help me locate the code for the initialization of the global reference frame and the keyframes?
B.
I cannot get results with my own image sequences at the moment.
Can DPPTAM cope with large textureless areas (e.g. a white wall or the ceiling of a room), whether they are far from or near the camera?
How about its robustness when the camera moves vertically rather than walking parallel to the floor?
I think the correctness of the segmented blobs is critical for my case.
Thank you for your outstanding work!
I want to test dpptam with the example dataset you provided, but it's not working.
Is it possible to use different calibration models in DP-PTAM? E.g. all my test datasets have ATAN-model calibrated cameras (fx/width, fy/height, cx/width, cy/height, d).
If so, which models are supported other than the OpenCV (pinhole) model? How difficult would it be to add support if not?
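(For anyone with the same need: DPPTAM's data.yml takes an OpenCV pinhole calibration, so one workaround is to pre-undistort ATAN/FOV-model images into pinhole images before feeding them in. A sketch under that assumption follows; the FOV model here is the Devernay and Faugeras one used by PTAM/LSD-SLAM datasets, the function name is made up, and for simplicity the same intrinsics are reused for the source and target images:)

    #include <cmath>
    #include <opencv2/opencv.hpp>

    // Build remap tables converting an ATAN/FOV-model image into a pinhole image.
    // fx, fy, cx, cy are in pixels (i.e. already multiplied by width/height),
    // and w is the FOV distortion parameter.
    void buildAtanUndistortMaps(int width, int height,
                                float fx, float fy, float cx, float cy, float w,
                                cv::Mat &map_x, cv::Mat &map_y)
    {
        map_x.create(height, width, CV_32FC1);
        map_y.create(height, width, CV_32FC1);
        const float k = 2.0f * std::tan(w / 2.0f);
        for (int v = 0; v < height; ++v)
            for (int u = 0; u < width; ++u)
            {
                // Normalized coordinates of the target (undistorted) pixel.
                float x = (u - cx) / fx;
                float y = (v - cy) / fy;
                float r = std::sqrt(x * x + y * y);
                // FOV model: distorted radius r_d = atan(r * 2 tan(w/2)) / w.
                float factor = (r > 1e-6f) ? std::atan(r * k) / (w * r) : k / w;
                map_x.at<float>(v, u) = fx * x * factor + cx;
                map_y.at<float>(v, u) = fy * y * factor + cy;
            }
    }
    // Usage: cv::remap(raw, undistorted, map_x, map_y, cv::INTER_LINEAR);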
When I do "rosrun dpptam dpptam" and "rosbag play lab_unizar.bag"
I can see the movie start playing in the current frame (/dpptam/camera/image).
But at some point in the middle (before any map is generated in rviz), I get this error:
OpenCV Error: Assertion failed ((type()==0) || (DataType<_Tp>::type == type())) in push_back, file /usr/local/include/opencv2/core/mat.hpp, line 687
terminate called after throwing an instance of 'cv::Exception'
what(): /usr/local/include/opencv2/core/mat.hpp:687: error: (-215) (type()==0) || (DataType<_Tp>::type == type()) in function push_back
Aborted (core dumped)
I have opencv 2.4.12 and ros jade.
With my current set-up, I have run many other ros SLAM projects (ORB-SLAM, LSD-SLAM, SVO, REMODE...) and this is the first time getting this error.
Can you please tell me how I can fix this issue? or is there any workaround?
(Edit: I have tried again after uninstalling and reinstalling OpenCV 2.4.12, and also with ROS Indigo. I still get the same error.)
The code is very good and amazing.
I have had one question in mind for several days while studying the code, and I hope to get an answer here.
The question is: how do the three threads run synchronously? By timing, or by something else I have not found? I found a mutex defined but not used.
bruce
Thanks in advance.
Please correct me if I'm wrong.
In the function "join_maps", you used the function "transformed_points" to transform pointClouds3D (wrt world frame) to homogeneous camera coordinate "pointsClouds3Dmap_cam" by computing
points3D_cam = R*points3D + t_r
and projecting it.
This implies R and t are the world-to-camera transformation.
However, in the function "optimize_camera", R_rel and t_rel are computed as if R and t were camera-to-world, and as if R_rel and t_rel went from the current frame to the last keyframe.
Would you please tell me what I missed?
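(For reference, in generic notation rather than the code's: both conventions can coexist, because inverting a pose swaps them. If p_cam = R p_world + t is world-to-camera, the camera-to-world inverse is

    p_{\text{world}} = R^{\top} p_{\text{cam}} - R^{\top} t,

and with camera-to-world poses (R_1, t_1) for the last keyframe and (R_2, t_2) for the current frame, the current-to-keyframe relative pose is

    R_{\text{rel}} = R_1^{\top} R_2, \qquad t_{\text{rel}} = R_1^{\top} (t_2 - t_1),

which matches the expressions quoted later in this thread.)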
This system does not seem to re-localize once it loses localization. Has there been any work on extending dpptam to relocalize against and extend saved maps?
@alejocb @amiltonwong
After successfully building the package with catkin_make, I run
rosrun dpptam dpptam
then, got
OpenCV Error: Bad argument (Invalid pointer to file storage) in cvGetFileNodeByName, file /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/persistence.cpp, line 740
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/persistence.cpp:740: error: (-5) Invalid pointer to file storage in function cvGetFileNodeByName
I tried uncommenting
chdir("/home/alejo/catkin_ws");
and changing it to my path, but it still failed to run.
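(For what it's worth, this particular OpenCV error comes from cv::FileStorage failing to open a file, presumably data.yml, so the working directory matters. A defensive sketch; the relative path is an assumption about the repo layout:)

    #include <iostream>
    #include <opencv2/core/core.hpp>

    int main()
    {
        // Hypothetical guard: fail loudly if the settings file cannot be opened,
        // instead of crashing later inside cvGetFileNodeByName.
        cv::FileStorage fs("src/dpptam/src/data.yml", cv::FileStorage::READ);
        if (!fs.isOpened())
        {
            std::cerr << "Could not open data.yml; run from the catkin workspace root"
                      << " or fix the chdir() path in the source." << std::endl;
            return -1;
        }
        return 0;
    }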
Anytime I run the code on lab_upenn.bag, I get a core dump. Maybe the computing capability of my PC is not enough? Intel® Core™ i5-7300HQ CPU @ 2.50GHz × 4
Many thanks~
Is there any reason why you keep "semidense_mapper->local_map_points" and "semidense_mapper->points3D_toprint" separate? I'm referring to the giant for loop in lines 866~1139, and how you push back points_aux to semidense_mapper->local_map_points and points_aux2_print to semidense_mapper->points3D_toprint.
Why not just do something like:
semidense_mapper->local_map_points = semidense_mapper->points3D_toprint[num_keyframes].clone();
Then you simply use more accurate 3D map points for tracking, right?
If I simply change the code this way, it runs just as well.
I would like to know if there was any special reason why you made it this way.
Hi @alejocb,
For the k-th keyframe, if I want to find the corresponding tracked 3D points in the 3D map, which function should I look for?
Best~
Milton
In SemiDenseTracking.cpp's "gauss_newton_ic" function, there is the following code:
for (int ii = 0; ii < 6; ii++)
{
    // Scale each of the 6 columns of the analytic Jacobian by the per-pixel robust weights.
    jacobian.rowRange(0,jacobian.rows).colRange(ii,ii+1) = weight.mul(jacobian_analitic.rowRange(0,jacobian_analitic.rows).colRange(ii,ii+1));
}
// Pseudo-inverse of the weighted Jacobian, computed via SVD.
init = (jacobian.t()*jacobian).inv(cv::DECOMP_SVD)*jacobian.t();
cv::Mat init2;
// Gauss-Newton step: pseudo-inverse applied to the weighted initial residuals.
init2 = init*(error_vector_sqrt_inicial.mul(weight));
And in "optimize_camera" function, right after performing "gauss_estimation", you run the following code:
weight[j] = (1 + (error_vector[j]/(variances[j])));
weight[j] = 1/weight[j];
weight[j] = weight[j].mul(weight[j]);
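(Reading those three lines together, they implement a robust reweighting of the residuals, roughly

    w_j = \left( 1 + \frac{e_j}{\sigma_j^2} \right)^{-2},

which down-weights high-residual pixels in the next Gauss-Newton iteration. This is my reading of the code, not a statement from the paper.)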
Here are my questions:
Thanks in advance and I will look forward to your reply ! 👍
*** DPPTAM is working ***
*** Launch the example sequences or use your own sequence / live camera and update the file 'data.yml' with the corresponding camera_path and calibration parameters ***
frames_processed -> 100 %
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.2.0) ../modules/core/src/matrix_expressions.cpp:23: error: (-5:Bad argument) Matrix operand is an empty matrix. in function 'checkOperandsExist'
and I don't know how to fix it.
dpptam/map is not being published when I use my webcam via usb_cam, whereas running the example bag file works. The only difference I can see is in the header. Here are the differences.
There are no error messages, and the node graph looks OK. Any idea what is going on? The marker and other topics are fine; just the point clouds are not published.
Bag file image.
header:
seq: 447
stamp:
secs: 1426957734
nsecs: 606139044
frame_id: /camera_rgb_optical_frame
height: 480
width: 640
encoding: bgr8
is_bigendian: 0
step: 1920
usb_cam image
header:
seq: 577
stamp:
secs: 1504412453
nsecs: 846092342
frame_id: /camera_rgb_optical_frame
height: 480
width: 640
encoding: rgb8
is_bigendian: 0
step: 1920
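(Comparing the two headers, the only substantive difference is the encoding: bgr8 from the bag versus rgb8 from usb_cam. If dpptam assumes bgr8 input, one workaround is a small node that republishes the stream with the encoding converted. A sketch follows; the topic names are assumptions for your setup:)

    #include <ros/ros.h>
    #include <cv_bridge/cv_bridge.h>
    #include <sensor_msgs/Image.h>

    ros::Publisher pub;

    void callback(const sensor_msgs::ImageConstPtr &msg)
    {
        // cv_bridge performs the rgb8 -> bgr8 channel swap when asked for "bgr8".
        cv_bridge::CvImagePtr cv = cv_bridge::toCvCopy(msg, "bgr8");
        pub.publish(cv->toImageMsg());
    }

    int main(int argc, char **argv)
    {
        ros::init(argc, argv, "rgb_to_bgr");
        ros::NodeHandle nh;
        pub = nh.advertise<sensor_msgs::Image>("/camera/bgr/image_raw", 1);
        ros::Subscriber sub = nh.subscribe("/usb_cam/image_raw", 1, callback);
        ros::spin();
        return 0;
    }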
Hi,
I've calibrated my camera with OpenCV's asymmetric circles grid calibration, but DPPTAM fails to initialize, whether I point it at heavily textured areas or very sparse ones. I've managed to initialize dpptam on my input only once. Can anyone help me? I've attached my camera's calibration; it has worked for other projects.
cameraMatrix: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 1066.218200176972, 0., 639.5, 0., 1066.218200176972, 359.5, 0., 0., 1. ]
distCoeffs: !!opencv-matrix
rows: 5
cols: 1
dt: d
data: [ 0.1710096450750865, -0.2813834964230731, 0., 0., 0.]
I see that you sometimes use very fine-tuned thresholds in the code (e.g. spatial threshold = 1.9, neighbors_consistency_print = 0.93, etc.).
Can you tell me how you got these values? Did you use some sort of genetic algorithm to reach the current setting? Or was it done entirely by hand, by evaluating the results qualitatively?
I'm also curious how you came up with the not-so-fine-tuned values as well, such as the patch size.
I run the ros bag file (lab_unizar.bag) and get a very nice dense point cloud right away, but when I use my webcam it takes a really long time for anything to show up (is it just not finding enough keyframes?). When it does start to create a point cloud, it is very slow to add new points. Is this a calibration issue or a webcam issue? How can I tell what to look at to resolve this kind of behavior? Thank you for any help you can offer.
Why are the results of running on the TUM dataset so bad that I cannot complete the reconstruction? What could be the reason?
How should the camera parameters in data.yml be adjusted, and what do the values represent?
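(On the second question, for what it's worth: judging by the calibration quoted elsewhere in this thread, data.yml holds a standard OpenCV pinhole calibration. A sketch with placeholder values, not the TUM ones; take the real numbers from the calibration published with your sequence:)

    cameraMatrix: !!opencv-matrix   # 3x3 pinhole intrinsics
       rows: 3
       cols: 3
       dt: d
       # layout: [ fx, 0, cx,  0, fy, cy,  0, 0, 1 ]
       # fx, fy: focal lengths in pixels; cx, cy: principal point in pixels
       data: [ 525., 0., 319.5, 0., 525., 239.5, 0., 0., 1. ]
    distCoeffs: !!opencv-matrix     # OpenCV distortion [k1, k2, p1, p2, k3]
       rows: 5
       cols: 1
       dt: d
       data: [ 0., 0., 0., 0., 0. ]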
Dear @alejocb ,
I have questions about the usage: I couldn't see any output in RViz. The following are my steps:
$ cd ~/catkin_ws
$ roscore &
$ rosrun dpptam main
Open another terminal:
$ rosbag play ~/dpptam-dataset/lab_unizar.bag
Open another terminal:
$ rosrun rviz rviz
But I couldn't see any output in RViz, and it shows the following error:
Fixed Frame
No tf data. Actual error: Fixed Frame [map] does not exist
Any idea or suggestion to fix it?
Thanks in advance~
Milton
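(A common cause of this RViz message is simply that the Fixed Frame does not match the frame_id dpptam stamps on its messages. One way to check, using the /dpptam/map topic mentioned elsewhere in this thread; then set RViz's Fixed Frame to whatever frame comes back instead of "map":)

    rostopic echo -n 1 /dpptam/map | grep frame_id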
DenseMapping.cpp line 857 is:
variance_points_tracked.at<float>(round(projected_points.at<float>(ii, 0)), round(projected_points.at<float>(ii, 1))) = real_points.at<float>(ii, 6);
The statement:
real_points.at<float>(ii, 6);
really does
real_points.at<float>(ii+1, 0);
Because the width (number of columns) of real_points is 6, this access is outside the matrix's bounds.
However, OpenCV's Mat stores its data in a one-dimensional array, which follows:
at<float>(WANT_ROW, WANT_COL) == data[WANT_ROW * numCols + WANT_COL]
so when WANT_COL exceeds the number of columns, the access wraps into the next row.
I changed this code to real_points.at<float>(ii, 5), but DPPTAM still worked.
Is this intended or not?
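(The row-wrapping claim above is easy to verify with a tiny standalone program, independent of DPPTAM; note it only behaves this way in release builds, since debug builds assert inside at<>:)

    #include <iostream>
    #include <opencv2/core/core.hpp>

    int main()
    {
        // A 2x6 single-channel float matrix backed by one contiguous buffer.
        cv::Mat m = cv::Mat::zeros(2, 6, CV_32FC1);
        m.at<float>(1, 0) = 42.0f;
        // Out-of-range column: (0,6) reads data[0*6 + 6], the same memory as (1,0).
        std::cout << m.at<float>(0, 6) << std::endl;  // prints 42
        return 0;
    }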
In "gauss_newton_ic" function, when you update the current camera pose, I think you perform eq(4) in the paper in the opposite order (T_n = inv(T_hat)*T_n).
I mean, lines 1681~1688 in SemiDenseTracking.cpp show that R2 and t2 become:
R2 = R1.t()*R2;
t2 = R1.t()*(t2-t1);
which means they represent the world-to-camera transformation.
If I reverse the order and update T_n = T_n * inv(T_hat), I found that it also tracks well for small movements, but it loses tracking for large movements.
Was it that having T_n = inv(T_hat)*T_n instead of T_n = T_n * inv(T_hat) worked better when you tested the code, and is that why it's different from the method described in the paper?
Or did I simply miss something? Please let me know :)
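(One general observation, not a statement of the authors' intent: left- and right-composing the increment are both legitimate update rules; they differ in which frame the increment is expressed in,

    T_n \leftarrow \hat{T}^{-1} T_n \quad \text{(increment in the reference frame)}
    \qquad
    T_n \leftarrow T_n \hat{T}^{-1} \quad \text{(increment in the current camera frame)}

and an inverse-compositional solver linearizes on the template/reference side, which is consistent with composing the inverse increment there. For small increments both updates stay close to T_n, which may be why the reversed version still tracks small motions but diverges on large ones.)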
Dear @alejocb
After I issue the following commands,
-> roscore &
-> rosrun dpptam main
I came across the following error.
root@milton-ThinkPad-T450:~/catkin_ws/src# rosrun dpptam main
terminate called after throwing an instance of 'boost::filesystem::filesystem_error'
what(): boost::filesystem::create_directory: No such file or directory: "src/dpptam/src/map_and_poses"
Aborted (core dumped)
root@milton-ThinkPad-T450:
It seems the process can't find this path (even though the path exists):
/root/catkin_ws/src/dpptam/src/map_and_poses
But when I switch into "/root/catkin_ws/" and issue the same command, it works.
It seems the command "rosrun dpptam main" should be issued under /root/catkin_ws/ so that it can find that path.
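(That matches the error text: "src/dpptam/src/map_and_poses" is a relative path, so it only resolves when the process starts from the workspace root. A sketch of making it robust to the launch directory; the parameter name is hypothetical:)

    #include <unistd.h>
    #include <string>
    #include <ros/ros.h>

    // Hypothetical fix: take the workspace root from a ROS parameter instead of
    // relying on the process's current working directory.
    void enter_workspace(ros::NodeHandle &nh)
    {
        std::string ws;
        nh.param<std::string>("dpptam/workspace_path", ws, ".");
        if (chdir(ws.c_str()) != 0)
            ROS_WARN("Could not chdir to %s; relative paths may fail.", ws.c_str());
    }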
Dear @alejocb,
I am trying to run the demo, but there is a problem: Segmentation fault (core dumped). Can you tell me how to solve it?
*** DPPTAM is working ***
*** Launch the example sequences or use your own sequence / live camera and update the file 'data.yml' with the corresponding camera_path and calibration parameters ***
frames_processed -> 8.33333 %
Segmentation fault (core dumped)
Thanks in advance~
Ailin
Dear @alejocb,
I am reading the code to understand dpptam further, so I may ask several questions during this period; I hope you don't mind. :)
In SemiDenseTracking.h, what is the meaning/usage of the following member variables:
int *cont_frames; // Is this the running count of frames?
int last_cont_frames; // ??
Thanks in advance~
Milton
Hi @alejocb , thank you for your hard work on the code!
I encountered the same problem as @sunghoon031 did in #9 .
My OpenCV version is 2.4.8, which is the latest version in Ubuntu 14.04 LTS; reinstalling ROS and OpenCV didn't resolve it.
Then I decided to debug DPPTAM and found out that the push_back() at DenseMapping.cpp line 605 is the cause.
cv::Mat distances_btw_3dpoints(0,1,CV_32FC1);
for (int ii = 0; ii < 20; ii++)
{
int random_row1 = (rand() % points_sup_total.rows);
int random_row2 = (rand() % points_sup_total.rows);
// Line 605
distances_btw_3dpoints.push_back(cv::norm(points_sup_total.colRange(0,3).row(random_row1)-points_sup_total.colRange(0,3).row(random_row2),cv::NORM_L1) / 3);
}
cv::norm() returns an 8-byte double, but the element type of distances_btw_3dpoints is CV_32FC1, a 4-byte float; that's the reason the assertion inside push_back() fails. The problem is solved by adding a type cast.
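(Concretely, the cast looks something like this: the double returned by cv::norm() is narrowed to float so it matches the matrix's CV_32FC1 element type:)

    // Fixed Line 605: cast to float before push_back into a CV_32FC1 matrix.
    distances_btw_3dpoints.push_back(
        static_cast<float>(cv::norm(points_sup_total.colRange(0,3).row(random_row1)
                                    - points_sup_total.colRange(0,3).row(random_row2),
                                    cv::NORM_L1) / 3));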
What really confuses me is that this problem seems kind of random. For testing, I created a virtual machine running the same operating system (Ubuntu 14.04 LTS), set up the entire ROS environment from a clean install, and built DPPTAM, and it ran perfectly, even though it shouldn't have, since the element type is still different.
I am reading your paper and have investigated the code a bit. I love your method and the amazing results it produces!! Here are some questions I have:
I will look forward to your reply.
Thanks in advance !
Cheers,
Seong.
Dear @alejocb ,
My system is Ubuntu 14.04 with ROS Indigo,
My catkin directory structure is as follows:
Base path: /root/catkin_ws
Source space: /root/catkin_ws/src
Build space: /root/catkin_ws/build
Devel space: /root/catkin_ws/devel
Install space: /root/catkin_ws/install
I downloaded the dpptam package into /root/catkin_ws/src:
--> cd /root/catkin_ws/src
--> git clone https://github.com/alejocb/dpptam.git
Then I switched back into ~/catkin_ws and issued the command catkin_make --pkg dpptam, the standard build procedure, but it shows "Packages "dpptam" not found in the workspace":
root@milton-ThinkPad-T450:~/catkin_ws# catkin_make --pkg dpptam
Base path: /root/catkin_ws
Source space: /root/catkin_ws/src
Build space: /root/catkin_ws/build
Devel space: /root/catkin_ws/devel
Install space: /root/catkin_ws/install
Packages "dpptam" not found in the workspace
Then I tried the command catkin_make dpptam, but it still did not succeed; see below:
https://goo.gl/b64rt9
Any idea or suggestion to fix this issue?
Thanks in advance~
Milton
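(One quick sanity check for the build issue above: catkin discovers packages by the package.xml under the source space, so confirm the clone landed where catkin can see it:)

    ls /root/catkin_ws/src/dpptam/package.xml

If that file is missing or nested one directory deeper than expected, catkin_make --pkg dpptam will report the package as not found.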
Hi,
I would like to take advantage of external pose updates, and use these over the tracking system proposed in DP-PTAM. Essentially, I would like to utilize only the mapping component, while the tracking information comes from an external (time-synchronised) source. Any help? :)
This is because dp-ptam, being a monocular system, cannot estimate the metric scale.
Thanks!
Kabir
Correct me if I'm wrong, but I think you made a mistake in line 769 of SemiDenseMapping.cpp:
if (cont_depths2reg > 0 )
I think this should be "greater than 1", not zero, because cont_depths2reg will never be 0 since it always counts itself.
I fixed this issue in my pull-request #28