andyzeng / tsdf-fusion
Fuse multiple depth frames into a TSDF voxel volume.
Home Page: http://andyzeng.github.io/
License: BSD 2-Clause "Simplified" License
Hello @andyzeng,
I want to reconstruct a sculpture; the input images are pictures of each side, but the output is always only one side. How can I get a reconstruction of all sides? My results are below:
resource image:
result of one side (input images from one side):
result of all sides (input images from all sides):
Dear @andyzeng
Thank you for sharing your work.
I want to generate my own 3D volume data by modifying your source code. However, I am confused by some of the parameters in this work, and I would appreciate any help.
First is the camera intrinsics. I did some research and found it is a 3x3 matrix. Then I looked through your work and found that the old and new versions use different camera intrinsic matrices. Is that because you used different cameras for the two versions? In my own project I'm using a Kinect as the depth camera. Can you tell me how to set the camera intrinsics for my setup?
Second is the camera pose. It seems to be a 4x4 matrix that changes with every depth frame. Can you tell me what this parameter is and how to set it?
Last is the input number. In your demo, you set the number of input frames to 50. I changed the number to regenerate a voxel volume. However, when I changed the number to 1, the voxel volume turned out empty. Can this work generate a voxel volume from a single depth image, and if so, is changing the input number to 1 the right way to do it?
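For the intrinsics part of the question, a minimal numpy sketch may help. The numbers below are the commonly cited default intrinsics for a Kinect v1 depth camera at 640x480 resolution (fx = fy = 525, principal point at the image center); they are typical values only, not calibrated ones, and a real device should be calibrated:

```python
import numpy as np

# Commonly cited default intrinsics for a Kinect v1 depth camera at 640x480.
# These are typical values only; calibrate your own device for accuracy.
fx, fy = 525.0, 525.0   # focal lengths in pixels
cx, cy = 319.5, 239.5   # principal point (image center)

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Projecting a 3D camera-space point (x, y, z) to pixel coordinates:
pt = np.array([0.1, -0.05, 1.0])   # a point 1 m in front of the camera
u, v, w = K @ pt
u, v = u / w, v / w                # pixel column and row
```

The fusion code uses this matrix to map each voxel's camera-space position to a pixel in the depth image, so swapping in your own camera's calibrated fx, fy, cx, cy is the only change needed.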
Thank you
Sincerely yours
Tony
Hi,
Can I compute a TSDF from a cloud of points rather than from depth images?
Any ideas would help.
Thank you for your help.
hi, @andyzeng ,
What does "registered depth maps" mean? Can I use the depth in the code as follows?
Hi, I'm using an Intel RealSense D415 camera. I can extract the depth and RGB images from each frame, but I couldn't find a way to produce the .pose.txt file per frame mentioned in your dataset. Can you share, if you have it, code to write the .pose.txt file for every frame along with the RGB and depth images while the camera is capturing? It would be very helpful!
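On the pose files: each one in the dataset stores a single 4x4 camera-to-world matrix as plain text. A minimal sketch of writing and reading that layout is below; note the pose values themselves must come from some external tracking source (robot kinematics, SLAM, a tracker), since the D415 does not output poses, so the matrix here is only an example, and the file name just follows the per-frame convention the question describes:

```python
import os
import tempfile
import numpy as np

# An example camera-to-world pose as a 4x4 homogeneous transform. In practice
# this comes from your tracking source (robot kinematics, SLAM, a tracker);
# the D415 itself does not provide poses.
cam2world = np.array([[1.0, 0.0, 0.0, 0.10],
                      [0.0, 1.0, 0.0, 0.00],
                      [0.0, 0.0, 1.0, 0.50],
                      [0.0, 0.0, 0.0, 1.00]])

# Write it as four rows of four whitespace-separated numbers, the plain-text
# layout the dataset's pose files use.
path = os.path.join(tempfile.gettempdir(), "frame-000000.pose.txt")
np.savetxt(path, cam2world, fmt="%15.8e")

# Reading it back for fusion:
loaded = np.loadtxt(path)
```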
I want to use rtabmap to rebuild 3D models, and I want to add TSDF fusion to improve the final result.
In the demo.cu function Integrate, the distance is computed by:
float diff = (depth_val - pt_cam_z) * sqrtf(1 + powf((pt_cam_x / pt_cam_z), 2) + powf((pt_cam_y / pt_cam_z), 2))
But I don't understand why it multiplies by sqrtf(1 + powf((pt_cam_x / pt_cam_z), 2) + powf((pt_cam_y / pt_cam_z), 2)).
Could anyone give me some help? Thanks a lot!
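One way to see what that factor does: it converts a difference measured along the camera z-axis into a Euclidean distance measured along the viewing ray through the voxel. The surface point observed at the voxel's pixel lies on the same ray (x/z, y/z, 1) at depth depth_val, so the two computations below should agree (a numpy sketch with made-up numbers):

```python
import numpy as np

# Voxel center in camera coordinates and the observed depth at its pixel.
pt_cam = np.array([0.3, -0.2, 1.5])   # (pt_cam_x, pt_cam_y, pt_cam_z)
depth_val = 2.0                       # measured depth along that pixel's ray
x, y, z = pt_cam

# The formula from demo.cu: z-difference scaled by the ray length per unit z.
diff = (depth_val - z) * np.sqrt(1 + (x / z) ** 2 + (y / z) ** 2)

# The same quantity computed geometrically: the observed surface point lies
# on the ray (x/z, y/z, 1) at depth depth_val, and diff is the Euclidean
# distance between the voxel center and that surface point.
surface_pt = np.array([x / z, y / z, 1.0]) * depth_val
euclidean = np.linalg.norm(surface_pt - pt_cam)
```

So without the factor, diff would be a distance along the optical axis only; with it, voxels away from the image center get their true distance-to-surface along the ray.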
Hi @andyzeng
I'm trying to use your code (the tsdf2mesh.m function) to generate the 3D point cloud.
Using the MSR 7-Scenes dataset, when I project the 3D point cloud generated by your function into the image domain, it shows some misalignment. It could be my mistake (I accounted for the extrinsic pose between the depth and color cameras).
Have you tried your function on the MSR 7-Scenes dataset, or do you have any suggestions for this kind of procedure?
@andyzeng
Sorry, but could you please specify the format of the camera poses in the data you use in the demo code?
A 6DoF pose has just 6 numbers, or 7 numbers as in the TUM RGB-D trajectory datasets, but you have 16 parameters here (I noticed that the last 4 are the same for all frames). Most repos use 6DoF camera poses, so if you have time, it would help to document this format.
Thanks in advance!
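The 16 numbers are consistent with a 4x4 homogeneous camera-to-world matrix: a 3x3 rotation and a translation in the top three rows, with the last row fixed at 0 0 0 1, which would explain why those 4 values never change. Under that assumption, a sketch of converting a TUM-style 7-number pose (tx ty tz qx qy qz qw, unit quaternion) into such a matrix:

```python
import numpy as np

def tum_pose_to_matrix(tx, ty, tz, qx, qy, qz, qw):
    """Convert a TUM-style translation + unit quaternion into a 4x4
    camera-to-world matrix (16 numbers, last row fixed to 0 0 0 1)."""
    # Standard quaternion-to-rotation-matrix expansion.
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [tx, ty, tz]
    return T

# Identity rotation, 1 m translation along z:
T = tum_pose_to_matrix(0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0)
```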
Hi. I have configured the code on Windows 10 + VS2013 + CUDA 9.2 + OpenCV 3.0, and I ran the demo successfully. But the resulting point cloud file (tsdf.ply) does not look correct. Does anybody know why this happens?
I did not change the default parameters.
Here is the result.
Has anyone faced this issue while compiling?
99 errors detected in the compilation of "tmp/tmpxft_00001527_00000000-9_demo.cpp1.ii".
Thank you.
@andyzeng hi,
how can I apply absolute (metric) scale to the result?
I had an error while compiling using ./compile.sh.
The error message:
Segmentation fault (core dumped)
Has anyone had the same problem and solved it?
Hello,
I'm just wondering: is there a way to go back from a point cloud to a TSDF volume?
Hi, I have generated the binary file and am trying to convert it to a .ply file. But when I run tsdf2mesh.m, the error below pops up:
Index in position 1 exceeds array bounds.
Error in tsdf2mesh (line 29)
meshPoints(1,:) = voxelGridOrigin(1) + points(2,:)*voxelSize; % x y axes are swapped from isosurface
I checked that the variable tsdf is not empty, but fv = isosurface(tsdf, 0) gives an empty result for fv, and thus empty points and faces.
I am wondering how to solve this problem. Thanks!
I found that the Python version of TSDF fusion not only has color but also runs without MATLAB. I have added color to my C++ version of TSDF fusion, but I have no idea how to replace the MATLAB part. I know the key functions are marching_cubes_lewiner (Python) and isosurface (MATLAB). Is there any way to replace them in C++? Any ideas and suggestions are welcome!
Thanks!
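What isosurface and marching_cubes_lewiner both do at their core is find zero crossings of the TSDF along voxel-grid edges and place vertices there by linear interpolation. The sketch below shows only that vertex-placement core along one axis (it is not a full marching cubes, which also needs the classic triangle lookup tables); for complete, ready-made C++ implementations, libraries such as PCL, VTK, or libigl are common choices:

```python
import numpy as np

def zero_crossing_vertices(tsdf):
    """Find zero crossings of a TSDF along x-axis grid edges by linear
    interpolation. This is the vertex-placement core of marching cubes;
    a full implementation also emits triangles via the case tables."""
    a = tsdf[:-1, :, :]                         # edge start values
    b = tsdf[1:, :, :]                          # edge end values
    crossing = (a * b) < 0                      # sign change along the edge
    i, j, k = np.nonzero(crossing)
    t = a[i, j, k] / (a[i, j, k] - b[i, j, k])  # interpolation weight in [0,1]
    return np.stack([i + t, j, k], axis=1)      # fractional voxel coordinates

# A sphere of radius 5 centered in a 16^3 grid: its signed distance crosses
# zero on the surface, so vertices should appear near radius 5.
g = np.mgrid[0:16, 0:16, 0:16].astype(float)
sdf = np.sqrt(((g - 7.5) ** 2).sum(axis=0)) - 5.0
verts = zero_crossing_vertices(sdf)
```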
I'm currently using the Realsense D415 camera to gather RGBD data of an object, specifically a bottle. This camera is attached to the end-effector of a 6DOF robot. I use the robot to gather real-world pose coordinates of the camera. I use this kinematics data as the pose of the camera for both translation and rotation. This program works if I perform translation across one axis. However, it doesn't work when I perform a rotation.
Reading the papers linked in the README, I can't seem to figure out if their pose data is relative to the world's origin or if they're absolute physical locations, such as ones received from a robot's kinematics data. Do you know if I should perform any additional transformations on my pose data, similar to how the C++ programs calculate the inverse of the base frame and multiply that with each subsequent frame?
The first image is with the camera just performing a translation across one axis. The second image is the camera performing a translation across the same axis, but also performing a 0 to 360 degree rotation along the way.
EDIT: By the way, I have double-checked the camera intrinsics, and have used two different realsense depth cameras and I'm encountering similar issues in both. Also, I have verified the positions of the end-effector (which is carrying the camera) are correct.
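Regarding the transformation question: the demo C++ code expresses every camera pose relative to the first (base) frame by left-multiplying each camera-to-world matrix with the inverse of the base frame's matrix. If your robot gives absolute poses, the same relative transform is likely what you want; a minimal numpy sketch:

```python
import numpy as np

def relative_pose(base2world, cam2world):
    """Express cam2world relative to the base (first) frame, mirroring the
    demo code's use of the inverted base pose: world coordinates become
    base-frame coordinates."""
    return np.linalg.inv(base2world) @ cam2world

# Sanity check: if a frame's absolute pose equals the base pose,
# its relative pose is the identity.
pose = np.array([[0.0, -1.0, 0.0, 0.3],
                 [1.0,  0.0, 0.0, 0.1],
                 [0.0,  0.0, 1.0, 0.8],
                 [0.0,  0.0, 0.0, 1.0]])
rel = relative_pose(pose, pose)
```

One further caution: the end-effector pose is not the camera pose. There is a fixed hand-eye transform between the flange and the camera's optical center that must be composed in, and a missing or wrong hand-eye calibration shows up exactly as the symptom described here, working under pure translation but failing under rotation.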
Hi, thank you for publishing your code.
Can I ask what version of CUDA you are using? I used 9.1 and it encountered a fatal error with cudaMalloc when running the demo.
My NVIDIA driver version is 384.111 and my GPU is a Quadro M2200.
I use tsdf2mesh.m to create a 3D surface mesh, with the data included in the project.
But when I run tsdf2mesh.m, MATLAB is still busy after 4 hours of waiting.
What could the problem be?
Thanks for your reply.
Is there something in the works for radial and tangential distortion correction? Since the Integrate function loops over the voxel grid, am I right in saying that implementing this would involve computing a mesh-grid of the image dimensions (640 x 480, for instance) and using that to compute the radial distance from the camera's optical axis? You would then only consider the point [pt_cam_x, pt_cam_y, pt_cam_z] if it falls within the image dimensions?
Line 22 in ef59a08
Thanks
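For reference, a sketch of the Brown-Conrady model that such a correction would typically use: distortion is applied to the normalized camera coordinates (x/z, y/z) before the intrinsics map them to a pixel, and the voxel is then kept only if the resulting pixel lands inside the image. The coefficients below are placeholders, not values from this repo:

```python
import numpy as np

def distort_normalized(xn, yn, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply Brown-Conrady radial (k1, k2) and tangential (p1, p2) distortion
    to normalized camera coordinates (x/z, y/z). With all coefficients zero,
    the point is unchanged."""
    r2 = xn * xn + yn * yn
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2 * yn * yn) + 2 * p2 * xn * yn
    return xd, yd

# A voxel's camera-space point would be normalized (divide by z), distorted
# like this, then mapped through the intrinsics to a pixel; the voxel is
# skipped if that pixel falls outside the image (e.g. 640 x 480).
xd, yd = distort_normalized(0.2, -0.1)
```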
@andyzeng Is this the method used to generate the .bin files from the .png files used in SSCNet, or not?
VC++ 15 in release mode gives a stack overflow if depth_im is initialized on the stack.
Can this code work with CUDA 9.1 and the Visual C++ 14.0 compiler?
Hi, I want to build a map with RGB information. How can I use the RGB images to add color information?
Looking forward to your reply, thank you!
Hi,
I have depth maps that look like this: click
I've tried using them in your code in the form shown in the picture, and also after converting them to grayscale, but the generated point cloud seems to be empty with both.
Any advice on how I should prepare the images to make them work with your code?
Regards.