gerhardr / kfusion
This is an implementation sketch of the KinectFusion system described by Newcombe et al.
License: Other
I found two problems with the way kfusion measures how long the individual kernels take.

First, timing is based on

return double(std::clock())/CLOCKS_PER_SEC;

std::clock() measures CPU time, not wall-clock time, so the result depends on CPU activity. Changing it to

struct timespec clockData;
clock_gettime(CLOCK_MONOTONIC, &clockData);
return (double) clockData.tv_sec + clockData.tv_nsec / 1000000000.0;

makes the timing more accurate, and also shows that kfusion is around 10% faster than measured with the old way.
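For reference, the replacement timer can be wrapped in a small self-contained helper (the function name getMonotonicSeconds is mine, not kfusion's):

```cpp
#include <time.h>  // clock_gettime, CLOCK_MONOTONIC (POSIX)

// Wall-clock seconds from a monotonic source: unaffected by CPU load
// and by system clock adjustments, unlike std::clock().
static double getMonotonicSeconds() {
    struct timespec clockData;
    clock_gettime(CLOCK_MONOTONIC, &clockData);
    return (double) clockData.tv_sec + clockData.tv_nsec / 1000000000.0;
}

// Usage: measure an interval by differencing two samples.
// double t0 = getMonotonicSeconds();
// ... work to be timed ...
// double elapsed = getMonotonicSeconds() - t0;
```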
Second, CUDA kernel launches are asynchronous, so accurate timing requires a cudaDeviceSynchronize() before sampling the clock, but kfusion doesn't do that in many places (apart from the total time, which consequently is correct). Adding it before each Stats.sample fixes that, and changes my measurement for integrate from 0.00* milliseconds to 3.* milliseconds, which makes a lot of sense. Note though that cudaDeviceSynchronize() can sometimes slow things down, but hasn't done so in my measurements.
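The fix can be sketched like this (Stats.sample is kfusion's own statistics helper; the kernel arguments shown are illustrative, not the exact signature):

```cuda
// Kernel launches return immediately; without a synchronize, the host-side
// clock sample only measures launch overhead, not the kernel itself.
integrate<<<grid, block>>>(integration, depth, invTrack, K, mu, maxweight);
cudaDeviceSynchronize();    // block until the kernel has actually finished
Stats.sample("integrate");  // now the sampled time covers the GPU work
```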
diff --git a/kfusion.h b/kfusion.h
index 436e336..18cc5ea 100644
--- a/kfusion.h
+++ b/kfusion.h
@@ -1,6 +1,18 @@
#ifndef KFUSION_H
#define KFUSION_H
+
+#if defined(__GNUC__)
+ // circumvent packaging problems in gcc 4.7.0
+ #undef _GLIBCXX_ATOMIC_BUILTINS
+ #undef _GLIBCXX_USE_INT128
+
+ // need c headers for __int128 and uint16_t
+ #include <limits.h>
+ #include <stdint.h>
+#endif
+
+
#include <iostream>
#include <vector>
diff --git a/kfusion.cu b/kfusion.cu
index 8741b1b..f7cf22f 100644
--- a/kfusion.cu
+++ b/kfusion.cu
@@ -84,9 +84,9 @@ __global__ void vertex2normal( Image<float3> normal, const Image<float3> vertex
if(pixel.x >= vertex.size.x || pixel.y >= vertex.size.y )
return;
- const float3 left = vertex[make_uint2(max(pixel.x-1,0), pixel.y)];
+ const float3 left = vertex[make_uint2(max(int(pixel.x)-1,0), pixel.y)];
const float3 right = vertex[make_uint2(min(pixel.x+1,vertex.size.x-1), pixel.y)];
- const float3 up = vertex[make_uint2(pixel.x, max(pixel.y-1,0))];
+ const float3 up = vertex[make_uint2(pixel.x, max(int(pixel.y)-1,0))];
const float3 down = vertex[make_uint2(pixel.x, min(pixel.y+1,vertex.size.y-1))];
if(left.z == 0 || right.z == 0 || up.z == 0 || down.z == 0) {
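The reason for the int() casts in the diff above: pixel.x is unsigned, so pixel.x - 1 wraps around to a huge value when pixel.x == 0, and the max() clamp against 0 never triggers. A minimal demonstration of the two behaviors (helper names are mine, for illustration):

```cpp
#include <algorithm>  // std::max
#include <cstdint>

// BUG pattern: when x == 0, x - 1u wraps to 4294967295, so clamping
// against 0u has no effect.
inline uint32_t leftIndexUnsigned(uint32_t x) {
    return std::max(x - 1u, 0u);
}

// FIX: cast to int first so the subtraction is signed and the clamp works.
inline uint32_t leftIndexSigned(uint32_t x) {
    return uint32_t(std::max(int(x) - 1, 0));
}
```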
When building on Windows with VS2010 and CUDA 6.5, the following error occurs:
1>------ Build started: Project: ZERO_CHECK, Configuration: Debug Win32 ------
1>Build started 2016/7/22 16:21:33.
1>InitializeBuildStatus:
1> Creating "Win32\Debug\ZERO_CHECK\ZERO_CHECK.unsuccessfulbuild" because "AlwaysCreate" was specified.
1>CustomBuild:
1> All outputs are up-to-date.
1>FinalizeBuildStatus:
1> Deleting file "Win32\Debug\ZERO_CHECK\ZERO_CHECK.unsuccessfulbuild".
1> Touching "Win32\Debug\ZERO_CHECK\ZERO_CHECK.lastbuildstate".
1>
1>Build succeeded.
1>
1>Time Elapsed 00:00:00.03
2>------ Build started: Project: kfusion, Configuration: Debug Win32 ------
2>Build started 2016/7/22 16:21:33.
2>InitializeBuildStatus:
2> Touching "kfusion.dir\Debug\kfusion.unsuccessfulbuild".
2>CustomBuild:
2> Building NVCC (Device) object CMakeFiles/kfusion.dir/Debug/kfusion_generated_helpers.cu.obj
2> helpers.cu
2>
2>cl : Command line warning D9025: overriding '/Od' with '/O2'
2>
2>cl : Command line error D8016: '/RTC1' and '/O2' command-line options are incompatible
2>
2> CMake Error at kfusion_generated_helpers.cu.obj.cmake:206 (message):
2> Error generating
2> E:/Github/kfusion/_build.vc10/CMakeFiles/kfusion.dir//Debug/kfusion_generated_helpers.cu.obj
2>
2>
2>
2>Build FAILED.
2>
2>Time Elapsed 00:00:00.31
========== Build: 1 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
Has anyone tried building this repo on Windows?
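The D8016 error is an nvcc/MSVC interaction: in Debug configurations MSVC adds /RTC1 (runtime checks), while the host flags nvcc passes along include /O2, and the two are incompatible. A possible workaround (an untested sketch, assuming the flags are set before the CUDA targets in CMakeLists.txt) is to strip /RTC1 from the Debug flags:

```cmake
# /RTC1 conflicts with the /O2 that nvcc passes to the host compiler;
# remove it from the Debug flags for MSVC builds.
if(MSVC)
  string(REPLACE "/RTC1" "" CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG}")
  string(REPLACE "/RTC1" "" CMAKE_C_FLAGS_DEBUG   "${CMAKE_C_FLAGS_DEBUG}")
endif()
```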
Hi, thank you for sharing this code; it's very helpful for a starter in 3D reconstruction like me. However, I found that the raycasted depth map does not match the raw depth map. I use OpenCV to get the raycasted depth map; the code is something like this:
cv::Mat vertex_raycasted(480, 640, CV_32FC3);
cudaMemcpy(vertex_raycasted.data, kfusion.vertex.data(), 480*640*sizeof(float)*3, cudaMemcpyDeviceToHost);
vector<cv::Mat> channels_of_vertex;
cv::split(vertex_raycasted, channels_of_vertex);
cv::Mat xmap = (channels_of_vertex[0] - 0.4) * 1000;
cv::Mat ymap = (channels_of_vertex[1] - 0.4) * 1000;
cv::Mat depthmap_raycasted = channels_of_vertex[2];
depthmap_raycasted *= 1000;
and compare it with the original depth image and the raw depth map in kfusion, using this code:
cv::Mat vertex_input(480, 640, CV_32FC3);
cudaMemcpy(vertex_input.data, kfusion.inputVertex[0].data(), 480*640*sizeof(float)*3, cudaMemcpyDeviceToHost);
vector<cv::Mat> channels_of_vertex2;
cv::split(vertex_input, channels_of_vertex2);
xmap = (channels_of_vertex2[0] - 0.4) * 1000;
ymap = (channels_of_vertex2[1] - 0.4) * 1000;
cv::Mat depthmap_input = channels_of_vertex2[2];
depthmap_input *= 1000;
The original depth image is very similar to the raw depth map, which is expected since only a bilateral filter is applied. But strangely, the raycasted depth map differs from both the original depth image and the raw depth map, even though they look very similar. I transformed them into point clouds in MATLAB; there is a large translation and rotation between them, and it changes over time. I mounted the camera on a robot arm and found that the pose computed from the TSDF is quite accurate, with only a few millimeters of error. This has confused me for a long time; can you help me?
The first image is the raw depth map and the second is the raycasted depth map; they look alike, but the depth values are very different.
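To quantify the mismatch rather than eyeballing the two images, one can compare the downloaded depth buffers numerically. A minimal sketch over raw float arrays (meanDepthDiff and the buffer names are mine, standing in for the two depth maps after copying them to the host):

```cpp
#include <cmath>
#include <cstddef>

// Mean absolute difference between two depth maps, ignoring pixels where
// either map has no measurement (depth == 0).
double meanDepthDiff(const float* a, const float* b, size_t n) {
    double sum = 0.0;
    size_t valid = 0;
    for (size_t i = 0; i < n; ++i) {
        if (a[i] == 0.0f || b[i] == 0.0f)
            continue;  // skip holes in either map
        sum += std::fabs(double(a[i]) - double(b[i]));
        ++valid;
    }
    return valid ? sum / valid : 0.0;
}
```

If this difference is large and grows over time while the tracked pose is accurate, one thing worth checking is which coordinate frame the raycasted vertices are expressed in: if the raycast output is in the global/model frame while inputVertex is in the camera frame, a pose-dependent translation and rotation between the two point clouds is exactly what you would see.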