Comments (10)
Yeah, this is definitely feasible; color is not required and is ignored if not provided.
To get started you would add another ImageSourceEngine (e.g. a PMDEngine). Within the new engine, set up the device, register the depth image callback, and provide those images so that getImages() can pass them to the ITMMainEngine.
Overall this process is pretty straightforward. It is a bit simpler for devices such as the RealSense (e.g. dev->get_frame_data()), which provide synchronous access to depth images; since the Pico Flexx delivers frames via a callback instead, you may have to add some locking and image caching to get this working.
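The locking and caching described above can be sketched roughly as follows. FrameCache, its method names, and the callback signature are illustrative stand-ins, not the actual InfiniTAM ImageSourceEngine or Royale SDK types:

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// Minimal sketch of the callback-plus-caching pattern a PicoFlexxEngine
// would need: the SDK delivers depth frames on its own thread, while
// InfiniTAM pulls the latest frame from its main loop.
class FrameCache {
public:
    // Called from the camera SDK's depth-callback thread.
    void onDepthFrame(const uint16_t *data, size_t count) {
        std::lock_guard<std::mutex> lock(mutex_);
        latest_.assign(data, data + count);
        hasFrame_ = true;
    }

    // Called from InfiniTAM's main loop (e.g. inside getImages()).
    // Copies the cached frame out under the lock; returns false if no
    // frame has arrived yet.
    bool getLatest(std::vector<uint16_t> &out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!hasFrame_) return false;
        out = latest_;
        return true;
    }

private:
    std::mutex mutex_;
    std::vector<uint16_t> latest_;
    bool hasFrame_ = false;
};
```

The copy under the lock keeps the callback thread from overwriting a frame while the engine is reading it.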
from infinitam.
I have an initial PicoFlexxEngine working. It's not returning anything for the RGB image; for the depth image, it returns a short value per pixel, in millimeters, as reported by the Pico Flexx camera.
The GUI application seems to get the data: it displays it live, color-coded so that different distances appear as different colors. So far so good.
But how do I make this work otherwise? :)
My expectation was that the app would build a 3D model of an area that I 'show it' through the depth camera, but I don't see a 3D map / model being built by the GUI app. Is there something additional to this?
(sorry about the naive question)
Try going frame by frame with 'n' to see if it reconstructs anything.
Note that I've also tried the standard PMD camera and the results were too bad to work with -- the depth is not accurate enough to track with using just ICP.
I've not tried the colour tracker though; I expect that to work better.
Sometimes it starts to create a reconstruction, but the results are not really good. I understand that IMU fusion should be off.
For the camera calibration, what format is the calibration file in? I guess supplying this data properly should help as well.
Regarding using a color tracker: I need something that works in a totally dark environment, without human-visible light, so that doesn't seem to be an option.
You should not use IMU fusion unless you have an IMU attached to the device and a correct calibrator class.
The camera calibration I used for the pico has this format:
--
width_rgb height_rgb
focal_length_x_rgb focal_length_y_rgb
principal_point_x_rgb principal_point_y_rgb
width_depth height_depth
focal_length_x_depth focal_length_y_depth
principal_point_x_depth principal_point_y_depth
affine ratio_to_m 0.0
--
ratio_to_m can be something like 0.0002 and is a factor used to convert the depth measurement from the camera to meters.
Overall I really don't expect the camera to work very well with the ICP tracker. I'd much rather suggest some other type of tracking (perhaps a monocular tracker?), with InfiniTAM used just for the fusion.
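As a concrete illustration, a calibration file for a Pico Flexx (224x171 depth stream, depth reported in millimeters) might look like the lines below. The focal lengths and principal point here are placeholder values and should be read from the actual device, and since this camera has no real RGB sensor the first block simply repeats the depth intrinsics (color is ignored when not provided anyway):

--
224 171
210.0 210.0
112.0 85.5
224 171
210.0 210.0
112.0 85.5
affine 0.001 0.0
--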
When I was trying a similar PMD camera I was able to get pretty reasonable results with ICP. It didn't work well until I realized they were packing the depth data in a weird way (I think the first 3 bits are a confidence value, so you can just shift those out of the way).
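That unpacking might be sketched as follows, assuming (as the comment above suggests) that the low 3 bits of each 16-bit sample carry confidence and the rest is depth; the exact packing may differ per device and SDK version, so verify it against your SDK's documentation:

```cpp
#include <cstdint>

// Split a packed 16-bit PMD depth sample into confidence and depth.
// Assumption: the low 3 bits are confidence, the upper 13 bits are depth.
inline uint16_t confidenceBits(uint16_t packed) { return packed & 0x7; }
inline uint16_t depthValue(uint16_t packed)     { return packed >> 3; }
```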
Regarding the intrinsics, I set those programmatically, similar to how it's done in the RealSenseEngine:
this->calib.intrinsics_d.SetFrom(intrinsics_depth.fx, intrinsics_depth.fy,
intrinsics_depth.ppx, intrinsics_depth.ppy,
requested_imageSize_d.x, requested_imageSize_d.y);
You can query the PMD device for the focal length, principal point, and image size and just pass them as above.
Thanks, I made it work this way. I also added the grayscale image as the RGB data, but it actually seems to reduce the quality of the results, as the grayscale strongly shows the camera's illumination pattern and also reveals the short range of this active illumination.
I'll clean up the code and put in a pull request if there's interest.
Questions / comments:
It seems that if the 't' key is pressed to turn off sensor fusion, map building stops, even though there is no IMU in the picture, e.g. none on the camera and none on my laptop :)
At the same time, is there a 'definitive' way to turn off map making and just use tracking, without changing the 3D map?
Pressing 't' turns off fusion (so all map updates). That's the definitive way :).
Thanks.
Another question (sorry to bloat the thread): is there a way to load a previously saved mesh (mapped in a previous session) when starting the InfiniTAM app, so that the mesh is re-used and doesn't have to be built again?
Also, I've set the camera calibration parameters programmatically as described by Conner above. Should I / is there a way to set the depth measurement ratio as well? Currently I'm sending the depth information in millimeters.
Created a pull request with the initial implementation: #58