Comments (7)
Hi @jumarini, your question is a good one and has been discussed a bit here. I'd suggest reading that as a starting point.
In the meantime, here is some other info:
First, the unit vector data are exposed by the O3D303 camera, but as of right now, per @graugans at IFM, those data are not correct in the current firmware implementation and will not properly compute the cartesian data from depth. My understanding is that this will be fixed in the next firmware release.
However, I do not think you are stuck if what you really want is the cartesian data. We expose it in two ways: via the `cloud` topic (in meters), which is registered to the depth data, or via the `xyz_image` topic (in mm), which is a 3-channel cv::Mat where the three image planes are the cartesian x, y, and z respectively (as opposed to color planes). Again, this is registered to the depth data.
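As a rough illustration of how a consumer might unpack such an `xyz_image`, here is a minimal NumPy sketch. This is not the driver's actual code; the x/y/z channel order and mm units are taken from the description above:

```python
import numpy as np

def split_xyz_image(xyz_image_mm):
    """Split a 3-channel xyz_image (H x W x 3, millimeters) into
    per-axis planes in meters, matching the cloud topic's units."""
    xyz = np.asarray(xyz_image_mm, dtype=np.float64)
    x_m = xyz[..., 0] / 1000.0  # plane 0 -> cartesian x
    y_m = xyz[..., 1] / 1000.0  # plane 1 -> cartesian y
    z_m = xyz[..., 2] / 1000.0  # plane 2 -> cartesian z
    return x_m, y_m, z_m

# usage: a tiny fake 1x2 image, values in mm
img = np.array([[[1000, 2000, 3000], [500, 0, 1500]]])
x, y, z = split_xyz_image(img)
```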
Even once the unit vector data are properly exposed by the camera, I am still unsure whether we really need to expose them via the ROS interface. This is something I'd like to throw out for discussion, to see what our user base needs. Right now, as the data come off the camera, the cartesian data are transformed from the IFM optical frame to a coordinate frame consistent with ROS conventions in a just-in-time manner. This happens here. Of course, the unit vector data would convert the depth data to cartesian coordinates in the IFM optical frame, so another transform would be needed to make those data consistent with the `cloud` and `xyz_image` coordinates. Again, a use case may be out there that would benefit from that, but right now it is not clear.
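For concreteness, one common optical-frame-to-ROS-body axis remapping (in the spirit of REP 103) can be sketched as below. Whether this exactly matches the driver's just-in-time transform is an assumption here:

```python
import numpy as np

# A common optical-to-ROS axis remapping:
#   ROS x (forward) =  optical z
#   ROS y (left)    = -optical x
#   ROS z (up)      = -optical y
# This particular mapping is an assumption, not taken from the driver.
OPTICAL_TO_ROS = np.array([[ 0,  0, 1],
                           [-1,  0, 0],
                           [ 0, -1, 0]], dtype=float)

def optical_to_ros(points_opt):
    """Rotate an N x 3 array of optical-frame points into the ROS body
    convention by applying the fixed rotation above to each point."""
    return np.asarray(points_opt, dtype=float) @ OPTICAL_TO_ROS.T

p = optical_to_ros([[1.0, 2.0, 3.0]])
```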
I'd like to work toward a "version 1.0.0" of both `libo3d3xx` and `o3d3xx-ros` before the end of the year (hopefully), and this is something I'd like to have resolved. I'd be interested to get your opinion and learn more about your use case.
Sorry for the long-winded note, but I thought the background would be useful.
from o3d3xx-ros.
We have fixed the issues with unit vectors and extrinsic calibration. I guess the new firmware will be released when we officially release the O3D300 smart sensor at SPS Drives in Nürnberg, Germany. I hope we will soon release a document describing how to deal with the unit vectors and extrinsic calibration.
Thanks Christian. Is the planned release version 1.2.533?
Thanks for your explanation, Tom.
Issue #14 was an interesting read, though I'm not familiar with the PCIC interface. The unit vectors sound like the intrinsic calibration and camera matrix I'm looking for to project the depth map to X,Y,Z.
I'm working on a sensor accuracy and noise quantification task for a commercial indoor mobile robot project: sensing different materials, glossiness levels, and sizes under different lighting and polarization conditions.
As stated, I am trying to use a standardized, lower-bandwidth method of grabbing frames, and the ROS depth image is the way to do it. I know the O3D has low bandwidth requirements right now ("just use the cloud, dummy!"), but the Kinect 2 and the stereo camera I'm using are much more bandwidth-intensive. They all have a depth map in common. Just think: you could tell all those Kinect depth map users out there to simply swap in the O3D and compute the same XYZ data in the same way. Nice.
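To illustrate the "compute XYZ from the depth map" workflow that Kinect users rely on, here is a standard pinhole back-projection sketch. The intrinsics `fx, fy, cx, cy` are placeholders, and note that the O3D reports radial distance rather than z-depth, so for it you would use the unit vectors instead:

```python
import numpy as np

def depth_to_xyz(depth_m, fx, fy, cx, cy):
    """Back-project a Kinect-style z-depth image (meters) into an
    H x W x 3 XYZ image using the pinhole camera model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.dstack((x, y, depth_m))

# usage: a flat 1 m plane with toy intrinsics
xyz = depth_to_xyz(np.ones((2, 3)), fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```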
I've already grabbed `cloud` data, and it certainly sounds like I could use this `xyz_image` to get the data I need, but neither is the standard low-bandwidth representation for 3D sensors that I'm looking for.
@tpanzarella It looks like management is skipping the 1.2.xxx series and heading to 1.3.xxx, but not much changes for the O3D3xx, except that the 100K parameter will be replaced by a more generic resolution parameter that defaults to 23K.
@jumarini Got it. I understand where you are coming from now. As you can see from above, it looks like @graugans and team have all of this ready to go now and will be officially shipping sensors with the new firmware by the end of November (we will also make it available for download here once the code is officially released by IFM so you can flash your existing sensors).
I cannot promise a firm date right now for when all of this will be implemented and pushed out to the `libo3d3xx` and `o3d3xx-ros` repos, but we will come up with a solution. Once I get working on it, I can start pushing my development branch to GitHub if you want to work with an unstable work-in-progress until the release is ready. In the meantime, I hope you can work with the `xyz_image` to keep moving forward with your application.
@graugans Thanks for the info. Based on my current work queue, I may be better suited to jump right to 1.3.xxx as well. Likely, IFM would not want sensors out there with the 1.2.xxx firmware if it is not officially supported anyway.
Computing the 3D coords from the radial distance image, unit vectors, and extrinsics is now demonstrated in the `examples` pseudo-module of the `libo3d3xx` repository. The file `ex-cartesian.cpp` shows how to do this. For a ROS version, see the `test/test_camera.py` file in `o3d3xx-ros`. While the test script is in Python, it should be straightforward enough to translate to C++ if that is desired. At the very least you can glean the topics on which the pertinent data are published.
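The general recipe those examples demonstrate can be sketched in a few lines of NumPy: per-pixel cartesian coordinates are the radial distance times the per-pixel unit vector, plus the extrinsic translation. This is a simplified sketch, not the exact code from `ex-cartesian.cpp`, and it ignores any scaling or rotation handled there:

```python
import numpy as np

def cartesian_from_radial(rdis_m, unit_vectors, translation_m=(0.0, 0.0, 0.0)):
    """Compute an H x W x 3 cartesian image from an H x W radial
    distance image (meters) and H x W x 3 unit vectors:
        xyz = d * e + t
    where t is the extrinsic translation (assumed here; the real
    extrinsics may also include a rotation)."""
    d = np.asarray(rdis_m, dtype=float)[..., None]  # H x W x 1
    e = np.asarray(unit_vectors, dtype=float)       # H x W x 3
    return d * e + np.asarray(translation_m, dtype=float)

# usage: one pixel, 2 m along the optical axis, 0.5 m translation in z
pts = cartesian_from_radial([[2.0]], [[[0.0, 0.0, 1.0]]],
                            translation_m=(0.0, 0.0, 0.5))
```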