
o3d3xx-ros's Introduction

⚠️ DEPRECATION NOTICE: The libo3d3xx library has been superseded by ifm3d. Similarly, this package has been deprecated in favor of ifm3d-ros, which is also available in the ROS distribution (starting with ROS Kinetic).

o3d3xx-ros

o3d3xx-ros is a wrapper around libo3d3xx that enables the use of IFM Efector O3D3xx ToF cameras from within ROS software systems.

Software Compatibility Matrix

o3d3xx-ros version libo3d3xx version ROS distribution(s)
0.1.3 0.1.9 Indigo
0.1.4 0.1.9 Indigo
0.1.5 0.1.11 Indigo
0.1.6 0.1.11 Indigo
0.1.7 0.2.0 Indigo
0.1.8 0.2.0 Indigo
0.2.0 0.3.0 Indigo
0.2.1 0.4.0 Indigo
0.2.2 0.4.0, 0.4.1, 0.4.2 Indigo
0.2.3 0.4.3 Indigo, Kinetic
0.2.4 0.4.4 Indigo, Kinetic
0.2.5 0.4.5, 0.4.6, 0.4.8, 0.4.9, 0.5.0, 0.6.0, 0.7.0 Indigo, Kinetic

Prerequisites

  1. Ubuntu 14.04 or 16.04
  2. ROS Indigo or Kinetic
  3. libo3d3xx

Additionally, your compiler must support C++11. This package has been validated with:

  • g++ 4.8.2 on Ubuntu 14.04 LTS, ROS Indigo
  • g++ 5.3.x on Ubuntu 16.04 LTS, ROS Kinetic

Building and Installing the Software

NOTE: Since we are talking about ROS here, we assume you are on Ubuntu Linux.

You should first ensure that you have installed ROS by following these instructions. The desktop-full installation is highly recommended.

Next, install libo3d3xx. This ROS package assumes you have installed libo3d3xx via the supported Debian installer; step-by-step instructions for that process are located here. If you already have libo3d3xx installed, you can skip this step. Note, however, that the table above is a software compatibility matrix: be sure to link o3d3xx-ros against a compatible version of libo3d3xx.

We now move on to building o3d3xx-ros.

Building and installing o3d3xx-ros is accomplished with the ROS catkin tool. There are many tutorials and other pieces of advice available online on how to use catkin most effectively. The basic idea, however, is to keep a clean separation between your source code repository and your build and runtime environments. The instructions that follow show how we use catkin to build and permanently install a ROS package from source.

First, we need to decide where we want our software to ultimately be installed. For purposes of this document, we will assume that we will install our ROS packages at ~/ros. For convenience, we add the following to our ~/.bash_profile:

if [ -f /opt/ros/indigo/setup.bash ]; then
    source /opt/ros/indigo/setup.bash
fi

cd ${HOME}

export LPR_ROS=${HOME}/ros

if [ -d ${LPR_ROS} ]; then
    for i in $(ls ${LPR_ROS}); do
        if [ -d ${LPR_ROS}/${i} ]; then
            if [ -f ${LPR_ROS}/${i}/setup.bash ]; then
                source ${LPR_ROS}/${i}/setup.bash --extend
            fi
        fi
    done
fi

Next, we need to get the code from github. We assume we keep all of our git repositories in ~/dev.

$ cd ~/dev
$ git clone https://github.com/lovepark/o3d3xx-ros.git

We now have the code in ~/dev/o3d3xx-ros. Next, we want to create a catkin workspace that we can use to build and install that code from. It is the catkin philosophy that we do not do this directly in the source directory.

$ cd ~/dev
$ mkdir o3d3xx-catkin
$ cd o3d3xx-catkin
$ mkdir src
$ cd src
$ catkin_init_workspace
$ ln -s ~/dev/o3d3xx-ros o3d3xx

So, you should have a catkin workspace set up to build the o3d3xx-ros code that looks basically like:

[ ~/dev/o3d3xx-catkin/src ]
tpanzarella@jelly: $ pwd
/home/tpanzarella/dev/o3d3xx-catkin/src

[ ~/dev/o3d3xx-catkin/src ]
tpanzarella@jelly: $ ls -l
total 0
lrwxrwxrwx 1 tpanzarella tpanzarella 49 Dec  2 15:26 CMakeLists.txt -> /opt/ros/indigo/share/catkin/cmake/toplevel.cmake
lrwxrwxrwx 1 tpanzarella tpanzarella 32 Dec  2 15:24 o3d3xx -> /home/tpanzarella/dev/o3d3xx-ros

Now we are ready to build the code.

$ cd ~/dev/o3d3xx-catkin
$ catkin_make -DCATKIN_ENABLE_TESTING=ON
$ catkin_make run_tests
$ catkin_make -DCMAKE_INSTALL_PREFIX=${LPR_ROS}/o3d3xx install

The ROS package should now be installed in ~/ros/o3d3xx. To test everything out you should open a fresh bash shell, and start up a ROS core:

$ roscore

Open another shell and start the primary camera node:

$ roslaunch o3d3xx camera.launch ip:=192.168.10.69

NOTE: The IP address of your camera may differ. If you are using the factory default (192.168.0.69), you do not need to specify it on the above roslaunch line.

Open another shell and start the rviz node to visualize the data coming from the camera:

$ roslaunch o3d3xx rviz.launch

At this point, you should see an rviz window that looks something like:

[screenshot: rviz window visualizing the camera data]

Congratulations! You can now utilize o3d3xx-ros.

Nodes

/o3d3xx/camera

This node provides a real-time feed of the camera data. It is started from the primary camera.launch file:

$ roslaunch o3d3xx camera.launch

The naming of the camera can be customized via the ns (namespace) and nn (node name) command line arguments passed to the camera.launch file. For example, if you specify your roslaunch command as:

$ roslaunch o3d3xx camera.launch ns:=robot nn:=front_camera

The node will have the name /robot/front_camera in the ROS computation graph.

Published Topics

Topic Message Type Description
/o3d3xx/camera/amplitude sensor_msgs/Image 16-bit grayscale encoding of the sensor amplitude image, normalized w.r.t. exposure time.
/o3d3xx/camera/raw_amplitude sensor_msgs/Image 16-bit grayscale encoding of the sensor amplitude image, non-normalized.
/o3d3xx/camera/cloud sensor_msgs/PointCloud2 A 3D PCL point cloud of point type `XYZI`. In this encoding, the intensity channel is the corresponding pixel's amplitude data. The units of this point cloud are meters.
/o3d3xx/camera/confidence sensor_msgs/Image An 8-bit mono image encoding of the confidence image. The meaning of each bit of each pixel value is discussed in the official IFM documentation for the camera.
/o3d3xx/camera/depth sensor_msgs/Image A 16-bit mono image encoding of the radial depth map from the camera. The depth units are millimeters.
/o3d3xx/camera/depth_viz sensor_msgs/Image A rendering of the depth image using a colormap that is more human-friendly for visualization. For performance reasons, messages are only published to this topic when the `publish_viz_images` parameter is set to true at launch time.
/o3d3xx/camera/good_bad_pixels sensor_msgs/Image A binary image showing good vs. bad pixels on the pixel array. Bad pixels can have numerous causes (e.g., motion blur over an integration/exposure timestep). Visualizing this data is useful when tuning your imager parameters. For performance reasons, messages are only published to this topic when the `publish_viz_images` parameter is set to true at launch time.
/o3d3xx/camera/xyz_image sensor_msgs/Image An OpenCV image encoding (CV_16SC3) of the same point cloud published to `/o3d3xx/camera/cloud`, where the three image planes are 0 = x, 1 = y, 2 = z. Units are millimeters, but the coordinate frame is consistent with the point cloud.
/o3d3xx/camera/unit_vectors sensor_msgs/Image An OpenCV image encoding (CV_32FC3) of the rotated unit vectors. Together with the translation vector from the camera extrinsics and the radial distance image, these can be used to compute the Cartesian coordinates for each pixel in the imager array off-board the camera. This topic is latched.
/o3d3xx/camera/extrinsics Extrinsics.msg Extrinsics as reported by the camera. The translation vector here is used together with the unit vectors and the radial distance image to compute the Cartesian coordinates for each pixel in the imager array. Note that for this message the translational units are millimeters and the rotational units are degrees; this is consistent with the camera, even though it is not necessarily consistent with ROS conventions. This message is really for very specialized use cases.
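As described for the unit_vectors and extrinsics topics above, Cartesian coordinates can be recovered off-board the camera as unit_vector × radial_distance + translation. The sketch below illustrates only that arithmetic in plain Python; unpacking the actual CV_32FC3 image and Extrinsics message is omitted, and the example numbers are ours.

```python
def cartesian_from_radial(unit_vectors, radial_mm, translation_mm):
    """Recover per-pixel Cartesian coordinates (mm) from rotated unit
    vectors, the radial distance image, and the extrinsic translation.
    A sketch of the off-board computation described above."""
    points = []
    for (ex, ey, ez), r in zip(unit_vectors, radial_mm):
        points.append((ex * r + translation_mm[0],
                       ey * r + translation_mm[1],
                       ez * r + translation_mm[2]))
    return points

# One pixel looking straight down the optical axis, 1 m away,
# with a zero translation vector:
pts = cartesian_from_radial([(0.0, 0.0, 1.0)], [1000.0], (0.0, 0.0, 0.0))
```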

Advertised Services

Service Name Service Definition Description
/o3d3xx/camera/Config Config.srv Mutates camera settings based upon an input JSON file. NOTE: Due to what appears to be limitations in the YAML parsing of the stock ROS `rosservice` command line tool (i.e., it does not handle JSON as string payload well) you will have to use the /o3d3xx/camera/config_node to configure the camera. This is explained in further detail below.
/o3d3xx/camera/Dump Dump.srv Dumps the current configuration of the camera to a JSON string. The output of this dump is suitable for editing and passing to the `Config` service for configuring the camera.
/o3d3xx/camera/GetVersion GetVersion.srv Returns the current version of the underlying libo3d3xx library that this ROS node is linked to.
/o3d3xx/camera/Rm Rm.srv Removes an application from the camera. This service will not allow removing the current active application.
/o3d3xx/camera/Trigger Trigger.srv Sends a software trigger signal to the camera.

Parameters

Name Data Type Description
ip string IP address of the camera
xmlrpc_port int TCP port the camera's XMLRPC server is listening on
password string Password to use to connect to the camera
schema_mask uint16_t Mask controlling which image types to stream back from the camera. This is useful for numerous reasons: it conserves bandwidth between the host computer and the camera, and it reduces the CPU cycles the camera spends computing image data, which may increase the frame rate. See the o3d3xx-schema command line tool for generating custom masks.
timeout_millis int Time, in milliseconds, to block when waiting for a frame from the camera before timing out.
timeout_tolerance_secs double Time, in seconds, to wait before trying to restart the underlying framegrabber if it is currently experiencing timeouts while trying to capture image data.
publish_viz_images bool In general, for a runtime system, the core data a system will want from this camera include the `cloud`, `depth`, `amplitude`, and `confidence` images. This node will always publish those data. However, if you set this parameter to `true` a few additional images are published. These are `depth_viz` and `good_bad_pixels` (they are described above in the `Topics` section). These viz images are intended for human analysis and visualization in `rviz`.
assume_sw_trigger bool If your application intends to use software triggering (as opposed to a free-running framegrabber), this parameter does four things. First, it will not print out framegrabber timeout messages to the rosconsole. Second, it will set, a-priori, the `timeout_millis` parameter to 500 milliseconds (this is so that the node will release the framegrabber mutex in the main publishing loop). Third, this will set the `timeout_tolerance_secs` to 600 seconds so that the framegrabber is not constantly getting recycled unnecessarily. Fourth, it sets the `GLOG_minloglevel` environment variable to 3 (ERROR). This is so the underlying `libo3d3xx` library does not fill your log files with timeout messages which it emits at the WARNING level.
frame_id_base string tf frame prefix
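The `schema_mask` parameter is a bitmask ORing together the image types to stream. The bit positions below are purely hypothetical placeholders (the real values come from the o3d3xx-schema tool / libo3d3xx headers); the sketch only illustrates composing such a mask:

```python
# Hypothetical bit positions, for illustration only -- consult
# o3d3xx-schema / libo3d3xx for the real schema values.
IMG_RDIS = 1 << 0   # radial distance image
IMG_AMP  = 1 << 1   # normalized amplitude image
IMG_RAMP = 1 << 2   # raw amplitude image
IMG_CART = 1 << 3   # cartesian data (cloud)

def make_mask(*flags):
    """OR together the requested image-type flags into one mask."""
    mask = 0
    for f in flags:
        mask |= f
    return mask

# Stream only the depth and normalized amplitude images:
mask = make_mask(IMG_RDIS, IMG_AMP)
```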

/o3d3xx/camera_tf

This node is of type tf/static_transform_publisher. It establishes a frame_id for the camera in the global tf tree. This node is launched from the primary camera.launch file:

$ roslaunch o3d3xx camera.launch

When run as above, the tf publishing node would be named /o3d3xx/camera_tf and the camera coordinate frame would be o3d3xx/camera_link in the tf tree.

You can customize this naming via the frame_id_base arg or implicitly through the ns (namespace) and nn (node name) command line arguments passed to the camera.launch file. For example, if you specify your roslaunch command as:

$ roslaunch o3d3xx camera.launch ns:=robot nn:=front_camera

The node name will be /robot/front_camera_tf and the camera frame will be robot/front_camera_link in the tf tree.

/o3d3xx/camera/config_node

This node is used as a proxy to simplify calling the /o3d3xx/camera/Config service offered by the /o3d3xx/camera node. It was noted above that there appears to be a limitation in the YAML parser of the ROS rosservice command line tool. Specifically, it seems that it is not capable of assigning a JSON string to a variable. This is the reason for this node. This is not a long-running node but rather works like a typical command-line tool would: you invoke it, it runs, and exits. The following command line will launch this node:

$ roslaunch o3d3xx config.launch

Parameters

Name Data Type Description
infile string By default, this node will read `stdin` for a JSON string to use to pass to the `/o3d3xx/camera/Config` service. However, if this parameter is specified it will read the JSON from this file.

/rviz

This package offers a launch script that wraps the execution of rviz so that the display will be conveniently configured for visualizing the /o3d3xx/camera data. To launch this node:

$ roslaunch o3d3xx rviz.launch

Running the command as above will color the point cloud with the data from the normalized amplitude image (i.e., the intensity).

The rviz window should look something like (assuming you are coloring the point cloud with the intensity data):

[screenshot: rviz window visualizing the camera data]

/o3d3xx/camera/XXX_throttler

This package offers a launch script that wraps the topic_tools/throttle node so that it can throttle the core topics from the camera. Specifically, it throttles /o3d3xx/camera/cloud to /o3d3xx/camera/cloud_throttle, /o3d3xx/camera/amplitude to /o3d3xx/camera/amplitude_throttle, /o3d3xx/camera/raw_amplitude to /o3d3xx/camera/raw_amplitude_throttle, /o3d3xx/camera/depth to /o3d3xx/camera/depth_throttle, and /o3d3xx/camera/confidence to /o3d3xx/camera/confidence_throttle. To launch this node:

$ roslaunch o3d3xx throttled.launch

By default, it will throttle the above named topics to 1 Hz. You can change the frequency with the hz command line argument. For example, to send data at 2 Hz:

$ roslaunch o3d3xx throttled.launch hz:=2.0

Using this launch file to launch this set of nodes is strictly optional. We have found use for it in two ways. First, to slow down the publishing frequency of the topics when used in conjunction with the /o3d3xx/camera/file_writer node for collecting data (i.e., in those instances when we really do not need all the data but rather some subsampling of it). Second, if we are running the camera on a drone (for example) that has a slower radio link down to a ground control station running rviz where we want to see what the camera sees while the drone is in flight. Clearly there are other uses for this, YMMV.
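The throttling behavior itself is simple to model: a message passes through only if at least 1/hz seconds have elapsed since the last one that passed. A sketch of that behavior (not the topic_tools implementation):

```python
def throttle(timestamps, hz):
    """Keep only those message timestamps that arrive at least
    1/hz seconds after the previously kept one. Models the observable
    behavior of topic throttling; not the topic_tools source."""
    period = 1.0 / hz
    kept, last = [], None
    for t in timestamps:
        if last is None or t - last >= period:
            kept.append(t)
            last = t
    return kept

# A 10 Hz stream throttled to 1 Hz keeps roughly one message per second:
kept = throttle([i * 0.1 for i in range(30)], 1.0)
```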

Configuring Camera Settings

Configuring the camera is accomplished by passing a JSON string to the /o3d3xx/camera/config_node which will call the /o3d3xx/camera/Config service to mutate the camera settings. Using a JSON string to configure the camera has the following primary benefits:

  1. Configuration is declarative. The camera configuration will reflect that which is described by the JSON file.
  2. The JSON data is human-consumable and easily edited in a text editor. This makes it very convenient for headless embedded systems.
  3. The JSON data is machine parsable, so configuring the camera on the fly via programmable logic is also possible.

There are also a few drawbacks to using JSON, most notably the lack of comments and of an enforceable schema. One could argue that the latter keeps things flexible. Nonetheless, JSON is the format used by libo3d3xx and, by extension, this ROS package.

An example JSON file is shown here (this is the result of calling the /o3d3xx/camera/Dump service on a development system). When passing a JSON string (like the file linked above) to the /o3d3xx/camera/Config service (or to the /o3d3xx/camera/config_node), the following rules are used to configure the camera:

  1. The Device section is processed and saved on the camera.
  2. The Apps section is processed. For each app:
     • If the Index key is present, the app at that Index is looked up. If it exists, it is edited to reflect the data in the JSON file. If no app exists at that Index, a new app is created with the parameters from the JSON file. It is not guaranteed that the new app will receive the specified Index.
     • If the Index key is not present, a new app is created with the parameters as specified in the JSON file.
  3. The active application is set by consulting the desired ActiveApplication index from the Device section of the JSON. If the specified Index does not exist, the active application is not set.
  4. The Net section is processed. A reboot of the camera may be necessary after changing the camera's network parameters. Additionally, you will likely need to restart the /o3d3xx/camera node, pointing it to the new IP address (if that is what you changed).
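The per-app rules can be modeled in a few lines. This is a sketch of the lookup-then-edit-or-create logic only; the real work happens on-camera via libo3d3xx, and the data shapes here are illustrative:

```python
def apply_app_config(existing_apps, incoming):
    """Model of the per-app configuration rules: edit the app at
    'Index' if it exists, otherwise create a new app. A sketch only."""
    idx = incoming.get("Index")
    if idx is not None and idx in existing_apps:
        existing_apps[idx].update(incoming)          # edit in place
    else:
        # A created app is not guaranteed the requested Index:
        new_idx = max(existing_apps, default=0) + 1
        existing_apps[new_idx] = dict(incoming)
    return existing_apps

apps = {1: {"Name": "default"}}
apply_app_config(apps, {"Index": 1, "Name": "edited"})   # edits app 1
apply_app_config(apps, {"Name": "brand-new"})            # creates app 2
```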

It should also be noted that any portion of the JSON tree can be specified to configure only that part of the camera. The only rule to follow is that all keys should be fully qualified. For example, to simply set the active application, you can use a JSON snippet like this:

{
    "o3d3xx":
    {
        "Device":
        {
            "ActiveApplication": "2"
        }
    }
}

The above snippet is provided as an example here. To apply this to your camera, you can:

$ roslaunch o3d3xx config.launch infile:=/path/to/ex_set_active.json

It was also noted above that the /o3d3xx/camera/config_node will read stdin by default, so you could also:

$ echo '{"o3d3xx":{"Device":{"ActiveApplication":"2"}}}' | roslaunch o3d3xx config.launch

Here is another example JSON file. This one adds a new application to the camera, using the default values for the high-dynamic-range imager. The application is added to the camera because no Index is specified for it. If an Index were specified, the application at that Index, if present, would be edited to reflect this configuration.

In general, a simple way to configure camera settings without having to memorize the JSON syntax would be to simply dump the current camera settings to a file:

$ rosservice call /o3d3xx/camera/Dump > /tmp/camera.json

Then, open /tmp/camera.json with a text editor to create a declarative JSON configuration for your camera. Be sure to delete the variable names from the rosservice output if you are following this example word-for-word. Additionally, you can delete any unnecessary keys if you would like; however, this is not strictly necessary, as the /o3d3xx/camera/Config service will leave unedited values unchanged on the camera. Once you have a configuration that you like, you can:

$ roslaunch o3d3xx config.launch infile:=/tmp/camera.json

You can check that your configuration is active by calling the /o3d3xx/camera/Dump service again.
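The dump → edit → apply loop is also easy to script. A sketch using Python's json module, where the dump string is an abbreviated stand-in for real `Dump` output:

```python
import json

# Abbreviated stand-in for the JSON returned by /o3d3xx/camera/Dump:
dumped = '{"o3d3xx": {"Device": {"ActiveApplication": "1", "Name": "cam"}}}'

config = json.loads(dumped)
config["o3d3xx"]["Device"]["ActiveApplication"] = "2"  # edit one setting

# Keys must be fully qualified, so writing back only this subtree is valid:
snippet = {"o3d3xx": {"Device": {
    "ActiveApplication": config["o3d3xx"]["Device"]["ActiveApplication"]}}}
payload = json.dumps(snippet)  # suitable for the Config service / config_node
```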

TODO

Please see the Github Issues.

LICENSE

Please see the file called LICENSE.

AUTHORS

Tom Panzarella [email protected]

o3d3xx-ros's People

Contributors

bigcmos, mikeferguson, shaun-edwards, theseankelly, tpanzarella


o3d3xx-ros's Issues

only half of the information from xyz_image

Hello,

We are using the xyz_image topic to determine the x, y, z value of a pixel. When we read from this topic, it looks like we only get half of the information, because each value is 16-bit but we receive two 8-bit values. If we combine those 8-bit values we get the correct distance, but this way we only get half of the pixels.

Are we doing something wrong, or is this normal?
I hope someone can help.

// ROS
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <stdlib.h>
#include <std_msgs/String.h>

// OpenCV
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
namespace enc = sensor_msgs::image_encodings;

// Read message
void read_image(const sensor_msgs::ImagePtr& msg)
{
// Read image
cv_bridge::CvImagePtr img_ptr;

try
	{
  		img_ptr = cv_bridge::toCvCopy(msg, enc::TYPE_16SC3);
	}
catch (cv_bridge::Exception& e)
	{
  		ROS_ERROR("cv_bridge exception: %s", e.what());
  		return;
	}

// Split channels
Mat channels[3];
split(img_ptr->image, channels);

// Create mat objects for calculations
Mat img_z = Mat(channels[0].rows, channels[0].cols, CV_16SC1);
Mat img_x = Mat(channels[1].rows, channels[1].cols, CV_16SC1);
Mat img_y = Mat(channels[2].rows, channels[2].cols, CV_16SC1);

// build the z-axis matrix
for (int row = 0; row < channels[0].rows; row++)
{	
	for (int col = 0; col < channels[0].cols*2; col = col+2)
	{
		int8_t val0 = channels[0].at<uchar>(row, col);
		int8_t val1 = channels[0].at<uchar>(row, col+1);
		int16_t result = (val1 << 8 | val0);			
		img_z.at<uchar>(row, col/2) = result;	
	}

}
// build the x-axis matrix
for (int row = 0; row < channels[1].rows; row++)
{
	for (int col = 0; col < channels[1].cols*2; col = col+2)
	{
		int8_t val0 = channels[1].at<uchar>(row, col);
		int8_t val1 = channels[1].at<uchar>(row, col+1);
		int16_t result = (val1 << 8 | val0);			
		img_x.at<uchar>(row, col/2) = result;	
		
	}

}
// build the y-axis matrix
for (int row = 0; row < channels[2].rows; row++)
{
	for (int col = 0; col < channels[2].cols*2; col = col+2)
	{
		int8_t val0 = channels[2].at<uchar>(row, col);
		int8_t val1 = channels[2].at<uchar>(row, col+1);
		int16_t result = (val1 << 8 | val0);			
		img_y.at<uchar>(row, col/2) = result;	
	}

}

// show the z image
namedWindow("img_z", CV_WINDOW_AUTOSIZE);
imshow("img_z", img_z);
waitKey(1);

}

// Start and initialize the node
int main(int argc, char **argv)
{
// Initialize ros node
ros::init(argc, argv, "listener");
ros::NodeHandle nh;

// Subscribe to the topic
ros::Subscriber sub = nh.subscribe("o3d3xx/camera/xyz_image", 100, read_image);

// Wait for messages
ros::spin();

}
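For what it's worth, the byte-pairing in the snippet can be reproduced in plain Python. It shows why viewing signed 16-bit pixels as single bytes doubles the apparent column count, and why recombining adjacent bytes (with a sign fix) recovers the values. This illustrates only the arithmetic, assuming a little-endian CV_16S buffer; if that holds, cv_bridge already hands back 16-bit elements, and accessing them with `at<cv::Vec3s>` would avoid the manual reassembly entirely.

```python
import struct

# Three signed 16-bit pixel values, packed little-endian, the way they
# would sit in a CV_16S row buffer:
pixels = [1500, -42, 0]
buf = struct.pack("<3h", *pixels)

# Viewed byte-by-byte, the row appears twice as wide:
assert len(buf) == 2 * len(pixels)

# Recombining adjacent bytes, as the snippet does, recovers each value:
rebuilt = []
for i in range(len(pixels)):
    lo, hi = buf[2 * i], buf[2 * i + 1]
    v = lo | (hi << 8)
    if v >= 0x8000:          # restore the int16 sign
        v -= 0x10000
    rebuilt.append(v)
```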

inquiry: expected output of rosnode info ...

When I launch the camera, I get the following information. It also lists the node running on localhost.

wegunterjr@ros-VB:~$ roslaunch o3d3xx camera.launch 
... logging to /home/wegunterjr/.ros/log/94d49286-92d6-11e5-b905-08002798ae28/roslaunch-ros-VB-10343.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://ros-VB:45169/

SUMMARY
========

PARAMETERS
 * /o3d3xx/camera/ip: 192.168.0.69
 * /o3d3xx/camera/password: 
 * /o3d3xx/camera/publish_viz_images: True
 * /o3d3xx/camera/timeout_millis: 500
 * /o3d3xx/camera/xmlrpc_port: 80
 * /rosdistro: indigo
 * /rosversion: 1.11.16

NODES
  /o3d3xx/
    camera (o3d3xx/o3d3xx_node)
    camera_tf (tf2_ros/static_transform_publisher)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
process[o3d3xx/camera-1]: started with pid [10361]
process[o3d3xx/camera_tf-2]: started with pid [10362]

What should I expect the output to be when I run rosnode info?
Shouldn't it repeat the host URI, etc.?

rosnode info /o3d3xx/
Node [/o3d3xx/]
Publications: None

Subscriptions: None

Services: None

cannot contact [/o3d3xx/]: unknown node

Amplitude image scaling

I have a similar issue to #23. I am trying to use the /ifm/amplitude image in order to perform extrinsic calibration of the sensor location. However, the default amplitude image is very dark and the calibration target cannot be seen in the image.
frame0000
I looked at the data and noticed that there is information there, but it appears to be scaled like it should be mono8, but it is being published as mono16. Is the data supposed to be mono16? I tried publishing as a mono8, but it is too bright and still wouldn't work for extrinsic calibration.
frame0003
I then scaled the image by the mean and published as the original mono16. With this I got a much better result.
frame0001
What is the scaling for the amplitude image and why is the image so dark by default? Or is the image supposed to be dark? We have auto exposure turned on and are using the less than 5m setting, so I would think that the image should be brighter. Would it be possible to change the scaling of the image prior to publishing in order to enable extrinsic calibration using the amplitude image?
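The mean-based rescaling the poster describes might look like the following. This is a sketch of that normalization idea only, not the node's actual amplitude scaling, and the pixel values are made up:

```python
def rescale_to_mean(pixels, target_mean):
    """Rescale an image so its mean lands near target_mean, clamping to
    the mono16 range. A sketch of the normalization described above."""
    mean = sum(pixels) / len(pixels)
    if mean == 0:
        return list(pixels)
    scale = target_mean / mean
    return [min(65535, int(p * scale)) for p in pixels]

# A dark image (mean 200) brightened to a mean of 2000:
bright = rescale_to_mean([100, 200, 300], 2000.0)
```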

camera_info topic?

Perhaps I'm stuck in stereo camera land, but it seems there should be a camera_info topic to go along with the depth image (http://www.ros.org/reps/rep-0118.html).

I'm trying to convert depth images to 3D coordinates in a standardized way for multiple kinds of sensors - Kinect, CRL stereo, O3D. I can do the projection on the other units using their camera_info camera matrix, but o3d3xx isn't publishing one.

Am I missing something? How do I get X and Y scale from the depth image?
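The projection the poster wants is the standard camera_info-based pinhole back-projection (REP 118): x = (u − cx)·z / fx and y = (v − cy)·z / fy. A sketch, with made-up intrinsic values for illustration:

```python
def depth_to_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a depth-image pixel (u, v) to a 3D point using a
    pinhole camera matrix, per REP 118. Intrinsics here are illustrative;
    with a real camera_info message they would come from K."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# The principal-point pixel maps straight down the optical axis:
p = depth_to_xyz(88.0, 66.0, 1.5, 90.0, 90.0, 88.0, 66.0)
```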

Interpreting /cloud message

I'm working with @bsbc trying to use the pointcloud ROS message from the ifm camera and have a couple questions about it. Here is a sample of message contents:

height: 132
width: 176
fields:

name: x
offset: 0
datatype: 7
count: 1

name: y
offset: 4
datatype: 7
count: 1

name: z
offset: 8
datatype: 7
count: 1

name: intensity
offset: 16
datatype: 7
count: 1

is_bigendian: False

point_step: 32

row_step: 5632

data: [0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 128, 63, 0, 0, 22, 68, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 128, 63, 0, 0, 30, 68, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 128, 63, 0, 128, 35, 68, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...

I show data for 3 points (96 bytes). My questions are:

  • it seems there should be 32 bytes per pixel, so 132 × 176 × 32 = 743424 bytes, but the size of the message seems to be half of that, 371712
  • according to the format, bytes 12-15 don't have meaning, and yet they are not zero, what do they mean? (in the example, they are all 0,0,128,63 which is float 1.0)
  • 0,0,192,127 is float nan, so the three points shown above are invalid, I guess, but still they have different intensity values (if bytes 16-19 are indeed intensity; the float values above are 600.0, 632.0 and 654.0). Are these values meaningful?
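Using the field offsets and point_step from the message, the first sample point can be decoded with Python's struct module. This confirms the observations above: x/y/z are NaN, bytes 12-15 decode to 1.0, and bytes 16-19 decode to 600.0 (a decoding sketch only; what those values mean is for the maintainers to say):

```python
import math
import struct

# First 32-byte point (point_step) from the sample message above:
data = bytes([0, 0, 192, 127,     # x, offset 0  (float32 NaN)
              0, 0, 192, 127,     # y, offset 4  (float32 NaN)
              0, 0, 192, 127,     # z, offset 8  (float32 NaN)
              0, 0, 128, 63,      # offset 12, not in the fields list
              0, 0, 22, 68,       # intensity, offset 16
              0, 0, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0])  # padding to 32

x, y, z = struct.unpack_from("<3f", data, 0)
filler = struct.unpack_from("<f", data, 12)[0]
intensity = struct.unpack_from("<f", data, 16)[0]
```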

Thanks for your help.

Can't find libopencv_core3.so.3.2

Hello,

I performed an update using the Ubuntu 16.04 package manager but now I get the following error when trying to start my o3d303 camera:

/home/bloom/catkin_ws/devel/lib/o3d3xx/o3d3xx_node: error while loading shared libraries: libopencv_core3.so.3.2: cannot open shared object file: No such file or directory

If I look into /opt/ros/kinetic/lib, I see libopencv_core3.so, libopencv_core3.so.3.3, and libopencv_core3.so.3.3.1, but not the version I need.

How do I fix this?

install issues test errors

Hello,

I'm having issues installing this ROS package.
I installed libo3d3xx version 0.4.5 and use ROS Indigo.
I am using the newest version of this ROS package.

It always fails at catkin_make run_tests:

sjors@sjors-VirtualBox:~/dev/o3d3xx-catkin$ catkin_make -DCATKIN_ENABLE_TESTING=ON
Base path: /home/sjors/dev/o3d3xx-catkin
Source space: /home/sjors/dev/o3d3xx-catkin/src
Build space: /home/sjors/dev/o3d3xx-catkin/build
Devel space: /home/sjors/dev/o3d3xx-catkin/devel
Install space: /home/sjors/dev/o3d3xx-catkin/install

Running command: "cmake /home/sjors/dev/o3d3xx-catkin/src -DCATKIN_ENABLE_TESTING=ON -DCATKIN_DEVEL_PREFIX=/home/sjors/dev/o3d3xx-catkin/devel -DCMAKE_INSTALL_PREFIX=/home/sjors/dev/o3d3xx-catkin/install -G Unix Makefiles" in "/home/sjors/dev/o3d3xx-catkin/build"

-- Using CATKIN_DEVEL_PREFIX: /home/sjors/dev/o3d3xx-catkin/devel
-- Using CMAKE_PREFIX_PATH: /opt/ros/indigo
-- This workspace overlays: /opt/ros/indigo
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/sjors/dev/o3d3xx-catkin/build/test_results
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.6.18
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 1 packages in topological order:
-- ~~ - o3d3xx
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'o3d3xx'
-- ==> add_subdirectory(o3d3xx)
-- Boost version: 1.54.0
-- Found the following Boost libraries:
-- filesystem
-- Using these message generators: gencpp;genlisp;genpy
-- o3d3xx: 1 messages, 5 services
-- Configuring done
-- Generating done
-- Build files have been written to: /home/sjors/dev/o3d3xx-catkin/build

Running command: "make -j1 -l1" in "/home/sjors/dev/o3d3xx-catkin/build"

[ 0%] Built target _o3d3xx_generate_messages_check_deps_Rm
[ 0%] Built target _o3d3xx_generate_messages_check_deps_Config
[ 0%] Built target _o3d3xx_generate_messages_check_deps_Dump
[ 0%] Built target std_msgs_generate_messages_cpp
[ 0%] Built target _o3d3xx_generate_messages_check_deps_Extrinsics
[ 0%] Built target _o3d3xx_generate_messages_check_deps_GetVersion
[ 0%] Built target _o3d3xx_generate_messages_check_deps_Trigger
[ 27%] Built target o3d3xx_generate_messages_cpp
[ 31%] Built target o3d3xx_config_node
[ 31%] Built target std_msgs_generate_messages_lisp
[ 59%] Built target o3d3xx_generate_messages_lisp
[ 59%] Built target std_msgs_generate_messages_py
[ 95%] Built target o3d3xx_generate_messages_py
[ 95%] Built target o3d3xx_generate_messages
[100%] Built target o3d3xx_node
sjors@sjors-VirtualBox:~/dev/o3d3xx-catkin$ catkin_make run_tests
Base path: /home/sjors/dev/o3d3xx-catkin
Source space: /home/sjors/dev/o3d3xx-catkin/src
Build space: /home/sjors/dev/o3d3xx-catkin/build
Devel space: /home/sjors/dev/o3d3xx-catkin/devel
Install space: /home/sjors/dev/o3d3xx-catkin/install

Running command: "cmake /home/sjors/dev/o3d3xx-catkin/src -DCATKIN_DEVEL_PREFIX=/home/sjors/dev/o3d3xx-catkin/devel -DCMAKE_INSTALL_PREFIX=/home/sjors/dev/o3d3xx-catkin/install -G Unix Makefiles" in "/home/sjors/dev/o3d3xx-catkin/build"

-- Using CATKIN_DEVEL_PREFIX: /home/sjors/dev/o3d3xx-catkin/devel
-- Using CMAKE_PREFIX_PATH: /opt/ros/indigo
-- This workspace overlays: /opt/ros/indigo
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/sjors/dev/o3d3xx-catkin/build/test_results
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.6.18
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 1 packages in topological order:
-- ~~ - o3d3xx
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'o3d3xx'
-- ==> add_subdirectory(o3d3xx)
-- Boost version: 1.54.0
-- Found the following Boost libraries:
-- filesystem
-- Using these message generators: gencpp;genlisp;genpy
-- o3d3xx: 1 messages, 5 services
-- Configuring done
-- Generating done
-- Build files have been written to: /home/sjors/dev/o3d3xx-catkin/build

Running command: "make run_tests -j1 -l1" in "/home/sjors/dev/o3d3xx-catkin/build"

Removing test result files from '/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx'

  • removing '/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/nosetests-test.xml'
  • removing '/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/rosunit-test_camera.xml'
  • removing '/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/rostest-test_o3d3xx.xml'
  • removing '/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/rosunit-test_services.xml'
    Built target clean_test_results_o3d3xx
    Built target tests
    -- run_tests.py: execute commands
    /opt/ros/indigo/share/rostest/cmake/../../../bin/rostest --pkgdir=/home/sjors/dev/o3d3xx-catkin/src/o3d3xx --package=o3d3xx --results-filename test_o3d3xx.xml --results-base-dir /home/sjors/dev/o3d3xx-catkin/build/test_results /home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test/o3d3xx.test
    ... logging to /home/sjors/.ros/log/rostest-sjors-VirtualBox-18103.log
    [ROSUNIT] Outputting test results to /home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/rostest-test_o3d3xx.xml
    terminate called after throwing an instance of 'o3d3xx::error_t'
    what(): XMLRPC call timed out
    terminate called after throwing an instance of 'o3d3xx::error_t'
    what(): XMLRPC call timed out
    Traceback (most recent call last):
    File "/home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test/test_services.py", line 66, in
    sys.exit(main())
    File "/home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test/test_services.py", line 62, in main
    rostest.rosrun('o3d3xx', 'test_services', TestServices, sys.argv)
    File "/opt/ros/indigo/lib/python2.7/dist-packages/rostest/init.py", line 145, in rosrun
    result = rosunit.create_xml_runner(package, test_name, result_file).run(suite)
    File "/opt/ros/indigo/lib/python2.7/dist-packages/rosunit/xmlrunner.py", line 233, in run
    test(result)
    File "/usr/lib/python2.7/unittest/suite.py", line 70, in call
    return self.run(*args, **kwds)
    File "/usr/lib/python2.7/unittest/suite.py", line 108, in run
    test(result)
    File "/usr/lib/python2.7/unittest/case.py", line 395, in call
    return self.run(*args, **kwds)
    File "/usr/lib/python2.7/unittest/case.py", line 331, in run
    testMethod()
    File "/home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test/test_services.py", line 18, in test_services
    rospy.wait_for_service("/Dump")
    File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 159, in wait_for_service
    raise ROSInterruptException("rospy shutdown")
    rospy.exceptions.ROSInterruptException: rospy shutdown
    testtest_camera ... ok
    testtest_services ... ERROR!
    ERROR: max time [60.0s] allotted for test [test_services] of type [o3d3xx/test_services.py]
    File "/usr/lib/python2.7/unittest/case.py", line 331, in run
    testMethod()
    File "/opt/ros/indigo/lib/python2.7/dist-packages/rostest/runner.py", line 148, in fn
    self.test_parent.run_test(test)
    File "/opt/ros/indigo/lib/python2.7/dist-packages/rostest/rostest_parent.py", line 132, in run_test
    return self.runner.run_test(test)
    File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/launch.py", line 681, in run_test
    (test.time_limit, test.test_name, test.package, test.type))

[ROSTEST]-----------------------------------------------------------------------

[o3d3xx.rosunit-test_camera/test_camera][FAILURE]-------------------------------
False is not true
File "/usr/lib/python2.7/unittest/case.py", line 331, in run
testMethod()
File "/home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test/test_camera.py", line 82, in test_camera
self.assertTrue(self.success)
File "/usr/lib/python2.7/unittest/case.py", line 424, in assertTrue
raise self.failureException(msg)

SUMMARY

  • RESULT: FAIL
  • TESTS: 1
  • ERRORS: 1
  • FAILURES: 1

rostest log file is in /home/sjors/.ros/log/rostest-sjors-VirtualBox-18103.log
-- run_tests.py: verify result "/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/rostest-test_o3d3xx.xml"
Built target _run_tests_o3d3xx_rostest_test_o3d3xx.test
Built target _run_tests_o3d3xx_rostest
-- run_tests.py: execute commands
/usr/bin/cmake -E make_directory /home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx
/usr/bin/nosetests-2.7 -P --process-timeout=60 --where=/home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test --with-xunit --xunit-file=/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/nosetests-test.xml


Ran 0 tests in 0.000s

OK
-- run_tests.py: verify result "/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/nosetests-test.xml"
Built target _run_tests_o3d3xx_nosetests_test
Built target _run_tests_o3d3xx_nosetests
Built target _run_tests_o3d3xx
Built target run_tests

Time stamp on images incorrect

The ROS time stamp that is applied to the images coming from the sensor uses ros::Time::now(), see here.

The true acquisition time is likely earlier than this, once the exposure time and the on-board processing are taken into account.

Is there a way to estimate this latency and apply it to the ROS time stamp?
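
There is no API in the driver for this today, but one pragmatic approach is to estimate a fixed latency and subtract it from the receive-side stamp. A minimal sketch — all latency numbers below are made-up placeholders, not values reported by the camera:

```python
# Illustrative sketch: back-date a receive-side timestamp by an estimated
# acquisition latency. The constants are assumptions for illustration only.

EXPOSURE_S = 0.005     # assumed integration time
PROCESSING_S = 0.010   # assumed on-board processing time
TRANSFER_S = 0.004     # assumed network transfer time

def estimated_latency_s():
    """Delay between mid-exposure and the receive-side ros::Time::now()."""
    return EXPOSURE_S / 2.0 + PROCESSING_S + TRANSFER_S

def adjusted_stamp(receive_time_s):
    """Shift a receive-side timestamp (seconds) back toward acquisition time."""
    return receive_time_s - estimated_latency_s()
```

In a node this offset would be subtracted from `header.stamp` (e.g. via a `ros::Duration`); reading the actual exposure time back from the camera's imager configuration would make the estimate less arbitrary.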

No camera output

Hi,
I'm using an O3D311 and I'm not getting the grayscale image shown in README.md. The amplitude image is as shown below. I have updated the camera firmware to the latest version. Am I doing something wrong, or does this camera not support 2D image output?

o3d311

camera config error

Using Goldeneye-C3_1.3.712.swu on an O3D303, we ran into some problems, maybe related to using the updated firmware.

Here is the process we followed:
Using Windows:

  1. updated the firmware to Goldeneye-C3_1.3.712.swu with the VisionAssistant SW
  2. created different applications (settings) using the VisionAssistant SW

Using Ubuntu 14.04:

  3. launched the camera with the standard o3d3xx-ros launch process: 'roslaunch o3d3xx camera.launch ip:=192.168.0.69'
  4. verified that things work using rviz
  5. dumped the settings with 'rosservice call ?/?/Dump > settings.json'
  6. then tried to load the 'settings.json' file back to the device with 'roslaunch o3d3xx config.launch infile:=settings.json', which is the point where things started to go wrong, despite the output of this process indicating that the configuration went through without error:

The camera.launch process started timing out, and continued to do so after relaunching and power-cycling. We lost the connection to the camera.

We had to go back to step 1 in order to recover this.

Getting Libo3d3xx on Yocto

I would love to use this library on the Intel Edison.
The problem is that there are certain packages that I cannot find with opkg.
opkg does not provide these packages:
LIB_glog
LIB_xmlrpc_clientxx
LIB_xmlrpcxx

These are all named differently under apt-get, but they do not exist for opkg.

Is it possible to use libo3d3xx on Yocto, or am I wasting my time?

Incorporate filtering at a ROS node/driver level

I would like to incorporate my own filtering techniques on the sensor outputs (pointclouds and potentially 2D images). In order to achieve the best performance, this can be done in two ways:

  • nodelets - eliminates costly message copying and sending, but requires the ROS node to be written in a certain way (which it currently is not)
  • filter at a driver level - embed filtering at a driver level, but this brings up several questions:
    1. Can filters be applied at an image level (easier with opencv) that affect the point cloud output?
    2. Is there a generic way to do this? Is there a set of filters that we can agree should be supported in general? Here are my desired filters (in the order they would be applied):

There are more filtering options with 2D images. Ideally, these would be applied to the 2D images and then the pointcloud recalculated. However, as near as I can tell, the point cloud is calculated on the sensor and not in the ROS node.

IFM 03X101

Hello, Happy New Year! I have an IFM O3X101 camera (a 3D camera). Can someone tell me whether this package will work with my camera? The package description says it works with O3D3xx cameras, so I am unsure whether the package is usable with the camera version I have.

Private node handle for topics and services

The o3d3xx_node uses a public node handle to create all publishers and to get all parameters. If there are multiple cameras, each of these topics/parameters has to be remapped, which results in a lot of excess remapping. It would be nice to have all of the topics and parameters use a private node handle to facilitate using multiple cameras at once without lots of remapping.

Cannot configure O3D303

When I want to use ROS to configure the O3D303 camera by using "roslaunch o3d3xx config.launch infile:=/home/administrator/wspace/src/o3d3xx/json/ex_camera2.json", I always get the message:
[ INFO] [1458240896.994179177]: status=-9014
[ INFO] [1458240896.994316742]: msg=XMLRPC value out of range

I also tried using "rosservice call /o3d3xx/camera/Dump > /tmp/camera.json" to get the current settings, then, without changing anything, ran "roslaunch o3d3xx config.launch infile:=/tmp/camera.json" and got the message:
[ INFO] [1458241148.110545217]: status=-1
[ INFO] [1458241148.110678292]: msg=(1): expected object or array

Does anybody know how to fix it? Thank you!
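
One plausible explanation for the second error is that `rosservice call` writes its response as YAML (a status line plus a `config:` field), so the redirected file is not raw JSON, and the config loader then complains that it "expected object or array". A sketch of extracting the JSON string from such a transcript — the exact YAML shape here is an assumption about how the Dump response gets printed:

```python
def extract_json(dump_text):
    """Pull the JSON string out of a `rosservice call .../Dump` transcript.

    Assumes the transcript is YAML of the form:
        status: 0
        config: <json on one line>
    """
    for line in dump_text.splitlines():
        if line.startswith('config:'):
            # keep everything after the first colon, trimmed
            return line.split(':', 1)[1].strip()
    raise ValueError('no config field found in dump transcript')
```

If that is the cause, feeding the extracted string (rather than the whole transcript) to config.launch should at least get past the parse error.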

PointCloud<pcl::PointWithRange> range data

Hi,

I am having some trouble understanding the point cloud data I am getting from my camera. I subscribed to the point cloud topic by defining the callback function in this manner:

void cloud_cb (const pcl::PointCloud<pcl::PointWithRange>::ConstPtr& input_const_ptr) {...}

I did this to obtain the range value of each point relative to the origin. However, I have been getting huge "range" values that do not seem to be in standard units like meters or mm (as seen below). Is there a conversion factor that would give me the actual range from the camera to each individual point? Thanks!

# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z range
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 176
HEIGHT 132
VIEWPOINT 0 0 0 1 0 0 0
POINTS 23232
DATA ascii
1.929 1.251 0.90799999 233
1.97 1.258 0.92400002 247
2.0020001 1.2589999 0.935 250
2.003 1.24 0.93099999 248
2.0090001 1.225 0.93000001 239
2.016 1.21 0.93000001 232
2.0179999 1.193 0.92799997 199
2.0309999 1.182 0.93000001 148
2.04 1.169 0.93099999 162
2.053 1.158 0.93400002 157
2.0569999 1.142 0.93199998 173
2.0639999 1.127 0.93199998 188
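
For what it's worth, the fourth channel of the driver's cloud appears to be the amplitude (intensity), so mapping the cloud onto `pcl::PointWithRange` would reinterpret amplitude values as "range". If a true radial range is needed, it can simply be recomputed from x, y, z; a quick check on the first point of the dump above:

```python
import math

def euclidean_range(x, y, z):
    """Radial distance from the optical origin, in the units of x/y/z."""
    return math.sqrt(x * x + y * y + z * z)

# First point from the PCD dump above (x=1.929, y=1.251, z=0.908):
# the recomputed range is roughly 2.47 m, nothing like the stored 233,
# which is consistent with that field actually holding an amplitude.
print(euclidean_range(1.929, 1.251, 0.90799999))
```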

trouble decoding compressed depth map from IFM

Looking at the raw depth values, I see 16-bit integers measuring what I assume are millimeters.
Looking at the compressed topic (i.e. from a rosbag), I decompress with cv2, something like this:
pic = cv2.imdecode(np.fromstring(msg.data,np.uint8),cv2.CV_LOAD_IMAGE_UNCHANGED)
But what comes out is an array of uint8, mostly equal to 255, and looking vaguely like the raw depth image.

Is there a different way I'm supposed to be decompressing the depth image?
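
If the bag was recorded from the standard `compressedDepth` topic, the payload is not a bare PNG: `compressed_depth_image_transport` prepends a small binary header (an int32 format enum plus two float32 quantization parameters, 12 bytes) before the PNG bytes, so decoding the buffer from byte 0 confuses `imdecode`. A sketch of stripping it — the header layout is assumed from the image_transport_plugins source:

```python
import struct

HEADER_FMT = '<i2f'                        # int32 format enum + 2x float32
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 12 bytes

def split_compressed_depth(data):
    """Split a compressedDepth payload into its header fields and the
    PNG bytes that follow."""
    fmt, quant_a, quant_b = struct.unpack_from(HEADER_FMT, data)
    return (fmt, quant_a, quant_b), data[HEADER_SIZE:]

# The PNG part can then be decoded as 16-bit data, e.g.:
#   cv2.imdecode(np.frombuffer(png, np.uint8), cv2.IMREAD_UNCHANGED)
# which should yield a uint16 array for a 16UC1 depth image.
```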

how to diagnose IFM

I checked all the ROS topics for the IFM camera and didn't find any diagnostics topic. If there is none, how can we know the status of the camera? Thank you!

Extrinsics not latched

We are using /o3d3xx/camera/extrinsics and /o3d3xx/camera/unit_vectors for ray-tracing.

We noticed that the unit_vectors are published once (i.e. latched) but the extrinsics are published continuously.

We were wondering why this is happening. Is it possible that the extrinsics change during operation / after camera (factory) calibration?

Saturated pixels

When using the VisionAssistant SW it is possible to see "saturated" pixels.

However, o3d3xx-ros does not seem to publish information about saturation.

We wanted to use the saturation info to distinguish between pixels that are either too close or too far away and therefore do not report valid point-cloud data:

  • pixels that are too close typically have very low amplitude and confidence values and, in addition, are over-saturated
  • distant pixels also have low amplitude and confidence, but are not saturated.

How would it be possible to access the saturation info using o3d3xx-ros package?

Alternatively, would there be a different way to differentiate between "close" vs. "far away" objects/pixels that do not report valid depth data?

Thanks
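
Not a full answer, but the per-pixel status information the driver does expose is the confidence image (`/o3d3xx/camera/confidence`): bit 0 of each confidence byte marks an invalid pixel. Whether one of the higher bits specifically flags saturation would need to be confirmed against IFM's documentation. A sketch of masking on the documented bit:

```python
import numpy as np

def good_pixel_mask(confidence):
    """True where bit 0 of the confidence image is clear (valid pixel)."""
    return (confidence & 0x1) == 0

conf = np.array([[0, 1], [2, 3]], dtype=np.uint8)
print(good_pixel_mask(conf))   # valid pixels are those with an even value
```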

Generate point cloud from depth image

I am using the depth image to detect irregularities or obstacles in the path of an AGV with an IFM O3D303 mounted. The process involves removing all depth data other than obstacles from the image through a mask created by gradient-based processing. I would like to convert this selective depth image to a point cloud. Is that possible using this package?

P.S. I have a working version of this using OpenNI to convert a selective depth image to a point cloud; I am looking for similar functionality.
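
Nothing in this package does that directly, but since the driver publishes per-pixel unit vectors (`/o3d3xx/camera/unit_vectors`) alongside the depth image, a masked cloud can be reconstructed by scaling each ray by its radial distance. A sketch under those assumptions (radial depth in mm; the extrinsic translation is ignored here):

```python
import numpy as np

def depth_to_cloud(depth_mm, unit_vectors, mask=None):
    """depth_mm: (H, W) radial distances in mm; unit_vectors: (H, W, 3)
    per-pixel ray directions. Returns an (N, 3) cloud in meters."""
    depth_m = depth_mm.astype(np.float64) / 1000.0
    cloud = unit_vectors * depth_m[..., None]    # scale each ray by its range
    if mask is not None:
        cloud = cloud[mask]                      # keep only the masked pixels
    return cloud.reshape(-1, 3)
```

Whether the depth image is radial distance or Cartesian z, and whether an extrinsic offset must be added, should be double-checked against the camera documentation before trusting the output.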

Compiling o3d3xx-ros

Hi guys,

I have an IFM O3D302 ToF camera and wanted to install the ROS drivers. After I installed all the required libraries and worked through several issues (on my end), I am now stuck with the following error when I run catkin_make on my workspace.

I updated the firmware to the latest version, installed the libo3d3xx modules correctly (image, camera, pcicclient, framegrabber), and I am running Ubuntu 14.04, ROS Indigo, and PCL 1.7.

I tried to uninstall OpenCV completely and reinstall only what is required by this package for ROS OpenCV (i.e., cv-bridge, image-transport, camera-info-manager).

Any help/suggestion is highly appreciated.

/usr/bin/ld: warning: libopencv_core.so.3.2, needed by /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../lib/libo3d3xx_image.so, may conflict with libopencv_core.so.2.4
/usr/bin/ld: CMakeFiles/o3d3xx_node.dir/src/o3d3xx_node.cpp.o: undefined reference to symbol '_ZN2cv6String10deallocateEv'
//usr/local/lib/libopencv_core.so.3.2: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[2]: *** [/home/efmzafra/Development/hyg_ida/devel/lib/o3d3xx/o3d3xx_node] Error 1
make[1]: *** [o3d3xx-ros/CMakeFiles/o3d3xx_node.dir/all] Error 2
make: *** [all] Error 2
Invoking "make -j8 -l8" failed

Catkin Regression?

It seems that catkin is not respecting the -DCMAKE_INSTALL_PREFIX option these days. Thus, the install instructions in our README no longer work. I am going to look into a fix and update the install instructions appropriately.

SWTrigger Service doesn't return error codes

I'm using the software trigger service to get data, and I'm successfully able to get data. However, when I introduce an error condition (such as no camera connected) and trigger the SWTrigger service, I get a response status of 0 (OK). I would expect an error code for being unable to talk to the camera. I'm not sure whether this is an issue with the ROS part or with the library. Can someone confirm that I should be getting an error code from the software trigger service in cases such as no camera at the IP address?
Thanks

catkin_make run_tests failure

Hey Tom,
Thanks for your help. I'm having trouble running the unit tests. It seems to be a connectivity issue, which another user reported in #27. However, my O3D is using the default IP address (the same as in camera.launch), and I can ping the camera. I get the same error with both the current GitHub head and version 0.2.5. I'm running Ubuntu Trusty with ROS Indigo. The error I see is:

[ROSTEST]-----------------------------------------------------------------------

[o3d3xx.rosunit-test_camera/test_camera][FAILURE]-------------------------------
False is not true
File "/usr/lib/python2.7/unittest/case.py", line 331, in run
testMethod()
File "/home/sroth/dev/o3d3xx-catkin/src/o3d3xx/test/test_camera.py", line 73, in test_camera
self.success = self.compute_cartesian()
File "/home/sroth/dev/o3d3xx-catkin/src/o3d3xx/test/test_camera.py", line 172, in compute_cartesian
self.assertTrue(x_mask.sum() == 0)
File "/usr/lib/python2.7/unittest/case.py", line 424, in assertTrue
raise self.failureException(msg)

[o3d3xx.rosunit-test_services/test_services][passed]

SUMMARY

  • RESULT: FAIL
  • TESTS: 2
  • ERRORS: 0
  • FAILURES: 1

Thanks again for the help.
Stephan

Output pointcloud that can be recognized as from a projective device?

How do I get a point cloud from an o3d303 camera that PCL can recognize as from a projective device?

I'm trying to use the nodelet pcl/StatisticalOutlierRemoval on the output of a point cloud from an o3d303 camera. PCL complains that the cloud is not from a projective device. It says this:

[pcl::OrganizedNeighbor::radiusSearch] Input dataset is not from a projective device!
  Residual (MSE) 0.000133, using 1188 valid points

The definition of a projective device is here

Problem with nodes while launching ros

Hello, I'm receiving this while trying to launch the camera module:

root@ramon-VirtualBox:/home/ramon# roslaunch /home/ramon/proyecto_camara/o3d3xx-ros/launch/camera.launch ip:=192.168.1.73
... logging to /root/.ros/log/d14f125c-04ba-11e7-821b-0800272f682b/roslaunch-ramon-VirtualBox-7512.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://ramon-VirtualBox:44047/

SUMMARY

PARAMETERS

  • /o3d3xx/camera/assume_sw_triggered: False
  • /o3d3xx/camera/frame_id_base: o3d3xx/camera
  • /o3d3xx/camera/ip: 192.168.1.73
  • /o3d3xx/camera/password:
  • /o3d3xx/camera/publish_viz_images: True
  • /o3d3xx/camera/schema_mask: 15
  • /o3d3xx/camera/timeout_millis: 500
  • /o3d3xx/camera/timeout_tolerance_secs: 5.0
  • /o3d3xx/camera/xmlrpc_port: 80
  • /rosdistro: kinetic
  • /rosversion: 1.12.7

NODES
/o3d3xx/
camera (o3d3xx/o3d3xx_node)
camera_tf (tf2_ros/static_transform_publisher)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
ERROR: cannot launch node of type [o3d3xx/o3d3xx_node]: o3d3xx
ROS path [0]=/opt/ros/kinetic/share/ros
ROS path [1]=/opt/ros/kinetic/share
process[o3d3xx/camera_tf-2]: started with pid [7530]

Looks like a ROS problem. Any ideas?

Greetings and thanks,
Juan Ramón Gómez Martín.

Camera config to extract full resolution image

We have an O3D303 sensor mounted on an AGV at a height of 1 m and a 45 deg angle facing downwards. The sensor's primary purpose is to detect obstacles within a 1.5 m range in front of the AGV. I am looking for some camera config changes to suit this purpose.

  1. How to get full resolution (352 x 264) images from the sensor?
    • Required to detect poles & bollards in the path; more room to filter the output data
  2. How to set the sensor to use a single frequency for ToF?
    • For a higher frame acquisition rate and due to heating concerns
  3. What is the ideal integration time for a moving platform (< 1 m/s)?

hello from lib03d3xx - catkin_make having cv issues

viki@c3po:~/dev/o3d3xx-catkin$ catkin_make verbose=1
Base path: /home/viki/dev/o3d3xx-catkin
Source space: /home/viki/dev/o3d3xx-catkin/src
Build space: /home/viki/dev/o3d3xx-catkin/build
Devel space: /home/viki/dev/o3d3xx-catkin/devel
Install space: /home/viki/dev/o3d3xx-catkin/install
####
#### Running command: "make cmake_check_build_system" in "/home/viki/dev/o3d3xx-catkin/build"
####
####
#### Running command: "make verbose=1 -j2 -l2" in "/home/viki/dev/o3d3xx-catkin/build"
####
[  0%] Built target std_msgs_generate_messages_cpp
[  0%] Built target _o3d3xx_generate_messages_check_deps_Rm
[  0%] Built target _o3d3xx_generate_messages_check_deps_Dump
[  0%] Built target _o3d3xx_generate_messages_check_deps_Config
[  0%] Built target _o3d3xx_generate_messages_check_deps_GetVersion
[  0%] Built target std_msgs_generate_messages_lisp
Linking CXX executable /home/viki/dev/o3d3xx-catkin/devel/lib/o3d3xx/o3d3xx_file_writer_node
[  0%] Built target std_msgs_generate_messages_py
[ 25%] Built target o3d3xx_generate_messages_cpp
[ 50%] Built target o3d3xx_generate_messages_lisp
[ 81%] Built target o3d3xx_generate_messages_py
[ 87%] Built target o3d3xx_node
Scanning dependencies of target o3d3xx_config_node
[ 93%] Building CXX object o3d3xx/CMakeFiles/o3d3xx_config_node.dir/src/o3d3xx_config_node.cpp.o
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `cv::operator<<(cv::FileStorage&, char const*)':
o3d3xx_file_writer_node.cpp:(.text+0xa5): undefined reference to `cv::operator<<(cv::FileStorage&, cv::String const&)'
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `cv::FileStorage& cv::operator<< <cv::Mat>(cv::FileStorage&, cv::Mat const&)':
o3d3xx_file_writer_node.cpp:(.text+0x282): undefined reference to `cv::error(int, cv::String const&, char const*, char const*, int)'
o3d3xx_file_writer_node.cpp:(.text+0x2a9): undefined reference to `cv::write(cv::FileStorage&, cv::String const&, cv::Mat const&)'
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `cv::String::String(char const*)':
o3d3xx_file_writer_node.cpp:(.text._ZN2cv6StringC2EPKc[_ZN2cv6StringC5EPKc]+0x4f): undefined reference to `cv::String::allocate(unsigned long)'
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `cv::String::~String()':
o3d3xx_file_writer_node.cpp:(.text._ZN2cv6StringD2Ev[_ZN2cv6StringD5Ev]+0x14): undefined reference to `cv::String::deallocate()'
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `cv::String::String(std::string const&)':
o3d3xx_file_writer_node.cpp:(.text._ZN2cv6StringC2ERKSs[_ZN2cv6StringC5ERKSs]+0x69): undefined reference to `cv::String::allocate(unsigned long)'
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `O3D3xxFileWriterNode::ImageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&, std::string const&)':
o3d3xx_file_writer_node.cpp:(.text._ZN20O3D3xxFileWriterNode7ImageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEERKSs[_ZN20O3D3xxFileWriterNode7ImageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEERKSs]+0x4d8): undefined reference to `cv::FileStorage::FileStorage(cv::String const&, int, cv::String const&)'
o3d3xx_file_writer_node.cpp:(.text._ZN20O3D3xxFileWriterNode7ImageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEERKSs[_ZN20O3D3xxFileWriterNode7ImageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEERKSs]+0x5fc): undefined reference to `cv::imwrite(cv::String const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
collect2: error: ld returned 1 exit status
make[2]: *** [/home/viki/dev/o3d3xx-catkin/devel/lib/o3d3xx/o3d3xx_file_writer_node] Error 1
make[1]: *** [o3d3xx/CMakeFiles/o3d3xx_file_writer_node.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
Linking CXX executable /home/viki/dev/o3d3xx-catkin/devel/lib/o3d3xx/o3d3xx_config_node
[ 93%] Built target o3d3xx_config_node
make: *** [all] Error 2
Invoking "make verbose=1 -j2 -l2" failed
viki@c3po:~/dev/o3d3xx-catkin$ 

crash

lcl@xrobot:~/ros/devel$ roslaunch o3d3xx camera.launch ip:=192.168.10.69
... logging to /home/lcl/.ros/log/50f749d4-9be4-11ea-9e2a-00505629bb94/roslaunch-xrobot-12847.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://xrobot:37324/

SUMMARY

PARAMETERS

  • /o3d3xx/camera/assume_sw_triggered: False
  • /o3d3xx/camera/frame_id_base: o3d3xx/camera
  • /o3d3xx/camera/ip: 192.168.10.69
  • /o3d3xx/camera/password:
  • /o3d3xx/camera/publish_viz_images: True
  • /o3d3xx/camera/schema_mask: 15
  • /o3d3xx/camera/timeout_millis: 500
  • /o3d3xx/camera/timeout_tolerance_secs: 5.0
  • /o3d3xx/camera/xmlrpc_port: 80
  • /rosdistro: kinetic
  • /rosversion: 1.12.14

NODES
/o3d3xx/
camera (o3d3xx/o3d3xx_node)
camera_tf (tf2_ros/static_transform_publisher)

ROS_MASTER_URI=http://localhost:11311

process[o3d3xx/camera-1]: started with pid [12865]
process[o3d3xx/camera_tf-2]: started with pid [12866]
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<std::logic_error> >'
what(): character conversion failed
[o3d3xx/camera-1] process has died [pid 12865, exit code -6, cmd /home/lcl/ros/devel/lib/o3d3xx/o3d3xx_node cloud:=camera/cloud depth:=camera/depth depth_viz:=camera/depth_viz amplitude:=camera/amplitude raw_amplitude:=camera/raw_amplitude confidence:=camera/confidence unit_vectors:=camera/unit_vectors good_bad_pixels:=camera/good_bad_pixels xyz_image:=camera/xyz_image extrinsics:=camera/extrinsics GetVersion:=camera/GetVersion Dump:=camera/Dump Config:=camera/Config Rm:=camera/Rm Trigger:=camera/Trigger __name:=camera __log:=/home/lcl/.ros/log/50f749d4-9be4-11ea-9e2a-00505629bb94/o3d3xx-camera-1.log].
log file: /home/lcl/.ros/log/50f749d4-9be4-11ea-9e2a-00505629bb94/o3d3xx-camera-1*.log

Unable to change SymmetryThreshold of Imager

Following a recent discussion with IFM reps, I was trying to change the symmetry threshold of the imager to eliminate some over-saturated pixels due to motion. I tried changing the symmetry threshold through the config node, but it's not reflected in the imager (a JSON dump just returns a 0 value after the config change).

"Imager": { "SymmetryThreshold": "0" }

I dug deeper and found this comment in libo3d3xx/modules/camera/src/libo3d3xx_camera/imager_config.cpp:

//! @todo: This data type may change. It is not clear that this is even
//! implemented in the camera yet. I believe it has to do with flagging a
//! pixel as bad due to motion artifacts across exposure times ... but it
//! does not appear to be documented.
int SymmetryThreshold() const noexcept;

I was wondering whether this parameter is simply not accessible, or not implemented at all?
@tpanzarella, @GarrettPlace any comments?

Doubt on the dataframe

Hello again,

We have a question about the frame we get through this lib: in the PCD file (via ex-file_io), we obtain 4 values per point. No problem with the first three (the point's coordinates), but what about the fourth value, the intensity? Is it tied to one of the coordinates (x or y) or to the IR reflection?

By the way, in what order is the data frame laid out? From left to right and from top to bottom?

Thanks,
Juan Ramón Gómez Martín.

subscribing to messages...easy question, i know

So, I am just trying out a couple of things, using an example that subscribes to a turtlesim Pose message. I want to replicate that with an o3d3xx message, but I may not be thinking about it correctly:

code snip:

#include <ros/ros.h>
#include <turtlesim/Pose.h>

#include <iomanip>  //for std::precision, std::fixed

// a callback function. Executed each time a new pose message arrives
void poseMessageReceived(const turtlesim::Pose& msg)
{
    ROS_INFO_STREAM(std::setprecision(2) << std::fixed
      << " position =(" << msg.x << " ," << msg.y << ")"
      << " direction =" << msg.theta );
}

I wonder how to get access to the messages. Do I do something like this at the top of my cpp file and then create a callback function that looks for that message?
#include <sensor_msgs/Image.h>
I have nothing to back that up; just reaching...
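
Yes, that is the right idea: include sensor_msgs/Image.h, write a callback taking a `sensor_msgs::ImageConstPtr`, and subscribe to e.g. `/o3d3xx/camera/depth`. The only extra step versus the turtlesim example is unpacking the raw byte buffer. A rospy-flavoured sketch of that unpacking — the `image_msg_to_array` helper and the 16UC1 encoding are my assumptions, not part of the package; in a real node, cv_bridge does this (and more) for you:

```python
import numpy as np

def image_msg_to_array(msg):
    """Interpret the raw bytes of a sensor_msgs/Image as a 2-D array.
    Assumes a 16-bit single-channel encoding for the depth topic,
    falling back to uint8 otherwise."""
    dtype = np.uint16 if msg.encoding in ('16UC1', 'mono16') else np.uint8
    return np.frombuffer(msg.data, dtype=dtype).reshape(msg.height, msg.width)

# In a node you would wire this into a subscriber, e.g.:
#   rospy.Subscriber('/o3d3xx/camera/depth', Image,
#                    lambda m: process(image_msg_to_array(m)))
```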
