o3d3xx-ros's People

Contributors

bigcmos, mikeferguson, shaun-edwards, theseankelly, tpanzarella


o3d3xx-ros's Issues

Cannot configure O3D303

When I try to configure the O3D303 camera via ROS with "roslaunch o3d3xx config.launch infile:=/home/administrator/wspace/src/o3d3xx/json/ex_camera2.json", I always get the message:
[ INFO] [1458240896.994179177]: status=-9014
[ INFO] [1458240896.994316742]: msg=XMLRPC value out of range

I also tried "rosservice call /o3d3xx/camera/Dump > /tmp/camera.json" to capture the current settings and then, without changing anything, ran "roslaunch o3d3xx config.launch infile:=/tmp/camera.json". That gives:
[ INFO] [1458241148.110545217]: status=-1
[ INFO] [1458241148.110678292]: msg=(1): expected object or array

Does anybody know how to fix this? Thank you!
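One thing worth checking (an assumption, not a confirmed cause): `rosservice call` prints the service response as YAML, so a redirected dump may begin with a response field name such as `config:` rather than raw JSON — which would produce exactly an "expected object or array" error at position 1. A quick way to see why such a wrapper fails to parse:

```python
import json

# Hypothetical reproduction: if the dumped file begins with a YAML field
# name (here "config:") instead of raw JSON, json.loads fails at
# position 1, matching the reported "(1): expected object or array".
text = "config: '{\"Device\": {}}'"   # illustrative, not a real dump
try:
    json.loads(text)
    ok = True
except ValueError:
    ok = False
assert not ok  # the YAML wrapper is not valid JSON
```

If the dump file looks like this, stripping the wrapper (or extracting the quoted JSON string) before feeding it back to config.launch may be enough.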

only half of the information from xyz_image

Hello,

We are using the xyz_image topic to determine the x, y, z values of a pixel. When we read from this topic, it looks like we only get half of the information: each value is 16-bit, but we receive it as two 8-bit values. If we combine those two bytes, we get the correct distance, but this way we only get half of the pixels.

Are we doing something wrong, or is this normal?
I hope someone can help.

// ROS
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <cstdlib>
#include <std_msgs/String.h>

// OpenCV
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
namespace enc = sensor_msgs::image_encodings;

// Callback: read one xyz_image message
void read_image(const sensor_msgs::ImageConstPtr& msg)
{
  cv_bridge::CvImagePtr img_ptr;
  try
  {
    img_ptr = cv_bridge::toCvCopy(msg, enc::TYPE_16SC3);
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("cv_bridge exception: %s", e.what());
    return;
  }

  // Split the 3-channel image into its planes
  Mat channels[3];
  split(img_ptr->image, channels);

  // Destination mats for the reassembled 16-bit values
  Mat img_z(channels[0].rows, channels[0].cols, CV_16SC1);
  Mat img_x(channels[1].rows, channels[1].cols, CV_16SC1);
  Mat img_y(channels[2].rows, channels[2].cols, CV_16SC1);

  // Build the z matrix: stitch raw byte pairs back into int16 values.
  // NOTE: at<uchar> here deliberately walks the 16-bit channel byte by byte.
  for (int row = 0; row < channels[0].rows; row++)
  {
    for (int col = 0; col < channels[0].cols * 2; col += 2)
    {
      uint8_t lo = channels[0].at<uchar>(row, col);
      uint8_t hi = channels[0].at<uchar>(row, col + 1);
      img_z.at<int16_t>(row, col / 2) = static_cast<int16_t>((hi << 8) | lo);
    }
  }

  // Build the x matrix
  for (int row = 0; row < channels[1].rows; row++)
  {
    for (int col = 0; col < channels[1].cols * 2; col += 2)
    {
      uint8_t lo = channels[1].at<uchar>(row, col);
      uint8_t hi = channels[1].at<uchar>(row, col + 1);
      img_x.at<int16_t>(row, col / 2) = static_cast<int16_t>((hi << 8) | lo);
    }
  }

  // Build the y matrix
  for (int row = 0; row < channels[2].rows; row++)
  {
    for (int col = 0; col < channels[2].cols * 2; col += 2)
    {
      uint8_t lo = channels[2].at<uchar>(row, col);
      uint8_t hi = channels[2].at<uchar>(row, col + 1);
      img_y.at<int16_t>(row, col / 2) = static_cast<int16_t>((hi << 8) | lo);
    }
  }

  // Display the z image
  namedWindow("img_z", CV_WINDOW_AUTOSIZE);
  imshow("img_z", img_z);
  waitKey(1);
}

// Start and initialize the node
int main(int argc, char **argv)
{
  // Initialize the ROS node
  ros::init(argc, argv, "listener");
  ros::NodeHandle nh;

  // Subscribe to the xyz_image topic
  ros::Subscriber sub = nh.subscribe("o3d3xx/camera/xyz_image", 100, read_image);

  // Wait for messages
  ros::spin();
  return 0;
}
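For what it's worth, the byte-stitching above shouldn't be necessary if the buffer is reinterpreted as 16-bit data in one step. A sketch of the idea in NumPy, with fabricated pixel values, assuming a little-endian, interleaved (z, x, y) int16 layout as in the C++ snippet:

```python
import numpy as np

# Reinterpret the raw little-endian payload as int16 triples directly,
# rather than reassembling each value from two bytes. The channel order
# (z, x, y) and the fake pixel values below are assumptions.
rows, cols = 2, 2
pixels = np.array([[100, -5, 7], [250, 0, -12],
                   [300, 42, 1], [-1, 8, 9]], dtype=np.int16)
raw = pixels.astype('<i2').tobytes()   # stand-in for msg.data

xyz = np.frombuffer(raw, dtype='<i2').reshape(rows, cols, 3)
z, x, y = xyz[..., 0], xyz[..., 1], xyz[..., 2]

assert z[0, 1] == 250 and x[1, 0] == 42 and y[1, 1] == 9
```

In C++ the analogue is reading `img_ptr->image.at<cv::Vec3s>(row, col)`, which yields all three 16-bit components directly, so no byte reassembly (and no half-resolution artifact) is needed.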

camera_info topic?

Perhaps I'm stuck in stereo camera land, but it seems there should be a camera_info topic to go along with the depth image (http://www.ros.org/reps/rep-0118.html).

I'm trying to convert depth images to 3D coordinates in a standardized way for multiple kinds of sensors - Kinect, CRL stereo, O3D. I can do the projection on the other units using their camera_info camera matrix, but o3d3xx isn't publishing one.

Am I missing something? How do I get X and Y scale from the depth image?
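For reference, once a camera matrix is available, the standard pinhole back-projection is X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy. A minimal sketch; fx, fy, cx, cy below are made-up example intrinsics, not values from the O3D3xx:

```python
# Pinhole back-projection of one depth pixel. The intrinsics are
# illustrative placeholders, not calibration data from the O3D3xx.
def deproject(u, v, depth, fx, fy, cx, cy):
    """Return (X, Y, Z) in the optical frame for pixel (u, v) at range depth."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return X, Y, depth

X, Y, Z = deproject(u=120, v=50, depth=2.0, fx=200.0, fy=200.0, cx=88.0, cy=66.0)
assert abs(X - 0.32) < 1e-9 and abs(Y + 0.16) < 1e-9 and Z == 2.0
```

This is what a published camera_info matrix would enable in a sensor-agnostic way; without it, the xyz_image or cloud topics are the only route to metric X and Y.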

Compiling o3d3xx-ros

Hi guys,

I have an IFM O3D302 ToF camera and wanted to install the ROS driver. After installing all the required libraries and working through several issues (on my end), I am now stuck with the following error when I run catkin_make on my workspace.

I updated the firmware to the latest version, installed the libo3d3xx modules correctly (image, camera, pcicclient, framegrabber), and I am running Ubuntu 14.04 with ROS Indigo and PCL 1.7.

I tried uninstalling OpenCV completely and reinstalling only what this package requires for ROS OpenCV (i.e., cv-bridge, image-transport, camera-info-manager).

Any help or suggestion is highly appreciated.

/usr/bin/ld: warning: libopencv_core.so.3.2, needed by /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../lib/libo3d3xx_image.so, may conflict with libopencv_core.so.2.4
/usr/bin/ld: CMakeFiles/o3d3xx_node.dir/src/o3d3xx_node.cpp.o: undefined reference to symbol '_ZN2cv6String10deallocateEv'
//usr/local/lib/libopencv_core.so.3.2: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[2]: *** [/home/efmzafra/Development/hyg_ida/devel/lib/o3d3xx/o3d3xx_node] Error 1
make[1]: *** [o3d3xx-ros/CMakeFiles/o3d3xx_node.dir/all] Error 2
make: *** [all] Error 2
Invoking "make -j8 -l8" failed

Catkin Regression?

It seems that catkin is no longer respecting the -DCMAKE_INSTALL_PREFIX option. As a result, the install instructions in our README are not working. I am going to look into a fix and update the install instructions appropriately.

Private node handle for topics and services

The o3d3xx_node uses a public node handle to create all publishers and to get all parameters. With multiple cameras, each of these topics/parameters has to be remapped, which results in a lot of excess remapping. It would be nice to have all of the topics and parameters use a private node handle, to facilitate using multiple cameras at once without the need for lots of remapping.

hello from libo3d3xx - catkin_make having OpenCV issues

viki@c3po:~/dev/o3d3xx-catkin$ catkin_make verbose=1
Base path: /home/viki/dev/o3d3xx-catkin
Source space: /home/viki/dev/o3d3xx-catkin/src
Build space: /home/viki/dev/o3d3xx-catkin/build
Devel space: /home/viki/dev/o3d3xx-catkin/devel
Install space: /home/viki/dev/o3d3xx-catkin/install
####
#### Running command: "make cmake_check_build_system" in "/home/viki/dev/o3d3xx-catkin/build"
####
####
#### Running command: "make verbose=1 -j2 -l2" in "/home/viki/dev/o3d3xx-catkin/build"
####
[  0%] Built target std_msgs_generate_messages_cpp
[  0%] Built target _o3d3xx_generate_messages_check_deps_Rm
[  0%] Built target _o3d3xx_generate_messages_check_deps_Dump
[  0%] Built target _o3d3xx_generate_messages_check_deps_Config
[  0%] Built target _o3d3xx_generate_messages_check_deps_GetVersion
[  0%] Built target std_msgs_generate_messages_lisp
Linking CXX executable /home/viki/dev/o3d3xx-catkin/devel/lib/o3d3xx/o3d3xx_file_writer_node
[  0%] Built target std_msgs_generate_messages_py
[ 25%] Built target o3d3xx_generate_messages_cpp
[ 50%] Built target o3d3xx_generate_messages_lisp
[ 81%] Built target o3d3xx_generate_messages_py
[ 87%] Built target o3d3xx_node
Scanning dependencies of target o3d3xx_config_node
[ 93%] Building CXX object o3d3xx/CMakeFiles/o3d3xx_config_node.dir/src/o3d3xx_config_node.cpp.o
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `cv::operator<<(cv::FileStorage&, char const*)':
o3d3xx_file_writer_node.cpp:(.text+0xa5): undefined reference to `cv::operator<<(cv::FileStorage&, cv::String const&)'
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `cv::FileStorage& cv::operator<< <cv::Mat>(cv::FileStorage&, cv::Mat const&)':
o3d3xx_file_writer_node.cpp:(.text+0x282): undefined reference to `cv::error(int, cv::String const&, char const*, char const*, int)'
o3d3xx_file_writer_node.cpp:(.text+0x2a9): undefined reference to `cv::write(cv::FileStorage&, cv::String const&, cv::Mat const&)'
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `cv::String::String(char const*)':
o3d3xx_file_writer_node.cpp:(.text._ZN2cv6StringC2EPKc[_ZN2cv6StringC5EPKc]+0x4f): undefined reference to `cv::String::allocate(unsigned long)'
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `cv::String::~String()':
o3d3xx_file_writer_node.cpp:(.text._ZN2cv6StringD2Ev[_ZN2cv6StringD5Ev]+0x14): undefined reference to `cv::String::deallocate()'
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `cv::String::String(std::string const&)':
o3d3xx_file_writer_node.cpp:(.text._ZN2cv6StringC2ERKSs[_ZN2cv6StringC5ERKSs]+0x69): undefined reference to `cv::String::allocate(unsigned long)'
CMakeFiles/o3d3xx_file_writer_node.dir/src/o3d3xx_file_writer_node.cpp.o: In function `O3D3xxFileWriterNode::ImageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&, std::string const&)':
o3d3xx_file_writer_node.cpp:(.text._ZN20O3D3xxFileWriterNode7ImageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEERKSs[_ZN20O3D3xxFileWriterNode7ImageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEERKSs]+0x4d8): undefined reference to `cv::FileStorage::FileStorage(cv::String const&, int, cv::String const&)'
o3d3xx_file_writer_node.cpp:(.text._ZN20O3D3xxFileWriterNode7ImageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEERKSs[_ZN20O3D3xxFileWriterNode7ImageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEERKSs]+0x5fc): undefined reference to `cv::imwrite(cv::String const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
collect2: error: ld returned 1 exit status
make[2]: *** [/home/viki/dev/o3d3xx-catkin/devel/lib/o3d3xx/o3d3xx_file_writer_node] Error 1
make[1]: *** [o3d3xx/CMakeFiles/o3d3xx_file_writer_node.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
Linking CXX executable /home/viki/dev/o3d3xx-catkin/devel/lib/o3d3xx/o3d3xx_config_node
[ 93%] Built target o3d3xx_config_node
make: *** [all] Error 2
Invoking "make verbose=1 -j2 -l2" failed
viki@c3po:~/dev/o3d3xx-catkin$ 

crash

lcl@xrobot:~/ros/devel$ roslaunch o3d3xx camera.launch ip:=192.168.10.69
... logging to /home/lcl/.ros/log/50f749d4-9be4-11ea-9e2a-00505629bb94/roslaunch-xrobot-12847.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://xrobot:37324/

SUMMARY

PARAMETERS

  • /o3d3xx/camera/assume_sw_triggered: False
  • /o3d3xx/camera/frame_id_base: o3d3xx/camera
  • /o3d3xx/camera/ip: 192.168.10.69
  • /o3d3xx/camera/password:
  • /o3d3xx/camera/publish_viz_images: True
  • /o3d3xx/camera/schema_mask: 15
  • /o3d3xx/camera/timeout_millis: 500
  • /o3d3xx/camera/timeout_tolerance_secs: 5.0
  • /o3d3xx/camera/xmlrpc_port: 80
  • /rosdistro: kinetic
  • /rosversion: 1.12.14

NODES
/o3d3xx/
camera (o3d3xx/o3d3xx_node)
camera_tf (tf2_ros/static_transform_publisher)

ROS_MASTER_URI=http://localhost:11311

process[o3d3xx/camera-1]: started with pid [12865]
process[o3d3xx/camera_tf-2]: started with pid [12866]
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<std::logic_error> >'
what(): character conversion failed
[o3d3xx/camera-1] process has died [pid 12865, exit code -6, cmd /home/lcl/ros/devel/lib/o3d3xx/o3d3xx_node cloud:=camera/cloud depth:=camera/depth depth_viz:=camera/depth_viz amplitude:=camera/amplitude raw_amplitude:=camera/raw_amplitude confidence:=camera/confidence unit_vectors:=camera/unit_vectors good_bad_pixels:=camera/good_bad_pixels xyz_image:=camera/xyz_image extrinsics:=camera/extrinsics GetVersion:=camera/GetVersion Dump:=camera/Dump Config:=camera/Config Rm:=camera/Rm Trigger:=camera/Trigger __name:=camera __log:=/home/lcl/.ros/log/50f749d4-9be4-11ea-9e2a-00505629bb94/o3d3xx-camera-1.log].
log file: /home/lcl/.ros/log/50f749d4-9be4-11ea-9e2a-00505629bb94/o3d3xx-camera-1*.log

Interpreting /cloud message

I'm working with @bsbc trying to use the pointcloud ROS message from the ifm camera and have a couple questions about it. Here is a sample of message contents:

height: 132
width: 176
fields:
  - name: x
    offset: 0
    datatype: 7
    count: 1
  - name: y
    offset: 4
    datatype: 7
    count: 1
  - name: z
    offset: 8
    datatype: 7
    count: 1
  - name: intensity
    offset: 16
    datatype: 7
    count: 1
is_bigendian: False
point_step: 32
row_step: 5632
data: [0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 128, 63, 0, 0, 22, 68, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 128, 63, 0, 0, 30, 68, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 192, 127, 0, 0, 128, 63, 0, 128, 35, 68, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...

I show data for 3 points (96 bytes). My questions are:

  • it seems there should be 32 bytes per pixel, so 132 × 176 × 32 = 743424 bytes, but the size of the message seems to be half of that, 371712
  • according to the format, bytes 12-15 don't have meaning, and yet they are not zero, what do they mean? (in the example, they are all 0,0,128,63 which is float 1.0)
  • 0,0,192,127 is float nan, so the three points shown above are invalid, I guess, but still they have different intensity values (if bytes 16-19 are indeed intensity; the float values above are 600.0, 632.0 and 654.0). Are these values meaningful?

Thanks for your help.
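As a sanity check on the byte layout (this confirms the readings above; it does not explain the half-size message), the first 32-byte point decodes as little-endian float32 at the stated offsets like so:

```python
import math
import struct

# Decode the first 32-byte point from the sample data above
# (little-endian float32: x at offset 0, y at 4, z at 8, intensity at 16).
data = bytes([0, 0, 192, 127,  0, 0, 192, 127,  0, 0, 192, 127,
              0, 0, 128, 63,   0, 0, 22, 68,    0, 0, 0, 0,
              0, 0, 0, 0,      0, 0, 0, 0])
x, y, z = struct.unpack_from('<3f', data, 0)
intensity = struct.unpack_from('<f', data, 16)[0]

assert math.isnan(x) and math.isnan(y) and math.isnan(z)      # 0,0,192,127 = NaN
assert intensity == 600.0                                      # 0,0,22,68
assert struct.unpack('<f', bytes([0, 0, 128, 63]))[0] == 1.0   # bytes 12-15
```

So the per-point arithmetic is self-consistent: invalid points carry NaN coordinates while the intensity field still holds a finite value.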

Problem with nodes while launching ros

Hello, I'm receiving this while trying to launch the camera module:

root@ramon-VirtualBox:/home/ramon# roslaunch /home/ramon/proyecto_camara/o3d3xx-ros/launch/camera.launch ip:=192.168.1.73
... logging to /root/.ros/log/d14f125c-04ba-11e7-821b-0800272f682b/roslaunch-ramon-VirtualBox-7512.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://ramon-VirtualBox:44047/

SUMMARY

PARAMETERS

  • /o3d3xx/camera/assume_sw_triggered: False
  • /o3d3xx/camera/frame_id_base: o3d3xx/camera
  • /o3d3xx/camera/ip: 192.168.1.73
  • /o3d3xx/camera/password:
  • /o3d3xx/camera/publish_viz_images: True
  • /o3d3xx/camera/schema_mask: 15
  • /o3d3xx/camera/timeout_millis: 500
  • /o3d3xx/camera/timeout_tolerance_secs: 5.0
  • /o3d3xx/camera/xmlrpc_port: 80
  • /rosdistro: kinetic
  • /rosversion: 1.12.7

NODES
/o3d3xx/
camera (o3d3xx/o3d3xx_node)
camera_tf (tf2_ros/static_transform_publisher)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
ERROR: cannot launch node of type [o3d3xx/o3d3xx_node]: o3d3xx
ROS path [0]=/opt/ros/kinetic/share/ros
ROS path [1]=/opt/ros/kinetic/share
process[o3d3xx/camera_tf-2]: started with pid [7530]

The rviz.launch run produces the same parameter summary and the identical error.

Looks like a ROS problem. Any ideas?

Greetings and thanks,
Juan Ramón Gómez Martín.

camera config error

Using Goldeneye-C3_1.3.712.swu on an O3D303, we ran into some problems, possibly related to using the updated firmware.

Here is the process we followed:
Using Windows:

  1. updated the firmware to Goldeneye-C3_1.3.712.swu with the VisionAssistant SW
  2. created different applications (settings) using the VisionAssistant SW

Using Ubuntu 14.04:

  3. launched the camera with the standard o3d3xx-ros launch process: ‘roslaunch o3d3xx camera.launch ip:=192.168.0.69’
  4. verified that things work using rviz
  5. dumped the settings with ‘rosservice call ?/?/Dump > settings.json’
  6. then tried to load the ‘settings.json’ file back to the device with ‘roslaunch o3d3xx config.launch infile:=settings.json’, which is the point where things started to go wrong, even though the output of this process indicated that the configuration went through without an error.

The camera.launch process started timing out, and continued to do so after relaunching and power-cycling. We lost the connection to the camera.

We had to go back to step 1 in order to recover this.

Extrinsics not latched

We are using /o3d3xx/camera/extrinsics and /o3d3xx/camera/unit_vectors for ray-tracing.

We noticed that the unit_vectors are published once (i.e. latched) but the extrinsics are published continuously.

We were wondering why this is happening. Is it possible that the extrinsics change during operation or after a camera (factory) calibration?

trouble decoding compressed depth map from IFM

Looking at the raw depth values, I see 16-bit integers measuring what I assume are millimeters.
Looking at the compressed topic (i.e., from a rosbag), I decompress with cv2, something like this:
pic = cv2.imdecode(np.frombuffer(msg.data, np.uint8), cv2.IMREAD_UNCHANGED)
But what comes out is an array of uint8, mostly equal to 255, looking vaguely like the raw depth image.

Is there a different way I'm supposed to be decompressing the depth image?
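One plausible explanation for the "mostly 255" symptom, sketched below under the assumption that the 16-bit depths were squeezed into 8 bits somewhere along the compression path (this illustrates the failure mode, not the o3d3xx pipeline itself):

```python
import numpy as np

# 16-bit millimeter depths forced into 8 bits saturate at 255, which
# would match "mostly equal to 255" for any range beyond ~0.25 m.
depth_mm = np.array([120, 800, 2500, 4000], dtype=np.uint16)
clipped = np.clip(depth_mm, 0, 255).astype(np.uint8)
assert list(clipped) == [120, 255, 255, 255]
```

If the topic was compressed losslessly as 16-bit PNG, decoding with cv2's "unchanged" flag should preserve a uint16 array; getting uint8 back suggests the data was already 8-bit when it was encoded.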

Getting Libo3d3xx on Yocto

I would love to use this library on the Intel edison.
The problem is that there are certain packages I cannot find with opkg. opkg does not have the packages:

LIB_glog
LIB_xmlrpc_clientxx
LIB_xmlrpcxx

These are all named differently with apt-get, but they are nonexistent for opkg.

Is it possible to use libo3d3xx on Yocto, or am I wasting my time?

Amplitude image scaling

I have a similar issue to #23. I am trying to use the /ifm/amplitude image in order to perform extrinsic calibration of the sensor location. However, the default amplitude image is very dark and the calibration target cannot be seen in the image.
frame0000
I looked at the data and noticed that the information is there, but it appears to be scaled as if it were mono8 while being published as mono16. Is the data supposed to be mono16? I tried publishing as mono8, but it is too bright and still wouldn't work for extrinsic calibration.
frame0003
I then scaled the image by the mean and published as the original mono16. With this I got a much better result.
frame0001
What is the scaling for the amplitude image, and why is the image so dark by default? Or is it supposed to be dark? We have auto-exposure turned on and are using the less-than-5 m setting, so I would expect the image to be brighter. Would it be possible to change the scaling of the image prior to publishing, to enable extrinsic calibration using the amplitude image?
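A rough version of the mean-based rescale described above; the target mean of 8192 is an arbitrary illustrative choice for a mono16 image, not a value taken from the driver:

```python
import numpy as np

# Scale a dark mono16 amplitude frame so its mean hits a target value,
# clipping to the uint16 range. Target mean is an assumption for display.
def rescale_amplitude(img, target_mean=8192.0):
    scale = target_mean / max(float(img.mean()), 1.0)
    return np.clip(img.astype(np.float64) * scale, 0, 65535).astype(np.uint16)

dark = np.full((4, 4), 128, dtype=np.uint16)   # uniformly dark frame
bright = rescale_amplitude(dark)
assert float(bright.mean()) == 8192.0
```

This kind of normalization helps a corner detector see the calibration target, but it is display scaling only; any radiometric meaning of the raw amplitude values is lost.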

SWTrigger Service doesn't return error codes

I'm using the software trigger service to get data, and I'm successfully able to get data. However, when I introduce an error condition (such as no camera connected) and trigger the SWTrigger service, I get a response status of 0 (OK). I would expect an error code for being unable to talk to the camera. I'm not sure whether this is an issue with the ROS part or with the library. Can someone confirm that I should be getting an error code from the software trigger service in cases such as no camera at the IP address?
Thanks

No camera output

Hi,
I'm using an O3D311 and I'm not getting the grayscale image shown in README.md. The amplitude image is shown below. I have updated the camera firmware to the latest version. Am I doing something wrong, or does this camera not support 2D image output?

o3d311

Doubt on the dataframe

Hello again,

We have a question about the frame we get through this lib: in the PCL file (through ex-file_io), we obtain 4 values per point. No problem with the first 3 (the point coordinates), but what about the last value, the intensity? Is it tied to one of the coordinates (x or y), or to the IR reflection?

By the way, in what order is the dataframe displayed? From left to right and from top to bottom?

Thanks,
Juan Ramón Gómez Martín.

inquiry: expected output of rosnode info ...

When I launch the camera, I get the following information. It also lists the node running on localhost.

wegunterjr@ros-VB:~$ roslaunch o3d3xx camera.launch 
... logging to /home/wegunterjr/.ros/log/94d49286-92d6-11e5-b905-08002798ae28/roslaunch-ros-VB-10343.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://ros-VB:45169/

SUMMARY
========

PARAMETERS
 * /o3d3xx/camera/ip: 192.168.0.69
 * /o3d3xx/camera/password: 
 * /o3d3xx/camera/publish_viz_images: True
 * /o3d3xx/camera/timeout_millis: 500
 * /o3d3xx/camera/xmlrpc_port: 80
 * /rosdistro: indigo
 * /rosversion: 1.11.16

NODES
  /o3d3xx/
    camera (o3d3xx/o3d3xx_node)
    camera_tf (tf2_ros/static_transform_publisher)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
process[o3d3xx/camera-1]: started with pid [10361]
process[o3d3xx/camera_tf-2]: started with pid [10362]

What should I expect the output to be when I run rosnode info?
Shouldn't it repeat the host URI, etc.?

rosnode info /o3d3xx/
Node [/o3d3xx/]
Publications: None

Subscriptions: None

Services: None

cannot contact [/o3d3xx/]: unknown node

Generate point cloud from depth image

I am using the depth image to detect irregularities or obstacles in the path of an AGV with an ifm O3D303 mounted. The process involves removing any depth data other than obstacles from the image, using a mask created by gradient-based processing. I would like to convert this selective depth image to a point cloud. Is that possible with this package?

P.S. I have a working version of this using OpenNI to convert a selective depth image to a point cloud; I am looking for similar functionality.
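In principle, given the camera intrinsics (or the unit-vector image), a masked depth image can be back-projected into a cloud directly. A rough sketch with a pinhole model; fx, fy, cx, cy are placeholder intrinsics, zero depth marks masked-out pixels, and the O3D3xx's actual projection model may differ:

```python
import numpy as np

# Back-project a masked depth image into an N x 3 cloud using a pinhole
# model. Intrinsics are hypothetical; zero depth means "masked out".
def depth_to_cloud(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # keep only unmasked pixels
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

depth = np.zeros((2, 2))
depth[0, 1] = 2.0                          # one surviving obstacle pixel
cloud = depth_to_cloud(depth, fx=100.0, fy=100.0, cx=1.0, cy=1.0)
assert cloud.shape == (1, 3)
assert np.allclose(cloud[0], [0.0, -0.02, 2.0])
```

The same idea works with the sensor's published unit_vectors: multiply each unit vector by the pixel's range and apply the extrinsics, which avoids assuming a pinhole model at all.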

install issues test errors

Hello,

I'm having issues installing this ROS package. I installed libo3d3xx version 0.4.5, use ROS Indigo, and the newest version of this ROS package.

It always fails at catkin_make run_tests:

sjors@sjors-VirtualBox:~/dev/o3d3xx-catkin$ catkin_make -DCATKIN_ENABLE_TESTING=ON
Base path: /home/sjors/dev/o3d3xx-catkin
Source space: /home/sjors/dev/o3d3xx-catkin/src
Build space: /home/sjors/dev/o3d3xx-catkin/build
Devel space: /home/sjors/dev/o3d3xx-catkin/devel
Install space: /home/sjors/dev/o3d3xx-catkin/install

Running command: "cmake /home/sjors/dev/o3d3xx-catkin/src -DCATKIN_ENABLE_TESTING=ON -DCATKIN_DEVEL_PREFIX=/home/sjors/dev/o3d3xx-catkin/devel -DCMAKE_INSTALL_PREFIX=/home/sjors/dev/o3d3xx-catkin/install -G Unix Makefiles" in "/home/sjors/dev/o3d3xx-catkin/build"

-- Using CATKIN_DEVEL_PREFIX: /home/sjors/dev/o3d3xx-catkin/devel
-- Using CMAKE_PREFIX_PATH: /opt/ros/indigo
-- This workspace overlays: /opt/ros/indigo
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/sjors/dev/o3d3xx-catkin/build/test_results
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.6.18
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 1 packages in topological order:
-- ~~ - o3d3xx
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'o3d3xx'
-- ==> add_subdirectory(o3d3xx)
-- Boost version: 1.54.0
-- Found the following Boost libraries:
-- filesystem
-- Using these message generators: gencpp;genlisp;genpy
-- o3d3xx: 1 messages, 5 services
-- Configuring done
-- Generating done
-- Build files have been written to: /home/sjors/dev/o3d3xx-catkin/build

Running command: "make -j1 -l1" in "/home/sjors/dev/o3d3xx-catkin/build"

[ 0%] Built target _o3d3xx_generate_messages_check_deps_Rm
[ 0%] Built target _o3d3xx_generate_messages_check_deps_Config
[ 0%] Built target _o3d3xx_generate_messages_check_deps_Dump
[ 0%] Built target std_msgs_generate_messages_cpp
[ 0%] Built target _o3d3xx_generate_messages_check_deps_Extrinsics
[ 0%] Built target _o3d3xx_generate_messages_check_deps_GetVersion
[ 0%] Built target _o3d3xx_generate_messages_check_deps_Trigger
[ 27%] Built target o3d3xx_generate_messages_cpp
[ 31%] Built target o3d3xx_config_node
[ 31%] Built target std_msgs_generate_messages_lisp
[ 59%] Built target o3d3xx_generate_messages_lisp
[ 59%] Built target std_msgs_generate_messages_py
[ 95%] Built target o3d3xx_generate_messages_py
[ 95%] Built target o3d3xx_generate_messages
[100%] Built target o3d3xx_node
sjors@sjors-VirtualBox:~/dev/o3d3xx-catkin$ catkin_make run_tests
Base path: /home/sjors/dev/o3d3xx-catkin
Source space: /home/sjors/dev/o3d3xx-catkin/src
Build space: /home/sjors/dev/o3d3xx-catkin/build
Devel space: /home/sjors/dev/o3d3xx-catkin/devel
Install space: /home/sjors/dev/o3d3xx-catkin/install

Running command: "cmake /home/sjors/dev/o3d3xx-catkin/src -DCATKIN_DEVEL_PREFIX=/home/sjors/dev/o3d3xx-catkin/devel -DCMAKE_INSTALL_PREFIX=/home/sjors/dev/o3d3xx-catkin/install -G Unix Makefiles" in "/home/sjors/dev/o3d3xx-catkin/build"

-- Using CATKIN_DEVEL_PREFIX: /home/sjors/dev/o3d3xx-catkin/devel
-- Using CMAKE_PREFIX_PATH: /opt/ros/indigo
-- This workspace overlays: /opt/ros/indigo
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/sjors/dev/o3d3xx-catkin/build/test_results
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.6.18
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 1 packages in topological order:
-- ~~ - o3d3xx
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'o3d3xx'
-- ==> add_subdirectory(o3d3xx)
-- Boost version: 1.54.0
-- Found the following Boost libraries:
-- filesystem
-- Using these message generators: gencpp;genlisp;genpy
-- o3d3xx: 1 messages, 5 services
-- Configuring done
-- Generating done
-- Build files have been written to: /home/sjors/dev/o3d3xx-catkin/build

Running command: "make run_tests -j1 -l1" in "/home/sjors/dev/o3d3xx-catkin/build"

Removing test result files from '/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx'

  • removing '/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/nosetests-test.xml'
  • removing '/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/rosunit-test_camera.xml'
  • removing '/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/rostest-test_o3d3xx.xml'
  • removing '/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/rosunit-test_services.xml'
    Built target clean_test_results_o3d3xx
    Built target tests
    -- run_tests.py: execute commands
    /opt/ros/indigo/share/rostest/cmake/../../../bin/rostest --pkgdir=/home/sjors/dev/o3d3xx-catkin/src/o3d3xx --package=o3d3xx --results-filename test_o3d3xx.xml --results-base-dir /home/sjors/dev/o3d3xx-catkin/build/test_results /home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test/o3d3xx.test
    ... logging to /home/sjors/.ros/log/rostest-sjors-VirtualBox-18103.log
    [ROSUNIT] Outputting test results to /home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/rostest-test_o3d3xx.xml
    terminate called after throwing an instance of 'o3d3xx::error_t'
    what(): XMLRPC call timed out
    terminate called after throwing an instance of 'o3d3xx::error_t'
    what(): XMLRPC call timed out
    Traceback (most recent call last):
    File "/home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test/test_services.py", line 66, in
    sys.exit(main())
    File "/home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test/test_services.py", line 62, in main
    rostest.rosrun('o3d3xx', 'test_services', TestServices, sys.argv)
    File "/opt/ros/indigo/lib/python2.7/dist-packages/rostest/__init__.py", line 145, in rosrun
    result = rosunit.create_xml_runner(package, test_name, result_file).run(suite)
    File "/opt/ros/indigo/lib/python2.7/dist-packages/rosunit/xmlrunner.py", line 233, in run
    test(result)
    File "/usr/lib/python2.7/unittest/suite.py", line 70, in __call__
    return self.run(*args, **kwds)
    File "/usr/lib/python2.7/unittest/suite.py", line 108, in run
    test(result)
    File "/usr/lib/python2.7/unittest/case.py", line 395, in __call__
    return self.run(*args, **kwds)
    File "/usr/lib/python2.7/unittest/case.py", line 331, in run
    testMethod()
    File "/home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test/test_services.py", line 18, in test_services
    rospy.wait_for_service("/Dump")
    File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 159, in wait_for_service
    raise ROSInterruptException("rospy shutdown")
    rospy.exceptions.ROSInterruptException: rospy shutdown
    testtest_camera ... ok
    testtest_services ... ERROR!
    ERROR: max time [60.0s] allotted for test [test_services] of type [o3d3xx/test_services.py]
    File "/usr/lib/python2.7/unittest/case.py", line 331, in run
    testMethod()
    File "/opt/ros/indigo/lib/python2.7/dist-packages/rostest/runner.py", line 148, in fn
    self.test_parent.run_test(test)
    File "/opt/ros/indigo/lib/python2.7/dist-packages/rostest/rostest_parent.py", line 132, in run_test
    return self.runner.run_test(test)
    File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/launch.py", line 681, in run_test
    (test.time_limit, test.test_name, test.package, test.type))

[ROSTEST]-----------------------------------------------------------------------

[o3d3xx.rosunit-test_camera/test_camera][FAILURE]-------------------------------
False is not true
File "/usr/lib/python2.7/unittest/case.py", line 331, in run
testMethod()
File "/home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test/test_camera.py", line 82, in test_camera
self.assertTrue(self.success)
File "/usr/lib/python2.7/unittest/case.py", line 424, in assertTrue
raise self.failureException(msg)

SUMMARY

  • RESULT: FAIL
  • TESTS: 1
  • ERRORS: 1
  • FAILURES: 1

rostest log file is in /home/sjors/.ros/log/rostest-sjors-VirtualBox-18103.log
-- run_tests.py: verify result "/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/rostest-test_o3d3xx.xml"
Built target _run_tests_o3d3xx_rostest_test_o3d3xx.test
Built target _run_tests_o3d3xx_rostest
-- run_tests.py: execute commands
/usr/bin/cmake -E make_directory /home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx
/usr/bin/nosetests-2.7 -P --process-timeout=60 --where=/home/sjors/dev/o3d3xx-catkin/src/o3d3xx/test --with-xunit --xunit-file=/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/nosetests-test.xml


Ran 0 tests in 0.000s

OK
-- run_tests.py: verify result "/home/sjors/dev/o3d3xx-catkin/build/test_results/o3d3xx/nosetests-test.xml"
Built target _run_tests_o3d3xx_nosetests_test
Built target _run_tests_o3d3xx_nosetests
Built target _run_tests_o3d3xx
Built target run_tests

Camera config to extract full resolution image

We have an O3D303 sensor mounted on an AGV at a height of 1 m and a 45-degree angle, facing downwards. The sensor's primary purpose is to detect obstacles within a 1.5 m range in front of the AGV. I am looking for some camera config changes to suit this purpose.

  1. How to get full-resolution (352 x 264) images from the sensor?
    • Required to detect poles and bollards in the path, with more room to filter the output data
  2. How to set the sensor to use a single frequency for TOF?
    • For higher frame acquisition rates and to reduce heating concerns
  3. Ideal integration time for a moving platform (< 1 m/s)?
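A config fragment along the lines below could be pushed through the driver's config node. This is a sketch only: the parameter names and legal values ("Resolution", the imager "Type" string, the exposure time) are assumptions and must be verified against your own `rosservice call /o3d3xx/camera/Dump` output and the O3D303 documentation before applying anything to the camera.

```python
import json

# Hypothetical configuration fragment for the o3d3xx config node.
# ASSUMPTIONS: "Resolution" = "1" enabling full 352x264 on supporting
# firmware, "upto30m_low" being a single-frequency imager type, and 1000 us
# being a reasonable exposure for a < 1 m/s platform -- all to be verified.
config = {
    "o3d3xx": {
        "Device": {"ActiveApplication": "1"},
        "Apps": [
            {
                "Index": "1",
                "Imager": {
                    "Type": "upto30m_low",   # single-frequency type (assumed name)
                    "Resolution": "1",       # full-resolution flag (assumed)
                    "ExposureTime": "1000",  # short exposure for a moving AGV (tune)
                },
            }
        ],
    }
}

# write the file the config launch file expects
with open("/tmp/camera.json", "w") as f:
    json.dump(config, f, indent=2)
```

The resulting file would then be pushed with `roslaunch o3d3xx config.launch infile:=/tmp/camera.json`; if the camera rejects a key or value, the config node reports a non-zero status and an error message.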

Saturated pixels

When using the VisionAssistant SW it is possible to see "saturated" pixels.

However, o3d3xx-ros does not seem to publish information about saturation.

We wanted to use the saturation info to distinguish between pixels that are either too close or too far away and do not report valid point-cloud data:

  • pixels that are too close typically have very low amplitude and confidence values, and are additionally over-saturated
  • distant pixels also have low amplitude and confidence, but they are not saturated.

How would it be possible to access the saturation info using o3d3xx-ros package?

Alternatively, would there be a different way to differentiate between "close" vs. "far away" objects/pixels that do not report valid depth data?

Thanks
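One possible route (a sketch, not a confirmed driver feature): the driver publishes a confidence image alongside amplitude and depth, and per the libo3d3xx docs bit 0 of each confidence pixel flags an invalid measurement. Whether a specific bit encodes saturation is an assumption here; the bit position below is hypothetical and must be checked against the O3D303 operating instructions.

```python
import numpy as np

# Sketch: masking pixels via bits of the confidence image.
# Bit 0 = invalid pixel (from the libo3d3xx docs); SATURATION_BIT is a
# HYPOTHETICAL bit position used only to illustrate the masking pattern.
SATURATION_BIT = 2

conf = np.array([[0, 1], [4, 5]], dtype=np.uint8)  # fake 2x2 confidence image

invalid = (conf & 0x1).astype(bool)                 # bit 0 set -> invalid
saturated = ((conf >> SATURATION_BIT) & 0x1).astype(bool)

print(invalid)
print(saturated)
```

Combining `invalid & saturated` vs. `invalid & ~saturated` would then separate "too close" from "too far" pixels, if the bit layout holds.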

Output pointcloud that can be recognized as from a projective device?

How do I get a point cloud from an o3d303 camera that PCL can recognize as from a projective device?

I'm trying to use the nodelet pcl/StatisticalOutlierRemoval on the output of a point cloud from an o3d303 camera. PCL complains that the cloud is not from a projective device. It says this:

[pcl::OrganizedNeighbor::radiusSearch] Input dataset is not from a projective device!
  Residual (MSE) 0.000133, using 1188 valid points

The definition of a projective device is here
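For background, PCL decides whether an organized cloud is "from a projective device" by estimating a camera projection matrix from the data and checking the reprojection residual (the MSE printed in the warning). The toy sketch below builds points from an assumed pinhole model and shows they reproject onto their own pixel grid with near-zero residual; the O3D303's on-board Cartesian computation may not fit such a model, which would explain the warning. All intrinsics here are made up.

```python
import numpy as np

# Toy pinhole model: back-project a flat wall, re-project, measure residual.
# A cloud that truly came from a pinhole ("projective") device reprojects
# onto its own (u, v) grid with ~0 MSE, which is what PCL checks for.
fx = fy = 100.0
cx, cy = 8.0, 6.0
us, vs = np.meshgrid(np.arange(16), np.arange(12))
depth = np.full(us.shape, 2.0)          # wall 2 m away

x = (us - cx) / fx * depth              # back-project pinhole rays
y = (vs - cy) / fy * depth
z = depth

u2 = fx * x / z + cx                    # re-project
v2 = fy * y / z + cy
mse = np.mean((u2 - us) ** 2 + (v2 - vs) ** 2)
print(mse)
```

If the sensor's cloud fails this check, a KdTree-based neighbor search (instead of the organized one) is a possible fallback inside PCL.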

catkin_make run_tests failure

Hey Tom,
Thanks for your help. I'm having trouble running the unit tests. It seems to be a connectivity issue, which another user had reported, #27 . However, my O3D is using the default IP address (the same as in camera.launch). I can also ping the camera. I have the same error with both the current GitHub head and also version 0.2.5. I'm running Ubuntu Trusty with ROS Indigo. The error I see is:

[ROSTEST]-----------------------------------------------------------------------

[o3d3xx.rosunit-test_camera/test_camera][FAILURE]-------------------------------
False is not true
File "/usr/lib/python2.7/unittest/case.py", line 331, in run
testMethod()
File "/home/sroth/dev/o3d3xx-catkin/src/o3d3xx/test/test_camera.py", line 73, in test_camera
self.success = self.compute_cartesian()
File "/home/sroth/dev/o3d3xx-catkin/src/o3d3xx/test/test_camera.py", line 172, in compute_cartesian
self.assertTrue(x_mask.sum() == 0)
File "/usr/lib/python2.7/unittest/case.py", line 424, in assertTrue
raise self.failureException(msg)

[o3d3xx.rosunit-test_services/test_services][passed]

SUMMARY

  • RESULT: FAIL
  • TESTS: 2
  • ERRORS: 0
  • FAILURES: 1

Thanks again for the help.
Stephan

how to diagnose IFM

I checked all the ROS topics for the IFM camera, but I didn't find any diagnostics topic. If there is none, how can we monitor the status of the IFM sensor? Thank you!

Time stamp on images incorrect

The ROS time stamp that is applied to the images coming from the sensor uses ros::Time::now(), see here.

The real time stamp is likely older than this, once the exposure time and the on-board processing are taken into account.

Is there a way to estimate this latency and apply it to the ROS time stamp?
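One pragmatic approach (a sketch under assumed numbers, not a measured model): estimate a fixed latency as integration time plus on-board processing/transfer, and shift the receive-side stamp back by that amount. In the C++ node this would amount to `ros::Time::now() - ros::Duration(latency)`; the Python below just demonstrates the arithmetic with hypothetical values.

```python
# Sketch of latency-compensating the frame time stamp.
# ASSUMED numbers -- they would need to be measured for a real O3D303 setup.
EXPOSURE_S = 0.005     # integration time (assumed 5 ms)
PROCESSING_S = 0.020   # on-board processing + network transfer (assumed 20 ms)

def stamp_for_frame(receive_time_s):
    """Shift the receive time back by the estimated acquisition latency."""
    return receive_time_s - (EXPOSURE_S + PROCESSING_S)

t = stamp_for_frame(100.0)
print(t)  # 99.975
```

A rough way to calibrate the constant is to image an event with a known wall-clock time (e.g. an LED toggled by the host) and compare stamps.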

Incorporate filtering at a ROS node/driver level

I would like to incorporate my own filtering techniques on the sensor outputs (point clouds and potentially 2D images). To achieve the best performance, this could be done in two ways:

  • nodelets - eliminates costly message copying and sending, but requires the ROS node to be written in a certain way (which it currently is not)
  • filter at a driver level - embed filtering at a driver level, but this brings up several questions:
    1. Can filters be applied at the image level (easier with OpenCV) in a way that affects the point cloud output?
    2. Is there a generic way to do this? Is there a set of filters that we can agree should be supported in general? Here are my desired filters (in the order they would be applied):

There are more filtering options with 2D images. Ideally, these would be applied to the 2D images and then the pointcloud recalculated. However, as near as I can tell, the point cloud is calculated on the sensor and not in the ROS node.
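As a sketch of the "filter in 2D, then recompute the cloud" idea: if per-pixel unit (ray) vectors were available from the sensor (an assumption; the driver may not expose them), a 2D filter could be applied to the radial distance image and the Cartesian cloud recomputed off-sensor. The data below is synthetic and the clamp filter is deliberately crude.

```python
import numpy as np

# Filter a radial distance image in 2D, then rebuild Cartesian points from
# per-pixel unit vectors. Rays, distances, and the outlier threshold are
# all made up for illustration.
h, w = 4, 4
rays = np.dstack([np.zeros((h, w)), np.zeros((h, w)), np.ones((h, w))])  # all rays along +z
dist = np.full((h, w), 2.0)
dist[1, 1] = 50.0                          # inject a speckle outlier

# crude 2D filter: clamp points far from the median back to the median
med = np.median(dist)
dist = np.where(np.abs(dist - med) > 1.0, med, dist)

xyz = rays * dist[..., None]               # recompute Cartesian from the filtered image
print(xyz[1, 1])                           # outlier pulled back onto the 2 m plane
```

This only works off-sensor if the radial distance (or depth) image and the ray geometry are both available to the node; as noted above, the cloud itself is computed on the sensor.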

Unable to change SymmetryThreshold of Imager

Following a recent discussion with IFM reps, I was trying to change the symmetry threshold of the imager to eliminate some over-saturated pixels due to motion. I tried changing the symmetry threshold through the config node, but the change is not reflected in the imager (the JSON dump just returns a 0 value after the config change).

"Imager": { "SymmetryThreshold": "0" }

I dug deeper and found this comment in libo3d3xx/modules/camera/src/libo3d3xx_camera/imager_config.cpp:

//! @todo: This data type may change. It is not clear that this is even
//! implemented in the camera yet. I believe it has to do with flagging a
//! pixel as bad due to motion artifacts across exposure times ... but it
//! does not appear to be documented.
int SymmetryThreshold() const noexcept;

I was wondering whether it is simply not an accessible parameter, or not implemented at all.
@tpanzarella, @GarrettPlace any comments?

IFM O3X101

Hello, Happy New Year! I have an IFM O3X101 camera (3D camera). Can someone tell me if this package will work with my version of the camera? The package description says it works with O3D3xx cameras, so I am unsure about the usability of the package with the camera version I have.

Can't find libopencv_core3.so.3.2

Hello,

I performed an update using the Ubuntu 16.04 package manager but now I get the following error when trying to start my o3d303 camera:

/home/bloom/catkin_ws/devel/lib/o3d3xx/o3d3xx_node: error while loading shared libraries: libopencv_core3.so.3.2: cannot open shared object file: No such file or directory

If I look into /opt/ros/kinetic/lib, I see libopencv_core3.so, libopencv_core3.so.3.3, and libopencv_core3.so.3.3.1, but not what I need.

How do I fix this?

PointCloud<pcl::PointWithRange> range data

Hi,

I am having some trouble understanding the point cloud data I am getting from my camera. I subscribed to the camera's point cloud topic by defining the callback function in this manner:

void cloud_cb (const pcl::PointCloud<pcl::PointWithRange>::ConstPtr& input_const_ptr) {...}

I did this to obtain the range values associated with each point relative to the origin. However, I have been getting huge "range" values that do not seem to be in standard units like meters or mm (as seen below). Is there a conversion factor to obtain the actual range from the camera to each individual point? Thanks!

# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z range
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 176
HEIGHT 132
VIEWPOINT 0 0 0 1 0 0 0
POINTS 23232
DATA ascii
1.929 1.251 0.90799999 233
1.97 1.258 0.92400002 247
2.0020001 1.2589999 0.935 250
2.003 1.24 0.93099999 248
2.0090001 1.225 0.93000001 239
2.016 1.21 0.93000001 232
2.0179999 1.193 0.92799997 199
2.0309999 1.182 0.93000001 148
2.04 1.169 0.93099999 162
2.053 1.158 0.93400002 157
2.0569999 1.142 0.93199998 173
2.0639999 1.127 0.93199998 188
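The x/y/z fields above are plausibly meters, while the "range" column clearly is not (233 vs. an expected ~2.47 m for the first point); it may be carrying some other per-pixel quantity. Until the units of the populated range field are clarified, the metric range can simply be recomputed from x, y, z:

```python
import math

# First data row from the PCD dump above; recompute the Euclidean range
# from the Cartesian coordinates instead of trusting the "range" field.
x, y, z = 1.929, 1.251, 0.90799999
rng = math.sqrt(x * x + y * y + z * z)
print(round(rng, 3))  # 2.472
```

With numpy, `np.linalg.norm` over the whole (N, 3) array does the same for every point at once.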

subscribing to messages...easy question, i know

So, I am just trying out a couple of things, using an example that subscribes to a turtlesim Pose message. I want to replicate that with an o3d3xx message, but I may not be thinking of it correctly:

code snip:

#include <ros/ros.h>
#include <turtlesim/Pose.h>

#include <iomanip>  //for std::precision, std::fixed

// a callback function. Executed each time a new pose message arrives
void poseMessageReceived(const turtlesim::Pose& msg)
{
    ROS_INFO_STREAM(std::setprecision(2) << std::fixed
      << " position =(" << msg.x << " ," << msg.y << ")"
      << " direction =" << msg.theta );
}

I wonder how to get access to the messages. Do I do something like this at the top of my .cpp file and then create a callback function that is looking for that message?
#include <sensor_msgs/Image.h>
I have nothing to back that up; just reaching...
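Roughly, yes: include the message header and register a callback whose signature matches the message type, just as the turtlesim example does with `turtlesim::Pose`. A minimal offline sketch of the same pattern (in Python for brevity; the topic name and the stand-in message class are assumptions, and a real node would use `rospy.Subscriber` with `sensor_msgs.msg.Image` or the C++ equivalent):

```python
# Offline sketch of the subscriber callback pattern. In a real node:
#   from sensor_msgs.msg import Image
#   rospy.Subscriber("/o3d3xx/camera/depth", Image, image_cb)  # topic name assumed
# FakeImage below is a stand-in so this runs without a ROS master.
class FakeImage:
    def __init__(self, width, height, encoding):
        self.width, self.height, self.encoding = width, height, encoding

def image_cb(msg):
    # a callback receives the whole message and reads its fields,
    # exactly like poseMessageReceived reads msg.x / msg.y / msg.theta
    return "%dx%d (%s)" % (msg.width, msg.height, msg.encoding)

print(image_cb(FakeImage(176, 132, "16UC1")))
```

The actual topic names published by the driver can be listed with `rostopic list` while the camera node is running.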
