A set of tools for mobile robot navigation with the depth sensor

Home Page: http://wiki.ros.org/depth_nav_tools

License: Other

Topics: ros, rgbd, navigation, kinect, robotics, ros2


depth_nav_tools

A set of software tools for autonomous mobile robot navigation with a depth sensor, for example a Microsoft Kinect.

The metapackage depth_nav_tools contains the following packages:

  • laserscan_kinect -- It converts a depth image to a laser scan format (sensor_msgs/LaserScan). The node finds the smallest distance value in each column of the depth image and converts it to polar coordinates (see the sketch after this list). The package provides features like:

    • removing a ground plane from the output data,
    • a sensor tilt compensation.

    However, the sensor position (height and tilt angle) must be known for correct data processing. The parameters should be expressed in the ground plane frame.

  • depth_sensor_pose -- It detects the ground plane in the depth image and estimates the height and tilt angle of the depth sensor relative to the ground. The ground plane detection procedure is based on the RANSAC algorithm and on ranges of acceptable parameters.

  • cliff_detector -- This tool detects negative obstacles like cliffs or downward stairs. It uses a known sensor pose to find obstacles located below the ground plane.

  • nav_layer_from_points -- It creates a navigation costmap layer based on received points, for example from the cliff_detector.
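For illustration, here is a minimal sketch of the conversion idea used by laserscan_kinect, as referenced in the list above: take the smallest depth in each image column and map it to a polar (angle, range) pair via the pinhole camera model. This is a standalone simplification with made-up names, not the actual node code.

#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

struct PolarPoint { float angle; float range; };  // radians, meters

// depth: row-major 16-bit depth image in millimeters (Kinect-style).
// fx, cx: horizontal focal length and principal point from CameraInfo.
std::vector<PolarPoint> depthToScan(const std::vector<uint16_t>& depth,
                                    int width, int height, float fx, float cx) {
  std::vector<PolarPoint> scan(width);
  for (int u = 0; u < width; ++u) {
    uint16_t min_mm = std::numeric_limits<uint16_t>::max();
    for (int v = 0; v < height; ++v) {       // smallest value in this column
      const uint16_t d = depth[v * width + u];
      if (d > 0 && d < min_mm) min_mm = d;   // 0 means "no measurement"
    }
    const float z = min_mm * 0.001f;         // depth along the optical axis
    const float x = (u - cx) * z / fx;       // lateral offset of the ray
    scan[u].angle = std::atan2(x, z);
    scan[u].range = std::hypot(x, z);        // range in the scan plane
  }
  return scan;
}

A real node additionally marks columns without valid measurements as invalid and applies the ground removal and tilt compensation features described below.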

Additional documentation

Full documentation is available at the ROS wiki and in the publication "A set of depth sensor processing ROS tools for wheeled mobile robot navigation" (PDF) by M. Drwięga and J. Jakubiak (Journal of Automation, Mobile Robotics & Intelligent Systems, 2017).

BibTeX:

@ARTICLE{drwiega17jamris,
  author = {Michał Drwięga and Janusz Jakubiak},
  title = {A set of depth sensor processing {ROS} tools for wheeled mobile robot navigation},
  journal = {Journal of Automation, Mobile Robotics \& Intelligent Systems (JAMRIS)},
  year = 2017,
  doi = {10.14313/JAMRIS_2-2017/16},
  note = {Software available at \url{http://github.com/mdrwiega/depth_nav_tools}}
}

laserscan_kinect

An example of obstacle detection by laserscan_kinect

The picture shows a comparison between a laser scan based on the converted depth image from a Microsoft Kinect (blue points) and a laser scan from a Hokuyo URG-04LX-UG01 scanner (black points).

[Image: laserscan_kinect detection]

Tuning

During the tuning process, an additional debug image can be used. It contains lines that represent the lower and upper bounds of the detection area. The closest points in each image column are also visible.

[Image: laserscan_kinect debug view]

Usage

To start the laserscan_kinect node, use the following command:

roslaunch laserscan_kinect laserscan.launch

Subscribed topics

  • image (sensor_msgs/Image) - the input depth image.

  • camera_info (sensor_msgs/CameraInfo) - the camera calibration data.

Published topics

  • /scan (sensor_msgs/LaserScan) - the converted depth image in the form of a laser scan. It contains information about the robot's surroundings in a planar scan.

Parameters

The file /config/params.yaml contains default parameter values.

  • ~output_frame_id (str) - frame id for the output laser scan message.

  • ~range_min (double) - minimum sensor range (in meters). Pixels in the depth image with values smaller than this parameter are ignored during processing.

  • ~range_max (double) - maximum sensor range (in meters). Pixels in the depth image with values greater than this parameter are ignored during processing.

  • ~depth_img_row_step (int) - row step in depth image processing (in pixels). Increasing this parameter decreases the computational complexity of the algorithm, but some data are lost.

  • ~scan_height (int) - height of the used part of the depth image (in pixels).

  • ~cam_model_update (bool) - determines whether the camera model should be updated continuously. If true, the camera model (sensor_msgs/CameraInfo) from the camera_info topic is updated with each new depth image message. Otherwise, the camera model and its associated parameters are updated only at node startup or when node parameters are changed via dynamic_reconfigure.

  • ~ground_remove_en (bool) - determines whether ground removal from the output scan is enabled. The ground removal method requires correct values of the sensor_tilt_angle and sensor_mount_height parameters (see the sketch after this list).

  • ~sensor_mount_height (double) - mounting height of the depth sensor's optical center (in meters). It should be measured from the ground to the optical center of the depth sensor. This parameter is necessary for the ground removal feature.

  • ~sensor_tilt_angle (double) - depth sensor tilt angle (in degrees). If the sensor is leaning towards the ground, the tilt angle should be positive. Otherwise, the value of the angle should be negative.

  • ~ground_margin (double) - margin used by the ground removal feature (in meters).

  • ~tilt_compensation_en (bool) - determines whether sensor tilt angle compensation is enabled.
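As a rough illustration of how the ground removal parameters interact (an assumption about the mechanism, not the actual implementation): a pixel can be treated as ground, and removed from the scan, when its depth is close to or beyond the depth expected for the ground plane at that image row.

#include <cmath>

// Expected depth (along the optical axis) of the ground plane for a ray lying
// `delta` radians below the optical axis, with the sensor mounted at height
// `h` meters and tilted down by `alpha` radians (illustrative sketch only).
float expectedGroundDepth(float h, float alpha, float delta) {
  return h * std::cos(delta) / std::sin(alpha + delta);
}

// cf. ~sensor_mount_height (h), ~sensor_tilt_angle (alpha), ~ground_margin.
bool isGround(float depth, float h, float alpha, float delta, float margin) {
  return depth > expectedGroundDepth(h, alpha, delta) - margin;
}

This also makes clear why incorrect sensor_mount_height or sensor_tilt_angle values break the feature: the expected ground depth no longer matches the real ground.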

depth_sensor_pose

Usage

To start the depth_sensor_pose node, use the following command:

roslaunch depth_sensor_pose depth_sensor_pose.launch

Subscribed topics

  • image (sensor_msgs/Image) - the input depth image.

  • camera_info (sensor_msgs/CameraInfo) - the camera calibration data.

Published topics

  • /height (double) - the sensor height, i.e. the distance from the ground to the optical center of the sensor (in meters).
  • /tilt_angle (double) - the sensor tilt angle (in degrees).
  • /debug_image (sensor_msgs/Image) - the debug image showing which points are used in the ground plane estimation; published only if the publish_dbg_info parameter is set to true.

Parameters

  • ~rate (double) - data processing frequency (in Hz).

  • ~range_min (double) - minimum sensor range (in meters). Pixels in the depth image with values smaller than this parameter are ignored during processing.

  • ~range_max (double) - maximum sensor range (in meters). Pixels in the depth image with values greater than this parameter are ignored during processing.

  • ~mount_height_min (double) - minimum height of the depth sensor (in meters).

  • ~mount_height_max (double) - maximum height of the depth sensor (in meters).

  • ~tilt_angle_min (double) - minimum sensor tilt angle (in degrees).

  • ~tilt_angle_max (double) - maximum sensor tilt angle (in degrees).

  • ~cam_model_update (bool) - determines whether the camera model should be updated continuously. If true, the camera model (sensor_msgs/CameraInfo) from the camera_info topic is updated with each new depth image message. Otherwise, the camera model and its associated parameters are updated only at node startup or when node parameters are changed via dynamic_reconfigure.

  • ~used_depth_height (int) - height of the depth image part used for estimation, measured from the image bottom (in pixels).

  • ~depth_img_step_row (int) - row step in depth image processing (in pixels).

  • ~depth_img_step_col (int) - column step in depth image processing (in pixels).

  • ~ground_max_points (int) - maximum number of ground points used in the selection stage.

  • ~ransac_max_iter (int) - maximum number of RANSAC iterations (see the sketch below).

  • ~ransac_dist_thresh (double) - RANSAC distance threshold (in meters).

  • ~publish_dbg_info (bool) - determines whether the debug image should be published.
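To make the ransac_* parameters concrete, here is a minimal, self-contained RANSAC plane-fit sketch (illustrative only; the actual depth_sensor_pose implementation may differ in details):

#include <cmath>
#include <cstdlib>
#include <vector>

struct Point3 { float x, y, z; };
struct Plane { float a, b, c, d; };  // ax + by + cz + d = 0, (a, b, c) is unit

// Plane through three points; returns a zero normal if they are collinear.
static Plane planeFrom3Points(const Point3& p, const Point3& q, const Point3& r) {
  const float ux = q.x - p.x, uy = q.y - p.y, uz = q.z - p.z;
  const float vx = r.x - p.x, vy = r.y - p.y, vz = r.z - p.z;
  float a = uy * vz - uz * vy;                  // normal = (q - p) x (r - p)
  float b = uz * vx - ux * vz;
  float c = ux * vy - uy * vx;
  const float n = std::sqrt(a * a + b * b + c * c);
  if (n < 1e-9f) return {0.f, 0.f, 0.f, 0.f};   // degenerate sample
  a /= n; b /= n; c /= n;
  return {a, b, c, -(a * p.x + b * p.y + c * p.z)};
}

Plane ransacPlane(const std::vector<Point3>& pts,
                  int max_iter,                 // cf. ~ransac_max_iter
                  float dist_thresh) {          // cf. ~ransac_dist_thresh
  Plane best{0.f, 0.f, 1.f, 0.f};
  size_t best_inliers = 0;
  if (pts.size() < 3) return best;
  for (int it = 0; it < max_iter; ++it) {
    const Plane model = planeFrom3Points(pts[std::rand() % pts.size()],
                                         pts[std::rand() % pts.size()],
                                         pts[std::rand() % pts.size()]);
    if (model.a == 0.f && model.b == 0.f && model.c == 0.f) continue;
    size_t inliers = 0;
    for (const auto& pt : pts) {                // count points near the plane
      if (std::fabs(model.a * pt.x + model.b * pt.y +
                    model.c * pt.z + model.d) < dist_thresh) {
        ++inliers;
      }
    }
    if (inliers > best_inliers) { best_inliers = inliers; best = model; }
  }
  return best;
}

Given the fitted plane, the sensor height is the distance from the camera origin to the plane and the tilt angle follows from the plane normal; the mount_height_* and tilt_angle_* parameters then serve as plausibility bounds on the estimate.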

Building

colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release

Tests

Currently unit tests are implemented only for the laserscan_kinect package.

  • catkin_make run_tests_laserscan_kinect

depth_nav_tools's People

Contributors

askokostic, hudiwan, mdrwiega, nakai-omer


depth_nav_tools's Issues

Error when depth_pose is implemented

Hello,

I am trying to use this package for stair detection. When I run all the nodes I get this error:

[ERROR] [1553614236.245436569]: Ground points not detected
[ERROR] [1553614236.245497004]: height = 0.0000 angle = 0.0000

Any idea how I can fix it?

Thanks,
Heta

Alignment Issue with Tilt Compensation and Off-Center Objects

Without tilt compensation enabled and with my Kinect level, off-center obstacles line up very well with a 360-degree laser scanner I have on my robot. However, the Kinect is about 50 inches above the ground, so I tilt it down quite steeply (25 degrees). After making the corresponding adjustments in the settings, off-center objects no longer line up: the scan seems to 'narrow in' relative to the laser scanner. I don't think this is a camera calibration issue, as everything lines up well without a tilt. It might be a math issue, if it's not a user issue, but I've used dynamic reconfigure via rqt and could not find any combination of settings (height/tilt) that would realign the off-center portion of the scan with the laser scanner.

How do I use this set of tools for downward stairs?

Hi Michal,

I'm trying to use this wonderful package without success. I managed to use the laserscan node and it works perfectly (better than depthimage_to_laserscan), but I'm struggling with the rest of the nodes. I'm using costmap_2d for mapping, and at the moment the local_costmap isn't detecting downward stairs. I set the Kinect height and tilt_angle correctly, but when I look at rqt_graph, only the laserscan node is publishing LaserScan messages on the scan topic I'm using; neither cliff_detector nor depth_sensor_pose publishes anything. Do I need to use all three nodes to get this detection working, or is there something I'm missing? Could you tell me what else I'm missing, please?

By the way, for costmap_2d I'm using as a source the scans from laserscan, with max_obstacle_height: 2.0 meters and min_obstacle_height: -2.0 meters, but it doesn't seem to work.
Do I need to add another observation source for the scan again, but as a point cloud message with max_obstacle_height: 0.0 m and min_obstacle_height: -2.0, or something similar?
When I set for camera

I hope you can enlighten me about the correct use of this package.

Thanks again!

How does it work with costmap_2d?

Hey Michal,

Thanks for your code. I'd like to use it with the Kinect I'm using (v1). It turns out that I'm mapping a 2D planar indoor environment, like most people do. I want to go further by adding cliff (downward stairs) avoidance, and hopefully you've already done that. Could you tell me what configuration I should set for my Kinect if it's mounted 0.28 m high and looking straight ahead? I know the Kinect's vertical viewing angle is 43 degrees, so 21.5 degrees upwards and downwards. As my Kinect is 0.28 m high (and at the front of the robot, as in the ROS navigation tutorials), it should detect the floor at a distance of 0.71084 m.
How does costmap_2d perceive the stair detection? I've sent you an email in case you weren't notified.
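As a quick check of that figure, for a level sensor whose lower field-of-view edge is 21.5 degrees below the horizontal:

d = h / tan(21.5 deg) = 0.28 / 0.3939 ~= 0.711 m

which is consistent with the 0.71084 m quoted above (for a level sensor, ignoring any tilt).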

Hope you can enlighten me about that

Thanks

nav_layer_from_points not working

Hi,

I am trying to add your nav_layer_from_points plugin to our robot's global costmap, but we get the following errors:

Using plugin "simplelayer"
terminate called after throwing an instance of 'pluginlib::CreateClassException'
what(): MultiLibraryClassLoader: Could not create object of class type nav_layer_from_points::NavLayerPoints as no factory exists for it. Make sure that the library exists and was explicitly loaded through MultiLibraryClassLoader::loadLibrary()

My global_costmap.yaml is listed as follows:

global_costmap:
  global_frame: /map
  robot_base_frame: /base_footprint
  update_frequency: 5.0
  static_map: true

  plugins:
    - {name: static_layer, type: "costmap_2d::StaticLayer"}
    - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
    - {name: simplelayer, type: "nav_layer_from_points::NavLayerPoints"}

Could you please enlighten me how to solve this problem?

Thanks.

access to original ROS1 code for depth_nav_tools

The URL https://github.com/mdrwiega/depth_nav_tools describes the depth_nav_tools code. The page states that the documentation is for ROS1. To avoid other complications, I am building for Noetic on Ubuntu 20.04. I have Gazebo, RViz, and the Kinect camera working fine. However, when I try to build depth_nav_tools, I get the error: Warning: Skipping package laserscan_kinect because it has an unsupported package build type: ament_cmake. ament_cmake is, of course, part of ROS2, and I notice from the above URL that the update of the workflow to 22.04 occurred 3 months ago. I need the depth_nav_tools code compatible with ROS1. Please let me know where this previous code, for ROS1, resides.

Distance to ground calculation

Hi,

I would be happy to get a short explanation of the following distance-to-ground calculation:

dist_to_ground_[i] = sensor_mount_height_ * sin(M_PI / 2 - delta_row_[i]) /
        cos(M_PI / 2 - delta_row_[i] - alpha);

Shouldn't the sin term also subtract alpha? If so, wouldn't it be simpler to have:

dist_to_ground_[i] = sensor_mount_height_ * tan(M_PI / 2 - delta_row_[i] - alpha);
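One possible geometric reading of this (an interpretation offered here, not an authoritative answer): let h = sensor_mount_height_, alpha = the tilt angle, and delta = delta_row_[i], the angle of the row-i ray below the optical axis. By the law of sines, the Euclidean length of the ray from the sensor to the ground is h / sin(alpha + delta). A depth image, however, stores the coordinate along the optical axis, i.e. the ray length scaled by cos(delta):

d = h * cos(delta) / sin(alpha + delta)
  = h * sin(M_PI / 2 - delta) / cos(M_PI / 2 - delta - alpha)

which matches the code. The suggested h * tan(M_PI / 2 - delta - alpha) equals h * cos(alpha + delta) / sin(alpha + delta), which is the horizontal distance from the point below the sensor to the ground point; it omits the cos(delta) projection factor, so it is a different quantity from the expected depth-image value.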

Laserscan_kinect not working with astra orbbec depth camera in indigo

Hello Michał,

I'm very interested in using your node for some close obstacle avoidance.

However, I can't seem to get your "laserscan_kinect" working in Indigo. I'm using an Astra Orbbec S camera on an Indigo Jackal.

My launch file looks like this:

# Frame id for the output laserscan
output_frame_id: camera_01_depth_frame
# Minimum sensor range (m). Astra sensors range 0.4-2m
range_min: 0.4
# Maximum sensor range (m).
range_max: 2.0
# Height of used part of depth img (px). Astra Depth Res = 640x480
scan_height: 240
# Row step in depth image processing (px).
depth_img_row_step: 1
# If continuously camera data update.
cam_model_update: false
# Height of sensor optical center mount (m).
sensor_mount_height: 0.25
# Sensor tilt angle (deg).
sensor_tilt_angle: 30.0
# Remove ground from scan.
ground_remove_en: true
# Ground margin (m).
ground_margin: 0.05
# Sensor tilt angle compensation.
tilt_compensation_en: true

Unfortunately, every time I try to view the output scan in RViz, it crashes. It also seems it is not connecting to the two topics I give it. Do you know why?

output:
carma2:Fri Feb 08-14:08:~$ rosnode info /laserscan_kinect

Node [/laserscan_kinect]
Publications:

  • /scan3 [sensor_msgs/LaserScan]
  • /laserscan_kinect/parameter_descriptions [dynamic_reconfigure/ConfigDescription]
  • /laserscan_kinect/parameter_updates [dynamic_reconfigure/Config]
  • /rosout [rosgraph_msgs/Log]

Subscriptions: None

Services:

  • /laserscan_kinect/set_logger_level
  • /laserscan_kinect/set_parameters
  • /laserscan_kinect/get_loggers

contacting node http://CPR-J100-0150:52464/ ...
Pid: 14284
Connections:

  • topic: /rosout
    • to: /rosout
    • direction: outbound
    • transport: TCPROS

Camera details not being initialized

Hi,

In the parametersCallback, the depth_sensor_params_update flag is set to true. Since this is called before detectCliff, it blocks updating the camera details from the CameraInfo message (unless cam_model_update_ is set to true).

The line that sets it to true is:

detector_.setParametersConfigurated(true);

IMHO, it would be more correct to set depth_sensor_params_update to false after updating the parameters, so that the camera model is re-initialized from the next CameraInfo message, since the parameters are used in the camera info initialization code.

We are using ROS2 Humble.

Commit 5e07d3f Breaks Operation

I left a comment in commit 5e07d3f (the last one) for laserscan_kinect, but it applies to other files like cliff_detector and pose_estimator.

It looks like the callback that runs when the scan topic is subscribed to includes a test that is always false:

sub_ != nullptr

The previous code tested

!sub

The change always tests false since sub_ is initially a nullptr. Because of that, the node never subscribes to the image topic and therefore there is never a callback to publish the scan message.

In my copy, I changed it to

sub_ == nullptr

and it worked. Whether or not that is the proper way to do it, I don't know.
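For context, here is a minimal sketch of the lazy-subscription pattern this guard implements, with the corrected comparison (the class, member, and topic names are assumed for illustration, not taken from the actual node code):

#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/image.hpp>
#include <sensor_msgs/msg/laser_scan.hpp>

class LazyScanNode : public rclcpp::Node {
public:
  LazyScanNode() : Node("lazy_scan") {
    pub_ = create_publisher<sensor_msgs::msg::LaserScan>("scan", 10);
    // Periodically manage the input subscription based on output demand.
    timer_ = create_wall_timer(std::chrono::seconds(1),
                               [this] { manageSubscription(); });
  }

private:
  void manageSubscription() {
    if (pub_->get_subscription_count() > 0) {
      // Correct guard: subscribe only when there is NO subscription yet.
      // Testing `sub_ != nullptr` here would never subscribe at all.
      if (sub_ == nullptr) {
        sub_ = create_subscription<sensor_msgs::msg::Image>(
            "image", rclcpp::SensorDataQoS(),
            [this](sensor_msgs::msg::Image::ConstSharedPtr msg) {
              (void)msg;  // convert the depth image and publish the scan here
            });
      }
    } else {
      sub_.reset();  // nobody listens to the scan; drop the input subscription
    }
  }

  rclcpp::Publisher<sensor_msgs::msg::LaserScan>::SharedPtr pub_;
  rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr sub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char** argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<LazyScanNode>());
  rclcpp::shutdown();
  return 0;
}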
