
Comments (11)

carlosmccosta commented on May 18, 2024

Hello,

The links below contain the drl configuration for your bag file, along with an rviz config file and an associated video showing drl working with your sensor data.

In rviz, the 3 DoF pose output of drl is shown as a TF frame [map_drl -> base_link] and as a PoseStamped.
For checking if drl is aligning the sensor data correctly, there are 2 topics, showing the inliers as green dots (points considered as correctly registered) and outliers as red dots (sensor points without correspondences in the map).

When using drl you should at least configure:

  • TF frame_id names
  • Sensor data topics
  • Initial pose (which can be given in the launch file, with the rviz 2D pose estimate feature, or by letting drl compute it automatically, but the latter typically only works if the robot is in a unique part of the map with good geometric features)
<?xml version="1.0" encoding="UTF-8"?>
<launch>
	<include file="$(find dynamic_robot_localization)/launch/dynamic_robot_localization_system.launch" >
		<arg name="ros_verbosity_level" default="INFO" /> <!-- DEBUG | INFO | WARN | ERROR | FATAL -->

		<!-- frame ids -->
		<arg name="map_frame_id" default="map_drl" />
		<arg name="odom_frame_id" default="odom" />
		<arg name="base_link_frame_id" default="base_link" />
		<arg name="sensor_frame_id" default="base_scan" />

		<!-- initial pose setup -->
		<arg name="robot_initial_pose_in_base_to_map" default="true" /> <!-- recommendation: use false to allow direct publish of tf between odom -> map -->
		<arg name="robot_initial_pose_available" default="true" />
		<arg name="robot_initial_x" default="13.49" />
		<arg name="robot_initial_y" default="11.27" />
		<arg name="robot_initial_z" default="0.0" />
		<arg name="robot_initial_roll" default="0.0" />
		<arg name="robot_initial_pitch" default="0.0" />
		<arg name="robot_initial_yaw" default="0.168" />

		<!-- laser scan assembler -->
		<arg name="laser_scan_topics" default="/scan" />
		<arg name="dynamic_update_of_assembly_configuration_from_odometry_topic" default="" />

		<!-- tf configurations -->
		<arg name="invert_tf_transform" default="true" />
		<arg name="invert_tf_hierarchy" default="true" />
	</include>
</launch>

The drl_configs.yaml file is only for documentation; you should not edit it.
For passing configurations to drl you should load them into the parameter server.
There are some examples of that here.
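
As an illustration, a minimal sketch of loading a custom yaml into the parameter server from a launch file; the package name, file name and the dynamic_robot_localization namespace used here are assumptions for this example, so adapt them to your setup and to the examples mentioned above:

<?xml version="1.0" encoding="UTF-8"?>
<launch>
	<!-- Hedged sketch: load user overrides into the parameter server so drl picks them up. -->
	<!-- "my_robot_bringup" and "my_drl_overrides.yaml" are placeholder names for this example. -->
	<group ns="dynamic_robot_localization">
		<rosparam command="load" file="$(find my_robot_bringup)/yaml/my_drl_overrides.yaml" />
	</group>
</launch>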

For comparing the drl poses with the ground truth you can use my package robot_localization_tools and inspect the /localization_error topic (rostopic echo /localization_error) along with the trajectory topics that I configured in drl.rviz.

The accuracy of drl depends highly on:

  • Environment / map
  • Sensor data quality
  • Sensor field of view
  • Calibration between sensors (if you have several sensors and want to merge the scans with my package laserscan_to_pointcloud)
  • Calibration between sensors and base_link of the robot
  • Calibration between the ground truth reference frame and the base_link of the robot

For accuracy tests check my dissertation and papers available in the links at:
https://github.com/carlosmccosta/dynamic_robot_localization#more-info

For running drl with your bag file:

roscore
rosparam set /use_sim_time true

# cd to folder with drl.rviz
rosrun rviz rviz -d drl.rviz

# cd to folder with map_default.yaml
rosrun map_server map_server map_default.yaml _frame_id:=map_drl

# cd to folder with dynamic_robot_localization_system.launch
roslaunch dynamic_robot_localization_system.launch

# cd to folder with robot_localization_error.launch
roslaunch robot_localization_error.launch
rostopic echo /localization_error

# cd to folder with expt6.bag
rosbag play --clock expt6.bag --start 35 --rate 5 --pause
# wait around 5 seconds and then press the space key

Have a nice day,


carlosmccosta commented on May 18, 2024

Hello,

Due to the repetitive nature of warehouses, feature matching will have many possible matches / initial poses.
As such, for your use case, you probably want to disable feature matching by setting yaml_configuration_pose_estimation_filename to empty.

<arg name="yaml_configuration_pose_estimation_filename" default="$(find dynamic_robot_localization)/yaml/configs/empty.yaml" />

This way, drl will only use the iterative closest point (ICP) algorithm for tracking the pose, which requires the initial pose to be given in one of the ways mentioned above (launch file or rviz 2D pose estimate).

For computing a 3 DoF pose, the ambient point cloud must have at least 2 points, but I suggest to have at least 10 points, because 2 points might be associated with sensor noise.
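If you want to enforce such a minimum, a hedged sketch is shown below; minimum_number_of_points_in_ambient_pointcloud is the parameter name mentioned later in this thread, but the exact namespace where it lives is an assumption, so check drl_configs.yaml for the real hierarchy:

<group ns="dynamic_robot_localization">
	<!-- Assumption: parameter placed directly in the drl namespace; the real yaml may nest it deeper. -->
	<param name="minimum_number_of_points_in_ambient_pointcloud" type="int" value="10" />
</group>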

drl can only estimate the pose when it has sensor / lidar data.
If possible, use a lidar with more range to avoid this issue.

Also, drl can use several lidars by merging their data into a single point cloud using the laserscan_to_pointcloud package.
You just need to give all topics separated by + in laser_scan_topics.
This way, if you have 2 lidars, one in front of the robot and another in the back, drl can keep tracking as long as one lidar is seeing geometry in the environment.
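
For example, a sketch of the laser_scan_topics argument (placed inside the include of dynamic_robot_localization_system.launch shown in the first comment) for two hypothetical lidar topics; /scan_front and /scan_back are placeholder names:

<!-- Assumption: the two scan topic names are placeholders; use your robot's actual topics. -->
<arg name="laser_scan_topics" default="/scan_front+/scan_back" />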

drl uses odometry (retrieved from TF base_link_frame_id -> odom_frame_id) to estimate how much the robot moved in relation to the last estimated pose.
As such, if your robot has good odometry, drl should recover the tracking when the lidars start seeing environment geometry.

If other modules use the map_frame_id -> base_link_frame_id transform, you can increase publish_last_pose_tf_timeout_seconds to specify for how long the last pose computed by drl keeps being republished in TF (allowing you to control for how long the last pose is considered "valid").

You can also increase the values that control when the tracking is considered lost, such as pose_tracking_timeout, pose_tracking_minimum_number_of_failed_registrations_since_last_valid_pose and pose_tracking_maximum_number_of_failed_registrations_since_last_valid_pose.
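
As a hedged sketch of tuning these, the values below are only illustrative and the flat parameter placement is an assumption; confirm the exact names, defaults and nesting in drl_configs.yaml:

<group ns="dynamic_robot_localization">
	<!-- Illustrative values only; adjust to how long you want the last pose to remain valid. -->
	<param name="publish_last_pose_tf_timeout_seconds" type="double" value="3.0" />
	<param name="pose_tracking_timeout" type="double" value="5.0" />
	<param name="pose_tracking_minimum_number_of_failed_registrations_since_last_valid_pose" type="int" value="25" />
	<param name="pose_tracking_maximum_number_of_failed_registrations_since_last_valid_pose" type="int" value="50" />
</group>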

Have a nice day,


carlosmccosta commented on May 18, 2024

Hello,

The warning is informing you that an alignment of the sensor data with the map was discarded because the percentage of outliers was above the specified threshold.

You can find more information in the sections about outlier detection and transformation validators.

These modules are used to validate the point cloud alignment and can be configured according to your requirements, such as:

  • max_inliers_distance -> should be above the resolution of your map and should take into account the accuracy of your sensor.
  • max_outliers_percentage -> specifies the maximum percentage of outliers that you expect in the environment (for example, if someone is walking around the robot or geometry was moved / added after you made the map, those lidar points will have no correspondence in the map). This threshold specifies how different you expect the lidar data to be from the map (see the sketch below).
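
For illustration, a hedged sketch of overriding these two thresholds; the values are only examples and the flat parameter placement is an assumption, since in practice they sit inside the outlier detector and transformation validator sections documented in drl_configs.yaml:

<group ns="dynamic_robot_localization">
	<!-- Example values only; adjust to your map resolution, sensor accuracy and expected environment changes. -->
	<param name="max_inliers_distance" type="double" value="0.1" />
	<param name="max_outliers_percentage" type="double" value="0.75" />
</group>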

You need to compile the robot_localization_tools package inside your catkin workspace.

I started the bag file at 35 seconds because at 0 seconds there was no sensor data in the lidar. At 35 seconds the lidar started seeing a corner in the warehouse, allowing drl to start the pose tracking.

If you start the bag at a different time, you need to update the initial pose in the launch file or use the rviz 2D pose estimate.
Since you have ground truth, you can easily find the initial pose with:

rosbag play --clock expt6.bag --start XX --rate 0.1
rosrun tf tf_echo map base_link_gt

Have a nice day,


carlosmccosta commented on May 18, 2024

Hello,

After compiling the robot_localization_tools package you should have this message available:

rosmsg show robot_localization_tools/LocalizationError 
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
geometry_msgs/Vector3 translation_errors
  float64 x
  float64 y
  float64 z
float64 translation_error
geometry_msgs/Vector3 rotation_error_axis
  float64 x
  float64 y
  float64 z
float64 rotation_error_angle

You should use the robot_localization_error.launch that I had attached in the zip file, since it has the frame_id configurations for your use case.

<?xml version="1.0" encoding="UTF-8"?>
<launch>
	<arg name="map_frame_id" default="map_drl" />
	<arg name="odom_frame_id" default="odom" />
	<arg name="base_link_frame_id" default="base_link" />
	<arg name="localization_ground_truth_frame_id" default="map" />
	<arg name="localization_base_link_ground_truth_frame_id" default="base_link_gt" />
	<arg name="use_degrees_in_angles" default="true" />
	<arg name="use_millimeters_in_distances" default="true" />
	<arg name="use_6_dof" default="false" />
	<arg name="publish_rate" default="0.0" />										<!-- if > 0 uses regular tf sampling (and also computes error when a new pose arrives), otherwise only computes error when receives poses from topics -->
	<arg name="pose_publishers_sampling_rate" default="10" />						<!-- collect 1 out of pose_publishers_sampling_rate poses (< 0 or 1 -> collects all poses) -->
	<arg name="pose_stamped_topic" default="/dynamic_robot_localization/localization_pose" />									<!-- empty to disable listening to geometry_msgs::PoseStamped messages -->
	<arg name="pose_stamped_with_covariance_pose_topic" default="" />	<!-- empty to disable listening to geometry_msgs::PoseWithCovarianceStamped messages -->
	<arg name="pose_error_publish_topic" default="localization_error" />
	<arg name="tf_lookup_timeout" default="0.15" />
	<arg name="use_time_0_when_querying_tf" default="false" />
	<arg name="localization_poses_output_filename" default="" />
	<arg name="odom_only_poses_output_filename" default="" />
	<arg name="ground_truth_poses_output_filename" default="" />
	<arg name="save_poses_timestamp" default="true" />
	<arg name="save_poses_orientation_quaternion" default="false" />
	<arg name="save_poses_orientation_vector" default="false" />

	<node pkg="robot_localization_tools" type="rlt_robot_localization_error_node" name="rlt_robot_localization_error" output="screen">
		<param name="invert_tf_from_map_ground_truth_frame_id" type="bool" value="false" />
		<param name="map_ground_truth_frame_id" type="str" value="$(arg localization_ground_truth_frame_id)" />
		<param name="map_frame_id" type="str" value="$(arg map_frame_id)" />
		<param name="odom_frame_id" type="str" value="$(arg odom_frame_id)" />
		<param name="base_link_frame_id" type="str" value="$(arg base_link_frame_id)" />
		<param name="base_link_ground_truth_frame_id" type="str" value="$(arg localization_base_link_ground_truth_frame_id)" />
		<param name="publish_rate" type="double" value="$(arg publish_rate)" />
		<param name="pose_publishers_sampling_rate" type="int" value="$(arg pose_publishers_sampling_rate)" />
		<param name="tf_lookup_timeout" type="double" value="$(arg tf_lookup_timeout)" />
		<param name="use_time_0_when_querying_tf" value="$(arg use_time_0_when_querying_tf)" />
		<param name="localization_poses_output_filename" type="str" value="$(arg localization_poses_output_filename)" />
		<param name="odom_only_poses_output_filename" type="str" value="$(arg odom_only_poses_output_filename)" />
		<param name="ground_truth_poses_output_filename" type="str" value="$(arg ground_truth_poses_output_filename)" />
		<param name="save_poses_timestamp" type="bool" value="$(arg save_poses_timestamp)" />
		<param name="save_poses_orientation_quaternion" type="bool" value="$(arg save_poses_orientation_quaternion)" />
		<param name="save_poses_orientation_vector" type="bool" value="$(arg save_poses_orientation_vector)" />
		<param name="pose_stamped_topic" type="str" value="$(arg pose_stamped_topic)" />
		<param name="pose_stamped_with_covariance_pose_topic" type="str" value="$(arg pose_stamped_with_covariance_pose_topic)" />
		<param name="pose_error_publish_topic" type="str" value="$(arg pose_error_publish_topic)" />
		<param name="pose_error_odometry_publish_topic" type="str" value="odometry_error" />
		<param name="localization_poses_publisher_topic" type="str" value="localization_poses" />
		<param name="odometry_poses_publisher_topic" type="str" value="odometry_poses" />
		<param name="ground_truth_poses_publisher_topic" type="str" value="groundtruth_poses" />
		<param name="use_degrees_in_angles" type="bool" value="$(arg use_degrees_in_angles)" />
		<param name="use_millimeters_in_distances" type="bool" value="$(arg use_millimeters_in_distances)" />
		<param name="use_6_dof" type="bool" value="$(arg use_6_dof)" />
	</node>
</launch>

drl updates the tracking pose when it aligns the lidar data with the map.
If there are no points in the lidar data, or the alignment is not within the validation thresholds (including the minimum number of points), then drl will not update the tracking pose.

Have a nice day,


carlosmccosta commented on May 18, 2024

Hello,

For using rgbd data, the tracking needs to be 6 DoF (x, y, z, roll, pitch, yaw) and you need to have a 3D map / point cloud of the environment.

6 DoF tracking is much less robust than 3 DoF (x, y, yaw) due to the extra 3 degrees of freedom and is also much more computationally intensive.
If you have a robot moving only on a floor, 3 DoF tracking with planar lidars is the most suitable approach.
As such, if possible, add lidars with high sensing range instead of rgbd sensors to your robot.

Have a nice day,


carlosmccosta commented on May 18, 2024

Hello,

For accurate and robust 3 DoF localization, long range 360º lidar sensing is the most used solution.
Achieving 360º sensing while ensuring movement safety usually requires one of:

  • 2 lidars at the ground level (1 in front and another in the back of the robot)
  • 1 safety laser at ground level in the direction in which the robot moves and a 360º lidar at a higher height (above the robot)

There are examples of both approaches in the photos in dynamic_robot_localization_tests, namely the guardian and jarvis platforms.

Have a nice day,


poornimajd commented on May 18, 2024

@carlosmccosta Thanks a ton for the detailed reply.

I was able to run it and get the same output as you showed in the video,
but I got this on the terminal:
[ WARN] [1627114942.029440612, 27351.156843860]: EuclideanTransformationValidator rejected new pose at time 27351.156843860 due to max_outliers_percentage ->
  correction translation: 0.109807 | (max_transformation_distance: 0.5)
  correction rotation: 0.0197412 | (max_transformation_angle: 0.7)
  new pose diff translation: 0.109654 | (max_new_pose_diff_distance: 0.7)
  new pose diff rotation: 0.0344563 | (max_new_pose_diff_angle: 1.1)
  root_mean_square_error: 0.0336053 | (max_root_mean_square_error: 0.04)
  root_mean_square_error_reference_pointcloud: 0 | (max_root_mean_square_error_reference_pointcloud: -1)
  outliers_percentage: 0.678049 | (max_outliers_percentage: 0.66)
  outliers_percentage_reference_pointcloud: 0 | (max_outliers_percentage_reference_pointcloud: -1)
  inliers_angular_distribution: 2 | (min_inliers_angular_distribution: 0.125)
  outliers_angular_distribution: -2 | (max_outliers_angular_distribution: 0.875)
[ WARN] [1627114942.029471883, 27351.156843860]: Discarded cloud because localization couldn't be calculated
[ WARN] [1627114942.029489326, 27351.156843860]: Failed point cloud processing with error [FailedPoseEstimation]
[ WARN] [1627114942.116789161, 27351.610560300]: [DefaultConvergenceCriteriaWithTime::hasConverged] Convergence time limit of 0.000938804 seconds exceeded (elapsed time: 0.000954) | Iteration: 28 | CorrespondencesCurrentMSE: 0.0197872

  1. Is there any source you refer to for understanding the above warning?
     In another terminal I then tried rostopic echo /localization_error, but I did not get any output; instead I got this:
     ERROR: Cannot load message class for [robot_localization_tools/LocalizationError]. Are your messages built?

  2. Could you please let me know what the problem is?
  3. Also, why should the bag file start from 35 seconds? Is there any specific reason, or can I start from any time I want?

Any help is greatly appreciated.
Thank you


poornimajd commented on May 18, 2024

Thanks for the detailed reply.
I will look into the parameters, thank you.

You need to compile the robot_localization_tools package inside your catkin workspace.

I did this, and when I do rostopic list I get the topics in the list, but rostopic echo /localization_error still does not give any output.

Also, I have some cases where the robot is navigating in the indoor environment and there are no / very few features, so the laser scan does not detect anything (it does not fall on any object / wall, etc.). In this case, will drl still be able to localize the robot accurately?


poornimajd commented on May 18, 2024

Thanks a lot for the help, I am now able to compute the error and even visualize the output in rviz as required.

The only problem is that, in the case where the laser scan does not fall on any object, there is no tracking of the pose, as you mentioned. Also, sometimes the error gets as bad as 50 m, maybe due to the repetitive nature of the world that I use. I have attached a screenshot below:

[screenshot: drl]

As seen, at the start the estimate aligns with the ground truth (blue), but the moment the bot takes a turn into more of a middle region, as shown below, the track is lost (here there is actually no object for the laser scan to fall on).

[screenshot: drl_edit]

After this, when it reaches the next portion of the world, the ground truth is on one side of the warehouse (blue) and the estimate is on the complete opposite side.
I tried changing some parameters, such as:

  1. Increased max_inliers_distance to 0.05
  2. Reduced minimum_number_of_points_in_ambient_pointcloud to 1
  3. Increased max_number_of_ransac_iterations in recovery2d.yaml and initial_pose_estimation_large_map_2d.yaml

I also tried changing one or two more parameters as suggested in other issues (#10).

But I think the main problem is that there is no object in the scan data, right? And the world is also not so unique?
So in such cases drl may not track, as you mentioned, no matter what I change the parameters to, right?

Then maybe DRL is very accurate with good features and a unique environment, as it uses feature matching.

Thank you a lot for the help. Any suggestion is appreciated.
(P.S. The application I am looking at is localization for docking stations in a warehouse.)

Thank you


poornimajd commented on May 18, 2024

Thanks for the quick and detailed reply.
I will try to check the parameters you suggested.

drl can only estimate the pose when it has sensor / lidar data.
If possible, use a lidar with more range to avoid this issue.

Since I cannot increase the lidar range, maybe I can use an rgbd camera, right?
By giving the topic names as you suggested below:

You just need to give all topics separated by + in laser_scan_topics.

Since an rgbd camera has a larger field of view, it will have a point cloud for those places where the laser cannot really detect features. Will this track the pose in areas like the one indicated by the circle shown before?

Thank you


poornimajd commented on May 18, 2024

Thanks a lot for your help. Maybe I can try increasing the range as you suggested.

I just had a small quick question: if not, do you by any chance have suggestions for indoor mobile 2D localization with high accuracy, maybe based on using multiple sensors?

Thank you

