austinderic / yak

License: MIT License

Languages: CMake 3.11%, C++ 85.99%, Cuda 10.90%
Topics: tsdf, ros, ros-industrial, meshing, robotics, surface-reconstruction

yak's Introduction

yak

yak (yet another kinfu) is a library and ROS wrapper for Truncated Signed Distance Fields (TSDFs).

A TSDF is a probabilistic representation of a solid surface in 3D space. It's a useful tool for combining many noisy, incomplete sensor readings into a single smooth and complete model.

To break down the name:

Distance field: Each voxel in the volume contains a value that represents its metric distance from the closest point on the surface. Voxels very far from the surface have high-magnitude distance values, while those near the surface have values approaching zero.

Signed: Voxels outside the surface have positive distances, while voxels inside the surface have negative distances. This allows the representation of solid objects. The distance field becomes a gradient that shifts from positive to negative as it crosses the surface.

Truncated: Only the distance values of voxels very close to the surface are regularly updated. Distances beyond a certain threshold have their values capped at +/- 1. This decreases the cost of integrating new readings, since not every voxel in the volume needs to be updated.
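As a rough illustration of how a single voxel is updated (a minimal numpy sketch of the general KinFu-style update rule, not yak's CUDA implementation; the truncation distance and weight cap here are made-up values):

    import numpy as np

    def update_voxel(tsdf, weight, sdf, truncation=0.03, max_weight=64.0):
        # tsdf, weight: values currently stored in the voxel
        # sdf: signed distance (meters) of the voxel from the observed surface
        # Truncate: clamp the normalized distance to [-1, 1]
        d = np.clip(sdf / truncation, -1.0, 1.0)
        # Weighted running average of the old value and the new measurement
        new_tsdf = (tsdf * weight + d) / (weight + 1.0)
        new_weight = min(weight + 1.0, max_weight)
        return new_tsdf, new_weight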

yak handles two very different use cases. It can reconstruct from an RGBD camera moved around by a human, without any knowledge of the camera's pose relative to the global frame. It can also reconstruct from a sensor mounted on a robot arm, using pose hints provided by TF and robot kinematics. The idea is that the second case doesn't need to deduce sensor motion by comparing the most recent reading to previous readings via ICP, so it should work better in situations with incomplete sensor readings.

The human-guided situation should work out of the box without any other packages. You might need to force the sensor to reset to get the volume positioned in a desirable orientation around your target. The easiest way to do this is to cover and uncover the camera lens.

The robot-assisted situation is currently partially hardcoded to use the sensors and work cell models from the Godel blending project. This will change soon to make it more generalized!

yak_meshing

Aluminum part reconstructed with yak and meshed with yak_meshing

yak_meshing is a ROS package to mesh TSDF volumes generated by Kinect Fusion-like packages.

Meshing happens through the /get_mesh service, which in turn calls the kinfu_ros /get_tsdf service. yak_meshing_node expects a serialized TSDF voxel volume: a list of TSDF values and weights for every occupied voxel, along with the coordinates of each occupied voxel. OpenVDB's voxel meshing algorithm generates a triangular mesh along the zero-value isosurface of the TSDF volume. The mesh is saved as a .obj file, which can be viewed and manipulated in a program like MeshLab or Blender.
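yak_meshing relies on OpenVDB for the actual meshing, but the zero-isosurface idea can be sketched in a few lines. This is only an illustration using numpy and scikit-image's marching cubes; the argument names (coords, values, dims) are hypothetical and do not correspond to the actual message fields:

    import numpy as np
    from skimage import measure

    def mesh_sparse_tsdf(coords, values, dims):
        # coords: (N, 3) integer indices of occupied voxels
        # values: (N,) TSDF values in [-1, 1]
        # dims:   (nx, ny, nz) size of the full voxel volume
        # Scatter the sparse voxel list into a dense grid; unobserved voxels
        # default to +1, i.e. truncated "far outside the surface".
        grid = np.ones(dims, dtype=np.float32)
        grid[coords[:, 0], coords[:, 1], coords[:, 2]] = values
        # Extract a triangular mesh along the zero-value isosurface.
        verts, faces, normals, _ = measure.marching_cubes(grid, level=0.0)
        return verts, faces, normals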

nbv_planner

Candidate poses generated and evaluated by nbv_planner

nbv_planner is a ROS package that performs Next Best View (NBV) analysis using data provided by RGBD cameras like the Asus Xtion. It uses octomap to track voxel occupancy and integrate new readings.

Call the /get_nbv service to get a sorted list (best to worst) of candidate poses near the volume that could expose unknown voxels. Currently, poses are evaluated by casting rays corresponding to the camera's field of view into the octomap: more hits on unknown voxels means a better view.

Running rosrun nbv_planner exploration_controller_node starts an exploration routine that tries to move the robot to views that expose unknown regions of a user-specified volume, using the NBV evaluation described above. The octomap server must be running.
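The scoring idea behind /get_nbv can be sketched like this (a conceptual Python sketch, not the octomap-based implementation; the voxel-set lookups and the fixed-step ray marching are simplifying assumptions):

    import numpy as np

    def score_view(origin, ray_dirs, occupied, unknown, voxel_size=0.02, max_range=2.0):
        # origin:   (3,) candidate camera position
        # ray_dirs: (N, 3) unit vectors spanning the camera's field of view
        # occupied, unknown: sets of integer voxel keys (tuples)
        hits_on_unknown = 0
        steps = int(max_range / voxel_size)
        for d in ray_dirs:
            for i in range(1, steps):
                point = origin + i * voxel_size * d
                key = tuple(np.floor(point / voxel_size).astype(int))
                if key in occupied:
                    break                  # ray is blocked by a known surface
                if key in unknown:
                    hits_on_unknown += 1   # this view would expose an unknown voxel
        return hits_on_unknown             # more hits on unknowns = better view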

Operating Instructions for Human-Guided Reconstruction

  1. Start TSDF/KinFu processes: roslaunch yak launch_xtion_default.launch
  2. Launch the drivers for the RGBD camera. For the Asus Xtion, this is roslaunch openni2_launch openni2.launch.
  3. Start mapping! Since yak doesn't have any way to relate the pose of the camera to the global frame, the initial position of the volume will be centered in front of the camera. You might have to force the camera to reset a few times to get the volume positioned where you need it.
  4. When you decide that the reconstruction is good enough: rosservice call /get_mesh
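Step 4 can also be triggered from a node. A minimal rospy sketch, assuming /get_mesh takes no arguments (check the real type with rosservice type /get_mesh):

    import rospy
    from std_srvs.srv import Empty  # assumed service type

    rospy.init_node('request_mesh')
    rospy.wait_for_service('/get_mesh')
    get_mesh = rospy.ServiceProxy('/get_mesh', Empty)
    get_mesh()  # serializes the TSDF, meshes it, and writes the .obj to disk

The same call applies to the final step of the autonomous workflow below.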

Operating Instructions for Autonomous Exploration and Reconstruction

  1. If you intend to use an actual robot, make sure that its state and motion servers are running, and that autonomous motion is allowed (deadman switch engaged, or auto mode).
  2. roslaunch a moveit planning/execution launch file. My command looks like: roslaunch godel_irb2400_moveit_config moveit_planning_execution.launch robot_ip:=192.168.125.1 sim:=False use_ftp:=False. Wait for rviz and moveit to start up.
  3. Launch the TSDF reconstruction nodes. For example, roslaunch yak launch_xtion_robot.launch.
  4. Launch the drivers for the RGBD camera. For the Asus Xtion, this is roslaunch openni2_launch openni2.launch.
  5. Start the octomap server: roslaunch nbv_planner octomap_mapping.launch
  6. When you want to start exploration: rosrun nbv_planner exploration_controller_node
  7. When you decide that the reconstruction is good enough: rosservice call /get_mesh

Build with Docker:

nvidia-docker run -v <absolute path to your yak workspace>:/yak_ws rosindustrial/yak:kinetic catkin build --workspace /yak_ws -DCMAKE_LIBRARY_PATH=/usr/local/nvidia/lib64/

yak's People

Contributors

aadityasaraiya, akgoins, austinderic, schornakj


yak's Issues

Meshing doesn't work on voxel volumes with non-identical length, width, and height

If the numbers of voxels in the X, Y, and Z dimensions are not identical, at some point between serialization and meshing the voxel volume is traversed in the wrong order, and the occupied voxels end up at incorrect coordinates relative to each other. It would be very useful to have more freedom when specifying the dimensions and extent of the volume.
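For reference, the usual fix is to agree on a single linearization everywhere. A minimal sketch (the row/col/sheet naming follows the TSDF message description below; the ordering itself is an assumption):

    def voxel_index(row, col, sheet, num_rows, num_cols):
        # Serialization and meshing must both use this same ordering.
        return row + col * num_rows + sheet * num_rows * num_cols

    def voxel_coords(index, num_rows, num_cols):
        # Inverse mapping, used when rebuilding the volume before meshing.
        sheet, rem = divmod(index, num_rows * num_cols)
        col, row = divmod(rem, num_rows)
        return row, col, sheet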

Serialized TSDF exceeds 1 GB size limit for ROS messages.

Large volumes (hundreds of millions of voxels) with few empty voxels don't successfully serialize into messages because they're too big, even after improvements were made to storage efficiency.

Option 1: Take another pass at improving the efficiency of the TSDF message. It currently stores 32 bits of TSDF data and three 16-bit row/col/sheet coordinates per voxel. It might be better to store the data as a map with the coordinate as the key (see the sketch below).
Option 2: Save to a file on disk instead of packaging the data as a message. This has the added benefit of giving better access to intermediate processing steps, but would require restructuring how the get_mesh service works.
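A minimal sketch of Option 1's coordinate-as-key idea (the packing and field layout are assumptions, not the current message definition):

    import struct

    def pack_key(row, col, sheet):
        # Pack three 16-bit coordinates into one integer key.
        return (row & 0xFFFF) | ((col & 0xFFFF) << 16) | ((sheet & 0xFFFF) << 32)

    def serialize_sparse_tsdf(voxels):
        # voxels: dict mapping (row, col, sheet) -> (tsdf, weight)
        blob = bytearray()
        for (r, c, s), (tsdf, weight) in voxels.items():
            blob += struct.pack('<Qff', pack_key(r, c, s), tsdf, weight)
        return bytes(blob)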

terminate called after throwing an instance of 'tf2::LookupException'

Hi!

I am Aaditya Saraiya and I am currently working on the Google Summer of Code 2018 project with ROS Industrial under the mentorship of @Levi-Armstrong.

I want to use the yak package in my project for 3D reconstruction with a Kinect mounted on a robotic manipulator. I tried the roslaunch yak launch_xtion_robot.launch command after launching the RViz planning/execution file, and I get this error:

terminate called after throwing an instance of 'tf2::LookupException'
what():  "ensenso_sensor_optical_frame" passed to lookupTransform argument source_frame does not exist.

I am currently using a simulated Kinect from Gazebo. I tried changing the topic names to match the topic names from my package, but that didn't seem to solve the issue. Can you please guide me on how to resolve this error and on how I should start using this package? Some hints on which parameters I should change just to get started would be very helpful.

Edit: I have just added the TF tree for reference:
frames1.pdf

Thanks in advance,
Aaditya Saraiya

Multiple octomap views generated and octomap flipped

Hi!

After solving this issue, I was able to run all the launch files/nodes and started to analyze the Kinect Fusion process. However, I am not able to understand why multiple octomaps in different views are being generated, and why one view has an octomap which is flipped.

I am adding the different steps of the process and the views in Gazebo and RViz so you can get a better visualisation of the problem.


After launching Gazebo and moveit_planning_execution.launch

Note: The octomap is straight and properly generated with respect to the object placed in Gazebo. The first octomap isn't generated by the yak package and was created separately to check whether the Kinect camera is working properly in simulation.

gazebo_rviz_init


After roslaunch yak launch_gazebo_robot.launch

Note: Multiple octomaps are generated in all 3 directions, even though there is just one sphere in the view, as shown in the earlier picture. Could this be because the tracking isn't working?

multi_octomaps


After roslaunch nbv_planner octomap_mapping.launch

initialise_volume_markers


After rosrun nbv_planner exploration_controller_node

Note:

  1. The raycasting generates all the rays and hit rays near the robot base itself. Is this how it's supposed to happen?
  2. The robot doesn't move as it's supposed to when running the exploration_controller_node.
  3. I receive a warning message as follows:
[ INFO] [1530883513.318211330, 866.408000000]: Unable to solve the planning problem
[ERROR] [1530883513.318379591, 866.408000000]: Couldn't compute a free-space trajectory
[ INFO] [1530883513.318441594, 866.408000000]: Pose index: 28
[ INFO] [1530883513.318512142, 866.408000000]: Trying next best pose: position: 
  x: -0.218574
  y: 0.793138
  z: 0.303502
orientation: 
  x: 0.77676
  y: 0.211575
  z: 0.155896
  w: 0.572343

[ERROR] [1530883513.318788475, 866.408000000]: Found empty JointState message
[ WARN] [1530883513.318873192, 866.408000000]: It looks like the planning volume was not specified.
  4. In the MoveIt! terminal I get the following warning message: [ WARN] [1530883644.819044064, 925.199000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero. Setting to 1.0.

raycasting


I apologise for the long message. I just wanted to describe things thoroughly, since I am not able to pinpoint what exactly is causing this error.

Thanks in advance,
Aaditya Saraiya

@Levi-Armstrong

raycast_image corrupted when using gazebo camera

I'm trying to use yak with Gazebo. I have successfully run yak with an Xtion camera. However, when I launch yak using the color and depth messages from a Kinect camera in Gazebo, the /kinfu/raycast_image looks corrupted, like this:
image

Do you have any idea what the problem could be?

Robot URDF needs to be updated to match position of Xtion

Using the Ensenso frame works OK for translation. Since the Xtion camera frame isn't in the same position, errors are introduced during rotation which impede accurate reconstruction. There's an established procedure for calibrating sensors mounted on robot arms, right?

Examples

  • Example data
  • Example instructions

TODOs

  • Directory for generated meshes is hardcoded; change to be a ROS parameter.
  • Serialized TSDF volume is very inefficient, since it's a dense representation of a sparse volume.
  • Change serialization to make better use of space (requires work on the kinfu side too).
  • Mesh normals are always inverted. This can easily be fixed manually in MeshLab, but that's contrary to the spirit of things! Programmatically set normal directions when writing the .obj (see the sketch after this list).
  • Currently uses OpenVDB's standard meshing algorithm. Adaptive meshing could be more efficient.
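For the inverted-normals TODO, reversing each face's vertex winding before writing the .obj is usually enough. A minimal sketch (assuming faces are stored as lists of vertex indices):

    def flip_face_winding(faces):
        # Reversing the vertex order of each face flips its normal direction
        # in the written .obj, so no manual MeshLab pass is needed.
        return [face[::-1] for face in faces]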

No robot motion after relaxing camera pose constraints

Hi,

As a follow-up to this issue, I removed the camera pose constraints applied to the motion planner so it could plan and execute the array of poses provided by the Next Best View planner.

With the constraints relaxed, the robot was able to plan considerably more paths. However, a good number of paths failed to execute because the shoulder_link or the forearm_link were in collision with each other or with the table_link, which led to the robot landing in the following position.

the message after which robot crashed

The issues seem to be more related to the lack of maneuverability of the manipulator described by the URDF itself.

Another weird behavior is that when myworkcell_planning_execution is launched, the TF tree seems correct.

tf_pre_kinfu

However, after starting the TSDF reconstruction node, the TF tree is shifted down by a certain amount and certain frames get messed up.

tf_post_kinfu

Further analysis is required in order to understand where the issue lies.

Any comments would be very valuable.
Thanks!

@Levi-Armstrong

TSDF loses out-of-view surfaces when new data is integrated

If I'm mapping a large part and collect a lot of data on one end while the other end is outside the sensor field of view, the existing data for out-of-view surfaces vanishes as new data accumulates. This is bad, since I'd like to be able to model big parts that I can't see all at once!

Use known transform between camera and voxel volume frames instead of estimating

Current approach when using pose hints:
[in kinfu_server]

  • Get newest camera pose from TF via robot arm kinematics.
  • Calculate camera motion using current pose and previous pose.
  • Convert from TF transform to OpenCV Affine3f
  • Pass camera motion transform to kinfu
    [in kinfu]
  • Correct camera motion transform via ICP
  • Update estimated transform between camera pose and volume using camera motion
  • Integrate new measurements using this updated estimated transform

A better approach, which would avoid estimating a transform that we already know canonically (see the sketch after this list):
[in kinfu_server]

  • Get transform between camera frame and volume frame
  • Calculate camera motion using new transform and previous transform
  • Pass camera pose transform and camera motion transform to kinfu
    [in kinfu]
  • Correct camera motion via ICP
  • Update known pose transform using corrected camera motion
  • Integrate new measurements using updated known transform
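A minimal rospy/tf2 sketch of the proposed approach (the frame names, loop rate, and the hand-off to the integrator are placeholders, not yak's actual interfaces):

    import numpy as np
    import rospy
    import tf2_ros
    from tf.transformations import quaternion_matrix

    def to_matrix(t):
        # geometry_msgs/TransformStamped -> 4x4 homogeneous matrix
        q = t.transform.rotation
        m = quaternion_matrix([q.x, q.y, q.z, q.w])
        m[0:3, 3] = [t.transform.translation.x,
                     t.transform.translation.y,
                     t.transform.translation.z]
        return m

    rospy.init_node('camera_volume_tf_sketch')
    buf = tf2_ros.Buffer()
    listener = tf2_ros.TransformListener(buf)

    prev = None
    rate = rospy.Rate(30)
    while not rospy.is_shutdown():
        try:
            # Placeholder frame names; use the volume and camera frames of your cell.
            t = buf.lookup_transform('volume_frame', 'camera_depth_optical_frame',
                                     rospy.Time(0), rospy.Duration(0.5))
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                tf2_ros.ExtrapolationException):
            rate.sleep()
            continue
        curr = to_matrix(t)
        if prev is not None:
            motion = np.linalg.inv(prev).dot(curr)  # camera motion since last reading
            # pass `curr` (known camera-to-volume pose) and `motion` to the integrator
        prev = curr
        rate.sleep()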

Unable to start the node

After building with Docker, I am unable to source the workspace or run the package. If you could elaborate on the steps needed to build and run the program, that would be very helpful.
