austinderic / yak
yak (yet another kinfu) is a library and ROS wrapper for Truncated Signed Distance Fields (TSDFs).
License: MIT License
Hi!
I am Aaditya Saraiya and I am currently working on the Google Summer of Code 2018 project with ROS Industrial under the mentorship of @Levi-Armstrong.
I want to use the yak package in my project for 3-D reconstruction with a Kinect mounted on a robotic manipulator. I tried the roslaunch yak launch_xtion_robot.launch command after launching the RViz planning/execution file, and I get this error:
terminate called after throwing an instance of 'tf2::LookupException'
what(): "ensenso_sensor_optical_frame" passed to lookupTransform argument source_frame does not exist.
I am currently using a simulated Kinect from Gazebo. I tried changing the topic names to match the topic names from my package, but that didn't seem to solve the issue. Can you please guide me on how to resolve this error and on how to get started with this package? Some hints on which parameters to change first would be very helpful.
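As a minimal way to check whether the source frame yak expects is actually being published, something like the following sketch can be used (the frame names here are assumptions based on a default Gazebo Kinect; substitute the ones from your own tf tree):

```cpp
// Minimal sketch: verify that the frame yak expects exists in TF before the
// reconstruction node tries to look it up. Frame names are assumptions.
#include <ros/ros.h>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_listener.h>
#include <geometry_msgs/TransformStamped.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "check_camera_frame");
  tf2_ros::Buffer buffer;
  tf2_ros::TransformListener listener(buffer);

  try
  {
    // Wait up to 5 s for the transform to become available.
    geometry_msgs::TransformStamped tf = buffer.lookupTransform(
        "base_link", "camera_depth_optical_frame", ros::Time(0),
        ros::Duration(5.0));
    ROS_INFO("Frame found; translation x = %f", tf.transform.translation.x);
  }
  catch (const tf2::TransformException& ex)
  {
    // A tf2::LookupException like the one above lands here.
    ROS_ERROR("Lookup failed: %s", ex.what());
  }
  return 0;
}
```

If the lookup fails here too, the frame yak is configured with simply isn't in your tf tree and needs to be remapped to the simulated camera's frame.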
Edit: Have just added the tf tree for reference
frames1.pdf
Thanks in advance,
Aaditya Saraiya
Dragging the translation arrows that overlap with the box edge lines consistently produces a segfault. Workaround for now is to only use the arrows pointing away from the box.
Hi!
So after solving this issue, I was able to run all the launch files/nodes and started to analyze the Kinect Fusion process. However, I am not able to understand why multiple octomaps are being generated in different views, and why the octomap in one view is flipped.
I am adding the different steps of the process and the views in Gazebo and RViz so you can get a better visualisation of the problem.
After launching Gazebo and moveit_planning_execution.launch
Note: The Octomap is straight and properly generated with respect to the object placed in Gazebo. The first Octomap isn't generated by the yak package; it was created separately to check whether the Kinect camera is working properly in simulation.
After roslaunch yak launch_gazebo_robot.launch
Note: Multiple octomaps are generated in all 3 directions, even though there is just one sphere in the view, as shown in the earlier picture. Could this be because the tracking isn't working?
After roslaunch nbv_planner octomap_mapping.launch
After rosrun nbv_planner exploration_controller_node
Note - output from exploration_controller_node:
[ INFO] [1530883513.318211330, 866.408000000]: Unable to solve the planning problem
[ERROR] [1530883513.318379591, 866.408000000]: Couldn't compute a free-space trajectory
[ INFO] [1530883513.318441594, 866.408000000]: Pose index: 28
[ INFO] [1530883513.318512142, 866.408000000]: Trying next best pose: position:
x: -0.218574
y: 0.793138
z: 0.303502
orientation:
x: 0.77676
y: 0.211575
z: 0.155896
w: 0.572343
[ERROR] [1530883513.318788475, 866.408000000]: Found empty JointState message
[ WARN] [1530883513.318873192, 866.408000000]: It looks like the planning volume was not specified.
[ WARN] [1530883644.819044064, 925.199000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero. Setting to 1.0.
I apologise for the long message; I just wanted to describe things in detail, as I am not able to pinpoint what exactly is causing this error.
Thanks in advance,
Aaditya Saraiya
If the number of voxels in the X, Y, and Z dimension are not identical, at some point between serialization and meshing the voxel volume is traversed in the wrong order and the occupied voxels end up in incorrect coordinates relative to each other. It would be very useful to have more freedom when specifying the dimensions and extent of the volume.
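A hedged illustration of the failure mode (this is not the actual yak code, just a sketch of how a traversal-order mismatch behaves):

```cpp
// Sketch of two linearization conventions for a volume of size (dimX, dimY, dimZ).
// If the serializer uses one and the mesher uses the other, the mismatch is a
// mere transpose when dimX == dimY == dimZ, but with unequal dimensions the
// strides disagree and voxels land at wrong coordinates relative to each other.
#include <cstddef>

// Convention A: x varies fastest, then y, then z.
inline std::size_t indexA(std::size_t x, std::size_t y, std::size_t z,
                          std::size_t dimX, std::size_t dimY)
{
  return x + dimX * (y + dimY * z);
}

// Convention B: z varies fastest. For a cubic volume this reads the data back
// transposed but still self-consistent; for a non-cubic one it scrambles it.
inline std::size_t indexB(std::size_t x, std::size_t y, std::size_t z,
                          std::size_t dimZ, std::size_t dimY)
{
  return z + dimZ * (y + dimY * x);
}
```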
Hi,
As a follow-up to this issue, I removed the camera pose constraints applied to the motion planner to plan and execute the array of poses provided by the Next Best View planner.
With the constraints relaxed, the robot was able to plan considerably more paths. However, a good number of paths failed to execute because the shoulder_link or the forearm_link were in collision with each other or with the table_link, which led to the robot landing in the following position.
The issues seem to be more related to the lack of maneuverability with the manipulator URDF itself.
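For reference, the change amounts to something like the sketch below ("manipulator" is an assumed planning group name; the actual planner setup in the package is structured differently):

```cpp
// Sketch: plan to a pose with the camera pose path constraints removed.
// The planning group name "manipulator" is an assumption.
#include <moveit/move_group_interface/move_group_interface.h>
#include <geometry_msgs/Pose.h>

void planWithoutCameraConstraints(const geometry_msgs::Pose& target)
{
  moveit::planning_interface::MoveGroupInterface group("manipulator");
  group.clearPathConstraints();  // drop the camera pose constraint entirely
  group.setPoseTarget(target);

  moveit::planning_interface::MoveGroupInterface::Plan plan;
  if (group.plan(plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS)
    group.execute(plan);
}
```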
Another weird behavior is that when myworkcell_planning_execution is launched, the TF tree seems correct.
However, after starting the TSDF reconstruction node, the TF tree is shifted down by a certain amount and some frames get messed up.
Further analysis is required to understand where the issue lies.
Any comments would be very valuable.
Thanks!
Current approach when using pose hints:
[in kinfu_server]
A better approach that would avoid estimating a transform that we already know canonically:
[in kinfu_server]
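A rough sketch of what that better approach could look like, pulling the pose hint straight from TF instead of re-estimating it (frame names are assumptions, and this is not the actual kinfu_server code):

```cpp
// Sketch: derive the camera pose hint from the known robot kinematics via TF
// rather than estimating a transform we already have. Frame names
// ("volume_origin", "camera_depth_optical_frame") are assumptions.
#include <ros/ros.h>
#include <tf2_ros/buffer.h>
#include <tf2_eigen/tf2_eigen.h>
#include <Eigen/Geometry>

Eigen::Affine3d lookupCameraPoseHint(tf2_ros::Buffer& buffer)
{
  geometry_msgs::TransformStamped tf = buffer.lookupTransform(
      "volume_origin", "camera_depth_optical_frame", ros::Time(0),
      ros::Duration(0.5));
  return tf2::transformToEigen(tf);  // feed this to the tracker as the hint
}
```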
Using the Ensenso frame works OK for translation. Since the Xtion camera frame isn't in the same position, errors are introduced during rotation which impede accurate reconstruction. There's an established procedure for calibrating sensors mounted on robot arms, right?
Large volumes (hundreds of millions of voxels) with few empty voxels don't successfully serialize into messages because they're too big, even after improvements were made to storage efficiency.
Option 1: Take another pass at improving the efficiency of the TSDF message. It currently stores 32 bits of TSDF data plus three 16-bit row/col/sheet coordinates per voxel. It might be better to save it as a map with the coordinate as the key (see the sketch after Option 2).
Option 2: Save to a file on the disk instead of packaging as a message. Added benefit of giving better access to intermediate processing steps. Would need to restructure how get_mesh service works.
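A sketch of the data layout Option 1 suggests (types and packing here are hypothetical, not the existing message definition; whether it is a net win depends on how the keys and values end up serialized):

```cpp
// Sketch for Option 1: a sparse map keyed on packed voxel coordinates, so
// only occupied voxels are stored. Types and packing are hypothetical.
#include <cstdint>
#include <unordered_map>

// Pack the 16-bit row/col/sheet coordinates into one 64-bit key.
inline uint64_t packKey(uint16_t row, uint16_t col, uint16_t sheet)
{
  return (static_cast<uint64_t>(row) << 32) |
         (static_cast<uint64_t>(col) << 16) |
         static_cast<uint64_t>(sheet);
}

// Map from packed coordinate to the 32-bit TSDF payload for that voxel.
using SparseTsdf = std::unordered_map<uint64_t, uint32_t>;

inline void setVoxel(SparseTsdf& tsdf, uint16_t row, uint16_t col,
                     uint16_t sheet, uint32_t tsdf_data)
{
  tsdf[packKey(row, col, sheet)] = tsdf_data;
}
```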
If I'm mapping a large part and collect a lot of data on one end while the other end is outside the sensor field of view, the existing data for out-of-view surfaces vanishes as new data accumulates. This is bad, since I'd like to be able to model big parts that I can't see all at once!
After building with Docker, I am unable to source the workspace or run the package. If you could elaborate on the steps needed to build and run the program, that would be very helpful.
Need to initialize the TSDF volume at a non-origin position so that the volume starts within the field of view of the camera. Right now I have to move it manually, which is time-consuming.
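A sketch of one way that initial pose could be computed so the volume starts in view (the optical +Z convention and the 0.5 m standoff are assumptions):

```cpp
// Sketch: place the TSDF volume so its center sits in front of the camera
// instead of at the world origin. Standoff and axis convention are assumptions.
#include <Eigen/Geometry>

Eigen::Affine3d initialVolumePose(const Eigen::Affine3d& camera_in_world,
                                  double volume_length_m)
{
  // Point half a meter plus half the volume extent along the camera's
  // optical (+Z) axis to find the desired volume center in world coordinates.
  const Eigen::Vector3d center =
      camera_in_world * Eigen::Vector3d(0.0, 0.0, 0.5 + volume_length_m / 2.0);

  // The volume pose is taken at its minimum corner, so shift back by half
  // the extent on each axis to center the volume on that point.
  Eigen::Affine3d pose = Eigen::Affine3d::Identity();
  pose.translation() = center - Eigen::Vector3d::Constant(volume_length_m / 2.0);
  return pose;
}
```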