
vrviz's Introduction

Visualization node for ROS using OpenVR

Example Screenshot

This code was built from the OpenVR example code and adapted to run in catkin and to display ROS messages in virtual space, running natively on Ubuntu. Not many message types are currently implemented, and in the future it may make more sense to turn this into an RViz plugin rather than continue re-implementing message types here.

If you end up using it, please cite us :)

@inproceedings{ONeill2019,
author = {O'Neill, J. and Ourselin, S. and Vercauteren, T. and {Da Cruz*}, L. and Bergeles*, C.},
booktitle = {Joint Workshop on New Technologies for Computer/Robot Assisted Surgery},
title = {{VRViz: Native VR Visualization of ROS Topics}},
year = {2019}
}

Prerequisites

The main dependency is SteamVR, which can be installed from Steam. Additionally, you will need a VR headset compatible with OpenVR. This code has only been tested with the HTC Vive, but other headsets supported by SteamVR, such as the Oculus Rift, may work as well.

Several library dependencies are included in the repo: the OpenVR library (in openvr_library) and SDL2 (in sdl2_library).

The code is designed to be used in ROS and has been tested with ROS Kinetic on Ubuntu 16.04. Instructions for installing ROS can be found here; apart from the Turtlebot demo, all ROS dependencies should be covered by ros-kinetic-desktop-full.

The non-ROS dependencies are GLEW for rendering and Assimp for loading URDF robot models with Collada meshes. These can be installed with rosdep or with:

sudo apt-get install libglew-dev libassimp-dev

Running with Steam Runtime

For now, this node must be run inside the Steam runtime (this can be automated, as shown in vrviz.launch):

rosrun --prefix '~/.steam/ubuntu12_32/steam-runtime/run.sh' vrviz vrviz_gl
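The launch-file automation mentioned above might look like the following sketch. This is an illustration, not the actual contents of vrviz.launch: the node name and the runtime path (which assumes a default Steam install) are assumptions.

```xml
<launch>
  <!-- Run vrviz_gl inside the Steam runtime via launch-prefix,
       mirroring the rosrun --prefix command above. -->
  <node pkg="vrviz" type="vrviz_gl" name="vrviz"
        launch-prefix="$(env HOME)/.steam/ubuntu12_32/steam-runtime/run.sh"
        required="true"/>
</launch>
```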

Demonstration Launch Files

For a demo showing a Turtlebot in Gazebo, install ros-kinetic-turtlebot-gazebo and run:

roslaunch vrviz turtlebot_demo.launch

This should load the robot, which can be controlled by pulling the trigger of the controller and then moving/rotating the wand while the trigger is held. Pressing the touchpad and moving the wand moves the world around relative to the user. This launch file fixes the grid to the odom frame. See an example YouTube video here.

For a demo showing a bagfile, download the demo_mapping.bag file from here and run:

roslaunch vrviz point_cloud_demo.launch bagfile:=/path/to/demo_mapping.bag

For a demo showing stereoscopic video, download the bbb_clip_sbs.mp4 file from here, install ros-kinetic-video-stream-opencv, and then run:

roslaunch vrviz video_demo.launch video_file:=/path/to/bbb_clip_sbs.mp4

Features

  • The default RViz 1m grid
  • Scaling the VR world relative to the ROS world (set by rosparam at startup)
  • Loading a robot model from the parameter server with load_robot:=true
  • Visualizing TFs (currently only TFs that have been referenced somewhere)
  • Visualizing PointCloud2 messages (works with color or intensity, otherwise sets constant color)
  • Visualizing stereo pair images (currently expects one side-by-side image, or duplicates the same image to each eye)
  • Visualizing camera image (projects out from camera location)
  • Visualizing visualization messages (all types have at least basic support, but may not behave identically to RViz)
    • to see a variety of markers, run roslaunch vrviz turtlebot_demo.launch silly_shapes:=true
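The side-by-side stereo handling listed above can be illustrated with a small sketch. This is pure Python, independent of the actual VRViz implementation: given a frame whose left half is the left-eye view, each eye simply receives one half of the columns.

```python
def split_side_by_side(image):
    """Split a side-by-side stereo frame into left-eye and right-eye images.

    `image` is a list of rows; each row is a list of pixel values.
    The left half of each row goes to the left eye, the right half
    to the right eye (assumes an even column count).
    """
    half = len(image[0]) // 2
    left = [row[:half] for row in image]
    right = [row[half:] for row in image]
    return left, right


# A 2x4 frame: columns 0-1 are the left view, columns 2-3 the right view.
frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
left, right = split_side_by_side(frame)
# left  -> [[1, 2], [5, 6]]
# right -> [[3, 4], [7, 8]]
```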

Limitations

  • The code is very much a work in progress, and many features are partially or inefficiently implemented.
  • The SteamVR support for Ubuntu is still in Beta, so be careful.
  • Currently only one topic of each message type is supported. This can be worked around by, for example, concatenating several point clouds in another node and then sending the combined cloud into VRViz. Multiple visualization-message publishers can also be mapped to the one receiver, since markers update based on namespace and ID.
  • Please feel free to open a feature request or submit a pull request; there are lots of little improvements we have not gotten around to, but if there is a desire for them we would be happy to try.
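The point-cloud concatenation workaround above can be sketched in plain Python. The helper below is hypothetical, not part of VRViz; a real node would subscribe to sensor_msgs/PointCloud2 topics and transform each cloud into a common fixed frame before merging.

```python
def concatenate_clouds(clouds):
    """Merge several point clouds (lists of (x, y, z) tuples) into one.

    Assumes every cloud has already been transformed into the same
    fixed frame; the merged cloud would then be published on the
    single topic that VRViz subscribes to.
    """
    merged = []
    for cloud in clouds:
        merged.extend(cloud)
    return merged


cloud_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
cloud_b = [(0.0, 1.0, 0.0)]
big_cloud = concatenate_clouds([cloud_a, cloud_b])
# big_cloud contains all 3 points
```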

Vulkan

As much as possible, the OpenGL-specific code has been kept separate in order to allow building with either Vulkan or OpenGL (the OpenVR OpenGL and Vulkan examples, from which this was built, are very similar). However, while the Vulkan executable (vrviz_vk) builds and links, it does NOT have any real functionality to speak of, and would require some work to bring it up to the level of vrviz_gl. It is currently commented out of the CMakeLists and does not get built.

vrviz's People

Contributors

john-j-oneill, ravibhadeshiya


vrviz's Issues

Naming of HTC Vive trackers properly

Hi,

I am trying to name an individual HTC tracker, which is currently named "vrviz_base_depricated_x_y". I was trying to use the unTrackedDevice variable with a valid pose as a reference to the connected devices, but couldn't figure out the order in which they are named.

thanks

Streamline Image Conversion

In rawImageCallback() we convert to OpenCV and then convert to OpenGL, which is probably slower than it could be. See if there is an easy way to pass raw_image_msg->data directly into glTexImage2D to avoid OpenCV.

Text doesn't face user

Currently TEXT_VIEW_FACING does render text (barely), but it has a fixed orientation, even though the name implies it should always face the user.
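One possible fix, sketched here rather than taken from the actual code, is to recompute the text marker's yaw each frame so it points at the headset position while staying upright:

```python
import math


def facing_yaw(text_pos, viewer_pos):
    """Yaw (radians) that rotates a text quad about the vertical axis
    so its front faces the viewer.

    Assumes a z-up convention; pitch and roll are left at zero so the
    text stays upright (a common billboard behaviour for view-facing text).
    """
    dx = viewer_pos[0] - text_pos[0]
    dy = viewer_pos[1] - text_pos[1]
    return math.atan2(dy, dx)


# Viewer directly along +x from the text: yaw is 0.
yaw = facing_yaw((0.0, 0.0, 1.0), (2.0, 0.0, 1.6))
```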

Dynamic Scaling

Currently scaling_factor is applied ad hoc when loading/importing the object/mesh/cloud. We could instead add a scale to the matrix in the call to glUniformMatrix4fv() in RenderScene(), so it could be updated at any time. This could then be changed by a message, a dynamic param, or even a GUI if things got fancy.
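The matrix change suggested above amounts to folding a uniform scale into the transform before it is uploaded. A minimal sketch in plain Python (4x4 matrix as nested lists; this illustrates the math only, not the actual VRViz code):

```python
def apply_uniform_scale(matrix, s):
    """Return a copy of a 4x4 transform with a uniform scale s applied
    to the upper-left 3x3 (rotation/scale) block, leaving the
    translation and homogeneous parts untouched."""
    scaled = [row[:] for row in matrix]
    for i in range(3):
        for j in range(3):
            scaled[i][j] *= s
    return scaled


identity = [[1.0, 0.0, 0.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]
m = apply_uniform_scale(identity, 2.0)
# m[0][0] == 2.0 while m[3][3] stays 1.0
```

Because the scale lives in the per-frame matrix rather than in the loaded geometry, it can be changed at any time without re-importing meshes or clouds.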

Add more marker types

Of the marker types, we currently only support CUBE, SPHERE, CYLINDER, TEXT_VIEW_FACING and TRIANGLE_LIST. That's only 5 of the 12 types, so the others could/should be supported as well.
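For reference, the constant values below are the standard visualization_msgs/Marker type constants from the ROS message definition; the supported set is the one listed above. A small sketch of the coverage:

```python
# visualization_msgs/Marker type constants (from the ROS message definition).
MARKER_TYPES = {
    "ARROW": 0, "CUBE": 1, "SPHERE": 2, "CYLINDER": 3,
    "LINE_STRIP": 4, "LINE_LIST": 5, "CUBE_LIST": 6,
    "SPHERE_LIST": 7, "POINTS": 8, "TEXT_VIEW_FACING": 9,
    "MESH_RESOURCE": 10, "TRIANGLE_LIST": 11,
}

# The types VRViz currently handles, per the issue above.
SUPPORTED = {"CUBE", "SPHERE", "CYLINDER", "TEXT_VIEW_FACING", "TRIANGLE_LIST"}
UNSUPPORTED = set(MARKER_TYPES) - SUPPORTED
# 5 supported, 7 remaining
```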

Implement Teleportation for 2D nav

This was originally requested by @jc-bm in an email. The idea is to implement teleportation for moving around the scene in 2D, similar to how SteamVR Home does it here. The same pointing method could also be used to send 2D nav goals, which would make for a cool Turtlebot demo with the nav stack running.

Make research paper available

Hello John,

Could you please make your paper "VRViz: Native VR Visualization of ROS Topics" available for reading?

Thanks a lot,
Rahul

Inverted VR display

Launching vrviz.launch displays an inverted view in the headset; TF appears to be inverted as well. (Screenshot: vr_inverted)

Shapes in URDF

Currently the robot-description parser only loads urdf::Geometry::MESH elements, but the geometry used in the visualization_msgs parser should correlate well with URDF shape geometry, e.g. this robot should be able to be visualized.

Error when running static video with video_stream_opencv

The following error occurs when playing a static video with no pose requirements.

================================================================================
REQUIRED process [vrviz-4] has died!
process has died [pid 16904, exit code -11, cmd /home/ros/.steam/steam/ubuntu12_32/steam-runtime/run.sh /home/ros/github/vrviz/devel/lib/vrviz/vrviz_gl -novblank /markers:=/markers /cloud:=/cloud /controller_twist:=/controller_twist __name:=vrviz __log:=/home/ros/.ros/log/19ccdc6e-2ec4-11e9-98f8-001fbc129174/vrviz-4.log].
log file: /home/ros/.ros/log/19ccdc6e-2ec4-11e9-98f8-001fbc129174/vrviz-4*.log
Initiating shutdown!
================================================================================

Camera visualization

The current image view is tied directly to the headset, giving more of an FPV-type experience. It would be more appropriate for ROS if camera feeds were displayed in 3D relative to their TF and projection info. There would still be a distance from the camera origin to project out to, but it would definitely be useful for combining cameras with the URDF or point clouds. It may be best to test this on turtlebot_demo, since that has a camera in the Kinect. We would also probably want to support a custom alpha for the image.
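Projecting an image quad out from the camera, as proposed above, can be sketched with the standard pinhole model. The helper below is hypothetical; in a real node, fx, fy, cx, cy would come from the camera's CameraInfo message.

```python
def image_corner_rays(width, height, fx, fy, cx, cy, distance):
    """Corners of an image quad placed `distance` metres in front of the
    camera, expressed in the camera's optical frame (x right, y down,
    z forward), using the pinhole model: X = (u - cx) / fx * Z."""
    corners = []
    for (u, v) in [(0, 0), (width, 0), (width, height), (0, height)]:
        x = (u - cx) / fx * distance
        y = (v - cy) / fy * distance
        corners.append((x, y, distance))
    return corners


# A 640x480 camera with the principal point at the image centre,
# projected 1 m out: the top-left corner lands at (-0.64, -0.48, 1.0).
quad = image_corner_rays(640, 480, 500.0, 500.0, 320.0, 240.0, 1.0)
```

The quad would then be transformed by the camera's TF before rendering, so the image hangs in space where the camera actually points.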
