EMVS: Event-based Multi-View Stereo

This is the code for the 2018 IJCV paper EMVS: Event-Based Multi-View Stereo - 3D Reconstruction with an Event Camera in Real-Time by Henri Rebecq, Guillermo Gallego, Elias Mueggler, and Davide Scaramuzza.

Citation

A pdf of the paper is available here. If you use any of this code, please cite this publication as follows:

@Article{Rebecq18ijcv,
  author        = {Henri Rebecq and Guillermo Gallego and Elias Mueggler and
                  Davide Scaramuzza},
  title         = {{EMVS}: Event-based Multi-View Stereo---{3D} Reconstruction
                  with an Event Camera in Real-Time},
  journal       = "Int. J. Comput. Vis.",
  year          = 2018,
  volume        = 126,
  issue         = 12,
  pages         = {1394--1414},
  month         = dec,
  doi           = {10.1007/s11263-017-1050-6}
}

Patent & License

  • The proposed EMVS method is patented, as you may find in this link.

    H. Rebecq, G. Gallego, D. Scaramuzza
    Simultaneous Localization and Mapping with an Event Camera
    Pub. No.: WO/2018/037079.  International Application No.: PCT/EP2017/071331
    
  • The license is available here.

Overview

From a high-level, input-output point of view, EMVS receives a set of events and camera poses and produces a semi-dense 3D reconstruction of the scene, as shown in the above video. See the example below.

Installation

This software depends on ROS. Installation instructions can be found here. We have tested this software on Ubuntu 16.04 and ROS Kinetic.

Install catkin tools, vcstool:

sudo apt-get install python-catkin-tools python-vcstool

Create a new catkin workspace if needed:

mkdir -p ~/emvs_ws/src && cd ~/emvs_ws/
catkin config --init --mkdirs --extend /opt/ros/kinetic --merge-devel --cmake-args -DCMAKE_BUILD_TYPE=Release

Clone this repository:

cd src/
git clone git@github.com:uzh-rpg/rpg_emvs.git

Clone dependencies:

vcs-import < rpg_emvs/dependencies.yaml

Install pcl-ros:

sudo apt-get install ros-kinetic-pcl-ros

Build the package(s):

catkin build mapper_emvs
source ~/emvs_ws/devel/setup.bash

Running example

Download the slider_depth.bag data file from the Event Camera Dataset; it was recorded using the DVS ROS driver.

Run the example:

roscd mapper_emvs
rosrun mapper_emvs run_emvs --bag_filename=/path/to/slider_depth.bag --flagfile=cfg/slider_depth.conf

Configuration parameters: The options that can be passed to the program using the configuration file (e.g., slider_depth.conf) and their default values are defined at the top of the main.cpp file. These are: the parameters defining the input data, the parameters of the shape and size of the Disparity Space Image (DSI), and the parameters to extract a depth map and its point cloud from the DSI.
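
For orientation, a flagfile is just a plain-text list of --flag=value lines (gflags also skips blank lines and lines starting with #). The sketch below is not a copy of cfg/slider_depth.conf: the values are placeholders and the comments reflect one reading of the flag names used in this document, so treat the top of main.cpp as the authoritative reference for the full flag list, meanings, and defaults.

# Hypothetical flagfile sketch -- placeholder values, not the shipped configuration.
# The complete flag list and the default values are defined at the top of main.cpp.

# Input data: which time window of the bag to use (seconds)
--start_time_s=0.0
--stop_time_s=3.0

# DSI shape and size: number of depth planes and the depth range they span (presumably meters)
--dimZ=100
--min_depth=0.3
--max_depth=5.0

# Extraction of the depth map and point cloud from the DSI
--adaptive_threshold_c=5
--median_filter_size=5
--radius_search=0.05

As in the example command above, the bag can be given directly on the command line (--bag_filename=...) while the remaining options live in the file passed via --flagfile.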

Visualization

Images

Upon running the example above, several visualization images will be saved in the folder where the code was executed. For example, the output images for the slider_depth example should look as follows:

Confidence map Depth map

The depth map is colored according to depth with respect to the reference view.

Disparity Space Image (DSI)

We also provide Python scripts to inspect the DSI (3D grid).

Volume Rendering

Install visvis first:

pip install visvis

To visualize the DSI stored in the dsi.npy file, run:

roscd mapper_emvs
python scripts/visualize_dsi_volume.py -i /path/to/dsi.npy

You should get the following output, which you can manipulate interactively:

Showing Slices of the DSI

To visualize the DSI with moving slices (i.e., cross sections), run:

python scripts/visualize_dsi_slices.py -i /path/to/dsi.npy

which should produce an animated view of the cross sections of the DSI.
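
If you prefer to poke at the grid yourself (for instance, to find the depth plane with the highest ray density), the minimal sketch below may help. It assumes dsi.npy holds a plain 3D NumPy array and that its first axis indexes the depth planes; both are assumptions, so check the printed shape against your data.

# quick_dsi_look.py -- minimal DSI inspection sketch (assumes dsi.npy is a 3D NumPy array
# whose first axis indexes the depth planes; verify against the printed shape).
import numpy as np
import matplotlib.pyplot as plt

dsi = np.load('/path/to/dsi.npy')
print('DSI shape:', dsi.shape)

# Total ray count per depth plane
per_plane = dsi.sum(axis=(1, 2))
best = int(np.argmax(per_plane))
print('Depth plane with the most ray crossings:', best)

# Display that plane as an image
plt.imshow(dsi[best], cmap='viridis')
plt.title('DSI slice %d' % best)
plt.colorbar()
plt.show()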

Point Cloud

To visualize the 3D point cloud extracted from the DSI, install pypcd first as follows:

pip install pypcd

and then run:

python scripts/visualize_pointcloud.py -i /path/to/pointcloud.pcd

A 3D matplotlib interactive window should appear, allowing you to inspect the point cloud (color-coded according to depth with respect to the reference view).
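
If you want to load the point cloud in your own scripts, a minimal sketch is shown below. It assumes the .pcd file exposes x, y and z fields (as in the examples above) and colors the points by their z coordinate as a proxy for depth with respect to the reference view.

# quick_pcd_look.py -- minimal point cloud inspection sketch using pypcd and matplotlib.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed for 3D axes on older matplotlib)
import pypcd

pc = pypcd.PointCloud.from_path('/path/to/pointcloud.pcd')
xyz = np.stack([pc.pc_data['x'], pc.pc_data['y'], pc.pc_data['z']], axis=-1)
print('Number of points:', xyz.shape[0])

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2], c=xyz[:, 2], s=1, cmap='viridis')
ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
plt.show()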

Additional Examples

We provide additional examples with sequences from the Event Camera Dataset.

Office Scene

Download dynamic_6dof and run:

rosrun mapper_emvs run_emvs --bag_filename=/path/to/dynamic_6dof.bag --flagfile=cfg/dynamic_6dof.conf

The images generated should coincide with those in this folder.

Confidence map Depth map

You may also explore the DSI as in the previous example (the same commands should work).

Boxes

Download boxes_6dof and run:

rosrun mapper_emvs run_emvs --bag_filename=/path/to/boxes_6dof.bag --flagfile=cfg/boxes_6dof.conf

The images generated should coincide with those in this folder.

Confidence map Depth map

You may also explore the DSI as in the previous example (the same commands should work).

Shapes

Download shapes_6dof and run:

rosrun mapper_emvs run_emvs --bag_filename=/path/to/shapes_6dof.bag --flagfile=cfg/shapes_6dof.conf

The images generated should be those in this folder.

Confidence map Depth map

As you may notice by inspecting the DSI, the shapes are on a plane (a wall).

Additional Notes

By default, the Z slices of the DSI are uniformly spaced in inverse depth. However, it is possible to change this behavior and use Z slices uniformly spaced in depth (rather than inverse depth). This can be achieved by changing the option USE_INVERSE_DEPTH to OFF in the CMakeLists.txt. This requires recompiling mapper_emvs; we recommend removing the mapper_emvs build folder before recompiling.
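
The short sketch below (not part of the package) illustrates the difference for a hypothetical range of 0.5 m to 5 m with 10 planes: uniform inverse-depth spacing places more planes close to the camera, whereas uniform depth spacing spreads them evenly over the range.

# depth_spacing.py -- compare uniform-depth vs. uniform-inverse-depth plane placement
# (illustrative only; the range and plane count are hypothetical, not the package defaults).
import numpy as np

min_depth, max_depth, num_planes = 0.5, 5.0, 10

# Z slices uniformly spaced in depth
uniform_depth = np.linspace(min_depth, max_depth, num_planes)

# Z slices uniformly spaced in inverse depth (the default behavior)
inv = np.linspace(1.0 / max_depth, 1.0 / min_depth, num_planes)
uniform_inverse_depth = 1.0 / inv[::-1]  # reverse so the depths increase

print('uniform in depth:        ', np.round(uniform_depth, 3))
print('uniform in inverse depth:', np.round(uniform_inverse_depth, 3))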

Additional Resources on Event Cameras

rpg_emvs's Contributors

danielgehrig18, gsaponaro, guillermogb, hwangh95, supitalp


rpg_emvs's Issues

runtime error, OpenCV

Dear developer, when running rosrun mapper_emvs run_emvs --bag_filename=/home/yh/data_file/data/slider_depth.bag --flagfile=cfg/slider_depth.conf, I got this error:

I1114 10:33:05.754899  4604 data_loading.cpp:100] Initial stamp: 1461581615.010418917
I1114 10:33:05.932999  4604 depth_vector.hpp:133] Using linear spacing in inverse depth
I1114 10:33:05.933040  4604 mapper_emvs.cpp:183] Specified DSI FoV < 10 deg. Will use camera FoV instead.
I1114 10:33:05.933051  4604 mapper_emvs.cpp:191] Focal length of virtual camera: 328.308 pixels
I1114 10:33:06.379168  4604 main.cpp:91] Time to evaluate DSI: 376 milliseconds
I1114 10:33:06.379199  4604 main.cpp:92] Number of events processed: 1075290 events
I1114 10:33:06.379215  4604 main.cpp:93] Number of events processed per second: 2.85981 Mev/s
I1114 10:33:06.379231  4604 main.cpp:95] Mean square = 356.087
OpenCV Error: Assertion failed (isIdentity(expr)) in _InputArray, file /home/yh/opencv-3.4.13/modules/core/src/matrix_expressions.cpp, line 1843
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/yh/opencv-3.4.13/modules/core/src/matrix_expressions.cpp:1843: error: (-215) isIdentity(expr) in function _InputArray

*** Aborted at 1668393186 (unix time) try "date -d @1668393186" if you are using GNU date ***
PC: @     0x7f551812ce87 gsignal
*** SIGABRT (@0x3e8000011fc) received by PID 4604 (TID 0x7f551c611680) from PID 4604; stack trace: ***
    @     0x7f5519f89980 (unknown)
    @     0x7f551812ce87 gsignal
    @     0x7f551812e7f1 abort
    @     0x7f55187940a9 (unknown)
    @     0x7f551879f506 (unknown)
    @     0x7f551879f571 std::terminate()
    @     0x7f551879f7f5 __cxa_throw
    @     0x7f551baf28a2 cv::error()
    @     0x7f551baf29bf cv::error()
    @     0x7f5518c98bda cv::_InputArray::_InputArray()
    @     0x55d58d78c585 (unknown)
    @     0x7f551810fc87 __libc_start_main
    @     0x55d58d78d62a (unknown)
Aborted (core dumped)

terminate called after throwing an instance of 'cv::Exception'
what(): /home/yh/opencv-3.4.13/modules/core/src/matrix_expressions.cpp:1843: error: (-215) isIdentity(expr) in function _InputArray.

What is the cause of this error, please?

conf parameters explanation

Hi,
I just have two questions about the parameters used to reconstruct the images in your package:
1- What do the following parameters represent, and is there a way (other than trial and error) to set them? Also, what are the units of each?
--dimZ
--adaptive_threshold_c
--median_filter_size
--radius_search
--min_depth
--max_depth
2- Can the time parameters be set automatically?
--start_time_s=4.2
--stop_time_s=6.2

Thanks in advance
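
Regarding the time parameters, one possible way to derive them from the bag itself is sketched below. It uses the standard rosbag Python API and assumes that --start_time_s / --stop_time_s are interpreted relative to the initial stamp reported in the program's log output; verify that assumption against main.cpp before relying on it.

# bag_time_window.py -- sketch: derive a time window from the bag duration
# (assumes --start_time_s / --stop_time_s are relative to the bag's initial stamp).
import rosbag

bag = rosbag.Bag('/path/to/slider_depth.bag')
duration = bag.get_end_time() - bag.get_start_time()
bag.close()

# Use the full recording, or trim a margin at both ends if desired
print('--start_time_s=%.3f' % 0.0)
print('--stop_time_s=%.3f' % duration)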

How to run the EMVS algorithm in real time?

I have found that the algorithm runs offline: I need a rosbag to run it.
But I don't have a motion capture system to publish the topic /optitrack/davis.
So is there any way to run it online?
Thanks in advance.

How to run emvs with my bag recorded by rpg_dvs_ros?

Hi,
I ran your example slider_depth.bag successfully, and I recorded some data with the tool https://github.com/uzh-rpg/rpg_dvs_ros.git, but I can't run EMVS with my data. As the log shows, your bag has geometry_msgs/PoseStamped but mine doesn't.
Since you say you also recorded your data with the DVS ROS driver, what should I do to run EMVS with my data?
Thanks!

user@user:/data/EventDataGen$ rosbag info emvs/slider_depth.bag
path: emvs/slider_depth.bag
version: 2.0
duration: 3.4s
start: Apr 25 2016 18:53:35.00 (1461581615.00)
end: Apr 25 2016 18:53:38.39 (1461581618.39)
size: 17.3 MB
messages: 1189
compression: none [22/22 chunks]
types: dvs_msgs/EventArray [5e8beee5a6c107e504c2e78903c224b8]
geometry_msgs/PoseStamped [d3812c3cbc69362b77dc0b19b345f8f5]
sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]
sensor_msgs/Image [060021388200f6f0f447d0fcd9c64743]
topics: /dvs/camera_info 661 msgs : sensor_msgs/CameraInfo
/dvs/events 102 msgs : dvs_msgs/EventArray
/dvs/image_raw 87 msgs : sensor_msgs/Image
/optitrack/davis 339 msgs : geometry_msgs/PoseStamped

user@user:/data/EventDataGen$ rosbag info rpg_dvs_ros/bag/2019-11-08-14-48-07.bag
path: rpg_dvs_ros/bag/2019-11-08-14-48-07.bag
version: 2.0
duration: 9.3s
start: Nov 08 2019 14:48:07.65 (1573195687.65)
end: Nov 08 2019 14:48:16.99 (1573195696.99)
size: 80.7 MB
messages: 11470
compression: none [102/102 chunks]
types: dvs_msgs/EventArray [5e8beee5a6c107e504c2e78903c224b8]
dynamic_reconfigure/Config [958f16a05573709014982821e6822580]
dynamic_reconfigure/ConfigDescription [757ce9d44ba8ddd801bb30bc456f946f]
rosgraph_msgs/Log [acffd30cd6b6de30f120938c17c593fb]
sensor_msgs/CompressedImage [8f7a12909da2c9d3332d540a0977563f]
sensor_msgs/Image [060021388200f6f0f447d0fcd9c64743]
sensor_msgs/Imu [6a62c6daae103f4ff57a132d6f95cec2]
std_msgs/Float32 [73fcbf46b49191e672908e50842a83d4]
std_msgs/Int32 [da5909fbe378aeaf85e547e830cc1bb7]
theora_image_transport/Packet [33ac4e14a7cff32e7e0d65f18bb410f3]
topics: /davis_ros_driver/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/davis_ros_driver/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs/events 278 msgs : dvs_msgs/EventArray
/dvs/exposure 215 msgs : std_msgs/Int32
/dvs/image_raw 214 msgs : sensor_msgs/Image
/dvs/imu 9240 msgs : sensor_msgs/Imu
/dvs_accumulated_events 7 msgs : sensor_msgs/Image
/dvs_accumulated_events/compressed 7 msgs : sensor_msgs/CompressedImage
/dvs_accumulated_events/compressed/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_accumulated_events/compressed/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs_accumulated_events/compressedDepth/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_accumulated_events/compressedDepth/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs_accumulated_events/theora 10 msgs : theora_image_transport/Packet
/dvs_accumulated_events/theora/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_accumulated_events/theora/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs_accumulated_events_edges 7 msgs : sensor_msgs/Image
/dvs_accumulated_events_edges/compressed 7 msgs : sensor_msgs/CompressedImage
/dvs_accumulated_events_edges/compressed/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_accumulated_events_edges/compressed/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs_accumulated_events_edges/compressedDepth/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_accumulated_events_edges/compressedDepth/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs_accumulated_events_edges/theora 10 msgs : theora_image_transport/Packet
/dvs_accumulated_events_edges/theora/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_accumulated_events_edges/theora/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs_rendering 276 msgs : sensor_msgs/Image
/dvs_rendering/compressed 276 msgs : sensor_msgs/CompressedImage
/dvs_rendering/compressed/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_rendering/compressed/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs_rendering/compressedDepth/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_rendering/compressedDepth/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs_rendering/theora 278 msgs : theora_image_transport/Packet
/dvs_rendering/theora/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_rendering/theora/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs_undistorted/compressed/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_undistorted/compressed/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs_undistorted/compressedDepth/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_undistorted/compressedDepth/parameter_updates 1 msg : dynamic_reconfigure/Config
/dvs_undistorted/theora/parameter_descriptions 1 msg : dynamic_reconfigure/ConfigDescription
/dvs_undistorted/theora/parameter_updates 1 msg : dynamic_reconfigure/Config
/events_off_mean_1 9 msgs : std_msgs/Float32
/events_off_mean_5 2 msgs : std_msgs/Float32
/events_on_mean_1 9 msgs : std_msgs/Float32
/events_on_mean_5 2 msgs : std_msgs/Float32
/rosout 309 msgs : rosgraph_msgs/Log (3 connections)
/rosout_agg 288 msgs : rosgraph_msgs/Log
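
As the two rosbag info listings show, the example bag contains camera poses (geometry_msgs/PoseStamped on /optitrack/davis) while a bag recorded with the driver alone does not, and mapper_emvs needs those poses. If poses are available from another source (motion capture, SLAM, a robot arm, Gazebo), one option is to write them into a bag next to the events. The sketch below uses the standard rosbag Python API; the file names, topic name, timestamps and pose values are placeholders, and the timestamps must be absolute ROS times that overlap the event stream. Note that at least two poses are required (the trajectory interpolation checks for this, as the 'Check failed: poses_.size() >= 2u' error in a later issue shows).

# add_poses_to_bag.py -- hypothetical sketch: copy an events-only bag and append
# geometry_msgs/PoseStamped messages obtained from an external source (placeholder data below).
import rosbag
import rospy
from geometry_msgs.msg import PoseStamped

# (t_sec, x, y, z, qx, qy, qz, qw) -- replace with your own poses; t_sec must be
# absolute timestamps matching the event stream in the bag.
poses = [(100.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0),
         (101.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0)]

with rosbag.Bag('events_and_poses.bag', 'w') as out_bag:
    # Copy the original event / camera_info messages
    with rosbag.Bag('events_only.bag', 'r') as in_bag:
        for topic, msg, t in in_bag.read_messages():
            out_bag.write(topic, msg, t)

    # Append the poses on the topic used by the example bags (/optitrack/davis)
    for t_sec, x, y, z, qx, qy, qz, qw in poses:
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.from_sec(t_sec)
        msg.header.frame_id = 'world'
        msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = x, y, z
        msg.pose.orientation.x, msg.pose.orientation.y = qx, qy
        msg.pose.orientation.z, msg.pose.orientation.w = qz, qw
        out_bag.write('/optitrack/davis', msg, msg.header.stamp)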

Compile error

Hello, I am new here. I followed the steps in the readme, then I got the compile error below.
Ubuntu 16.04, ROS Kinetic.

Running in real-time

Hi, I would like to run the program in real time, but I don't know how to do it. I have the necessary topics, so what is the correct way to run it?

Problem with coordinate axis in ROS Gazebo

Hello,

I tried your code with a rosbag recorded on the ROS Gazebo platform, but it didn't work. I guess Gazebo uses a different coordinate frame compared to OptiTrack: in Gazebo, X points forward, Y left, and Z up.

I have replayed one of your example rosbags, slider_depth. In it, the camera moved from left to right with increasing X coordinate.

I don't have OptiTrack installed on my computer, but if you could tell me which coordinate system you used, I can apply a transformation.

Thanks in advance

Cloning parameter error

Hi,
Thanks for the useful package.
I am following your steps for installing the rpg_emvs package and receive the following error when I type the command vcs-import < rpg_emvs/dependencies.yaml.
Any clue?

user@user-Alienware-14:~/emvs_ws/src$ sudo vcs-import < rpg_emvs/dependencies.yaml
EEEEEEEE
=== ./catkin_simple (git) ===
Could not clone repository 'git@github.com:catkin/catkin_simple.git': Cloning into '.'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
=== ./cnpy (git) ===
Could not clone repository 'git@github.com:uzh-rpg/cnpy_catkin.git': Cloning into '.'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
=== ./eigen_catkin (git) ===
Could not clone repository 'git@github.com:ethz-asl/eigen_catkin.git': Cloning into '.'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
=== ./eigen_checks (git) ===
Could not clone repository 'git@github.com:ethz-asl/eigen_checks.git': Cloning into '.'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
=== ./gflags_catkin (git) ===
Could not clone repository 'git@github.com:ethz-asl/gflags_catkin.git': Cloning into '.'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
=== ./glog_catkin (git) ===
Could not clone repository 'git@github.com:ethz-asl/glog_catkin.git': Cloning into '.'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
=== ./minkindr (git) ===
Could not clone repository 'git@github.com:ethz-asl/minkindr.git': Cloning into '.'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
=== ./rpg_dvs_ros (git) ===
Could not clone repository 'git@github.com:uzh-rpg/rpg_dvs_ros.git': Cloning into '.'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

munmap_chunk(): invalid pointer

Hi,
I just wanted to try your running example and got the following error. Does anyone know the possible reason for it?
I built this project against ROS Melodic.

$ rosrun mapper_emvs run_emvs --bag_filename=/home/weiyancai/Dataset/DAVIS240C/slider_depth.bag --flagfile=cfg/slider_depth.conf

I1127 18:20:52.614800 8786 data_loading.cpp:100] Initial stamp: 1461581615.010418917
I1127 18:20:52.741639 8786 depth_vector.hpp:133] Using linear spacing in inverse depth
I1127 18:20:52.741664 8786 mapper_emvs.cpp:183] Specified DSI FoV < 10 deg. Will use camera FoV instead.
I1127 18:20:52.741683 8786 mapper_emvs.cpp:191] Focal length of virtual camera: 328.308 pixels
I1127 18:20:53.002171 8786 main.cpp:91] Time to evaluate DSI: 108 milliseconds
I1127 18:20:53.002195 8786 main.cpp:92] Number of events processed: 1075290 events
I1127 18:20:53.002207 8786 main.cpp:93] Number of events processed per second: 9.95639 Mev/s
I1127 18:20:53.002219 8786 main.cpp:95] Mean square = 356.087
munmap_chunk(): invalid pointer
*** Aborted at 1574850053 (unix time) try "date -d @1574850053" if you are using GNU date ***
PC: @ 0x7f5d3cd23e97 gsignal
*** SIGABRT (@0x3e800002252) received by PID 8786 (TID 0x7f5d3ffd8680) from PID 8786; stack trace: ***
@ 0x7f5d3ec778b9 google::(anonymous namespace)::FailureSignalHandler()
@ 0x7f5d3dc70890 (unknown)
@ 0x7f5d3cd23e97 gsignal
@ 0x7f5d3cd25801 abort
@ 0x7f5d3cd6e897 (unknown)
@ 0x7f5d3cd7590a (unknown)
@ 0x7f5d3cd7cecc cfree
@ 0x7f5d3fa55e45 Grid3D::writeGridNpy()
@ 0x559ca5aad74c main
@ 0x7f5d3cd06b97 __libc_start_main
@ 0x559ca5aaeb0a _start

Reconstructing recorded data

Hi,
I am trying to use your package to reconstruct the scene from a new recording (I am using a DAVIS240C and running it with rpg_dvs_ros, davis_mono.launch file).

I tried the following:
1- I recorded a bag file and ran the same command: rosrun mapper_emvs run_emvs --bag_filename=/home/user/ros_ws_kinetic/data/test.bag --flagfile=cfg/slider_depth.conf
but it shows me this error:

I0205 15:35:53.154494 22741 data_loading.cpp:61] initial stamp: 1549366232.709949495
F0205 15:35:53.212122 22741 trajectory.hpp:118] Check failed: poses_.size() >= 2u (0 vs. 2) At least two poses need to be provided
*** Check failure stack trace: ***
    @     0x7f4750dfa43d  google::LogMessage::Fail()
    @     0x7f4750dfc253  google::LogMessage::SendToLog()
    @     0x7f4750df9fcb  google::LogMessage::Flush()
    @     0x7f4750dfcc3e  google::LogMessageFatal::~LogMessageFatal()
    @           0x418833  LinearTrajectory::LinearTrajectory()
    @           0x4112cf  main
    @     0x7f474efc9830  __libc_start_main
    @           0x412f09  _start
Looking at your bag file (slider_depth.bag), I can see the following topics recorded
/clock
/dvs/camera_info
/dvs/events
/dvs/image_raw
/optitrack/davis
/rosout
/rosout_agg

However, when I list the topics of my DAVIS, I find neither /optitrack/davis nor /clock.
Here is the list of my topics:

/davis_ros_driver/parameter_descriptions
/davis_ros_driver/parameter_updates
/dvs/calibrate_imu
/dvs/camera_info
/dvs/events
/dvs/exposure
/dvs/image_raw
/dvs/imu
/dvs/reset_timestamps
/dvs/trigger_snapshot
/dvs_accumulated_events
/dvs_accumulated_events/compressed
/dvs_accumulated_events/compressed/parameter_descriptions
/dvs_accumulated_events/compressed/parameter_updates
/dvs_accumulated_events/compressedDepth
/dvs_accumulated_events/compressedDepth/parameter_descriptions
/dvs_accumulated_events/compressedDepth/parameter_updates
/dvs_accumulated_events/theora
/dvs_accumulated_events/theora/parameter_descriptions
/dvs_accumulated_events/theora/parameter_updates
/dvs_accumulated_events_edges
/dvs_accumulated_events_edges/compressed
/dvs_accumulated_events_edges/compressed/parameter_descriptions
/dvs_accumulated_events_edges/compressed/parameter_updates
/dvs_accumulated_events_edges/compressedDepth
/dvs_accumulated_events_edges/compressedDepth/parameter_descriptions
/dvs_accumulated_events_edges/compressedDepth/parameter_updates
/dvs_accumulated_events_edges/theora
/dvs_accumulated_events_edges/theora/parameter_descriptions
/dvs_accumulated_events_edges/theora/parameter_updates
/dvs_rendering
/dvs_rendering/compressed
/dvs_rendering/compressed/parameter_descriptions
/dvs_rendering/compressed/parameter_updates
/dvs_rendering/compressedDepth
/dvs_rendering/compressedDepth/parameter_descriptions
/dvs_rendering/compressedDepth/parameter_updates
/dvs_rendering/theora
/dvs_rendering/theora/parameter_descriptions
/dvs_rendering/theora/parameter_updates
/dvs_undistorted
/dvs_undistorted/compressed
/dvs_undistorted/compressed/parameter_descriptions
/dvs_undistorted/compressed/parameter_updates
/dvs_undistorted/compressedDepth
/dvs_undistorted/compressedDepth/parameter_descriptions
/dvs_undistorted/compressedDepth/parameter_updates
/dvs_undistorted/theora
/dvs_undistorted/theora/parameter_descriptions
/dvs_undistorted/theora/parameter_updates
/events_off_mean_1
/events_off_mean_5
/events_on_mean_1
/events_on_mean_5
/image_view/output
/image_view/parameter_descriptions
/image_view/parameter_updates
/rosout
/rosout_agg

The questions are:
1- How can I record the /optitrack/davis topic, knowing that I don't have an OptiTrack system in the lab?
2- Does the /clock topic need to be there to reconstruct the area?
3- Will the DAVIS be able to reconstruct the scene if it doesn't move?
4- Is there a way to reconstruct from a live data stream?

Thanks in advance
Best,
Raghad

Using other datasets (Prophesee event camera PPS3MVCD)

I have been trying to use EMVS on data from a Prophesee PPS3MVCD event camera without success. I have adjusted the code to run on the Prophesee_event_msgs and the topics for our motion capture system. Moreover, the Prophesee ROS msgs and DAVIS ROS msgs seem to have a similar structure.

The code executes without errors but the output is not as expected. The first place with unexpected results is the Disparity Space Image (DSI). It looks like the event rays are parallel and therefore don't intersect, causing the DSI to lack dense areas.

[Figure: DSI computed from the Prophesee dataset]
In the figure above, the right part of the scene should be close to the camera and the left part of the scene should be further back. Changing settings can change how clear the contours are but there are no settings that improve the depth image.

[Figure: colored depth map]

I recorded a dataset that includes geometry_msgs/TransformStamped data from Vicon Motion capture system as well as prophesee_event_msgs/EventArray data from the Prophesee camera.

The event camera is calibrated using the Prophesee camera calibration method: metavision_mono_calibration (https://docs.prophesee.ai/metavision_sdk/modules/calibration/guides/intrinsics.html?highlight=calibration)

Could you please let me know whether there are some constraints on how the datasets need to be recorded? Or do you have any insight into what could be the issue?

Thanks for your time in advance!

Undergraduate Computer Science student using a Prophesee Gen 4 HD event camera and a UR5 robot arm

Hi, I was wondering if there is any reason the EMVS algorithm wouldn't work with a 1280x720 event camera?

I am an undergraduate computer science student and have outlined my setup below.
I am not using a system that has ROS, so I rewrote the data-loading code so that my data goes straight into the message data structures, vectors, and maps that the EMVS code uses.

I have calibrated my event camera twice using the tool provided by Prophesee and get similar numbers each time; I hardcode these into a CameraInfo msg.

I have an RGB camera and an event camera attached to a robot arm. A fixed path and orientation of the arm's tool is programmed into the robot, and I record video and event data as the robot moves the cameras along the path.

My event data is x, y, polarity, time in comma-separated format.
My pose data comes from a VSLAM algorithm.

I converted one of the examples in the readme into the format my event and pose data are in, fed it through my altered code, and it still produces the expected output.

With my own data and pose information, however, I get unexpected results: the point cloud looks like straight lines diverging from a central point. I will try to post some pictures.

When I recorded it, the scene was a small screen and there is a lot of noise across the entire video. It might simply be a case of garbage in, garbage out.

Getting EMVS running on ROS Noetic

Issue:

Building in ROS Noetic throws an error saying that the PCL library needs to be built with C++14.

Solution:

The solution is simply changing the CMakeLists.txt inside the mapper_emvs folder from C++11 to C++14.

from:
set(CMAKE_CXX_FLAGS "-O3 -fopenmp -std=c++11 ${CMAKE_CXX_FLAGS}")

to:
set(CMAKE_CXX_FLAGS "-O3 -fopenmp -std=c++14 ${CMAKE_CXX_FLAGS}")

Then the cnpy library will most likely throw an error at runtime, because it was built with C++11 when the dependencies were cloned, so we have to change its CMakeLists.txt as well. It is in a separate cnpy folder inside the src of your workspace.

Basically, we need to add these two lines after catkin_simple():

set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

and then build again with

catkin build mapper_emvs
