arsekkat / omniscape

This repository will provide tools and news about The OmniScape Dataset

Home Page: https://git.io/omniscape

dataset omnidirectional spherical fisheye catadioptric semantic-segmentation depth-estimation depth-map gta5 gtav

omniscape's Introduction

Paper:

Ahmed Rida Sekkat¹, Yohan Dupuis², Pascal Vasseur¹ and Paul Honeine¹.
¹Normandie Univ, UNIROUEN, LITIS, Rouen, France
²Normandie Univ, UNIROUEN, ESIGELEC, IRSEEM, Rouen, France
[email protected]

IEEE International Conference on Robotics and Automation (ICRA), 2020.

If you find our dataset useful for your research, please cite our paper:

@INPROCEEDINGS{9197144,
  author={A. R. {Sekkat} and Y. {Dupuis} and P. {Vasseur} and P. {Honeine}},
  booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},
  title={The OmniScape Dataset},
  year={2020},
  pages={1603-1608},
  doi={10.1109/ICRA40945.2020.9197144}}

Abstract

Despite the utility and benefits of omnidirectional images in robotics and automotive applications, there are no datasets of omnidirectional images available with semantic segmentation, depth maps, and dynamic properties. This is due to the time cost and human effort required to annotate ground truth images. This paper presents a framework for generating omnidirectional images using images that are acquired from a virtual environment. For this purpose, we demonstrate the relevance of the proposed framework on two well-known simulators: the CARLA simulator, an open-source simulator for autonomous driving research, and Grand Theft Auto V (GTA V), a very high quality video game. We explain in detail the generated OmniScape dataset, which includes stereo fisheye and catadioptric images acquired from the two front sides of a motorcycle, together with semantic segmentation, depth maps, the intrinsic parameters of the cameras, and the dynamic parameters of the motorcycle. It is worth noting that the case of two-wheeled vehicles is more challenging than that of cars due to the specific dynamics of these vehicles.
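The framework described in the abstract renders views in the simulator and reprojects them into omnidirectional images such as fisheyes. As a rough illustration (not the authors' code), here is a minimal sketch of the geometry for an equidistant fisheye model, mapping each fisheye pixel's ray to the cubemap face it hits; the 180° FOV and the face ordering are assumptions for the example:

```python
import numpy as np

def fisheye_rays(size, fov_deg=180.0):
    """Unit ray directions for an equidistant fisheye image of shape (size, size).

    Equidistant model: radial pixel distance is proportional to the angle
    theta between the ray and the optical axis (+z)."""
    half = size / 2.0
    u, v = np.meshgrid(np.arange(size) + 0.5, np.arange(size) + 0.5)
    x, y = (u - half) / half, (v - half) / half      # normalized to [-1, 1]
    r = np.sqrt(x**2 + y**2)
    theta = r * np.radians(fov_deg) / 2.0            # angle from optical axis
    phi = np.arctan2(y, x)                           # azimuth in the image plane
    d = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)], axis=-1)
    d[r > 1.0] = 0.0                                 # outside the fisheye circle
    return d

def cube_face(d):
    """Index (0..5) of the cubemap face hit by each ray: +x,-x,+y,-y,+z,-z."""
    ax = np.abs(d)
    major = np.argmax(ax, axis=-1)                   # dominant axis
    sign = np.take_along_axis(np.sign(d), major[..., None], axis=-1)[..., 0]
    return major * 2 + (sign < 0).astype(int)

rays = fisheye_rays(256)
faces = cube_face(rays)
print(faces[128, 128])                               # → 4 (the front, +z, face)
```

With the face known, the remaining step is to intersect the ray with that face's plane and bilinearly sample the rendered cubemap image; a catadioptric model would only change the pixel-to-ray mapping in `fisheye_rays`.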

Video Presentation

Additional modalities:

  • The class Vehicle is divided into two classes, four-wheeled and two-wheeled.
  • Optical Flow.
  • Instance semantic segmentation.
  • 3D bounding boxes.

Dataset Release:

The dataset and tools will be released in stages, starting once the article is published.

If you are interested in our dataset, please fill out This Form to receive the first release.

Demos:

CARLA simulator:

Instance segmentation:

For each capture, stereo 360° cubemap and equirectangular images are also provided:

The images are provided in different weather and lighting conditions:

GTA V:

*This work was supported by a RIN grant, Région Normandie, France.

omniscape's People

Contributors: arsekkat

omniscape's Issues

Does the database contain 6D pose (rotation and translation)?

Hi,

Can you please clarify whether your database will contain 6D pose information, with both rotation and translation, for every frame?
From your paper, I saw that you have orientation ground truth, which I guess gives rotation values. Will the ground truth also contain translation values?

Basically, I'm trying to warp a frame to the view of the next frame. For this, I'll need the depth, the pose of both frames, and the camera intrinsic matrix. Will all of these be available, at least for one of the left or right cameras?
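For reference, the warping described in this question (back-project with depth, transform by the relative pose, reproject with the intrinsics) can be sketched as follows. This is a generic pinhole-camera sketch with hypothetical conventions (pose given as frame-A-to-frame-B rotation `R` and translation `t`), not tied to OmniScape's actual ground-truth format:

```python
import numpy as np

def warp_points(depth, K, R, t):
    """Back-project the pixels of frame A using its depth map, apply the
    relative pose (R, t) from A to B, and reproject into frame B's image
    plane. Returns the warped pixel coordinates, shape (H, W, 2)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)  # homogeneous
    rays = pix @ np.linalg.inv(K).T                  # camera rays in frame A
    pts = rays * depth[..., None]                    # 3D points in frame A
    pts_b = pts @ R.T + t                            # 3D points in frame B
    proj = pts_b @ K.T                               # project with intrinsics
    return proj[..., :2] / proj[..., 2:3]            # perspective divide

# Sanity check: with an identity pose, warped coordinates equal the pixel grid.
K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])
uv = warp_points(np.full((128, 128), 5.0), K, np.eye(3), np.zeros(3))
print(np.allclose(uv[10, 20], [20, 10]))             # → True, (u, v) = (col, row)
```

Note this only gives target coordinates; an actual warp would then sample frame A at those coordinates (e.g. inverse mapping with bilinear interpolation) and handle occlusions.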

Choice of spherical projection

Out of curiosity, what's the purpose of using an orthographic projection to distribute the dataset? You end up throwing away a lot of information in the boundaries due to the distortion and pixel discretization. The distortion is going to exist regardless, but it seems to me that a format with expansive distortion (i.e. 1 undistorted pixel is spread into many pixels) is better than a format with contractive distortion (i.e. many undistorted pixels get condensed into a single pixel). Something like a cropped equirectangular projection (with associated metadata detailing the FOV) would serve this purpose better. It's not difficult to resample these orthographic projections to equirectangular projections, but you've already lost the data due to discretization in the original orthographic projections. Because it's all virtual data anyway, it seems there shouldn't be any limitations (other than time) to rendering the dataset to a less lossy format.
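The resampling mentioned at the end of this issue (orthographic to equirectangular) is indeed straightforward. A minimal nearest-neighbour sketch, assuming a hemispheric orthographic source centred on the optical axis and a cropped equirectangular target of the same size:

```python
import numpy as np

def ortho_to_equirect(src, fov_deg=180.0):
    """Resample a hemispheric orthographic projection into a cropped
    equirectangular image of the same size (nearest-neighbour sketch)."""
    h, w = src.shape[:2]
    half_fov = np.radians(fov_deg) / 2.0
    lon = (np.arange(w) + 0.5) / w * 2 * half_fov - half_fov   # longitude
    lat = (np.arange(h) + 0.5) / h * 2 * half_fov - half_fov   # latitude
    lon, lat = np.meshgrid(lon, lat)
    # Direction on the unit sphere, optical axis along +z.
    dx = np.sin(lon) * np.cos(lat)
    dy = np.sin(lat)
    # The orthographic projection simply drops z: source coords are (dx, dy).
    u = ((dx + 1) / 2 * (w - 1)).round().astype(int)
    v = ((dy + 1) / 2 * (h - 1)).round().astype(int)
    return src[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)]

src = np.arange(64 * 64, dtype=float).reshape(64, 64)
out = ortho_to_equirect(src)
```

As the issue points out, however, this resampling cannot recover detail already lost to pixel discretization near the orthographic boundary, which is precisely the argument for rendering to equirectangular in the first place.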

Adjusting the camera angle and data recording/generation for Omniscape

To the OmniScape team:

I have a few questions about the camera positions and the way the images were recorded for the dataset. How did you determine the camera position on the ego vehicle for the different views? Also, how was the recording of the images performed? Is there a process for recording 10,000 images from CARLA at once and storing them in a single directory (was this done with a bash script or entirely in Python)? And were the different camera views separated from each other in the dataset, or were they all placed into one directory?

Thank you for your time, and congratulations on a successful publication! Will any code for your dataset generation be released on GitHub?

How can I simulate a fisheye camera in CARLA?

Hello, I want to know how to configure a fisheye camera in CARLA.
I found the "Camera lens distortion attributes" in CARLA's sensor reference, but I can't get the effect I want.
Camera lens distortion attributes
my fisheye camera
normal camera with 90° FOV
I look forward to your answer, thank you very much.
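CARLA's lens distortion attributes only warp a single pinhole render, so they cannot produce a true wide-angle fisheye. A common workaround, in the same spirit as the OmniScape framework described in the abstract, is to render a cubemap with six 90° pinhole cameras and reproject it into a fisheye offline. A sketch of such a rig definition; the attribute names `image_size_x`, `image_size_y` and `fov` are documented for CARLA's `sensor.camera.rgb` blueprint, while the pitch/yaw sign conventions below are assumptions to check against your CARLA version:

```python
# (pitch, yaw) in degrees for each cubemap face. With CARLA these would feed
# carla.Rotation(pitch=..., yaw=...) on a 'sensor.camera.rgb' actor attached
# to the ego vehicle; verify the sign conventions in your CARLA version.
CUBEMAP_ROTATIONS = {
    "front": (0, 0), "right": (0, 90), "back": (0, 180),
    "left": (0, -90), "up": (90, 0), "down": (-90, 0),
}

def camera_attributes(width=1024, fov=90.0):
    """Blueprint attribute values for one cubemap face. CARLA expects
    blueprint attribute values as strings."""
    return {"image_size_x": str(width),
            "image_size_y": str(width),
            "fov": str(fov)}
```

After capturing the six synchronized faces, each fisheye pixel's ray is mapped to the face it hits and sampled there; this offline reprojection is exactly where the fisheye (or catadioptric) camera model is applied.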
