arsekkat / omniscape
This repository will provide tools and news about The OmniScape Dataset.
Home Page: https://git.io/omniscape
To the OmniScape team:
I have a few questions about the camera positions and the way the images were recorded for the dataset. How did you determine the camera positions on the ego vehicle for the different views? How was the recording of the images performed? Was there a process for capturing 10,000 images from CARLA in one run and storing them in a single directory (was this done with a bash script, or entirely in Python)? Also, were the different camera views separated from each other in the dataset, or were they all placed in one directory?
Thank you for your time, and congratulations on a successful publication of your research! Will any of the code for your dataset generation be released on GitHub?
Out of curiosity, what's the purpose of using an orthographic projection to distribute the dataset? You end up throwing away a lot of information in the boundaries due to the distortion and pixel discretization. The distortion is going to exist regardless, but it seems to me that a format with expansive distortion (i.e. 1 undistorted pixel is spread into many pixels) is better than a format with contractive distortion (i.e. many undistorted pixels get condensed into a single pixel). Something like a cropped equirectangular projection (with associated metadata detailing the FOV) would serve this purpose better. It's not difficult to resample these orthographic projections to equirectangular projections, but you've already lost the data due to discretization in the original orthographic projections. Because it's all virtual data anyway, it seems there shouldn't be any limitations (other than time) to rendering the dataset to a less lossy format.
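To make the resampling point concrete, here is a minimal nearest-neighbour sketch of mapping a hemispherical orthographic image onto an equirectangular grid. It assumes a square source image covering the unit disk with z pointing into the scene; the actual OmniScape layout and conventions may differ.

```python
import numpy as np

def ortho_to_equirect(ortho, out_w, out_h):
    """Resample a hemispherical orthographic image to a cropped
    equirectangular grid (nearest-neighbour, for illustration only)."""
    h, w = ortho.shape[:2]
    # Longitude/latitude grid covering the front hemisphere.
    lon = np.linspace(-np.pi / 2, np.pi / 2, out_w)
    lat = np.linspace(-np.pi / 2, np.pi / 2, out_h)
    lon, lat = np.meshgrid(lon, lat)
    # Unit direction for each output pixel.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    # Orthographic projection simply drops z, so (x, y) are the
    # normalised source coordinates in [-1, 1].
    u = np.clip(np.round((x + 1) / 2 * (w - 1)).astype(int), 0, w - 1)
    v = np.clip(np.round((y + 1) / 2 * (h - 1)).astype(int), 0, h - 1)
    return ortho[v, u]
```

Running this on a real image makes the loss visible: near the boundary, many distinct equirectangular output pixels read back the same orthographic source pixel, which is exactly the contractive distortion described above.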
Hi,
Can you please clarify whether your dataset will contain 6D pose information, with both rotation and translation, for every frame?
From your paper, I saw that you have orientation ground truth, which I assume gives the rotation values. Will the ground truth also contain translation values?
Basically, I'm trying to warp a frame to the view of the next frame. For this, I'll need depth, the pose of both frames, and the camera intrinsic matrix. Will all of these be available, at least for one of the left or right cameras?
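For reference, the warp described above amounts to back-projecting a pixel with its depth, applying the relative pose, and re-projecting. The intrinsics, pose, and depth below are placeholder values for illustration, not OmniScape ground truth.

```python
import numpy as np

# Placeholder intrinsics and relative pose (frame 1 -> frame 2).
K = np.array([[320.0,   0.0, 320.0],
              [  0.0, 320.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # relative rotation
t = np.array([0.1, 0.0, 0.0])      # relative translation (metres)

def warp_pixel(u, v, depth):
    """Warp pixel (u, v) of frame 1, with known depth, into frame 2."""
    p = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])  # 3D point in cam 1
    q = K @ (R @ p + t)                                   # project into cam 2
    return q[0] / q[2], q[1] / q[2]

u2, v2 = warp_pixel(320.0, 240.0, 5.0)  # -> (326.4, 240.0)
```

With these numbers, the principal-point pixel at 5 m depth shifts horizontally by 32 px / 5 = 6.4 px, matching the expected parallax for a 0.1 m sideways baseline.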
@ARSekkat Could you explain how to convert a cubemap to a fisheye image?
Are the polynomial coefficients of a fisheye model used, and how should the WFOV and HFOV be set? Thanks.
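As a rough sketch of the direction involved (not the authors' actual method): each fisheye pixel is mapped to a viewing ray, and the ray's dominant axis selects which cube face to sample. The equidistant model below (angle proportional to radius) stands in for a polynomial model, whose coefficients would replace the `theta` line.

```python
import numpy as np

def fisheye_ray(u, v, w, h, fov=np.pi):
    """Viewing direction for fisheye pixel (u, v) in a w x h image,
    under an equidistant model.  A polynomial model would instead use
    theta = k1*r + k2*r**3 + ...  All parameters are illustrative."""
    cx, cy = (w - 1) / 2, (h - 1) / 2
    dx, dy = (u - cx) / cx, (v - cy) / cy
    r = np.hypot(dx, dy)
    if r > 1.0:
        return None                  # outside the fisheye circle
    theta = r * fov / 2              # equidistant: angle grows linearly
    phi = np.arctan2(dy, dx)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def cube_face(d):
    """Cube face hit by direction d, chosen by the dominant axis."""
    axis = int(np.argmax(np.abs(d)))
    sign = '+' if d[axis] >= 0 else '-'
    return sign + 'xyz'[axis]

d = fisheye_ray(400, 300, 800, 600)   # near the image centre
```

A pixel near the centre maps to the forward (`+z`) face, while pixels far from the centre land on the side faces; sampling each face's texture along the ray (with interpolation) then produces the fisheye image.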
Hi, may I ask the planned release date of the dataset? Thanks.
The dataset looks interesting.