
Comments (11)

JunweiLiang avatar JunweiLiang commented on May 26, 2024

Please follow the tutorial here about getting the world coordinates in CARLA. If you have specific questions about the tutorial, you can list them here. compute_actev_world_norm.py is not provided. As the comments say, the actev_norm file just contains the min and max of the coordinates in each scene.
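
For illustration, here is a minimal sketch of how such per-scene min/max stats could be used for normalization. The file format and the helper below are assumptions, since compute_actev_world_norm.py is not released:

```python
import numpy as np

def normalize_traj(traj_xy, scene_min, scene_max):
    """Min-max normalize world (x, y) coordinates into [0, 1] per scene."""
    traj_xy = np.asarray(traj_xy, dtype=np.float64)
    return (traj_xy - scene_min) / (scene_max - scene_min)

# hypothetical per-scene stats, in the spirit of an actev_norm-style file
scene_min = np.array([-12.3, -45.6])
scene_max = np.array([80.1, 60.2])
print(normalize_traj([[0.0, 0.0], [10.0, 5.0]], scene_min, scene_max))
```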


isaryo avatar isaryo commented on May 26, 2024

I have one more thing to ask. I'm trying to make a new dataset like the Forking Paths one.
How did you get the calibration parameters?
And could you let me know the formula for converting trajectories into CARLA world coordinates?


JunweiLiang avatar JunweiLiang commented on May 26, 2024

Hi,
The calibration parameters (just the homography matrices) are provided for ETH/UCY/ActEV. For new videos, you can calibrate them manually with this code or this code, or automatically with this paper. It is a classic computer vision problem.
You can follow the tutorial - "Recreate Scenarios from Real World Videos" - to recreate a new dataset. Converting trajectories from the 2D view to the 3D world requires knowing the basic pinhole camera model. Go through the tutorial and the related code and you'll understand.
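
To illustrate the manual calibration step, here is a hedged sketch (not the repo's script): pick at least four image points whose ground-plane positions are known, e.g. measured in meters, and estimate the 3x3 homography with OpenCV. All point values below are hypothetical:

```python
import cv2
import numpy as np

# Hypothetical correspondences: pixel (u, v) -> ground plane (x, y) in meters.
img_pts   = np.array([[100, 700], [1200, 690], [1150, 300], [200, 310]], dtype=np.float32)
world_pts = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 15.0], [0.0, 15.0]], dtype=np.float32)

# 3x3 homography mapping image coordinates to ground-plane coordinates.
H, _ = cv2.findHomography(img_pts, world_pts, cv2.RANSAC)
print(H)
```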


abhi-rf avatar abhi-rf commented on May 26, 2024

Hi,

@JunweiLiang
First of all, thanks for the lovely repo.
Could you also tell us how we can get the world coordinates from the pixel coordinates for the trajectories, especially for the bird's-eye view?

Regards
Abhishek


JunweiLiang avatar JunweiLiang commented on May 26, 2024

Converting coordinates from pixel to world (2D to 3D) needs a depth map. If by "world coordinates" you mean ground-plane coordinates, the problem becomes a 2D-to-2D coordinate transformation, and you only need the 3x3 homography matrix. We get these world ground-plane coordinates for ETH/UCY/ActEV using their provided homography matrices.
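
A minimal sketch of that 2D-to-2D transform, assuming H is an image-to-ground-plane homography like the ones shipped with ETH/UCY/ActEV (the matrix values below are made up):

```python
import numpy as np

def pixel_to_ground(points_uv, H):
    """Apply a 3x3 homography to (u, v) pixel points, returning ground-plane (x, y)."""
    pts = np.asarray(points_uv, dtype=np.float64)
    ones = np.ones((pts.shape[0], 1))
    homog = np.concatenate([pts, ones], axis=1)   # (N, 3) homogeneous pixel coords
    mapped = homog @ H.T                          # project each point through H
    return mapped[:, :2] / mapped[:, 2:3]         # divide by w to dehomogenize

# example: a short pixel trajectory and a hypothetical homography
H = np.array([[0.02, 0.0, -3.0],
              [0.0, 0.025, -8.0],
              [0.0, 0.0, 1.0]])
print(pixel_to_ground([[640, 360], [650, 370]], H))
```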


abhi-rf avatar abhi-rf commented on May 26, 2024

Hi,

Thanks for the reply. Also, if I want to plot certain trajectories from the unprepared data for some scene, how do I convert the bounding-box coordinates to x, y coordinates (without changing the frame of reference)?


JunweiLiang avatar JunweiLiang commented on May 26, 2024

x, y coordinates on the ground plane? Then you'll need the homography matrix between the camera view and the ground plane of the unseen scene.
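
For a bounding box, a common convention (an assumption here, not necessarily the repo's exact choice) is to take the bottom-center of the box as the person's foot point and map it through that homography:

```python
import numpy as np

def bbox_to_ground(bbox, H):
    """Map an [x1, y1, x2, y2] pixel bounding box to a ground-plane (x, y) point.

    Uses the bottom-center of the box as the foot point; assumes H maps
    pixel coordinates to ground-plane coordinates.
    """
    x1, y1, x2, y2 = bbox
    foot = np.array([(x1 + x2) / 2.0, y2, 1.0])   # bottom-center, homogeneous
    mapped = H @ foot
    return mapped[:2] / mapped[2]

# hypothetical box and homography
H = np.eye(3)
print(bbox_to_ground([300, 200, 360, 400], H))
```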


abhi-rf avatar abhi-rf commented on May 26, 2024

x, y coordinates on the ground plane? Then you'll need the homography matrix between the camera view and the ground plane of the unseen scene.

No, I mean I am trying one of the Forking Paths scenes of the form 0000_x_300_... (i.e., not ETH/ZARA, etc.). So how do I get the coordinates from the bounding box (found in the JSON)?


JunweiLiang avatar JunweiLiang commented on May 26, 2024

Camera parameters are in the code. You can get the world coordinates from the actev_candidates_m15.2_skip10 folder, and visualize them according to Step 4 of the tutorial.


dendorferpatrick avatar dendorferpatrick commented on May 26, 2024

I am also having problems getting the scaling of the trajectories. I am working on the top-down camera videos. Where can I find the scaling factor that transforms one pixel into the corresponding length in meters?


JunweiLiang avatar JunweiLiang commented on May 26, 2024

The units in CARLA (transform) are actually in meters (see here). So the L2 norm of the world coordinates is in meters. As I mentioned previously, get the world coordinates by using the camera parameters in the code, or get them from the actev_candidates_m15.2_skip10 folder.
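
So, for example, the per-step displacement in meters is just the L2 norm of the difference between consecutive world coordinates (a minimal sketch; the trajectory values are made up):

```python
import numpy as np

def step_lengths_m(traj_xy):
    """Per-step displacement in meters for a trajectory of CARLA world (x, y) points."""
    traj = np.asarray(traj_xy, dtype=np.float64)
    return np.linalg.norm(np.diff(traj, axis=0), axis=1)

# hypothetical trajectory; divide by the frame time step to get speed in m/s
traj = [[10.0, 5.0], [10.4, 5.3], [10.9, 5.7]]
print(step_lengths_m(traj))
```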

