Comments (11)
Please follow the tutorial here about getting the world coordinates in CARLA. If you have specific questions about the tutorial you can list them here. `compute_actev_world_norm.py` is not provided. As the comments say, `actev_norm` just includes the min and max of the coordinates in each scene.
I have one more thing to ask. I'm trying to make a new dataset like the Forking Paths dataset.
How do you get the calibration parameters?
And could you share the formula for converting trajectories into CARLA world coordinates?
Hi,
The calibration parameters (just the homography matrices) are provided in ETH/UCY/ActEV. For new videos, you can calibrate them manually with this code or this code, or automatically with this paper. It is a classic computer vision problem.
You can follow the tutorial "Recreate Scenarios from Real World Videos" to create a new dataset. Converting trajectories from the 2D view to the 3D world requires knowing the basic pinhole camera model. Go through the tutorial and the related code and you'll understand.
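For the manual calibration route, here is a minimal sketch using OpenCV. The point correspondences below are made-up placeholders, not values from this repo; in practice you would click at least four points in the camera view and measure their matching ground-plane locations yourself.

```python
import numpy as np
import cv2

# >= 4 point correspondences: pixel (u, v) -> ground plane (x, y) in meters.
# These values are illustrative placeholders, not real calibration data.
pixel_pts = np.array([[100, 500], [800, 520], [760, 200], [150, 180]],
                     dtype=np.float32)
world_pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 15.0], [0.0, 15.0]],
                     dtype=np.float32)

H, _ = cv2.findHomography(pixel_pts, world_pts)  # 3x3 homography

# Map a pixel-space trajectory onto the ground plane.
traj_px = np.array([[[320, 400]], [[330, 395]], [[345, 388]]], dtype=np.float32)
traj_world = cv2.perspectiveTransform(traj_px, H)  # shape (T, 1, 2)
print(traj_world.reshape(-1, 2))
```

With more than four (noisy) correspondences you can pass `method=cv2.RANSAC` to `cv2.findHomography` for a robust fit.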
Hi,
@JunweiLiang
First of all, thanks for the lovely repo.
Could you also tell us how we can get the world coordinates from the pixel coordinates for the trajectories, especially for the bird's-eye view?
Regards
Abhishek
Converting coordinates from pixel to world (2D to 3D) needs a depth map. If by "world coordinates" you mean ground-plane coordinates, the problem becomes a 2D-to-2D coordinate transformation, and then you only need the 3x3 homography matrix. We get these world ground-plane coordinates for ETH/UCY/ActEV using their provided homography matrices.
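For reference, applying the homography is just a matrix multiply in homogeneous coordinates followed by a perspective divide. A minimal sketch (the values in H are placeholders, not an actual ETH/UCY/ActEV matrix):

```python
import numpy as np

def pixel_to_ground(points_px, H):
    """Map (N, 2) pixel coordinates to ground-plane coordinates."""
    pts = np.concatenate([points_px, np.ones((len(points_px), 1))], axis=1)
    mapped = pts @ H.T                     # [x', y', w] = H @ [u, v, 1]
    return mapped[:, :2] / mapped[:, 2:3]  # perspective divide: (x'/w, y'/w)

H = np.array([[0.02, 0.0,    -3.0],       # placeholder homography
              [0.0,  0.025,  -4.0],
              [0.0,  0.0002,  1.0]])
print(pixel_to_ground(np.array([[320.0, 400.0]]), H))
```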
Hi,
Thanks for the reply. Also, if I want to plot certain trajectories from the unprepared data for some scene, how do I convert the bounding-box coordinates to x, y coordinates (without changing the frame of reference)?
x, y coordinates on the ground plane? Then you'll need the homography matrix between the camera view and the ground plane of the unseen scene.
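To reduce an annotated box to a single point before applying the homography, a common convention is the bottom-center "foot point" of the box. A small sketch, assuming boxes are stored as [x1, y1, x2, y2] (check the JSON for the actual format):

```python
def box_to_point(box, use_foot_point=True):
    """Reduce one bounding box to one (x, y) trajectory point.

    Assumes box = [x1, y1, x2, y2]; verify against the JSON format.
    """
    x1, y1, x2, y2 = box
    x = (x1 + x2) / 2.0
    # Bottom-center ("foot point") is the usual choice for mapping a person
    # onto the ground plane; the box center is another common convention.
    y = y2 if use_foot_point else (y1 + y2) / 2.0
    return x, y

print(box_to_point([300.0, 150.0, 340.0, 260.0]))  # -> (320.0, 260.0)
```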
No, I mean I am trying one of the Forking Paths scenes of the form 0000_x_300_... (i.e., not eth/zara etc.). So how do I get the coordinates from the bounding box (found in the JSON)?
Camera parameters are in the code. You can get the world coordinates from the actev_candidates_m15.2_skip10 folder, and visualize them according to Step 4 of the tutorial.
I am also having problems with the scaling of the trajectories. I am working with the top-down camera videos. Where can I find the scaling factor that converts one pixel into the corresponding length in meters?
The units in CARLA (transform) are actually in meters (see here). So the L2 norm of the world coordinates is in meters. As I mentioned previously, get the world coordinates by using the camera parameters in the code, or from the actev_candidates_m15.2_skip10 folder.
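If you still want an explicit pixels-to-meters factor, you can estimate it locally by mapping two nearby pixels through the homography. A sketch, assuming H maps pixels to world coordinates in meters:

```python
import numpy as np

def apply_h(p, H):
    """Apply a 3x3 homography to one (u, v) pixel."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])

H = np.eye(3)  # placeholder; load the real camera/homography parameters
p0, p1 = (400.0, 300.0), (401.0, 300.0)  # two pixels one pixel apart
meters_per_pixel = np.linalg.norm(apply_h(p1, H) - apply_h(p0, H))
print(meters_per_pixel)
```

Note that under a perspective homography this factor varies across the image; for a truly top-down camera it is roughly constant.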