Comments (21)
This function is useful to look at to visualize the object in camera view. https://github.com/nutonomy/nuscenes-devkit/blob/274725ae1b3a2d921725016e3f4b383b8b218d3a/python-sdk/nuscenes/nuscenes.py#L903
from centerpoint.
I can see .pkl and .json files in the folder. I zipped the .json and submitted it to the detection task.
Do I need to do the same thing for tracking?
Thank you for your response and good luck with your upcoming work :)
Yeah, zip the json and submit it to the tracking server.
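For reference, the tracking server expects a json in the tracking-task format. Below is a minimal sketch of one entry; the sample token and all numeric values are made up for illustration, so substitute the actual output of the tracker:

```python
import json

# Minimal shape of a tracking submission, following the nuScenes tracking
# task format. The token and numbers below are illustrative only.
submission = {
    "meta": {
        "use_camera": False, "use_lidar": True, "use_radar": False,
        "use_map": False, "use_external": False,
    },
    "results": {
        "sample_token_0": [
            {
                "sample_token": "sample_token_0",
                "translation": [100.0, 50.0, 1.0],  # global frame, meters
                "size": [1.9, 4.5, 1.7],            # width, length, height
                "rotation": [1.0, 0.0, 0.0, 0.0],   # quaternion w, x, y, z
                "velocity": [0.5, 0.0],
                "tracking_id": "42",                # stable per-object id
                "tracking_name": "car",
                "tracking_score": 0.9,
            }
        ]
    },
}

with open("tracking_result.json", "w") as f:
    json.dump(submission, f)
```

The file written here is named tracking_result.json to match what the tracking script in this thread produces.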
AssertionError: Error: You are trying to evaluate on the test set but you do not have the annotations!
This is not an error: the test set annotations are not available locally (you need to submit to the test server). There is no built-in function to visualize tracking results. To see the detection output, you can comment out the if statement here
and the devkit will plot some images. However, the devkit visualization is not that good (it doesn't really convey your detection quality without GT annotations), so you probably want to use other tools to visualize the detections/tracking in camera view or 3D.
Thanks!
@tianweiy The predicted bounding box coordinates are expressed in the lidar frame. Before transforming them to a particular camera, we first need to determine which camera's rotation and translation matrices to use. The devkit's render_annotation() function takes an annotation token as input, from which the image path is known and the bounding boxes are plotted.
But in our case, how can we determine which camera to transform the predicted box coordinates into before plotting them?
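One way to answer this (a hedged sketch, not the devkit's exact code): for each of the six cameras, compose the lidar-to-camera transform from the sample's calibrated_sensor and ego_pose records into a rotation R and translation t, project the box center with the camera intrinsics K, and keep the camera(s) where the point lands in front of the image plane and inside the image bounds. The devkit's box_in_image helper does essentially this check per box. With toy poses and intrinsics (all values below are made up):

```python
import numpy as np

def project_point(p_lidar, R, t, K, width, height):
    """Map a lidar-frame point into one camera; return (u, v) or None.

    R, t: lidar -> camera rotation (3x3) and translation (3,), which in
    practice you would compose from the calibrated_sensor / ego_pose
    records of the sample. K: 3x3 camera intrinsics.
    """
    p_cam = R @ p_lidar + t
    if p_cam[2] <= 0:                      # behind the image plane
        return None
    u, v, w = K @ p_cam                    # pinhole projection
    u, v = u / w, v / w
    if 0 <= u < width and 0 <= v < height:
        return (u, v)
    return None

# Toy setup: a front camera aligned with the lidar and a back camera
# rotated 180 degrees about the vertical axis (poses are made up).
K = np.array([[1000.0,    0.0, 800.0],
              [   0.0, 1000.0, 450.0],
              [   0.0,    0.0,   1.0]])
cams = {
    "CAM_FRONT": (np.eye(3), np.zeros(3)),
    "CAM_BACK": (np.diag([-1.0, 1.0, -1.0]), np.zeros(3)),
}
center = np.array([0.0, 0.0, 10.0])        # box center 10 m straight ahead
visible = {name: project_point(center, R, t, K, 1600, 900)
           for name, (R, t) in cams.items()}
```

A box ahead of the vehicle projects into CAM_FRONT and is rejected by CAM_BACK (negative depth); picking the camera(s) with a valid projection answers the "which camera" question.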
@YoushaaMurhij @iamsiddhantsahu @tianweiy I am trying to run inference but could not find a command to run it on nuScenes. I found the command mentioned above, which uses tools/dist_test.py, but in configs I cannot find "nusc_centerpoint_voxelnet_dcn_0075voxel_flip_testset.py". Could you please share some insight on this?
Or could you tell me which command to use to run inference on the nuScenes and Waymo datasets?
Thanks in advance.
Thanks for the interest. We have made some basic updates to the codebase recently. You can replace the original config with
The other commands to generate nuScenes results are the same.
For the Waymo models, we are not able to share them publicly due to the license agreement; you can send me an email to access those models. Please provide the necessary information mentioned here:
https://github.com/tianweiy/CenterPoint/tree/master/configs/waymo
I used tools/dist_test.py to get predictions.pkl for the validation set, to use for tracking:
python tools/dist_test.py /home/josh94mur/centerpoint/CenterPoint/configs/nusc/pp/nusc_centerpoint_pp_02voxel_two_pfn_10sweep.py --work_dir work_dirs/val --checkpoint working_dir/val/latest.pth --speed_test --testset --gpus 2
Can I use the same script to get predictions for the test set? I got a GT-related error: KeyError: 'gt_names'
I also want to check mAP. Can I use the resulting .pkl for that (submit to the server)?
Yeah, to fix the bug, change the config to the following:
train_anno = "data/nuScenes/infos_train_10sweeps_withvelo_filter_True.pkl"
val_anno = "data/nuScenes/infos_val_10sweeps_withvelo_filter_True.pkl"
test_anno = "data/nuScenes/infos_test_10sweeps_withvelo_filter_True.pkl"

data = dict(
    samples_per_gpu=4,
    workers_per_gpu=8,
    train=dict(
        type=dataset_type,
        root_path=data_root,
        info_path=train_anno,
        ann_file=train_anno,
        nsweeps=nsweeps,
        class_names=class_names,
        pipeline=train_pipeline,
    ),
    val=dict(
        type=dataset_type,
        root_path=data_root,
        info_path=val_anno,
        test_mode=True,
        ann_file=val_anno,
        nsweeps=nsweeps,
        class_names=class_names,
        pipeline=test_pipeline,
    ),
    test=dict(
        type=dataset_type,
        root_path=data_root,
        info_path=test_anno,
        test_mode=True,
        ann_file=test_anno,
        nsweeps=nsweeps,
        class_names=class_names,
        pipeline=test_pipeline,
        version='v1.0-test',
    ),
)
You need to zip the json files and then submit them to the server.
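For reference, a minimal sketch of the packaging step using Python's zipfile. The filename results_nusc.json is an assumption here; use whatever your run actually wrote to the work_dir, and check the evaluation server's submission instructions:

```python
import json
import os
import tempfile
import zipfile

# Hypothetical work_dir and result filename; substitute your own paths.
work_dir = tempfile.mkdtemp()
result_path = os.path.join(work_dir, "results_nusc.json")
with open(result_path, "w") as f:
    json.dump({"meta": {}, "results": {}}, f)

zip_path = os.path.join(work_dir, "submission.zip")
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
    # arcname keeps the json at the archive root rather than nested
    # under the full work_dir path inside the zip.
    zf.write(result_path, arcname="results_nusc.json")

with zipfile.ZipFile(zip_path) as zf:
    names = zf.namelist()
```

Using arcname matters: some evaluation servers reject archives where the json is buried under a directory hierarchy.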
Thank you for your fast and clear answer!
The resulting .json file gave me a 0.0 mAP on the server. I used infos_test_10sweeps_withvelo.pkl
I can't figure out what's wrong! Any suggestions?
Have you first tested the model on val (dist_test.py without the --testset flag)?
If val is OK, please send me an email with your generated json file so that I can take a look.
The val is OK. I will send you an e-mail!
I tried:
python tools/nusc_tracking/pub_test.py --work_dir working_dir/track --checkpoint working_dir/test/infos_test_10sweeps_withvelo.json --max_age 3 --root data/nuScenes/v1.0-test --version v1.0-test
to get the tracking results on the test set, but got this error:
'Error: Requested split {} which is not compatible with NuScenes version {}'.format(eval_split, version)
AssertionError: Error: Requested split val which is not compatible with NuScenes version v1.0-test
@tianweiy Do I need to modify the config for the testset tracking ?
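For context, this assertion fires inside the devkit's split/version compatibility check: the script is requesting the val split while NuScenes is loaded as v1.0-test (which has no local annotations, hence the server evaluation). A rough reconstruction of the check, inferred from the error message rather than copied from the devkit source:

```python
# Reconstruction of the devkit's split/version compatibility check,
# inferred from the error message (not the exact devkit code).
def check_split(eval_split, version):
    if eval_split in ("train", "val", "train_detect", "train_track"):
        ok = version.endswith("trainval")
    elif eval_split == "test":
        ok = version.endswith("test")
    elif eval_split in ("mini_train", "mini_val"):
        ok = version.endswith("mini")
    else:
        ok = False
    assert ok, (
        "Error: Requested split {} which is not compatible with "
        "NuScenes version {}".format(eval_split, version)
    )

check_split("test", "v1.0-test")   # compatible: no error raised
```

So if the tracking script exposes an evaluation-split setting, pointing it at 'test' (or skipping local evaluation entirely and submitting to the server) avoids the assertion.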
I think the file is already generated in the folder. You need to submit it to the server for evaluation.
Otherwise please attach the full log, especially which line of code this error comes from.
Sorry, but submitting the .json to the tracking server gives a Failed status. The same .json gave normal results on the detection server. What am I doing wrong?
Is the json file called "tracking_result.json"?
Wait, that means tracking is not running. Please attach all outputs after you run pub_test.py.
I think I found it. The files are there, despite the previous error. Thanks!
Related Issues (20)
- dbinfos_train_10sweeps_withvelo.pkl is empty
- Virtual Points Loading
- Error when training on nuScenes; I think the reason is my spconv > 2.0. How can I change the code to fix it?
- About CenterPoint-Pillar nuScenes test set mAP and NDS
- Why do Waymo and nuScenes have different `us_layer_strides` even though they use the same CenterPoint?
- No such file or directory: 'data/nuScenes/infos_test_10sweeps_withvelo_filter_True.pkl'
- Tracking bug with Hungarian
- RuntimeError: Error compiling objects for extension
- Unable to find nuScenes devkit
- requirements change
- Error when starting training: TypeError: string indices must be integers
- Unable to build iou3d_NMS
- Something goes wrong when I run create_data.py with the nuScenes dataset
- What's the speed of the tracker on WOD?
- time
- visual pkl
- assert len(detections) == 6008
- Are there CenterPoint detection results obtained on the mini dataset, similar to this infos_val_10sweeps_withvelo_filter_True.json file?
- Request for Combined Train and Validation Split File
- Personal dataset