sense-gvt / fast-bev

Fast-BEV: A Fast and Strong Bird’s-Eye View Perception Baseline

License: Other

Python 69.03% Shell 0.26% C++ 27.14% Cuda 3.28% CMake 0.03% Jupyter Notebook 0.26%
3d bird-eye-view detection multi-camera autonomous autonomous-driving 2d-to-3d

fast-bev's Introduction

Fast-BEV

Fast-BEV: A Fast and Strong Bird's-Eye View Perception Baseline

Better Inference Implementation

Thanks to the CUDA-FastBEV repository, which provides CUDA & TensorRT inference as well as PTQ and QAT int8 quantization code. Refer to it for faster inference.

Usage

usage

Installation

  • CUDA>=9.2
  • GCC>=5.4
  • Python>=3.6
  • PyTorch>=1.8.1
  • TorchVision>=0.9.1
  • MMCV-full==1.4.0
  • MMDetection==2.14.0
  • MMSegmentation==0.14.1

Dataset preparation

  .
  ├── data
  │   └── nuscenes
  │       ├── maps
  │       ├── maps_bev_seg_gt_2class
  │       ├── nuscenes_infos_test_4d_interval3_max60.pkl
  │       ├── nuscenes_infos_train_4d_interval3_max60.pkl
  │       ├── nuscenes_infos_val_4d_interval3_max60.pkl
  │       ├── v1.0-test
  │       └── v1.0-trainval

download

Pretraining

  .
  ├── pretrained_models
  │   ├── cascade_mask_rcnn_r18_fpn_coco-mstrain_3x_20e_nuim_bbox_mAP_0.5110_segm_mAP_0.4070.pth
  │   ├── cascade_mask_rcnn_r34_fpn_coco-mstrain_3x_20e_nuim_bbox_mAP_0.5190_segm_mAP_0.4140.pth
  │   └── cascade_mask_rcnn_r50_fpn_coco-mstrain_3x_20e_nuim_bbox_mAP_0.5400_segm_mAP_0.4300.pth

download

Training

  .
  ├── work_dirs
    └── fastbev
      └── exp
          └── paper
              ├── fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4
              │   ├── epoch_20.pth
              │   ├── latest.pth -> epoch_20.pth
              │   ├── log.eval.fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.02062323.txt
              │   └── log.test.fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.02062309.txt
              ├── fastbev_m1_r18_s320x880_v200x200x4_c192_d2_f4
              │   ├── epoch_20.pth
              │   ├── latest.pth -> epoch_20.pth
              │   ├── log.eval.fastbev_m1_r18_s320x880_v200x200x4_c192_d2_f4.02080000.txt
              │   └── log.test.fastbev_m1_r18_s320x880_v200x200x4_c192_d2_f4.02072346.txt
              ├── fastbev_m2_r34_s256x704_v200x200x4_c224_d4_f4
              │   ├── epoch_20.pth
              │   ├── latest.pth -> epoch_20.pth
              │   ├── log.eval.fastbev_m2_r34_s256x704_v200x200x4_c224_d4_f4.02080021.txt
              │   └── log.test.fastbev_m2_r34_s256x704_v200x200x4_c224_d4_f4.02080005.txt
              ├── fastbev_m4_r50_s320x880_v250x250x6_c256_d6_f4
              │   ├── epoch_20.pth
              │   ├── latest.pth -> epoch_20.pth
              │   ├── log.eval.fastbev_m4_r50_s320x880_v250x250x6_c256_d6_f4.02080021.txt
              │   └── log.test.fastbev_m4_r50_s320x880_v250x250x6_c256_d6_f4.02080005.txt
              └── fastbev_m5_r50_s512x1408_v250x250x6_c256_d6_f4
                  ├── epoch_20.pth
                  ├── latest.pth -> epoch_20.pth
                  ├── log.eval.fastbev_m5_r50_s512x1408_v250x250x6_c256_d6_f4.02080021.txt
                  └── log.test.fastbev_m5_r50_s512x1408_v250x250x6_c256_d6_f4.02080001.txt

download

Deployment

TODO

View Transformation Latency on device

2D-to-3D on CUDA & CPU

Citation

@article{li2023fast,
  title={Fast-BEV: A Fast and Strong Bird's-Eye View Perception Baseline},
  author={Li, Yangguang and Huang, Bin and Chen, Zeren and Cui, Yufeng and Liang, Feng and Shen, Mingzhu and Liu, Fenggang and Xie, Enze and Sheng, Lu and Ouyang, Wanli and others},
  journal={arXiv preprint arXiv:2301.12511},
  year={2023}
}

fast-bev's People

Contributors

slothercui, yg256li, ymlab, zx55


fast-bev's Issues

KeyError: 'SwinTransformer is already registered in models'

Traceback (most recent call last):
  File "tools/analysis_tools/get_flops.py", line 6, in <module>
    from mmdet3d.models import build_model
  File "..../Fast-BEV/mmdet3d/models/__init__.py", line 2, in <module>
    from .backbones import *  # noqa: F401,F403
  File "..../Fast-BEV/mmdet3d/models/backbones/__init__.py", line 9, in <module>
    from .swin_transformer import SwinTransformer
  File "..../Fast-BEV/mmdet3d/models/backbones/swin_transformer.py", line 430, in <module>
    class SwinTransformer(nn.Module):
  File "..../miniconda3/envs/mitbev1/lib/python3.8/site-packages/mmcv/utils/registry.py", line 311, in _register
    self._register_module(
  File "..../miniconda3/envs/mitbev1/lib/python3.8/site-packages/mmcv/utils/registry.py", line 246, in _register_module
    raise KeyError(f'{name} is already registered '
KeyError: 'SwinTransformer is already registered in models'
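
A common workaround for this kind of duplicate registration (a sketch, not necessarily the fix intended by the authors) is to let the local backbone overwrite the entry already registered by mmdet, using mmcv's force flag. This assumes the class in mmdet3d/models/backbones/swin_transformer.py is decorated with the mmdet BACKBONES registry, as the traceback suggests:

    # Workaround sketch: force=True tells mmcv to overwrite the existing
    # 'SwinTransformer' entry instead of raising "already registered".
    import torch.nn as nn
    from mmdet.models.builder import BACKBONES

    @BACKBONES.register_module(force=True)
    class SwinTransformer(nn.Module):
        ...  # original implementation unchanged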

model problem

Did you use the camera intrinsic and extrinsic parameters?

When Fast-BEV generates points, what does the img_meta["lidar2img"]["origin"] parameter represent?

points = get_points(  # [3, vx, vy, vz]
    n_voxels=torch.tensor(n_voxels),
    voxel_size=torch.tensor(voxel_size),
    origin=torch.tensor(img_meta["lidar2img"]["origin"]),
).to(mlvl_feats.device)

def get_points(n_voxels, voxel_size, origin):
    points = torch.stack(
        torch.meshgrid(
            [
                torch.arange(n_voxels[0]),
                torch.arange(n_voxels[1]),
                torch.arange(n_voxels[2]),
            ]
        )
    )
    new_origin = origin - n_voxels / 2.0 * voxel_size
    points = points * voxel_size.view(3, 1, 1, 1) + new_origin.view(3, 1, 1, 1)
    return points
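
Judging from the code, origin appears to be the centre of the voxel volume in the lidar/ego frame: half of the volume extent (n_voxels / 2 * voxel_size) is subtracted from it to obtain the corner from which the grid is laid out. A small stand-alone illustration follows; the voxel counts, voxel size and origin below are example values, not taken from any particular Fast-BEV config:

    import torch

    # Example values only (hypothetical, not from a Fast-BEV config).
    n_voxels = torch.tensor([200, 200, 4])        # grid resolution in x, y, z
    voxel_size = torch.tensor([0.5, 0.5, 1.5])    # metres per voxel in x, y, z
    origin = torch.tensor([0.0, 0.0, 0.5])        # centre of the voxel volume in lidar/ego coords

    points = get_points(n_voxels, voxel_size, origin)
    print(points.shape)  # torch.Size([3, 200, 200, 4]): xyz coordinate of every voxel in the volume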

Local eval of the author's M2 model gives NDS=0.347, which does not match the 0.4545 in the log

I downloaded the author's trained model and ran test and eval locally with PyTorch, following the commands in the corresponding log files under the author's work_dirs. But the results I get from torch inference are much lower than those in the author's log. The differences: the author used slurm while I used the pytorch launcher, and I ran inference on two A100s. Could this be the cause, or is some other factor responsible?
[screenshot of the author's log results]

My evaluation results for the author's M2 model:
[screenshot of my eval results]

The exact test commands are below; please advise.

# test
python3 -m torch.distributed.launch --nnodes=1 --node_rank=0 --nproc_per_node 2 --master_addr 127.0.0.1 tools/test.py \
        configs/fastbev/exp/paper/fastbev_m2_r34_s256x704_v200x200x4_c224_d4_f4.py work_dirs/fastbev/exp/paper/fastbev_m2_r34_s256x704_v200x200x4_c224_d4_f4/epoch_20.pth \
        --launcher=pytorch --out work_dirs/fastbev/exp/paper/fastbev_m2_r34_s256x704_v200x200x4_c224_d4_f4/results/results.pkl \
        --format-only \
        --eval-options jsonfile_prefix=work_dirs/fastbev/exp/paper/fastbev_m2_r34_s256x704_v200x200x4_c224_d4_f4/results

# eval
python3 -m torch.distributed.launch --nnodes=1 --node_rank=0 --nproc_per_node 2 --master_addr 127.0.0.1 tools/eval.py \
        configs/fastbev/exp/paper/fastbev_m2_r34_s256x704_v200x200x4_c224_d4_f4.py \
        --launcher=pytorch --out work_dirs/fastbev/exp/paper/fastbev_m2_r34_s256x704_v200x200x4_c224_d4_f4/results/results.pkl \
        --eval bbox

Errors occur during training

Training never completes a single epoch; the error appears at a different batch each time.
Question 1: The seed defaults to 0. Is it simply not effective for data loading? If it took effect, the error should occur at the same point every run.
Question 2: Both the pkl files provided by the author and the ones I generated myself trigger the error within the first training epoch. With the mini dataset, all 20 epochs complete.

Environment:
sys.platform: linux
Python: 3.8.8 (default, Feb 24 2021, 21:46:12) [GCC 7.3.0]
CUDA available: True
GPU 0,1,2,3: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr/local/cuda
NVCC: Build cuda_11.1.TC455_06.29190527_0
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.8.1
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 11.1
  • CuDNN 8.0.5
  • Magma 2.5.2

TorchVision: 0.9.1
OpenCV: 4.7.0
MMCV: 1.4.0
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.1
MMDetection: 2.14.0
MMSegmentation: 0.14.1
MMDetection3D: 0.16.0+69d67ff

The error message is as follows:
2023-03-29 05:03:13,493 - mmdet - INFO - Epoch [1][1860/4004] lr: 3.907e-04, eta: 3 days, 9:03:25, time: 3.341, data_time: 0.491, memory: 18153, positive_bag_loss: 1.4544, negative_bag_loss: 0.1518, loss: 1.6061, grad_norm: 1.5741
Traceback (most recent call last):
  File "tools/train.py", line 279, in <module>
    main()
  File "tools/train.py", line 268, in main
    train_model(
  File "/workspace/Fast-BEV/mmdet3d/apis/train.py", line 184, in train_model
    train_detector(
  File "/workspace/Fast-BEV/mmdet3d/apis/train.py", line 159, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 29, in run_iter
    outputs = self.model.train_step(data_batch, self.optimizer,
  File "/opt/conda/lib/python3.8/site-packages/mmcv/parallel/distributed.py", line 52, in train_step
    output = self.module.train_step(*inputs[0], **kwargs[0])
  File "/opt/conda/lib/python3.8/site-packages/mmdet/models/detectors/base.py", line 237, in train_step
    losses = self(**data)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
    return old_func(*args, **kwargs)
  File "/workspace/Fast-BEV/mmdet3d/models/detectors/fastbev.py", line 294, in forward
    return self.forward_train(img, img_metas, **kwargs)
  File "/workspace/Fast-BEV/mmdet3d/models/detectors/fastbev.py", line 301, in forward_train
    feature_bev, valids, features_2d = self.extract_feat(img, img_metas, "train")
  File "/workspace/Fast-BEV/mmdet3d/models/detectors/fastbev.py", line 123, in extract_feat
    x = self.backbone(
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/mmdet/models/backbones/resnet.py", line 642, in forward
    x = res_layer(x)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/container.py", line 119, in forward
    input = module(input)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/mmdet/models/backbones/resnet.py", line 89, in forward
    out = _inner_forward(x)
  File "/opt/conda/lib/python3.8/site-packages/mmdet/models/backbones/resnet.py", line 72, in _inner_forward
    out = self.conv1(x)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 6011) is killed by signal: Killed.
Killing subprocess 740
Killing subprocess 741
Killing subprocess 742
Killing subprocess 743
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
    main()
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
    sigkill_handler(signal.SIGTERM, None)  # not coming back
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
    raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'tools/train.py', '--local_rank=3', './configs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py', '--work-dir=./work_dirs/my/exp/', '--launcher=pytorch', '--gpus', '4']' returned non-zero exit status 1.

Why do I get mAP==0.0 and NDS==0.0 when running tools/eval.py?

The results I got were mAP=0.000 and NDS=0.000.
Here are the steps I followed:

1. Dataset: nuScenes v1.0-mini

Modify the code:
tools/create_data.py Line 188-212

parser.add_argument('dataset', metavar='nuscenes', help='name of the dataset')
parser.add_argument(
    '--root-path',
    type=str,
    default='./data/nuscenes',
    help='specify the root path of dataset')
parser.add_argument(
    '--version',
    type=str,
    default='v1.0-mini',
    required=False,
    help='specify the dataset version, no need for kitti')
parser.add_argument(
    '--max-sweeps',
    type=int,
    default=10,
    required=False,
    help='specify sweeps of lidar per example')
parser.add_argument(
    '--out-dir',
    type=str,
    default='./data/nuscenes',
    required=False,
    help='name of info pkl')
parser.add_argument('--extra-tag', type=str, default='nuscenes')

tools/data_converter/nuscenes_converter.py Line 45

 available_vers = ['v1.0-trainval', 'v1.0-test', 'v1.0-mini']

 elif version == 'v1.0-mini':
        train_scenes = splits.mini_train
        val_scenes = splits.mini_val

tools/data_converter/nuscenes_seq_converter.py Line 14-16

    for set in ['val', 'train', ]:
        # if set in ['val', 'train']:
        #     continue

Then I ran:
python tools/create_data.py nuscenes
python tools/data_converter/nuscenes_seq_converter.py
PS: I copied the map/ folder from the nuScenes trainval dataset into the v1.0-mini dataset directory.
Got these:
[screenshot of the generated files]

2. Run tools/eval.py

python tools/eval.py configs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py --out temp_out/temp.pkl --eval bbox
The output:

======
Loading NuScenes tables for version v1.0-mini...
23 category,
8 attribute,
4 visibility,
911 instance,
12 sensor,
120 calibrated_sensor,
31206 ego_pose,
8 log,
10 scene,
404 sample,
31206 sample_data,
18538 sample_annotation,
4 map,
Done loading in 0.376 seconds.
======
Reverse indexing ...
Done reverse indexing in 0.1 seconds.
======
lane thickness: 2
lane thickness: 2
lane thickness: 2
lane thickness: 2
lane thickness: 2

loading results from temp_out/temp.pkl
Start to convert detection format...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 81/81, 18.6 task/s, elapsed: 4s, ETA:     0s
Results writes to /tmp/tmpow_oc5yk/results/results_nusc.json
mAP: 0.0000                                                                                                                                                                                         
mATE: 1.0000
mASE: 1.0000
mAOE: 1.0000
mAVE: 1.0000
mAAE: 1.0000
NDS: 0.0000
Eval time: 1.0s

Per-class results:
Object Class    AP      ATE     ASE     AOE     AVE     AAE
car     0.000   1.000   1.000   1.000   1.000   1.000
truck   0.000   1.000   1.000   1.000   1.000   1.000
bus     0.000   1.000   1.000   1.000   1.000   1.000
trailer 0.000   1.000   1.000   1.000   1.000   1.000
construction_vehicle    0.000   1.000   1.000   1.000   1.000   1.000
pedestrian      0.000   1.000   1.000   1.000   1.000   1.000
motorcycle      0.000   1.000   1.000   1.000   1.000   1.000
bicycle 0.000   1.000   1.000   1.000   1.000   1.000
traffic_cone    0.000   1.000   1.000   nan     nan     nan
barrier 0.000   1.000   1.000   1.000   nan     nan
{'pts_bbox_NuScenes/car_AP_dist_0.5': 0.0, 'pts_bbox_NuScenes/car_AP_dist_1.0': 0.0, 'pts_bbox_NuScenes/car_AP_dist_2.0': 0.0, 'pts_bbox_NuScenes/car_AP_dist_4.0': 0.0, 'pts_bbox_NuScenes/car_trans_err': 1.0, 'pts_bbox_NuScenes/car_scale_err': 1.0, 'pts_bbox_NuScenes/car_orient_err': 1.0, 'pts_bbox_NuScenes/car_vel_err': 1.0, 'pts_bbox_NuScenes/car_attr_err': 1.0, 'pts_bbox_NuScenes/mATE': 1.0, 'pts_bbox_NuScenes/mASE': 1.0, 'pts_bbox_NuScenes/mAOE': 1.0, 'pts_bbox_NuScenes/mAVE': 1.0, 'pts_bbox_NuScenes/mAAE': 1.0, 'pts_bbox_NuScenes/truck_AP_dist_0.5': 0.0, 'pts_bbox_NuScenes/truck_AP_dist_1.0': 0.0, 'pts_bbox_NuScenes/truck_AP_dist_2.0': 0.0, 'pts_bbox_NuScenes/truck_AP_dist_4.0': 0.0, 'pts_bbox_NuScenes/truck_trans_err': 1.0, 'pts_bbox_NuScenes/truck_scale_err': 1.0, 'pts_bbox_NuScenes/truck_orient_err': 1.0, 'pts_bbox_NuScenes/truck_vel_err': 1.0, 'pts_bbox_NuScenes/truck_attr_err': 1.0, 'pts_bbox_NuScenes/trailer_AP_dist_0.5': 0.0, 'pts_bbox_NuScenes/trailer_AP_dist_1.0': 0.0, 'pts_bbox_NuScenes/trailer_AP_dist_2.0': 0.0, 'pts_bbox_NuScenes/trailer_AP_dist_4.0': 0.0, 'pts_bbox_NuScenes/trailer_trans_err': 1.0, 'pts_bbox_NuScenes/trailer_scale_err': 1.0, 'pts_bbox_NuScenes/trailer_orient_err': 1.0, 'pts_bbox_NuScenes/trailer_vel_err': 1.0, 'pts_bbox_NuScenes/trailer_attr_err': 1.0, 'pts_bbox_NuScenes/bus_AP_dist_0.5': 0.0, 'pts_bbox_NuScenes/bus_AP_dist_1.0': 0.0, 'pts_bbox_NuScenes/bus_AP_dist_2.0': 0.0, 'pts_bbox_NuScenes/bus_AP_dist_4.0': 0.0, 'pts_bbox_NuScenes/bus_trans_err': 1.0, 'pts_bbox_NuScenes/bus_scale_err': 1.0, 'pts_bbox_NuScenes/bus_orient_err': 1.0, 'pts_bbox_NuScenes/bus_vel_err': 1.0, 'pts_bbox_NuScenes/bus_attr_err': 1.0, 'pts_bbox_NuScenes/construction_vehicle_AP_dist_0.5': 0.0, 'pts_bbox_NuScenes/construction_vehicle_AP_dist_1.0': 0.0, 'pts_bbox_NuScenes/construction_vehicle_AP_dist_2.0': 0.0, 'pts_bbox_NuScenes/construction_vehicle_AP_dist_4.0': 0.0, 'pts_bbox_NuScenes/construction_vehicle_trans_err': 1.0, 'pts_bbox_NuScenes/construction_vehicle_scale_err': 1.0, 'pts_bbox_NuScenes/construction_vehicle_orient_err': 1.0, 'pts_bbox_NuScenes/construction_vehicle_vel_err': 1.0, 'pts_bbox_NuScenes/construction_vehicle_attr_err': 1.0, 'pts_bbox_NuScenes/bicycle_AP_dist_0.5': 0.0, 'pts_bbox_NuScenes/bicycle_AP_dist_1.0': 0.0, 'pts_bbox_NuScenes/bicycle_AP_dist_2.0': 0.0, 'pts_bbox_NuScenes/bicycle_AP_dist_4.0': 0.0, 'pts_bbox_NuScenes/bicycle_trans_err': 1.0, 'pts_bbox_NuScenes/bicycle_scale_err': 1.0, 'pts_bbox_NuScenes/bicycle_orient_err': 1.0, 'pts_bbox_NuScenes/bicycle_vel_err': 1.0, 'pts_bbox_NuScenes/bicycle_attr_err': 1.0, 'pts_bbox_NuScenes/motorcycle_AP_dist_0.5': 0.0, 'pts_bbox_NuScenes/motorcycle_AP_dist_1.0': 0.0, 'pts_bbox_NuScenes/motorcycle_AP_dist_2.0': 0.0, 'pts_bbox_NuScenes/motorcycle_AP_dist_4.0': 0.0, 'pts_bbox_NuScenes/motorcycle_trans_err': 1.0, 'pts_bbox_NuScenes/motorcycle_scale_err': 1.0, 'pts_bbox_NuScenes/motorcycle_orient_err': 1.0, 'pts_bbox_NuScenes/motorcycle_vel_err': 1.0, 'pts_bbox_NuScenes/motorcycle_attr_err': 1.0, 'pts_bbox_NuScenes/pedestrian_AP_dist_0.5': 0.0, 'pts_bbox_NuScenes/pedestrian_AP_dist_1.0': 0.0, 'pts_bbox_NuScenes/pedestrian_AP_dist_2.0': 0.0, 'pts_bbox_NuScenes/pedestrian_AP_dist_4.0': 0.0, 'pts_bbox_NuScenes/pedestrian_trans_err': 1.0, 'pts_bbox_NuScenes/pedestrian_scale_err': 1.0, 'pts_bbox_NuScenes/pedestrian_orient_err': 1.0, 'pts_bbox_NuScenes/pedestrian_vel_err': 1.0, 'pts_bbox_NuScenes/pedestrian_attr_err': 1.0, 'pts_bbox_NuScenes/traffic_cone_AP_dist_0.5': 0.0, 'pts_bbox_NuScenes/traffic_cone_AP_dist_1.0': 0.0, 
'pts_bbox_NuScenes/traffic_cone_AP_dist_2.0': 0.0, 'pts_bbox_NuScenes/traffic_cone_AP_dist_4.0': 0.0, 'pts_bbox_NuScenes/traffic_cone_trans_err': 1.0, 'pts_bbox_NuScenes/traffic_cone_scale_err': 1.0, 'pts_bbox_NuScenes/traffic_cone_orient_err': nan, 'pts_bbox_NuScenes/traffic_cone_vel_err': nan, 'pts_bbox_NuScenes/traffic_cone_attr_err': nan, 'pts_bbox_NuScenes/barrier_AP_dist_0.5': 0.0, 'pts_bbox_NuScenes/barrier_AP_dist_1.0': 0.0, 'pts_bbox_NuScenes/barrier_AP_dist_2.0': 0.0, 'pts_bbox_NuScenes/barrier_AP_dist_4.0': 0.0, 'pts_bbox_NuScenes/barrier_trans_err': 1.0, 'pts_bbox_NuScenes/barrier_scale_err': 1.0, 'pts_bbox_NuScenes/barrier_orient_err': 1.0, 'pts_bbox_NuScenes/barrier_vel_err': nan, 'pts_bbox_NuScenes/barrier_attr_err': nan, 'pts_bbox_NuScenes/NDS': 0.0, 'pts_bbox_NuScenes/mAP': 0.0}

Why did it turn out like this?

PS: I also tried running:
python tools/test.py configs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py pretrained_models/cascade_mask_rcnn_r18_fpn_coco-mstrain_3x_20e_nuim_bbox_mAP_0.5110_segm_mAP_0.4070.pth --eval bbox

and got the same result.

CBGSDataset

Hi, thanks for the great work !
May I ask how much performance improvement can be achieved by enabling CBGS enhancement?

adamw's params error bug

In mmdet3d/models/opt/adamw.py, lines 118-129:

    F.adamw(params_with_grad, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps, amsgrad=amsgrad, beta1, beta2, group['lr'], group['weight_decay'], group['eps'])

Fix the bug by passing the trailing arguments as keywords:

    F.adamw(params_with_grad, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps, amsgrad=amsgrad, beta1=beta1, beta2=beta2, lr=group['lr'], weight_decay=group['weight_decay'], eps=group['eps'])
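
For context, the error comes from ordinary Python keyword-only parameters; the stand-alone snippet below is an illustration (the function name adamw_like is hypothetical, not the repository's code). Arguments declared after a bare * must be passed by name, which is also why a later issue reports "adamw() takes 6 positional arguments but 12 were given".

    # Stand-alone illustration of keyword-only parameters (hypothetical function).
    def adamw_like(params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps, *,
                   amsgrad, beta1, beta2, lr, weight_decay, eps):
        pass

    # OK: keyword-only arguments passed by name.
    adamw_like([], [], [], [], [], [],
               amsgrad=False, beta1=0.9, beta2=0.999, lr=1e-3, weight_decay=0.01, eps=1e-8)

    # TypeError: adamw_like() takes 6 positional arguments but 12 were given
    # adamw_like([], [], [], [], [], [], False, 0.9, 0.999, 1e-3, 0.01, 1e-8)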

train_error

python tools/train.py config

File "/home/snk/anaconda3/envs/fastbev/lib/python3.8/site-packages/nuscenes/nuscenes.py", line 225, in getind
return self._token2ind[table_name][token]
KeyError: '442c7729b9d0455ca75978f1a7fdab3a'

On the innovation part of the code implementation

May I ask the ": pre-compute projection index" of your paper
and dense voxel feature generation. "And dense voxel feature generation." Where do the two parts respectively refer to the present code?

CUDA error: device-side assert triggered

Thank you for this great work!
I followed the instructions and used the full nuScenes v1.0 dataset. But when I run the training code, as I have tried multiple times, it always fails with this error at around epoch 1 [14000/20000]. I was using the provided '.pkl' files to train, so I wonder if anyone else has met this problem. I read online that the cause is an inconsistency between the labels and the output, but this error appears during training, not right at the beginning, which is very weird to me.

I attached the report:

/opt/conda/conda-bld/pytorch_1616554790289/work/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [0,0,0], thread: [9,0,0] Assertion `input_val >= zero && input_val <= one` failed.
(the same assertion repeats for threads [10,0,0] through [17,0,0])
Traceback (most recent call last):
  File "tools/train.py", line 279, in <module>
    main()
  File "tools/train.py", line 275, in main
    meta=meta)
  File "/home_nfs/xxx/hang/mmdetection3d/mmdet3d/apis/train.py", line 191, in train_model
    meta=meta)
  File "/home_nfs/xxx/hang/mmdetection3d/mmdet3d/apis/train.py", line 159, in train_detector
    runner.run(data_loaders, cfg.workflow)
  ...
  File "/home_nfs/xxx/anaconda3/envs/bev-py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home_nfs/xxx/anaconda3/envs/bev-py36/lib/python3.6/site-packages/mmcv/runner/fp16_utils.py", line 128, in new_func
    output = old_func(*new_args, **new_kwargs)
  File "/home_nfs/xxx/hang/mmdetection3d/mmdet3d/models/detectors/fastbev.py", line 294, in forward
    return self.forward_train(img, img_metas, **kwargs)
  File "/home_nfs/xxx/hang/mmdetection3d/mmdet3d/models/detectors/fastbev.py", line 312, in forward_train
    loss_det = self.bbox_head.loss(*x, gt_bboxes_3d, gt_labels_3d, img_metas)
  File "/home_nfs/xxx/anaconda3/envs/bev-py36/lib/python3.6/site-packages/mmcv/runner/fp16_utils.py", line 214, in new_func
    output = old_func(*new_args, **new_kwargs)
  File "/home_nfs/xxx/hang/mmdetection3d/mmdet3d/models/dense_heads/free_anchor3d_head.py", line 234, in loss
    positive_losses.append(self.positive_bag_loss(matched_cls_prob, matched_box_prob))
  File "/home_nfs/xxx/hang/mmdetection3d/mmdet3d/models/dense_heads/free_anchor3d_head.py", line 272, in positive_bag_loss
    bag_prob, torch.ones_like(bag_prob), reduction='none')
  File "/home_nfs/xxx/anaconda3/envs/bev-py36/lib/python3.6/site-packages/torch/nn/functional.py", line 2762, in binary_cross_entropy
    return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum)
RuntimeError: CUDA error: device-side assert triggered
Aborted (core dumped)

training error

When training reaches epoch 8, an error occurs:
/opt/conda/conda-bld/pytorch_1634272168290/work/aten/src/ATen/native/cuda/Loss.cu:115: operator(): block: [1488,0,0], thread: [32,0,0] Assertion input_val >= zero && input_val <= one failed.

mAP and NDS Gap

For the config "fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py", I get an eval result of mAP: 0.2730, NDS: 0.3954 at epoch 20, but in your paper the result of experiment M0 is mAP: 0.284, NDS: 0.427. What could cause the gap?

M0 training result

This accuracy does not match the paper; it seems much higher. Is this normal?

fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py
mAP: 0.3706
mATE: 0.7414
mASE: 0.2416
mAOE: 0.6121
mAVE: 0.6098
mAAE: 0.2628
NDS: 0.4386
Eval time: 177.4s

Per-class results:
Object Class AP ATE ASE AOE AVE AAE
car 0.566 0.476 0.146 0.254 0.702 0.276
truck 0.384 0.659 0.189 0.289 0.584 0.275
bus 0.408 0.676 0.194 0.211 1.153 0.298
trailer 0.144 1.181 0.207 0.437 0.391 0.198
construction_vehicle 0.179 0.945 0.381 1.073 0.144 0.326
pedestrian 0.420 0.729 0.275 1.029 0.628 0.397
motorcycle 0.338 0.747 0.222 1.027 1.031 0.291
bicycle 0.304 0.640 0.262 1.072 0.244 0.040
traffic_cone 0.506 0.624 0.317 nan nan nan
barrier 0.458 0.739 0.223 0.117 nan nan

Welcome update to OpenMMLab 2.0


I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/A9dCpjHPfE or add me on WeChat (ID: van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 2.0 repos branches:

Project            OpenMMLab 1.0 branch    OpenMMLab 2.0 branch
MMEngine           -                       0.x
MMCV               1.x                     2.x
MMDetection        0.x, 1.x, 2.x           3.x
MMAction2          0.x                     1.x
MMClassification   0.x                     1.x
MMSegmentation     0.x                     1.x
MMDetection3D      0.x                     1.x
MMEditing          0.x                     1.x
MMPose             0.x                     1.x
MMDeploy           0.x                     1.x
MMTracking         0.x                     1.x
MMOCR              0.x                     1.x
MMRazor            0.x                     1.x
MMSelfSup          0.x                     1.x
MMRotate           0.x                     1.x
MMYOLO             -                       0.x

Attention: please create a new virtual environment for OpenMMLab 2.0.

weird training error

I added a dist_train.sh file under the tools folder to run single-machine multi-GPU training, but the following error is reported:
  File "/dfs/data/code_python/detection_3d/FastBEV/mmdet3d/models/opt/adamw.py", line 111, in step
    F.adamw(params_with_grad,
TypeError: adamw() takes 6 positional arguments but 12 were given

how to use view_transform cuda module?

Hi,

The work from Fast-BEV is great! I have a few questions about the recent updates in the repo:

  1. How to use the view_transform_cuda when transferring from .pth into trt model?
  2. Is there any reason that you split export_2d and export_3d?

I suppose I should combine export_2d_onnx and export_3d_onnx with view_transform_cuda to generate the TRT model so that the model can actually be accelerated, but I'm not sure whether that is correct.

Test-time inference on a 3090 is extremely slow

I tried running test; whether with m0 or m5, the speed is roughly below 1 frame/s.
tools/test.py configs/fastbev/exp/paper/fastbev_m5_r50_s512x1408_v250x250x6_c256_d6_f4.py work_dirs/fastbev/exp/paper/fastbev_m5_r50_s512x1408_v250x250x6_c256_d6_f4/epoch_20.pth --eval bbox

[>> ] 331/6019, 0.5 task/s, elapsed: 636s, ETA: 10922s

CUDA_VISIBLE_DEVICES=3 python -m torch.distributed.launch --nproc_per_node=1 --master_port=29503 tools/test.py configs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py work_dirs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4/epoch_20.pth --eval bbox --launcher="pytorch"

load checkpoint from local path: work_dirs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4/epoch_20.pth
[> ] 140/6019, 0.9 task/s, elapsed: 161s, ETA: 675

I confirmed it is running on the GPU; the card is being used.

Environment:
sys.platform: linux
Python: 3.8.8 (default, Feb 24 2021, 21:46:12) [GCC 7.3.0]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr/local/cuda
NVCC: Build cuda_11.1.TC455_06.29190527_0
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.8.1
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 11.1
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  • CuDNN 8.0.5
  • Magma 2.5.2
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

TorchVision: 0.9.1
OpenCV: 4.7.0
MMCV: 1.4.0
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.1
MMDetection: 2.14.0
MMSegmentation: 0.14.1
MMDetection3D: 0.16.0+12f1931

train

No such file or directory: 'data/nuscenes/nuscenes_infos_train_4d_interval3_max60.pkl'??

How to test speed on my own device?

Hi, thanks for the great work!
I want to test the speed on my own RTX 3090 with FP32. Which file do I need to run, and where do I modify the config? We want to cite your method and make a fair comparison.
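
One possible way to time an FP32 forward pass is sketched below. It is only an illustration: `model` and `data` are placeholders assumed to have been built the same way tools/test.py builds them, and the exact forward signature may differ from what the repo uses.

    import torch

    # Minimal FP32 latency sketch (names `model` and `data` are placeholders).
    model = model.cuda().eval()
    torch.backends.cudnn.benchmark = True

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.no_grad():
        for _ in range(10):                                   # warm-up iterations
            model(return_loss=False, rescale=True, **data)
        start.record()
        for _ in range(100):                                  # timed iterations
            model(return_loss=False, rescale=True, **data)
        end.record()
    torch.cuda.synchronize()
    print('mean latency: %.1f ms per forward pass' % (start.elapsed_time(end) / 100))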

2d-to-3d on GPU

Hi, for the case of multiple views with overlapping areas, you directly adopt the first encountered view to speed up building the lookup table. But this means there is no feature fusion in the overlapping area. Won't this lead to performance degradation?
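
For reference, here is a rough sketch of what a "first view wins" projection-index build could look like. It is my own illustration under the assumptions stated in the docstring, not the repository's implementation: each voxel keeps only the first camera whose image it projects into.

    import torch

    def build_projection_index(points, projections, img_h, img_w):
        """First-view-wins lookup: for every voxel centre, keep the first camera
        (and pixel) that sees it; cam_idx is -1 where no camera covers the voxel.

        points:      (3, N) voxel centres in the lidar/ego frame (assumption)
        projections: (n_cams, 3, 4) lidar-to-image projection matrices (assumption)
        """
        n_cams = projections.shape[0]
        n_pts = points.shape[1]
        cam_idx = torch.full((n_pts,), -1, dtype=torch.long)
        pix_uv = torch.zeros((n_pts, 2), dtype=torch.long)
        pts_h = torch.cat([points, torch.ones(1, n_pts, dtype=points.dtype)], dim=0)  # homogeneous

        for c in range(n_cams):
            uvd = projections[c] @ pts_h                        # (3, N) image-plane coords with depth
            z = uvd[2]
            u = (uvd[0] / z.clamp(min=1e-6)).round().long()
            v = (uvd[1] / z.clamp(min=1e-6)).round().long()
            visible = (z > 0) & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
            take = visible & (cam_idx < 0)   # only voxels not yet claimed by an earlier camera
            cam_idx[take] = c
            pix_uv[take, 0] = u[take]
            pix_uv[take, 1] = v[take]
        return cam_idx, pix_uv

A dense-fusion variant would instead accumulate (for example average) the features of every camera that sees the voxel, which is the trade-off the question asks about.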

Temporal alignment question

np.linalg.inv(lidar2ego) @ np.linalg.inv(egocurr2global) @ egoadj2global @ lidar2ego
In this line of code, what is the role of np.linalg.inv(lidar2ego) and lidar2ego? Why is np.linalg.inv(lidar2ego) applied at the front and lidar2ego at the end?
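
Reading the chain from right to left: lidar2ego maps points from the adjacent frame's lidar coordinates into that frame's ego coordinates, egoadj2global lifts them into the global frame, np.linalg.inv(egocurr2global) brings them into the current ego frame, and the leading np.linalg.inv(lidar2ego) finally expresses them in the current lidar frame. The product therefore maps adjacent-frame lidar points directly into the current lidar frame. A small sketch with placeholder matrices (the values below are illustrative only, not taken from the dataset):

    import numpy as np

    # Placeholder transforms: the lidar sits 1 m above the ego origin, and the
    # current ego pose is 1 m ahead of the adjacent ego pose along global x.
    lidar2ego = np.eye(4);       lidar2ego[2, 3] = 1.0
    egoadj2global = np.eye(4)
    egocurr2global = np.eye(4);  egocurr2global[0, 3] = 1.0

    lidaradj2lidarcurr = (
        np.linalg.inv(lidar2ego)
        @ np.linalg.inv(egocurr2global)
        @ egoadj2global
        @ lidar2ego
    )

    p_adj = np.array([10.0, 0.0, 0.0, 1.0])   # a point expressed in the adjacent lidar frame
    p_curr = lidaradj2lidarcurr @ p_adj
    print(p_curr[:3])                          # [9. 0. 0.]: the ego moved 1 m towards the point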

How to generate train/val .pkl?

When I run tools/create_data.py, the generated .pkl files differ in many ways from the .pkl files you provide. How can I generate .pkl files matching yours?

Why use sweeps data?

I use the annotation .pkl files officially provided by fast-bev, and I don't have the sweeps data locally. The data in sweeps is not labeled, so why is it used in the training stage?
When I run
bash tools/dist_train.sh configs/fastbev/exp/paper/fastbev_m1_r18_s320x880_v200x200x4_c192_d2_f4.py 2
I get an error:
FileNotFoundError: [Errno 2] No such file or directory: './data/nuscenes/sweeps/CAM_FRONT/n008-2018-09-18-12-07-26-0400__CAM_FRONT__1537287266112404.jpg'

In the testing stage, I use ann_file='data/nuscenes/nuscenes_infos_val_4d_interval3_max60.pkl' in the config and run
CUDA_VISIBLE_DEVICES=2 python tools/test.py --eval bbox --eval-options jsonfile_prefix=work_dir
and I hit the same kind of error:
FileNotFoundError: [Errno 2] No such file or directory: './data/nuscenes/sweeps/CAM_FRONT/n015-2018-08-02-17-16-37+0800__CAM_FRONT__1533201556012460.jpg'

