
futr3d's People

Contributors

hangzhaomit, simonyilunw, xyaochen


futr3d's Issues

total iterations per epoch

Hello,

I am wondering why the total iterations per epoch (total_it_per_ep) is different from my estimate (even though DETR3D training's total_it_per_ep matches what I estimated).

I am now using 4 GPUs with samples_per_gpu=10, so the estimated number is 28130 (# training samples) / (4*10) = approximately 700 iters/epoch.

But the log shows 3090 iters/epoch:
.... - mmdet - INFO - Epoch [8][1450/3090] lr: 9.902e-04, eta: ....

Why does this happen?
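For reference, here is a minimal sketch of the arithmetic behind the estimate; the remark about CBGS is only an assumption about this config, not something confirmed in this thread:

    # Expected iterations per epoch, assuming no dataset resampling
    num_samples = 28130        # nuScenes training samples
    num_gpus = 4
    samples_per_gpu = 10
    iters_per_epoch = num_samples // (num_gpus * samples_per_gpu)
    print(iters_per_epoch)     # 703, i.e. the ~700 estimated above

    # If the dataset is wrapped in a resampler such as CBGSDataset (class-balanced
    # grouping and sampling, common in nuScenes LiDAR configs -- an assumption here),
    # the effective epoch length grows: 3090 iters * 40 samples/iter = 123600 samples.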

Best,

Unit of distance or range

Hello,

I saw this code snippet in your codebase. Could you let me know the unit of the range values (m, km, or inches) used in this codebase?

# If point cloud range is changed, the models should also change their point
# cloud range accordingly
point_cloud_range = [-50, -50, -5, 50, 50, 3]

And also, for this line shown in the evaluation output: what is the unit of distance (e.g., 0.5 and 1.0)?

'pts_bbox_NuScenes/car_AP_dist_0.5': 0.7921, 'pts_bbox_NuScenes/car_AP_dist_1.0': 0.8886, ...
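For reference, a minimal sketch of how such a range is typically used to filter points, under the assumption that (as in the standard nuScenes/mmdetection3d conventions) all of these values are in meters:

    import numpy as np

    # [x_min, y_min, z_min, x_max, y_max, z_max], assumed to be meters
    point_cloud_range = np.array([-50, -50, -5, 50, 50, 3], dtype=np.float32)

    def in_range(points: np.ndarray) -> np.ndarray:
        """Boolean mask of points (N, 3+) that lie inside the range."""
        low, high = point_cloud_range[:3], point_cloud_range[3:]
        return np.all((points[:, :3] >= low) & (points[:, :3] < high), axis=1)

    # The *_AP_dist_0.5 / *_AP_dist_1.0 keys are the nuScenes detection AP computed at
    # center-distance matching thresholds, which are likewise expressed in meters.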

Best,

Corrupted lidar_cam.pth checkpoint file?

Hi there,

I am trying to reproduce your results on the nuScenes validation set, and I downloaded the lidar_cam.pth checkpoint linked in your README.
However, when loading this file using torch.load(), I get the following error:

RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

Surprisingly, when I attempt to load the lidar_only checkpoint, it loads without any issues.
When looking online for the cause of this error, I found that it is often due to a corrupted or incorrectly saved file.
Could it be that the lidar_cam.pth checkpoint you link to for downloading is corrupted?
Are others able to load this checkpoint?
I tested this with python=3.8 and torch=1.10.0, and later with python=3.8 and torch=1.13.1.
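For reference, a small sketch that can help distinguish a truncated download from a version problem; it relies only on the fact that recent torch.save() checkpoints are zip archives:

    import os
    import zipfile
    import torch

    ckpt_path = "lidar_cam.pth"
    print("size (bytes):", os.path.getsize(ckpt_path))          # compare with the advertised download size
    print("valid zip archive:", zipfile.is_zipfile(ckpt_path))  # new-style checkpoints are zip files

    # If is_zipfile() is False (and the file is not an old pickle-style checkpoint),
    # the download is most likely incomplete or corrupted and re-downloading should fix it.
    state = torch.load(ckpt_path, map_location="cpu")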

Any suggestions?

Error when running nuscenes_converter.py and tools/dist_train.py

I use mmcv-full==1.5.2, mmdet==2.24.0, and mmseg==0.20.0 as the README lists, but when I run python nuscenes_converter.py, it says "AssertionError: MMCV==1.5.2 is used but incompatible. Please install mmcv>=(1, 3, 13, 0, 0, 0), <=(1, 5, 0, 0, 0, 0)". Should I downgrade my mmcv version?
When I run tools/dist_train.py, it reports the same assertion error.

radar_use_dims = [0, 1, 2, 8, 9, 18]

In the FUTR3D paper, you used radar coordinates, velocity measurements, and intensities.
In the comments of the code, they are "x y z rcs vx_comp vy_comp x_rms y_rms vx_rms vy_rms".

I think 0->x, 1->y, 2->z, 8->vx_comp, 9->vy_comp, but I don't know the meaning of the 18th dimension.
Also, how are the intensities used?
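For reference, a sketch of how radar_use_dims selects columns from the loaded radar points. The raw nuScenes radar point cloud has a fixed layout of 18 fields (indices 0-17); treating index 18 as an extra column appended by the multi-sweep loader (e.g. a relative sweep timestamp) is an assumption, not something confirmed in this thread:

    import numpy as np

    # Raw nuScenes radar fields (indices 0-17):
    #  0 x, 1 y, 2 z, 3 dyn_prop, 4 id, 5 rcs, 6 vx, 7 vy, 8 vx_comp, 9 vy_comp,
    # 10 is_quality_valid, 11 ambig_state, 12 x_rms, 13 y_rms, 14 invalid_state,
    # 15 pdh0, 16 vx_rms, 17 vy_rms
    radar_use_dims = [0, 1, 2, 8, 9, 18]

    # points: (N, 19), assuming one extra appended column at index 18 (see above)
    points = np.zeros((100, 19), dtype=np.float32)
    selected = points[:, radar_use_dims]   # keeps x, y, z, vx_comp, vy_comp and the extra column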

I really appreciate your help.

mmcv version

The packages I installed are shown in the screenshot below:
[screenshot of installed package versions]

ERROR: MMCV==2.0.0 is used but incompatible. Please install mmcv>=1.3.17, <=1.8.0.
How can I solve this problem? Thanks.

Confused about "return " at plugin/futr3d/models/backbones/radar_encoder.py#L93

Hi @xyaochen, thanks for your work on multi-sensor fusion detection.
I'm confused about the implementation at plugin/futr3d/models/backbones/radar_encoder.py#L93.
Implemented like that, the module registered as "radar_encoder" returns nothing. Can you help me understand it? Thanks a lot!

    def forward(self, points):
        '''
        points: [B, N, C]. N: as max
        masks: [B, N, 1]
        ret: 
            out: [B, N, C+1], last channel as 0-1 mask
        '''
        masks = points[:, :, [-1]]
        x = points[:, :, :-1]
        xy = points[:, :, :2]
        
        for feat_layer in self.feat_layers:
            x = feat_layer(x)
        
        out = x * masks

        out = torch.cat((x, masks), dim=-1)

        out = torch.cat((xy, out), dim=-1)
        return 
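For what it's worth, a minimal sketch of what that last line was presumably meant to be, judging from the docstring and the preceding lines (an assumption, not a confirmed fix):

        # Presumed intent: return the concatenated (xy, features, mask) tensor
        # instead of the bare `return`, which makes the encoder output None.
        return out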

RuntimeError: shape '[1, 4, 1, 4, 4]' is invalid for input of size 16

I want to train this model on my customized dataset (one camera + one LiDAR), which I have converted to KITTI format.
But I don't know the exact cause of, or solution to, the following error.

2022-09-29 17:52:53,045 - mmdet - INFO - workflow: [('train', 1)], max: 3 epochs
/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Traceback (most recent call last):
  File "tools/train.py", line 248, in <module>
    main()
  File "tools/train.py", line 244, in main
    meta=meta)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/mmdet3d/apis/train.py", line 35, in train_model
    meta=meta)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmdet/apis/train.py", line 170, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
    **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 67, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmdet/models/detectors/base.py", line 237, in train_step
    losses = self(**data)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
    return old_func(*args, **kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/mmdet3d/models/detectors/base.py", line 59, in forward
    return self.forward_train(**kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/detectors/futr3d.py", line 200, in forward_train
    gt_bboxes_ignore)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/detectors/futr3d.py", line 135, in forward_mdfs_train
    outs = self.pts_bbox_head(pts_feats, img_feats, rad_feats, img_metas)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/dense_head/detr_mdfs_head.py", line 130, in forward
    img_metas=img_metas,
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/utils/transformer.py", line 157, in forward
    **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/utils/transformer.py", line 215, in forward
    **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/cnn/bricks/transformer.py", line 508, in forward
    **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/utils/attention.py", line 241, in forward
    img_feats, reference_points, self.pc_range, kwargs['img_metas'])
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/utils/attention.py", line 344, in feature_sampling
    lidar2img = lidar2img.view(B, num_cam, 1, 4, 4).repeat(1, 1, num_query, 1, 1)
RuntimeError: shape '[1, 4, 1, 4, 4]' is invalid for input of size 16
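For reference, a rough sketch of the consistency that feature_sampling assumes, judging only from the shapes in the error: img_metas[i]['lidar2img'] should contain one 4x4 matrix per camera view, so a single 4x4 matrix (16 values) cannot be reshaped to (B=1, num_cam=4, 1, 4, 4). A hypothetical sanity check for a custom single-camera dataset:

    import numpy as np

    def check_lidar2img(img_meta, num_views_expected):
        # hypothetical helper: verify one 4x4 lidar-to-image matrix per camera view
        l2i = np.asarray(img_meta['lidar2img'])
        assert l2i.shape == (num_views_expected, 4, 4), (
            f"lidar2img has shape {l2i.shape}, but the model expects {num_views_expected} views")

    # With one camera, either supply a one-element list of 4x4 matrices and configure the
    # model for a single view, or pad lidar2img up to the number of views the model expects.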

This is the log file.
20220929_175225.log

There are many problems when evaluating

There are many problems when evaluating every 2 epochs, such as:

File "/home/futr3d/tools/train.py", line 260, in main
meta=meta)
File "/home/mmdetection3d/mmdet3d/apis/train.py", line 351, in train_model
meta=meta)
File "/home/mmdetection3d/mmdet3d/apis/train.py", line 319, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 136, in run
epoch_runner(data_loaders[i], **kwargs)
File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 58, in train
self.call_hook('after_train_epoch')
File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 317, in call_hook
getattr(hook, fn_name)(self)
File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/hooks/evaluation.py", line 271, in after_train_epoch
self._do_evaluate(runner)
File "/home/mmdetection/mmdet/core/evaluation/eval_hooks.py", line 63, in do_evaluate
key_score = self.evaluate(runner, results)
File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/hooks/evaluation.py", line 368, in evaluate
results, logger=runner.logger, **self.eval_kwargs)
File "/home/futr3d/plugin/futr3d/datasets/nuscenes_radar.py", line 465, in evaluate
result_files, tmp_dir = self.format_results(results, jsonfile_prefix)
File "/home/futr3d/plugin/futr3d/datasets/nuscenes_radar.py", line 435, in format_results
{name: self._format_bbox(results_, tmp_file_)})
File "/home/futr3d/plugin/futr3d/datasets/nuscenes_radar.py", line 306, in _format_bbox
for i, box in enumerate(boxes):
TypeError: 'NoneType' object is not iterable

Could you please review the code again? Thank you!

ImportError: cannot import name 'DeformableDETRHead' from 'mmdet.models.dense_heads'

The full error is as follows:

plugin.futr3d
Traceback (most recent call last):
  File "tools/train.py", line 236, in <module>
    main()
  File "tools/train.py", line 119, in main
    plg_lib = importlib.import_module(_module_path)
  File "/HOME/scz3687/.conda/envs/open-mmlab0130/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/data/run01/scz3687/openmmlab0130/mmdetection3d/plugin/futr3d/__init__.py", line 5, in <module>
    from .models.dense_head.detr_mdfs_head import DeformableFUTR3DHead
  File "/data/run01/scz3687/openmmlab0130/mmdetection3d/plugin/futr3d/models/dense_head/detr_mdfs_head.py", line 15, in <module>
    from mmdet.models.dense_heads import DeformableDETRHead, DETRHead
ImportError: cannot import name 'DeformableDETRHead' from 'mmdet.models.dense_heads' (/HOME/scz3687/.conda/envs/open-mmlab0130/lib/python3.7/site-packages/mmdet/models/dense_heads/__init__.py)

I checked the official repository: mmdet==2.11.0 happens not to include these two classes, but they are available from mmdet==2.12.0 onward.

However, mmdet3d==0.13.0 requires mmdet>=2.10.0, <=2.11.0 for installation.

How should this problem be solved?

related result?

Per-class results:
Object Class AP ATE ASE AOE AVE AAE
car 0.506 0.591 0.154 0.079 0.549 0.212
truck 0.255 0.861 0.220 0.164 0.438 0.179
bus 0.305 0.929 0.233 0.153 0.723 0.235
trailer 0.094 1.236 0.263 0.601 0.288 0.077
construction_vehicle 0.000 1.000 1.000 1.000 1.000 1.000
pedestrian 0.408 0.740 0.291 0.521 0.462 0.214
motorcycle 0.298 0.789 0.267 0.499 0.787 0.195
bicycle 0.275 0.749 0.271 0.699 0.326 0.023
traffic_cone 0.000 1.000 1.000 nan nan nan
barrier 0.000 1.000 1.000 1.000 nan nan
Why do construction_vehicle, traffic_cone, and barrier always stay at 0.0, 1.0, and nan?

ImportError: cannot import name 'build' from 'mmcv.utils.registry'

The details are shown as follows:

Traceback (most recent call last):
  File "tools/train.py", line 17, in <module>
    from mmdet3d.models.builder import build_detector
  File "/data/run01/scz3687/openmmlab0130/mmdetection3d/mmdet3d/models/__init__.py", line 1, in <module>
    from .backbones import *  # noqa: F401,F403
  File "/data/run01/scz3687/openmmlab0130/mmdetection3d/mmdet3d/models/backbones/__init__.py", line 3, in <module>
    from .nostem_regnet import NoStemRegNet
  File "/data/run01/scz3687/openmmlab0130/mmdetection3d/mmdet3d/models/backbones/nostem_regnet.py", line 2, in <module>
    from ..builder import BACKBONES
  File "/data/run01/scz3687/openmmlab0130/mmdetection3d/mmdet3d/models/builder.py", line 3, in <module>
    from mmcv.utils.registry import build
ImportError: cannot import name 'build' from 'mmcv.utils.registry' (/HOME/scz3687/.conda/envs/open-mmlab0130/lib/python3.7/site-packages/mmcv/utils/registry.py)

My environment:
mmcv-full==1.3.14, mmdet==2.14.0, mmdet3d==0.13.0

I got different results when evaluating the radar_cam model

I followed the installation instructions. Everything is OK when I evaluate the lidar_cam model with the following settings:
mmcv_full=1.6.0
mmsegmentation=0.30.0
mmdet=2.24.0
mmdet3d=1.0.0rc6 (follow the mmdet3d in project root)
cuda=11.1
pytorch=1.10
dataset = nuscenes_mini

The result is shown below:
mAP: 0.5746
mATE: 0.4179
mASE: 0.4498
mAOE: 0.4368
mAVE: 0.4169
mAAE: 0.2883
NDS: 0.5863
Eval time: 1.8s

Per-class results:
Object Class AP ATE ASE AOE AVE AAE
car 0.914 0.171 0.150 0.076 0.096 0.064
truck 0.854 0.178 0.174 0.020 0.083 0.000
bus 0.992 0.176 0.091 0.031 0.582 0.102
trailer 0.000 1.000 1.000 1.000 1.000 1.000
construction_vehicle 0.000 1.000 1.000 1.000 1.000 1.000
pedestrian 0.882 0.160 0.254 0.254 0.169 0.134
motorcycle 0.824 0.228 0.285 0.416 0.059 0.005
bicycle 0.559 0.196 0.203 0.134 0.347 0.000
traffic_cone 0.720 0.069 0.341 nan nan nan
barrier 0.000 1.000 1.000 1.000 nan nan

But I got a problem when evaluating the radar_cam model; the result is shown below:
mAP: 0.3529
mATE: 0.6667
mASE: 0.7705
mAOE: 1.2119
mAVE: 0.4240
mAAE: 0.2929
NDS: 0.3610
Eval time: 2.2s

Per-class results:
Object Class AP ATE ASE AOE AVE AAE
car 0.682 0.390 0.755 1.530 0.105 0.079
truck 0.514 0.456 0.786 1.413 0.120 0.043
bus 0.492 0.743 0.866 0.535 0.517 0.041
trailer 0.000 1.000 1.000 1.000 1.000 1.000
construction_vehicle 0.000 1.000 1.000 1.000 1.000 1.000
pedestrian 0.558 0.555 0.323 1.555 0.332 0.156
motorcycle 0.447 0.527 0.824 1.449 0.064 0.011
bicycle 0.179 0.641 0.808 1.425 0.255 0.013
traffic_cone 0.657 0.354 0.343 nan nan nan
barrier 0.000 1.000 1.000 1.000 nan nan
I found that the mAOE is not the same as in the paper; it is much too large. I don't know what happened. I also met the same problem when evaluating the DETR3D model. Some possible causes I have found:

  1. mmdet3d has different parameters in versions above 1.0.0.
  2. Maybe I didn't set up the config file in the right way. In fact, I did nothing with the config file.

Now I even suspect that the lidar_cam result has something wrong, since its mAOE is also different from the paper. I don't know what happened. Thanks.

cuda execution failed with error 2

When I run bash tools/dist_train.sh plugin/futr3d/configs/lidar_cam/res101_01voxel_step_3e.py 4, I get the error
RuntimeError: indice_cuda.cu 124 cuda execution failed with error 2. It seems to be a simple "out of memory" error, but samples_per_gpu = 1 and I am using 2080 Ti GPUs with 11 GB. How much video memory is required to train the model, and which GPU did you use? Looking forward to your help.

missing keys in source state_dict: pts_neck{...}

Hi,

I just noticed that I have some problems with loading weights from the lidar_only.pth checkpoint.
The model uses four layers for pts_neck.fpn_convs (i.e. pts_neck.fpn_convs.{0:3}); however, the checkpoint only contains the weights for the first two (pts_neck.fpn_convs.0.{...} and pts_neck.fpn_convs.1.{...}).

This causes the following message to be logged, when using 01voxel_q6_step_38e.py as the cfg:

missing keys in source state_dict: pts_neck.fpn_convs.2.conv.weight, pts_neck.fpn_convs.2.bn.weight, pts_neck.fpn_convs.2.bn.bias, pts_neck.fpn_convs.2.bn.running_mean, pts_neck.fpn_convs.2.bn.running_var, pts_neck.fpn_convs.3.conv.weight, pts_neck.fpn_convs.3.bn.weight, pts_neck.fpn_convs.3.bn.bias, pts_neck.fpn_convs.3.bn.running_mean, pts_neck.fpn_convs.3.bn.running_var.

Is this expected behaviour?
Do others have similar issues?

/configs/_base_ folder is missing

Hi. Thanks for the good work. I have tried the repo and it looks like the following config files are missing. Could you please help?
_base_ = [
'../../../configs/_base_/datasets/nus-3d.py',
'../../../configs/_base_/schedules/cyclic_20e.py',
'../../../configs/_base_/default_runtime.py'
]

Thanks,
Trung

Changing the number of radars

Hi, is it possible to reconfigure the network to use a single radar? If so, which configuration parameters need to be modified?

mmcv version issue

When I try to create data with this command: python3 tools/create_data.py nuscenes --root-path ./mmdet3d/tools/configs/data/nuscenes --out-dir ./mmdet3d/tools/configs/data/nuscenes --extra-tag nuscenes
it gives me the following error:
/home/citacav/futr3d/mmcv/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
warnings.warn(
Traceback (most recent call last):
File "tools/create_data.py", line 6, in
from tools.data_converter import kitti_converter as kitti
File "/home/citacav/futr3d/tools/data_converter/kitti_converter.py", line 9, in
from mmdet3d.core.bbox import box_np_ops, points_cam2img
File "/home/citacav/futr3d/mmdet3d/init.py", line 4, in
import mmdet
File "/home/citacav/futr3d/mmdetection/mmdet/init.py", line 16, in
assert (mmcv_version >= digit_version(mmcv_minimum_version)
AssertionError: MMCV==1.7.0 is used but incompatible. Please install mmcv>=2.0.0rc4, <2.1.0.
If i install mmcv=2.0.0rc4 then it shows me the following error:
File "/home/citacav/futr3d/mmdet3d/init.py", line 26, in
assert (mmcv_version >= digit_version(mmcv_minimum_version)
AssertionError: MMCV==2.0.0rc4 is used but incompatible. Please install mmcv>=1.5.2, <=1.7.0

Some problems with the new version of the code

  1. The plugin/fudet path in the network config file does not match the plugin/futr3d directory in the file structure;
  2. The tools/dist_train.sh file has a formatting problem and fails to parse; you have to copy its contents and recreate the .sh file;
  3. lidar_cam: File "/share/futr3d_torch112/futr3d_rc1006/mmdet3d/datasets/pipelines/transforms_3d.py", line 970, in __call__
    radar_mask = radar.in_range_3d(self.pcd_range)
    AttributeError: 'dict' object has no attribute 'in_range_3d'

nuscenes test set class-specific scores

Hi there,

In table 1 of your paper you report various metrics on the nuscenes test set.
Can you share the class-specific AP scores as well, for the C, L, L+C model variants on the nuscenes test set?

The L and LC model scores are shown in table 4, but I'm guessing that these results are not on the test set.

Thank you!

KeyError: 'FUTR3D is not in the detector registry'

When I try to fine-tune the FUTR3D config on the KITTI dataset, the following error occurs:

2022-08-31 15:46:14,605 - mmdet - INFO - Set random seed to 0, deterministic: False
Traceback (most recent call last):
  File "tools/train.py", line 236, in <module>
    main()
  File "tools/train.py", line 200, in main
    test_cfg=cfg.get('test_cfg'))
  File "/data/run01/scz3687/openmmlab0130/mmdetection3d/mmdet3d/models/builder.py", line 52, in build_detector
    return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
  File "/HOME/scz3687/.conda/envs/open-mmlab0130/lib/python3.7/site-packages/mmdet/models/builder.py", line 34, in build
    return build_from_cfg(cfg, registry, default_args)
  File "/HOME/scz3687/.conda/envs/open-mmlab0130/lib/python3.7/site-packages/mmcv/utils/registry.py", line 172, in build_from_cfg
    f'{obj_type} is not in the {registry.name} registry')
KeyError: 'FUTR3D is not in the detector registry'

I have tried to recompile the mmdet3d framework, but it didn't work.
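For context, the FUTR3D detector only appears in the registry if the plugin package is imported before the model is built; tools/train.py does that import via importlib, driven by plugin-related settings in the config (as other tracebacks in this tracker show). A hedged sketch, assuming the DETR3D-style plugin convention is used here (the exact key names are an assumption):

    # In the training config (hypothetical keys, DETR3D-style plugin mechanism):
    plugin = True
    plugin_dir = 'plugin/futr3d/'
    # If a custom KITTI config omits these, plugin/futr3d/__init__.py is never imported,
    # the @DETECTORS.register_module() decorator on FUTR3D never runs, and building the
    # model fails with "FUTR3D is not in the detector registry".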

the radar bev feature

Hello, your paper mentions that you generate the radar BEV feature in radar_feature_encoder, but when I read your code I didn't find it. I'm new to this field; could you please point out where it is? Looking forward to your reply, thanks very much!

MMCV incompatibility issue

Hi! Thanks for your impressive work, this work is amazing!

While reproducing this work, I got the following error and saw that several similar issues have been raised, but I could not find a clear solution in those threads. How can we solve the following issue?

AssertionError: MMCV==1.6.0 is used but incompatible. Please install mmcv>=(1, 3, 13, 0, 0, 0), <=(1, 5, 0, 0, 0, 0).
(FYI, if I simply downgrade the mmcv version, the subsequent error message says I have to upgrade it again ...)

Checkpoints release

Hi,
Thanks for your great work !!!
When will the checkpoints be released? I'm looking forward to it.

TypeError: NuScenesDatasetRadar: Unsupported format: None

Hi, @xyaochen, thanks for your great work. I met an error as follows.

Traceback (most recent call last):
  File "/home/dm/workspace/py36env4mmlab/lib/python3.6/site-packages/mmcv/utils/registry.py", line 69, in build_from_cfg
    return obj_cls(**args)
  File "/data/mm/project/futr3d/plugin/futr3d/datasets/nuscenes_radar.py", line 125, in __init__
    test_mode=test_mode)
  File "/data/mm/project/futr3d/z-dp/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 89, in __init__
    self.data_infos = self.load_annotations(open(local_path, 'rb'))
  File "/data/mm/project/futr3d/plugin/futr3d/datasets/nuscenes_radar.py", line 174, in load_annotations
    data          = mmcv.load(ann_file)
  File "/home/dm/workspace/py36env4mmlab/lib/python3.6/site-packages/mmcv/fileio/io.py", line 57, in load
    raise TypeError(f'Unsupported format: {file_format}')
TypeError: Unsupported format: None

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tools/train.py", line 244, in <module>
    main()
  File "tools/train.py", line 206, in main
    datasets = [build_dataset(cfg.data.train)]
  File "/data/mm/project/futr3d/z-dp/mmdetection3d/mmdet3d/datasets/builder.py", line 53, in build_dataset
    dataset = build_from_cfg(cfg, MMDET_DATASETS, default_args)
  File "/home/dm/workspace/py36env4mmlab/lib/python3.6/site-packages/mmcv/utils/registry.py", line 72, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
TypeError: NuScenesDatasetRadar: Unsupported format: None

It seems that 'NuScenesDatasetRadar' failed to register, or the NuScenes dataset was not processed properly. I followed the mmdet3d and futr3d instructions to process the NuScenes dataset. The other models ('cam_only', 'lidar_cam', and 'lidar_only') work fine.
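For context, mmcv.load() infers the serialization format from a path string's extension, but when it receives an already-open file object (which is what the newer mmdet3d Custom3DDataset passes to load_annotations() in the traceback above) the format must be given explicitly, otherwise it raises "Unsupported format: None". A minimal sketch, with a hypothetical info-file name:

    import mmcv

    # Format inferred from the extension of a path string:
    data = mmcv.load('nuscenes_infos_train_radar.pkl')

    # With a file object, the format has to be passed explicitly:
    with open('nuscenes_infos_train_radar.pkl', 'rb') as f:
        data = mmcv.load(f, file_format='pkl')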

Could you give me some advice?

KeyError: 'radar'

When running camera+radar training, the following error occurred:

2023-07-28 12:31:40.542790: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
File "tools/train.py", line 318, in
main()
File "tools/train.py", line 314, in main
meta=meta)
File "/home/mmdetection3d-1.0.0rc4/mmdet3d/apis/train.py", line 351, in train_model
meta=meta)
File "/home/mmdetection3d-1.0.0rc4/mmdet3d/apis/train.py", line 319, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/root/anaconda3/envs/futr3d-1/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 136, in run
epoch_runner(data_loaders[i], **kwargs)
File "/root/anaconda3/envs/futr3d-1/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 49, in train
for i, data_batch in enumerate(self.data_loader):
File "/root/anaconda3/envs/futr3d-1/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in next
data = self._next_data()
File "/root/anaconda3/envs/futr3d-1/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
return self._process_data(data)
File "/root/anaconda3/envs/futr3d-1/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
data.reraise()
File "/root/anaconda3/envs/futr3d-1/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise
raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/root/anaconda3/envs/futr3d-1/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
data = fetcher.fetch(index)
File "/root/anaconda3/envs/futr3d-1/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/root/anaconda3/envs/futr3d-1/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/mmdetection3d-1.0.0rc4/mmdet3d/datasets/custom_3d.py", line 435, in getitem
data = self.prepare_train_data(idx)
File "/home/mmdetection3d-1.0.0rc4/mmdet3d/datasets/custom_3d.py", line 229, in prepare_train_data
example = self.pipeline(input_dict)
File "/home/mmdetection3d-1.0.0rc4/mmdet3d/datasets/pipelines/compose.py", line 49, in call
data = t(data)
File "/tmp/algorithm/plugin/futr3d/datasets/loading.py", line 417, in call
radars_dict = results['radar']
KeyError: 'radar'

It seems that mmdet3d cannot load the radar info. Could you please tell me how to solve this problem?

Why is there no plugin.fudet subfolder in the source code?

Exception has occurred: ModuleNotFoundError
No module named 'plugin.fudet'
File "/FUTR3d/plugin/futr3d/models/detectors/futr3d.py", line 13, in
from plugin.fudet.models.utils.grid_mask import GridMask
File "/FUTR3d/plugin/futr3d/models/detectors/init.py", line 1, in
from .futr3d import *
File "FUTR3d/plugin/futr3d/models/init.py", line 1, in
from .detectors import *
File "FUTR3d/plugin/futr3d/init.py", line 1, in
from .models import *
File "FUTR3d/tools/train.py", line 143, in main
plg_lib = importlib.import_module(_module_path)
File "FUTR3d/tools/train.py", line 288, in
main()
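For what it's worth, the traceback suggests the import in futr3d.py simply predates the rename of the package to plugin/futr3d; a one-line sketch of the presumed fix (an assumption, since the intended module layout is not confirmed here):

    # plugin/futr3d/models/detectors/futr3d.py, line 13 -- presumed fix:
    from plugin.futr3d.models.utils.grid_mask import GridMask   # instead of importing from plugin.fudet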

Problem with multi-GPU training

Training runs fine on a single GPU, but when training on multiple GPUs (4x 3090) this problem appears. Could you help me figure it out? Thanks.

Traceback (most recent call last):
File "tools/train.py", line 264, in
main()
File "tools/train.py", line 260, in main
meta=meta)
File "/home/fuwuqianyan/LXP/mmdetection3d/mmdet3d/apis/train.py", line 351, in train_model
meta=meta)
File "/home/fuwuqianyan/LXP/mmdetection3d/mmdet3d/apis/train.py", line 319, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 136, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 49, in train
for i, data_batch in enumerate(self.data_loader):
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 355, in iter
return self._get_iterator()
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 301, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 914, in init
w.start()
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in init
super().init(process_obj)
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/multiprocessing/popen_fork.py", line 20, in init
self._launch(process_obj)
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/multiprocessing/reduction.py", line 61, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle dict_keys objects
(The same traceback is printed by the other ranks.)
Killing subprocess 59448
Killing subprocess 59449
Killing subprocess 59450
Killing subprocess 59451
Traceback (most recent call last):
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in
main()
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/home/fuwuqianyan/anaconda3/envs/wunai3/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/fuwuqianyan/anaconda3/envs/wunai3/bin/python3', '-u', 'tools/train.py', '--local_rank=3', 'plugin/futr3d/configs/lidar_only/01voxel_q6_step_38e.py', '--launcher', 'pytorch']' returned non-zero exit status 1.
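For context, this error comes from the spawn-based multiprocessing workers trying to pickle the dataset, and dict view objects such as dict_keys are not picklable; converting them to a list before the DataLoader workers start is the usual workaround (where exactly the dict_keys object is created in this codebase is not confirmed here). A minimal, self-contained illustration:

    import pickle

    d = {'car': 0, 'truck': 1}

    try:
        pickle.dumps(d.keys())        # dict_keys -> raises TypeError, as in the traceback above
    except TypeError as err:
        print(err)

    pickle.dumps(list(d.keys()))      # converting to a list first makes it picklable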

camera-LiDAR-radar Result

Hi,

Thanks for this amazing work. On the project website, I think it's mentioned that FUTR3D supports camera-LiDAR-radar fusion, but I was not able to find any corresponding results or checkpoints in the paper or on GitHub. Could you please explain the reasoning behind this? Thank you.

lidar_0075_900q.pth is actually for 600 queries

Hi there,

Thanks a lot for releasing the new version of your project!

I have a small note regarding the available checkpoint for the LiDAR-only model.
The checkpoint and corresponding cfg file mention the use of num_queries=900, but when loading the checkpoint it appears that it was actually trained with 600 queries:

size mismatch for pts_bbox_head.tgt_embed.weight: copying a param with shape torch.Size([600, 256]) from checkpoint, the shape in current model is torch.Size([900, 256]).
size mismatch for pts_bbox_head.refpoint_embed.weight: copying a param with shape torch.Size([600, 3]) from checkpoint, the shape in current model is torch.Size([900, 3]).

For the LiDAR+camera checkpoints I don't see anything like this, so these are indeed with 900 queries.

Will you release the checkpoint for the LiDAR-only model with 900 queries? I assume this yields different results than a model with 600 queries.
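As a stopgap until a 900-query LiDAR-only checkpoint is available, shrinking the head to 600 queries should let the released weights load; the exact config key is an assumption inferred from the tgt_embed/refpoint_embed shapes above:

    # Override in the LiDAR-only config (hypothetical key name):
    model = dict(
        pts_bbox_head=dict(
            num_query=600,   # match the released lidar_0075_900q.pth weights
        ))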

Thank you!

Compatibility problem with mmdetection3d==0.13.0

When working with mmdet3d==0.13.0, I got the following error:

Traceback (most recent call last):
  File "tools/train.py", line 18, in <module>
    from mmdet3d.apis import train_model
ImportError: cannot import name 'train_model' from 'mmdet3d.apis' (/home/hzq/projects/futr3d/mmdet3d/apis/__init__.py)
(The same traceback is printed by each of the other ranks.)

This indicates that mmdetection3d==0.13.0 is not the version to develop with. I found that train_model was added to mmdet3d.apis in version 0.14.0. Maybe the README should be updated with the correct required version.

Real time performance and hardware specification

Hello, would you kindly share details about the real-time performance (frames per second) of the network in the camera+LiDAR configuration and the specific hardware it was tested on?

I noticed there was an older question asking about that, but the hardware it was tested upon was not specified. Hope you could assist me by clarifying this. Thank you and great work.

RuntimeError: Expected to mark a variable ready only once.

To run plugin/futr3d/configs/lidar_cam/res101_01voxel_step_3e.py, I replaced FPNV2 with FPN and also commented out load_from = 'pretrained/res101_01voxel_pretrained.pth' at first. But I still get the error below:

Traceback (most recent call last):
File "tools/train.py", line 263, in
main()
File "tools/train.py", line 252, in main
train_model(
File "/home/projects/bev/futr3d/mmdetection3d/mmdet3d/apis/train.py", line 28, in train_model
train_detector(
File "/workspace/miniconda3/envs/futr3d/lib/python3.8/site-packages/mmdet/apis/train.py", line 174, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/workspace/miniconda3/envs/futr3d/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "/workspace/miniconda3/envs/futr3d/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 51, in train
self.call_hook('after_train_iter')
File "/workspace/miniconda3/envs/futr3d/lib/python3.8/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
getattr(hook, fn_name)(self)
File "/workspace/miniconda3/envs/futr3d/lib/python3.8/site-packages/mmcv/runner/hooks/optimizer.py", line 35, in after_train_iter
runner.outputs['loss'].backward()
File "/workspace/miniconda3/envs/futr3d/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/workspace/miniconda3/envs/futr3d/lib/python3.8/site-packages/torch/autograd/init.py", line 145, in backward
Variable._execution_engine.run_backward(
File "/workspace/miniconda3/envs/futr3d/lib/python3.8/site-packages/torch/autograd/function.py", line 89, in apply
return self._forward_cls.backward(self, *args) # type: ignore
File "/workspace/miniconda3/envs/futr3d/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 112, in backward
torch.autograd.backward(outputs_with_grad, args_with_grad)
File "/workspace/miniconda3/envs/futr3d/lib/python3.8/site-packages/torch/autograd/init.py", line 145, in backward
Variable._execution_engine.run_backward(
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet.

deploy to cloud server

I want to deploy this project to a cloud server and use it to fuse data from camera, radar, and LiDAR. Where should I modify the code? @xyaochen

About GridMask

In the configs, use_grid_mask=True, and GridMask regularly crops some parts away from the images.

My questions are:

  1. why use GridMask? Does it lead to performance improvement?
  2. why not use CutOut instead of GridMask?

Looking forward to your kind reply.
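For reference, a minimal sketch of what GridMask-style augmentation does (a generic illustration, not the repo's exact implementation): it zeroes out a regular grid of square patches, which removes information more uniformly across the image than a single CutOut hole.

    import numpy as np

    def grid_mask(img: np.ndarray, d: int = 32, ratio: float = 0.5) -> np.ndarray:
        """Zero out a regular grid of square patches. d: grid period, ratio: masked side fraction."""
        h, w = img.shape[:2]
        mask = np.ones((h, w), dtype=img.dtype)
        k = int(d * ratio)                       # side length of each masked square
        for y in range(0, h, d):
            for x in range(0, w, d):
                mask[y:y + k, x:x + k] = 0
        return img * mask[..., None] if img.ndim == 3 else img * mask

    augmented = grid_mask(np.random.rand(256, 256, 3).astype(np.float32))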

mmdet3d

My GPU only supports CUDA >= 11.
After running the following command: python setup.py install develop, the mmdet3d package was installed, but mmdet3d cannot support CUDA 11.

The error message is as follows:
libcuda10.1.so is not found

How do you get mmdet3d to support CUDA 11?
