
zehuichen123 / autoalignv2


[ECCV2022, IJCAI2022] AutoAlignV2: Deformable Feature Aggregation for Dynamic Multi-Modal 3D Object Detection

License: Apache License 2.0

Languages: Python 89.62%, Dockerfile 0.02%, Makefile 0.02%, Batchfile 0.02%, Shell 0.07%, C++ 5.34%, Cuda 4.91%

autoalignv2's People

Contributors

zehuichen123

autoalignv2's Issues

Welcome update to OpenMMLab 2.0

I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 2.0 repo branches:

| Repo | OpenMMLab 1.0 branch | OpenMMLab 2.0 branch |
| --- | --- | --- |
| MMEngine | | 0.x |
| MMCV | 1.x | 2.x |
| MMDetection | 0.x, 1.x, 2.x | 3.x |
| MMAction2 | 0.x | 1.x |
| MMClassification | 0.x | 1.x |
| MMSegmentation | 0.x | 1.x |
| MMDetection3D | 0.x | 1.x |
| MMEditing | 0.x | 1.x |
| MMPose | 0.x | 1.x |
| MMDeploy | 0.x | 1.x |
| MMTracking | 0.x | 1.x |
| MMOCR | 0.x | 1.x |
| MMRazor | 0.x | 1.x |
| MMSelfSup | 0.x | 1.x |
| MMRotate | 1.x | 1.x |
| MMYOLO | | 0.x |

Attention: please create a new virtual environment for OpenMMLab 2.0.

Reproduction Problem

Thanks for your nice work.
However, I get the following results with your pretrained model `centerpoint_voxel_nus_8subset_bs4_img1_nuimg_detach_deform_multipts`:

mAP: 0.5653
mATE: 0.3300
mASE: 0.2699
mAOE: 0.4220
mAVE: 0.4690
mAAE: 0.1972
NDS: 0.6138

By re-training from scratch, I get

mAP: 0.5622                                                                                                                                                                    
mATE: 0.3244
mASE: 0.2704
mAOE: 0.4520
mAVE: 0.4817
mAAE: 0.1992
NDS: 0.6084

These are lower than the numbers in the README (58.5 mAP and 63.2 NDS). Is there something I missed?
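
As a side note, the NDS values above are internally consistent with the other metrics: NDS is a fixed weighted combination of mAP and the five true-positive error terms, so it can be recomputed by hand. A minimal check in Python (values copied from the results above):

```python
# Recompute the nuScenes Detection Score from the reported metrics:
# NDS = (5 * mAP + sum over the five TP errors of (1 - min(1, err))) / 10
def nds(mAP, mATE, mASE, mAOE, mAVE, mAAE):
    tp_errors = (mATE, mASE, mAOE, mAVE, mAAE)
    return (5 * mAP + sum(1 - min(1.0, e) for e in tp_errors)) / 10

print(nds(0.5653, 0.3300, 0.2699, 0.4220, 0.4690, 0.1972))  # ~0.6138 (pretrained model)
print(nds(0.5622, 0.3244, 0.2704, 0.4520, 0.4817, 0.1992))  # ~0.6083 (retrained run)
```

Both recomputed values agree with the printed NDS to within rounding of the inputs, so the gap to the README numbers comes from the detection metrics themselves (mAP and the TP errors) rather than from how NDS is aggregated.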

Cannot find ops.modules

Thank you for sharing your wonderful work!

It seems that the deformable attention module is implemented as MSDeformAttn in ops.modules.
However, I can't find the ops.modules folder anywhere.

Could you please upload ops.modules?

Thanks.
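
For context, `ops.modules` with `MSDeformAttn` appears to be the compiled multi-scale deformable attention package from the Deformable-DETR codebase, which has to be copied into the project and built separately. If vendoring those ops is not an option, mmcv-full ships an equivalent operator; below is a minimal sketch of the mmcv version (all sizes are illustrative, and the call signature differs from Deformable-DETR's `MSDeformAttn`, so the surrounding fusion code would still need adapting):

```python
# Possible stand-in, assuming mmcv-full >= 1.3 is installed: mmcv's own
# multi-scale deformable attention layer. Shapes below are illustrative only.
import torch
from mmcv.ops import MultiScaleDeformableAttention

attn = MultiScaleDeformableAttention(
    embed_dims=256, num_heads=8, num_levels=1, num_points=4)

bs, num_query = 2, 100                        # hypothetical batch / query sizes
h, w = 64, 176                                # hypothetical feature-map size
query = torch.rand(num_query, bs, 256)        # (num_query, bs, embed_dims)
value = torch.rand(h * w, bs, 256)            # flattened image features
reference_points = torch.rand(bs, num_query, 1, 2)         # (bs, nq, num_levels, 2)
spatial_shapes = torch.tensor([[h, w]], dtype=torch.long)   # (num_levels, 2)
level_start_index = torch.tensor([0], dtype=torch.long)

out = attn(query, value=value, reference_points=reference_points,
           spatial_shapes=spatial_shapes, level_start_index=level_start_index)
print(out.shape)  # torch.Size([100, 2, 256])
```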

Question about preprocessing the nuScenes dataset

Hello, I preprocess the nuScenes dataset with `python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --version v1.0`. The following error occurred:
[error screenshot from the original issue not preserved]

My Python environment is: CUDA 11.1, Python 3.7.15, PyTorch 1.7.1, mmcv-full 1.4.0, mmdet 2.14.0, mmdet3d 0.16.0 (from the current project).
Related code is:
```python
def nuscenes_data_prep(root_path,
                       info_prefix,
                       version,
                       dataset_name,
                       out_dir,
                       max_sweeps=10):
    nuscenes_converter.create_nuscenes_infos(
        root_path, info_prefix, version=version, max_sweeps=max_sweeps)

    if version == 'v1.0-test':
        info_test_path = osp.join(root_path, f'{info_prefix}_infos_test.pkl')
        nuscenes_converter.export_2d_annotation(
            root_path, info_test_path, version=version)
        return

    info_train_path = osp.join(root_path, f'{info_prefix}_infos_train.pkl')
    info_val_path = osp.join(root_path, f'{info_prefix}_infos_val.pkl')
    # nuscenes_converter.export_2d_annotation(
    #     root_path, info_train_path, version=version)
    # nuscenes_converter.export_2d_annotation(
    #     root_path, info_val_path, version=version)
    create_groundtruth_database(dataset_name, root_path, info_prefix,
                                f'{out_dir}/{info_prefix}_infos_train.pkl',
                                with_bbox=True)
```

Could you help to analyze the cause of the problem?
Thanks

Questions about training details

First, thanks for your inspiring work! I am trying to reproduce the reported results with this repo and have the following questions.

1. Could you provide the pretrained model of the 2D detector? I cannot find it in MMDetection3D (see the sketch below).
2. The current config files seem to make some simplifications to the training, including using 1/8 of the dataset, 9 sweeps, and 1 image per scene. Is there any difference in the lr config for full-set training?
3. Which fusion layer is the final version: multi_voxel_deform_fusion v2/v3, fast, etc.?

Thanks in advance.
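
On question 1, in case it helps while waiting for the official checkpoint: a hypothetical config fragment for initializing the image branch from a separately trained 2D detector, using the MMDetection-style `init_cfg` mechanism. The backbone type, scaling factors, and checkpoint path are placeholders and may not match this repo's actual configs:

```python
# Hypothetical config fragment: load only the backbone weights of a separately
# trained 2D detector into the fusion model's image backbone. `CSPDarknet` is
# YOLOX's backbone in MMDetection; the checkpoint path below is a placeholder.
model = dict(
    img_backbone=dict(
        type='CSPDarknet',
        deepen_factor=0.33,
        widen_factor=0.5,
        init_cfg=dict(
            type='Pretrained',
            checkpoint='checkpoints/yolox_2d_detector.pth',  # placeholder path
            prefix='backbone.')),  # load only keys under the detector's 'backbone.' prefix
)
```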

mmdet3d.ops missing issue

I tried to train the model from scratch and got the issue below.

File "./AutoAlignV2/mmdet3d/core/bbox/structures/base_box3d.py", line 6, in
from mmdet3d.ops.iou3d import iou3d_cuda
ModuleNotFoundError: No module named 'mmdet3d.ops'

I didn't find mmdet3d.ops in the code. I tried to install mmdet3d separately, but it still doesn't work. Any advice?
Thank you.
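
For context, `mmdet3d.ops` in the 0.x codebase is a set of compiled C++/CUDA extensions, so it only becomes importable after building this repository in place (for example, `pip install -v -e .` run from the AutoAlignV2 root); installing a stock mmdet3d wheel elsewhere does not create these modules inside the fork. A quick sanity check after building, mirroring the failing import from the traceback above:

```python
# Minimal check that the fork's compiled extensions were built and are importable.
# Run inside the environment where the in-place build completed successfully.
from mmdet3d.ops.iou3d import iou3d_cuda            # the import from the traceback above
from mmdet3d.ops.ball_query import ball_query_ext   # also reported missing in another issue

print('compiled mmdet3d.ops extensions are importable')
```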

Depth-Aware GT-AUG

I would like to ask which part of the code implements the Depth-Aware GT-AUG mentioned in your paper. I couldn't find it.
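
As a rough illustration of the concept only (not the authors' code, and the exact blending rule in the paper may differ): depth-aware GT-AUG concerns the image side of ground-truth sampling, where the 2D patches of pasted objects must respect depth ordering so that nearer objects occlude farther ones. A minimal sketch of that ordering step, with hypothetical field names:

```python
# Illustrative sketch: paste sampled objects' image patches far-to-near so the
# nearer patches end up on top, approximating correct occlusion. Not the
# repo's implementation; `sampled_objects` and its fields are made up.
import numpy as np

def paste_sampled_patches(image, sampled_objects):
    """image: HxWx3 array; sampled_objects: list of dicts with 'depth',
    'bbox_2d' (x1, y1, x2, y2 in pixels) and 'patch' (the cropped object region)."""
    out = image.copy()
    for obj in sorted(sampled_objects, key=lambda o: o['depth'], reverse=True):
        x1, y1, x2, y2 = obj['bbox_2d']
        out[y1:y2, x1:x2] = obj['patch']
    return out
```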

Question about installation

Hello, Zehui! Thanks for your impressive work! I'd like to know the versions in your installation environment (mmdet3d, mmcv-full, PyTorch, etc.). Could you share them with us? Thanks a lot!

cannot import name 'ball_query_ext'

Has anyone else encountered this issue and found a fix?
Thank you.
ImportError: cannot import name 'ball_query_ext' from 'mmdet3d.ops.ball_query' (AutoAlignV2/mmdet3d/ops/ball_query/__init__.py)

yolox training config

Hi, I was wondering whether I could get your YOLOX training config for the nuScenes dataset, as the mAP of the YOLOX model I trained is lower than the performance you report. Many thanks.
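
Not the authors' configuration, but a hypothetical starting point for a 2D detector on the nuScenes camera images: an MMDetection 2.x-style config that fine-tunes a stock YOLOX baseline on the 10 nuScenes detection classes. The base config, annotation paths, and pretrained checkpoint below are placeholders:

```python
# Hypothetical MMDetection 2.x config sketch; the YOLOX variant, data paths and
# schedule are placeholders, not the settings actually used in this repo.
_base_ = './yolox_s_8x8_300e_coco.py'

CLASSES = ('car', 'truck', 'trailer', 'bus', 'construction_vehicle',
           'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier')

model = dict(bbox_head=dict(num_classes=10))

data = dict(
    train=dict(dataset=dict(classes=CLASSES,
                            ann_file='data/nuimages/annotations/train.json',  # placeholder
                            img_prefix='data/nuimages/')),
    val=dict(classes=CLASSES,
             ann_file='data/nuimages/annotations/val.json',                   # placeholder
             img_prefix='data/nuimages/'),
    test=dict(classes=CLASSES,
              ann_file='data/nuimages/annotations/val.json',                  # placeholder
              img_prefix='data/nuimages/'))

load_from = 'checkpoints/yolox_coco_pretrained.pth'  # placeholder: COCO-pretrained weights
```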

Prepare for training issues

In the fork branch, MSDeformAttn cannot be found for `from ops.modules import MSDeformAttn`. Do I need to install additional libraries to run the code?
Looking forward to your reply~

Where do you implement Image Level Dropout?

Your paper has a section on image-level dropout, which is very intuitive. However, I didn't find where you implement it in your code. In a previous answer you said you found that taking only 3 images as input reaches performance similar to 6 images plus image dropout.
Does this mean you chose to input 3 images directly?
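
For readers looking for the concept rather than the exact code path: image-level dropout, as described in the question, randomly discards whole camera views during training so the detector does not become overly reliant on the image branch. A minimal illustrative sketch (not the authors' implementation; names are made up):

```python
# Illustrative image-level dropout: randomly drop entire camera feature maps
# during training. A sketch of the idea only, not this repo's code.
import random

def image_level_dropout(per_camera_feats, drop_prob=0.5, training=True):
    """per_camera_feats: list of per-camera feature tensors; dropped views become None."""
    if not training or drop_prob <= 0:
        return per_camera_feats
    kept = [f if random.random() > drop_prob else None for f in per_camera_feats]
    # Keep at least one view so the image branch still receives some signal.
    if all(f is None for f in kept):
        idx = random.randrange(len(per_camera_feats))
        kept[idx] = per_camera_feats[idx]
    return kept
```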
