
epnetv2's People

Contributors

happinesslz


epnetv2's Issues

CB-Fusion into OpenPCDet

Hello, I am a student.
I would like to ask whether CB-Fusion, which was scheduled to be released for OpenPCDet in January 2023, is now available for use. Where specifically can I find it, or how should I proceed with my experiments?

Visualization of Fig.8

Thank you for your awesome work. Could you tell us how to visualize Figure 8, or open-source the visualization code? Many thanks!
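
Not the authors' visualization code, but while waiting for it, a minimal bird's-eye-view sketch with matplotlib that plots a KITTI point cloud and a few predicted boxes; the file path and the box tuple format are assumptions for illustration:

import numpy as np
import matplotlib.pyplot as plt

def draw_bev(points, boxes, save_path="bev.png"):
    # points: (N, 3) LiDAR xyz; boxes: iterable of (x, y, l, w, ry) in the LiDAR frame (assumed format).
    fig, ax = plt.subplots(figsize=(8, 8))
    ax.scatter(points[:, 0], points[:, 1], s=0.2, c="gray")  # top-down scatter of the point cloud
    for x, y, l, w, ry in boxes:
        corners = np.array([[l / 2, w / 2], [l / 2, -w / 2], [-l / 2, -w / 2], [-l / 2, w / 2]])
        rot = np.array([[np.cos(ry), -np.sin(ry)], [np.sin(ry), np.cos(ry)]])
        corners = corners @ rot.T + np.array([x, y])          # rotate by yaw, then translate
        closed = np.vstack([corners, corners[:1]])            # close the rectangle for plotting
        ax.plot(closed[:, 0], closed[:, 1], c="red", linewidth=1.0)
    ax.set_aspect("equal")
    ax.set_xlabel("x (m)")
    ax.set_ylabel("y (m)")
    fig.savefig(save_path, dpi=200)

# Hypothetical usage: a KITTI velodyne scan plus one dummy box.
pts = np.fromfile("../data/KITTI/object/training/velodyne/000008.bin", dtype=np.float32).reshape(-1, 4)
draw_bev(pts[:, :3], boxes=[(10.0, 2.0, 3.9, 1.6, 0.3)])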

Pedestrian AP results look crazy

(screenshot of the results failed to upload)
I trained the model on the Pedestrian class and then tested it (the training set and the val set are different!), and I got this crazy result. I wonder what my problem is.

eval_rcnn TEST mode

The eval_rcnn cfg has a test option. Should I switch it to test when generating results for the KITTI benchmark submission?
(However, if I change it to test, an error occurs saying there is no train_mask. So should I put a train_mask in the test data?)

I want to create results that I can submit to KITTI; please tell me how I can do it (a sketch of the submission file format follows the command below).

[Test CAR command line]
CUDA_VISIBLE_DEVICES=0 python eval_rcnn.py --cfg_file cfgs/CAR_EPNet_plus_plus.yaml --eval_mode rcnn --test \
--output_dir ./log/CAR_EPNet_plus_plus/test_results/ \
--data_path ../data/ \
--ckpt ./log/CAR_EPNet_plus_plus/ckpt/checkpoint_epoch_48.pth \
--set LI_FUSION.ENABLED True LI_FUSION.ADD_Image_Attention True CROSS_FUSION True USE_P2I_GATE True \
DEEP_RCNN_FUSION False USE_IMAGE_LOSS True IMAGE_WEIGHT 1.0 USE_IMAGE_SCORE True USE_IMG_DENSE_LOSS True USE_MC_LOSS True \
MC_LOSS_WEIGHT 1.0 I2P_Weight 0.5 P2I_Weight 0.5 ADD_MC_MASK True MC_MASK_THRES 0.2 SAVE_MODEL_PREP 0.8
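
Not an official answer, but for reference: the KITTI test server expects one plain-text file per test frame in the standard label format (type, truncation, occlusion, alpha, 2D box, 3D dimensions h w l, 3D location x y z in camera coordinates, rotation_y, score), and the test split has no labels, so anything that depends on train_mask or ground truth must be disabled there. A minimal sketch of writing detections in that format; the detection dictionary keys are hypothetical:

import os

def write_kitti_result(frame_id, detections, out_dir="./log/CAR_EPNet_plus_plus/test_results/data"):
    # detections: list of dicts with hypothetical keys 'cls', 'bbox2d' (x1, y1, x2, y2),
    # 'hwl' (h, w, l), 'xyz' (camera coordinates), 'ry', 'score'.
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "%06d.txt" % frame_id), "w") as f:
        for d in detections:
            x1, y1, x2, y2 = d["bbox2d"]
            h, w, l = d["hwl"]
            x, y, z = d["xyz"]
            # KITTI line: type truncated occluded alpha x1 y1 x2 y2 h w l x y z rotation_y score
            f.write("%s -1 -1 -10 %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.2f %.4f\n"
                    % (d["cls"], x1, y1, x2, y2, h, w, l, x, y, z, d["ry"], d["score"]))

# Hypothetical usage with a single dummy detection:
write_kitti_result(7, [{"cls": "Car", "bbox2d": (710.4, 144.0, 820.3, 307.9),
                        "hwl": (1.52, 1.63, 3.88), "xyz": (1.8, 1.5, 8.7),
                        "ry": -1.56, "score": 0.95}])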

Thank you for your reply.

CB-Fusion integration into OpenPCDet

Hello! Thanks for your excellent work on this LiDAR-camera fusion 3D detector!
May I ask whether you have integrated CB-Fusion into OpenPCDet, for the convenience of training multiple categories?

Implement EPNet on Waymo dataset

Hi lz,

Thanks for your great contribution!
I recently tried to replicate EPNet on the Waymo dataset. My initial idea was to borrow the waymo_dataset.py provided by mmdet3d and then convert the returned data into the format required by EPNet.
However, after a few days of work, this implementation seems quite difficult, so I would like to ask you about the implementation process. Another question: do we only need to rewrite the waymo_dataset.py file?
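
Not the authors' recipe, but for discussion, a rough sketch of the adapter idea described above: wrap an existing Waymo loader and repackage each sample into the point/image/projection fields that EPNet's KITTI pipeline consumes. Every class, method, and key name below is an assumption for illustration only:

import numpy as np
from torch.utils.data import Dataset

class WaymoToEPNetAdapter(Dataset):
    # Wraps an existing Waymo loader (e.g. the one in mmdet3d) and repackages each sample
    # into the field names EPNet's KITTI pipeline expects. The base loader interface below
    # (get_lidar / get_image / get_calib / lidar_to_img) is hypothetical.

    def __init__(self, base_dataset, npoints=16384):
        self.base = base_dataset
        self.npoints = npoints

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        pts = self.base.get_lidar(idx)          # (N, 3+) LiDAR points, hypothetical accessor
        img = self.base.get_image(idx, cam=0)   # front-camera image, hypothetical accessor
        calib = self.base.get_calib(idx)        # projection matrices, hypothetical accessor

        # Subsample / pad to a fixed number of points, as the KITTI loader does.
        choice = np.random.choice(len(pts), self.npoints, replace=len(pts) < self.npoints)
        pts = pts[choice]

        # Project points into the image to get per-point pixel coordinates.
        uv = calib.lidar_to_img(pts[:, :3])     # (N, 2), hypothetical method

        return {"pts_input": pts[:, :3].astype(np.float32),
                "img": img.astype(np.float32) / 255.0,
                "pts_origin_xy": uv.astype(np.float32)}

In practice the calibration and coordinate-frame handling tends to be the tricky part, so rewriting only the dataset file may not be quite enough on its own.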

Best regards.

Question about the parameters for train and eval

Hi, @happinesslz

Thanks for your great work!

I have two questions about your work:

  1. I have found that the parameter "MC_MASK_THRES" in your pretrained model is different from the one in run_train_and_eval_epnet_plus_plus_car.sh. Could you please explain the reason, and which one is more suitable?
  2. You said you would transfer EPNet++ to the Waymo dataset, but the number of RGB images in Waymo differs from KITTI: Waymo takes multiple RGB images as input, so how are you going to deal with this problem? (See the sketch after this list.)
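
For what it's worth, one common way to handle multiple cameras is to project every LiDAR point into each camera and keep the first camera whose image actually contains it. A standalone sketch of that assignment (the calibration inputs below are placeholders, not Waymo's real matrices):

import numpy as np

def assign_points_to_cameras(points, intrinsics, extrinsics, image_sizes):
    # points: (N, 3) in the vehicle/LiDAR frame.
    # intrinsics: list of (3, 3) K matrices; extrinsics: list of (4, 4) LiDAR-to-camera transforms;
    # image_sizes: list of (H, W). Returns an (N,) camera index, or -1 if no camera sees the point.
    n = len(points)
    cam_idx = np.full(n, -1, dtype=np.int64)
    homog = np.hstack([points, np.ones((n, 1))])
    for c, (K, T, (H, W)) in enumerate(zip(intrinsics, extrinsics, image_sizes)):
        cam_pts = (T @ homog.T).T[:, :3]                     # transform into this camera's frame
        in_front = cam_pts[:, 2] > 0.1                       # keep points in front of the camera
        uv = (K @ cam_pts.T).T
        uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)     # perspective division
        in_image = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
        mask = in_front & in_image & (cam_idx == -1)         # first matching camera wins
        cam_idx[mask] = c
    return cam_idx

# Tiny usage with one dummy camera (identity extrinsics, simple pinhole intrinsics):
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 5.0], [0.0, 0.0, -5.0]])
print(assign_points_to_cameras(pts, [K], [T], [(480, 640)]))  # -> [0, -1]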

When can CB-Fusion be integrated into OpenPCDet?

" TODO
**Note: We will integrate CB-Fusion into OpenPCDet for training multiple categories. Besides, we will provide voxel-wise fusion based on CB-Fusion on both Waymo and KITTI dataset. We plan to release them in January, 2023. ** "

May I ask when the above TODO items are expected to be completed?

Looking forward to it!

About the training effect of EPNet in OpenPCDet.

Hello @happinesslz, first of all, thank you very much for your open-source work.
I have some questions about EPNet for you.
I referred to your EPNet code, ported it to the OpenPCDet framework, and trained it.
However, the training results differ slightly from the results in your paper; what could be the reason for this? Also, the IoU threshold for the AP_R40 evaluation metric does not seem to be mentioned in the EPNet paper. The results are shown below; "primeval" indicates the result reported in your paper. Because I do not know the IoU threshold, I defaulted to the one in the table.
(screenshot of the evaluation results omitted)
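
For context (not from this repository): KITTI's AP_R40 metric averages precision over 40 equally spaced recall positions (the older AP_R11 uses 11), evaluated at a fixed IoU threshold (commonly 0.7 for Car and 0.5 for Pedestrian/Cyclist), so mixing AP_R11 and AP_R40 numbers or IoU thresholds can easily produce noticeable gaps. A minimal sketch of the interpolation step, assuming a precision-recall curve is already available:

import numpy as np

def average_precision_r40(recall, precision):
    # recall, precision: arrays describing a PR curve with monotonically increasing recall.
    # Returns AP averaged over the 40 recall positions used by KITTI's AP_R40.
    recall_points = np.linspace(1.0 / 40, 1.0, 40)          # 1/40, 2/40, ..., 1.0
    ap = 0.0
    for r in recall_points:
        # interpolated precision: the best precision achievable at recall >= r
        mask = recall >= r
        ap += precision[mask].max() if mask.any() else 0.0
    return ap / 40.0

# Tiny example with a made-up PR curve:
rec = np.array([0.1, 0.4, 0.7, 0.9])
prec = np.array([0.95, 0.9, 0.8, 0.6])
print(average_precision_r40(rec, prec))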

RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

Hi, when I try to reproduce the code I get an error. My CUDA version is 10.0 and my torch version is 1.2.0. Can someone please tell me how I should go about fixing this error? Thanks.
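
Not a fix, but a quick sanity check that often narrows this down: CUBLAS_STATUS_EXECUTION_FAILED with CUDA 10.0 / PyTorch 1.2 frequently means the GPU's compute capability is newer than that toolkit supports (for example, RTX 30-series cards require CUDA 11+). A tiny snippet, independent of EPNetV2, to confirm whether a plain GPU matmul works in the same environment:

import torch

print("torch", torch.__version__, "| CUDA runtime", torch.version.cuda,
      "| device capability", torch.cuda.get_device_capability(0))

# If this minimal matmul also raises CUBLAS_STATUS_EXECUTION_FAILED, the problem is the
# torch/CUDA build vs. the GPU architecture, not EPNetV2's code.
a = torch.randn(64, 64, device="cuda")
b = torch.randn(64, 64, device="cuda")
print((a @ b).sum().item())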

/home/bitcqic/EPNetV2/EPNetV2/tools/../lib/config.py:250: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
yaml_cfg = edict(yaml.load(f))
1.0 1.0
2023-11-04 13:34:47,355 INFO Start logging
2023-11-04 13:34:47,355 INFO CUDA_VISIBLE_DEVICES=ALL
2023-11-04 13:34:47,355 INFO cfg_file cfgs/CAR_EPNet_plus_plus.yaml
2023-11-04 13:34:47,355 INFO train_mode rcnn_online
2023-11-04 13:34:47,356 INFO batch_size 16
2023-11-04 13:34:47,356 INFO epochs 50
2023-11-04 13:34:47,356 INFO workers 8
2023-11-04 13:34:47,356 INFO ckpt_save_interval 1
2023-11-04 13:34:47,356 INFO output_dir ./log/CAR_EPNet_plus_plus/
2023-11-04 13:34:47,356 INFO mgpus True
2023-11-04 13:34:47,356 INFO data_path ../../data/
2023-11-04 13:34:47,356 INFO ckpt None
2023-11-04 13:34:47,356 INFO rpn_ckpt None
2023-11-04 13:34:47,356 INFO gt_database None
2023-11-04 13:34:47,356 INFO rcnn_training_roi_dir None
2023-11-04 13:34:47,356 INFO rcnn_training_feature_dir None
2023-11-04 13:34:47,356 INFO train_with_eval False
2023-11-04 13:34:47,356 INFO rcnn_eval_roi_dir None
2023-11-04 13:34:47,356 INFO rcnn_eval_feature_dir None
2023-11-04 13:34:47,356 INFO set_cfgs ['LI_FUSION.ENABLED', 'True', 'LI_FUSION.ADD_Image_Attention', 'True', 'CROSS_FUSION', 'True', 'USE_P2I_GATE', 'True', 'DEEP_RCNN_FUSION', 'False', 'USE_IMAGE_LOSS', 'True', 'IMAGE_WEIGHT', '1.0', 'USE_IMAGE_SCORE', 'True', 'USE_IMG_DENSE_LOSS', 'True', 'USE_MC_LOSS', 'True', 'MC_LOSS_WEIGHT', '1.0', 'I2P_Weight', '0.5', 'P2I_Weight', '0.5', 'ADD_MC_MASK', 'True', 'MC_MASK_THRES', '0.2', 'SAVE_MODEL_PREP', '0.8']
2023-11-04 13:34:47,356 INFO model_type base
2023-11-04 13:34:47,356 INFO cfg.TAG: CAR_EPNet_plus_plus
2023-11-04 13:34:47,357 INFO cfg.CLASSES: Car
2023-11-04 13:34:47,357 INFO cfg.INCLUDE_SIMILAR_TYPE: True
2023-11-04 13:34:47,357 INFO cfg.AUG_DATA: True
2023-11-04 13:34:47,357 INFO cfg.AUG_METHOD_LIST: ['rotation', 'scaling', 'flip']
2023-11-04 13:34:47,357 INFO cfg.AUG_METHOD_PROB: [1.0, 1.0, 0.5]
2023-11-04 13:34:47,357 INFO cfg.AUG_ROT_RANGE: 18
2023-11-04 13:34:47,357 INFO cfg.GT_AUG_ENABLED: False
2023-11-04 13:34:47,357 INFO cfg.GT_EXTRA_NUM: 15
2023-11-04 13:34:47,357 INFO cfg.GT_AUG_RAND_NUM: True
2023-11-04 13:34:47,357 INFO cfg.GT_AUG_APPLY_PROB: 1.0
2023-11-04 13:34:47,357 INFO cfg.GT_AUG_HARD_RATIO: 0.6
2023-11-04 13:34:47,357 INFO cfg.PC_REDUCE_BY_RANGE: True
2023-11-04 13:34:47,357 INFO cfg.PC_AREA_SCOPE: [[-40. 40. ]
[ -1. 3. ]
[ 0. 70.4]]
2023-11-04 13:34:47,358 INFO cfg.CLS_MEAN_SIZE: [[1.5256319 1.6285675 3.8831165]]
2023-11-04 13:34:47,358 INFO cfg.USE_IOU_BRANCH: True
2023-11-04 13:34:47,358 INFO cfg.USE_IM_DEPTH: False
2023-11-04 13:34:47,358 INFO cfg.USE_PSEUDO_LIDAR: False
2023-11-04 13:34:47,358 INFO cfg.CROSS_FUSION: True
2023-11-04 13:34:47,358 INFO cfg.INPUT_CROSS_FUSION: False
2023-11-04 13:34:47,358 INFO cfg.USE_KNN_FUSION: False
2023-11-04 13:34:47,358 INFO cfg.USE_SELF_ATTENTION: False
2023-11-04 13:34:47,358 INFO cfg.DEEP_RCNN_FUSION: False
2023-11-04 13:34:47,358 INFO cfg.USE_IMAGE_LOSS: True
2023-11-04 13:34:47,358 INFO cfg.IMAGE_WEIGHT: 1.0
2023-11-04 13:34:47,358 INFO cfg.USE_IMAGE_LOSS_TYPE: CrossEntropyLoss
2023-11-04 13:34:47,358 INFO cfg.USE_IMAGE_SCORE: True
2023-11-04 13:34:47,358 INFO cfg.USE_IMG_DENSE_LOSS: True
2023-11-04 13:34:47,358 INFO cfg.USE_KL_LOSS: False
2023-11-04 13:34:47,358 INFO cfg.USE_KL_LOSS_TYPE: KL
2023-11-04 13:34:47,358 INFO cfg.MC_LOSS_WEIGHT: 1.0
2023-11-04 13:34:47,359 INFO cfg.SAVE_MODEL_PREP: 0.8
2023-11-04 13:34:47,359 INFO cfg.USE_P2I_GATE: True
2023-11-04 13:34:47,359 INFO cfg.STACK_CROSS_FUSION: False
2023-11-04 13:34:47,359 INFO cfg.USE_IMAGE_RES: False
2023-11-04 13:34:47,359 INFO cfg.RCNN_IMG_CHANNEL: 32
2023-11-04 13:34:47,359 INFO cfg.ONLY_USE_IMAGE_FEAT: False
2023-11-04 13:34:47,359 INFO cfg.USE_POINT_ATT_FEATURE: False
2023-11-04 13:34:47,359 INFO cfg.USE_POINT_FEATURE_RES: False
2023-11-04 13:34:47,359 INFO cfg.I2P_Weight: 0.5
2023-11-04 13:34:47,359 INFO cfg.P2I_Weight: 0.5
2023-11-04 13:34:47,359 INFO cfg.USE_MC_LOSS: True
2023-11-04 13:34:47,359 INFO cfg.ADD_MC_MASK: True
2023-11-04 13:34:47,359 INFO cfg.MC_MASK_THRES: 0.2
2023-11-04 13:34:47,359 INFO cfg.USE_PURE_IMG_BACKBONE: False
2023-11-04 13:34:47,359 INFO cfg.USE_PAINTING_SCORE: False
2023-11-04 13:34:47,359 INFO cfg.USE_PAINTING_FEAT: False
2023-11-04 13:34:47,359 INFO
cfg.LI_FUSION = edict()
2023-11-04 13:34:47,359 INFO cfg.LI_FUSION.ENABLED: True
2023-11-04 13:34:47,360 INFO cfg.LI_FUSION.IMG_FEATURES_CHANNEL: 128
2023-11-04 13:34:47,360 INFO cfg.LI_FUSION.ADD_Image_Attention: True
2023-11-04 13:34:47,360 INFO cfg.LI_FUSION.IMG_CHANNELS: [3, 64, 128, 256, 512]
2023-11-04 13:34:47,360 INFO cfg.LI_FUSION.POINT_CHANNELS: [96, 256, 512, 1024]
2023-11-04 13:34:47,360 INFO cfg.LI_FUSION.DeConv_Reduce: [16, 16, 16, 16]
2023-11-04 13:34:47,360 INFO cfg.LI_FUSION.DeConv_Kernels: [2, 4, 8, 16]
2023-11-04 13:34:47,360 INFO cfg.LI_FUSION.DeConv_Strides: [2, 4, 8, 16]
2023-11-04 13:34:47,360 INFO
cfg.RPN = edict()
2023-11-04 13:34:47,360 INFO cfg.RPN.ENABLED: True
2023-11-04 13:34:47,360 INFO cfg.RPN.FIXED: False
2023-11-04 13:34:47,360 INFO cfg.RPN.USE_INTENSITY: False
2023-11-04 13:34:47,360 INFO cfg.RPN.USE_RGB: False
2023-11-04 13:34:47,360 INFO cfg.RPN.LOC_XZ_FINE: True
2023-11-04 13:34:47,360 INFO cfg.RPN.LOC_SCOPE: 3.0
2023-11-04 13:34:47,360 INFO cfg.RPN.LOC_BIN_SIZE: 0.5
2023-11-04 13:34:47,360 INFO cfg.RPN.NUM_HEAD_BIN: 12
2023-11-04 13:34:47,360 INFO cfg.RPN.BACKBONE: pointnet2_msg
2023-11-04 13:34:47,361 INFO cfg.RPN.USE_BN: True
2023-11-04 13:34:47,361 INFO cfg.RPN.NUM_POINTS: 16384
2023-11-04 13:34:47,361 INFO
cfg.RPN.SA_CONFIG = edict()
2023-11-04 13:34:47,361 INFO cfg.RPN.SA_CONFIG.ATTN_DIM: 128
2023-11-04 13:34:47,361 INFO cfg.RPN.SA_CONFIG.ATTN: [0, 0, 128, 128]
2023-11-04 13:34:47,361 INFO cfg.RPN.SA_CONFIG.NPOINTS: [4096, 1024, 256, 64]
2023-11-04 13:34:47,361 INFO cfg.RPN.SA_CONFIG.RADIUS: [[0.1, 0.5], [0.5, 1.0], [1.0, 2.0], [2.0, 4.0]]
2023-11-04 13:34:47,361 INFO cfg.RPN.SA_CONFIG.NSAMPLE: [[16, 32], [16, 32], [16, 32], [16, 32]]
2023-11-04 13:34:47,361 INFO cfg.RPN.SA_CONFIG.MLPS: [[[16, 16, 32], [32, 32, 64]], [[64, 64, 128], [64, 96, 128]], [[128, 196, 256], [128, 196, 256]], [[256, 256, 512], [256, 384, 512]]]
2023-11-04 13:34:47,361 INFO cfg.RPN.FP_MLPS: [[128, 128], [256, 256], [512, 512], [512, 512]]
2023-11-04 13:34:47,361 INFO cfg.RPN.CLS_FC: [128]
2023-11-04 13:34:47,361 INFO cfg.RPN.REG_FC: [128]
2023-11-04 13:34:47,361 INFO cfg.RPN.DP_RATIO: 0.5
2023-11-04 13:34:47,361 INFO cfg.RPN.LOSS_CLS: SigmoidFocalLoss
2023-11-04 13:34:47,361 INFO cfg.RPN.FG_WEIGHT: 15
2023-11-04 13:34:47,361 INFO cfg.RPN.FOCAL_ALPHA: [0.25, 0.75]
2023-11-04 13:34:47,361 INFO cfg.RPN.FOCAL_GAMMA: 2.0
2023-11-04 13:34:47,361 INFO cfg.RPN.REG_LOSS_WEIGHT: [1.0, 1.0, 1.0, 1.0]
2023-11-04 13:34:47,362 INFO cfg.RPN.LOSS_WEIGHT: [1.0, 1.0]
2023-11-04 13:34:47,362 INFO cfg.RPN.NMS_TYPE: normal
2023-11-04 13:34:47,362 INFO cfg.RPN.SCORE_THRESH: 0.2
2023-11-04 13:34:47,362 INFO
cfg.RCNN = edict()
2023-11-04 13:34:47,362 INFO cfg.RCNN.ENABLED: True
2023-11-04 13:34:47,362 INFO cfg.RCNN.USE_RPN_FEATURES: True
2023-11-04 13:34:47,362 INFO cfg.RCNN.USE_MASK: True
2023-11-04 13:34:47,362 INFO cfg.RCNN.MASK_TYPE: seg
2023-11-04 13:34:47,362 INFO cfg.RCNN.USE_INTENSITY: False
2023-11-04 13:34:47,362 INFO cfg.RCNN.USE_DEPTH: True
2023-11-04 13:34:47,362 INFO cfg.RCNN.USE_SEG_SCORE: False
2023-11-04 13:34:47,362 INFO cfg.RCNN.ROI_SAMPLE_JIT: True
2023-11-04 13:34:47,362 INFO cfg.RCNN.ROI_FG_AUG_TIMES: 10
2023-11-04 13:34:47,362 INFO cfg.RCNN.REG_AUG_METHOD: multiple
2023-11-04 13:34:47,362 INFO cfg.RCNN.POOL_EXTRA_WIDTH: 0.2
2023-11-04 13:34:47,362 INFO cfg.RCNN.USE_RGB: False
2023-11-04 13:34:47,362 INFO cfg.RCNN.LOC_SCOPE: 1.5
2023-11-04 13:34:47,362 INFO cfg.RCNN.LOC_BIN_SIZE: 0.5
2023-11-04 13:34:47,363 INFO cfg.RCNN.NUM_HEAD_BIN: 9
2023-11-04 13:34:47,363 INFO cfg.RCNN.LOC_Y_BY_BIN: False
2023-11-04 13:34:47,363 INFO cfg.RCNN.LOC_Y_SCOPE: 0.5
2023-11-04 13:34:47,363 INFO cfg.RCNN.LOC_Y_BIN_SIZE: 0.25
2023-11-04 13:34:47,363 INFO cfg.RCNN.SIZE_RES_ON_ROI: False
2023-11-04 13:34:47,363 INFO cfg.RCNN.USE_BN: False
2023-11-04 13:34:47,363 INFO cfg.RCNN.DP_RATIO: 0.0
2023-11-04 13:34:47,363 INFO cfg.RCNN.BACKBONE: pointnet
2023-11-04 13:34:47,363 INFO cfg.RCNN.XYZ_UP_LAYER: [128, 128]
2023-11-04 13:34:47,363 INFO cfg.RCNN.NUM_POINTS: 512
2023-11-04 13:34:47,363 INFO
cfg.RCNN.SA_CONFIG = edict()
2023-11-04 13:34:47,363 INFO cfg.RCNN.SA_CONFIG.NPOINTS: [128, 32, -1]
2023-11-04 13:34:47,363 INFO cfg.RCNN.SA_CONFIG.RADIUS: [0.2, 0.4, 100]
2023-11-04 13:34:47,363 INFO cfg.RCNN.SA_CONFIG.NSAMPLE: [64, 64, 64]
2023-11-04 13:34:47,363 INFO cfg.RCNN.SA_CONFIG.MLPS: [[128, 128, 128], [128, 128, 256], [256, 256, 512]]
2023-11-04 13:34:47,363 INFO cfg.RCNN.CLS_FC: [512, 512]
2023-11-04 13:34:47,363 INFO cfg.RCNN.REG_FC: [512, 512]
2023-11-04 13:34:47,363 INFO cfg.RCNN.LOSS_CLS: BinaryCrossEntropy
2023-11-04 13:34:47,364 INFO cfg.RCNN.FOCAL_ALPHA: [0.25, 0.75]
2023-11-04 13:34:47,364 INFO cfg.RCNN.FOCAL_GAMMA: 2.0
2023-11-04 13:34:47,364 INFO cfg.RCNN.CLS_WEIGHT: [1. 1. 1.]
2023-11-04 13:34:47,364 INFO cfg.RCNN.CLS_FG_THRESH: 0.6
2023-11-04 13:34:47,364 INFO cfg.RCNN.CLS_BG_THRESH: 0.45
2023-11-04 13:34:47,364 INFO cfg.RCNN.CLS_BG_THRESH_LO: 0.05
2023-11-04 13:34:47,364 INFO cfg.RCNN.REG_FG_THRESH: 0.55
2023-11-04 13:34:47,364 INFO cfg.RCNN.FG_RATIO: 0.5
2023-11-04 13:34:47,364 INFO cfg.RCNN.ROI_PER_IMAGE: 64
2023-11-04 13:34:47,364 INFO cfg.RCNN.HARD_BG_RATIO: 0.8
2023-11-04 13:34:47,364 INFO cfg.RCNN.IOU_LOSS_TYPE: raw
2023-11-04 13:34:47,364 INFO cfg.RCNN.IOU_ANGLE_POWER: 1
2023-11-04 13:34:47,364 INFO cfg.RCNN.SCORE_THRESH: 0.2
2023-11-04 13:34:47,364 INFO cfg.RCNN.NMS_THRESH: 0.1
2023-11-04 13:34:47,364 INFO
cfg.TRAIN = edict()
2023-11-04 13:34:47,364 INFO cfg.TRAIN.SPLIT: train
2023-11-04 13:34:47,365 INFO cfg.TRAIN.VAL_SPLIT: smallval
2023-11-04 13:34:47,365 INFO cfg.TRAIN.LR: 0.002
2023-11-04 13:34:47,365 INFO cfg.TRAIN.LR_CLIP: 1e-05
2023-11-04 13:34:47,365 INFO cfg.TRAIN.LR_DECAY: 0.5
2023-11-04 13:34:47,365 INFO cfg.TRAIN.DECAY_STEP_LIST: [100, 150, 180, 200]
2023-11-04 13:34:47,365 INFO cfg.TRAIN.LR_WARMUP: True
2023-11-04 13:34:47,365 INFO cfg.TRAIN.WARMUP_MIN: 0.0002
2023-11-04 13:34:47,365 INFO cfg.TRAIN.WARMUP_EPOCH: 1
2023-11-04 13:34:47,365 INFO cfg.TRAIN.BN_MOMENTUM: 0.1
2023-11-04 13:34:47,365 INFO cfg.TRAIN.BN_DECAY: 0.5
2023-11-04 13:34:47,365 INFO cfg.TRAIN.BNM_CLIP: 0.01
2023-11-04 13:34:47,365 INFO cfg.TRAIN.BN_DECAY_STEP_LIST: [1000]
2023-11-04 13:34:47,365 INFO cfg.TRAIN.OPTIMIZER: adam_onecycle
2023-11-04 13:34:47,365 INFO cfg.TRAIN.WEIGHT_DECAY: 0.001
2023-11-04 13:34:47,365 INFO cfg.TRAIN.MOMENTUM: 0.9
2023-11-04 13:34:47,365 INFO cfg.TRAIN.MOMS: [0.95, 0.85]
2023-11-04 13:34:47,365 INFO cfg.TRAIN.DIV_FACTOR: 10.0
2023-11-04 13:34:47,365 INFO cfg.TRAIN.PCT_START: 0.4
2023-11-04 13:34:47,366 INFO cfg.TRAIN.GRAD_NORM_CLIP: 1.0
2023-11-04 13:34:47,366 INFO cfg.TRAIN.RPN_PRE_NMS_TOP_N: 9000
2023-11-04 13:34:47,366 INFO cfg.TRAIN.RPN_POST_NMS_TOP_N: 512
2023-11-04 13:34:47,366 INFO cfg.TRAIN.RPN_NMS_THRESH: 0.85
2023-11-04 13:34:47,366 INFO cfg.TRAIN.RPN_DISTANCE_BASED_PROPOSE: True
2023-11-04 13:34:47,366 INFO cfg.TRAIN.RPN_TRAIN_WEIGHT: 1.0
2023-11-04 13:34:47,366 INFO cfg.TRAIN.RCNN_TRAIN_WEIGHT: 1.0
2023-11-04 13:34:47,366 INFO cfg.TRAIN.CE_WEIGHT: 5.0
2023-11-04 13:34:47,366 INFO cfg.TRAIN.RPN_CE_WEIGHT: 5.0
2023-11-04 13:34:47,366 INFO cfg.TRAIN.IOU_LOSS_TYPE: cls_mask_with_bin
2023-11-04 13:34:47,366 INFO cfg.TRAIN.BBOX_AVG_BY_BIN: True
2023-11-04 13:34:47,366 INFO cfg.TRAIN.RY_WITH_BIN: False
2023-11-04 13:34:47,366 INFO
cfg.TEST = edict()
2023-11-04 13:34:47,366 INFO cfg.TEST.SPLIT: val
2023-11-04 13:34:47,366 INFO cfg.TEST.RPN_PRE_NMS_TOP_N: 9000
2023-11-04 13:34:47,366 INFO cfg.TEST.RPN_POST_NMS_TOP_N: 100
2023-11-04 13:34:47,366 INFO cfg.TEST.RPN_NMS_THRESH: 0.8
2023-11-04 13:34:47,366 INFO cfg.TEST.RPN_DISTANCE_BASED_PROPOSE: True
2023-11-04 13:34:47,366 INFO cfg.TEST.BBOX_AVG_BY_BIN: True
2023-11-04 13:34:47,366 INFO cfg.TEST.RY_WITH_BIN: False
cp: -r not specified; omitting directory '../lib/'
cp: -r not specified; omitting directory '../tools'
cp: cannot stat '../*.py': No such file or directory
./log/CAR_EPNet_plus_plus/
2023-11-04 13:34:47,378 INFO Loading TRAIN samples from ../../data/KITTI/object/training/label_2 ...
2023-11-04 13:34:47,892 INFO Done: filter TRAIN results: 3265 / 3712

##############USE Fusion_Cross_Conv_Gate(ADD)#########
##############ADDITION PI2 ATTENTION#########
##############USE Fusion_Cross_Conv_Gate(ADD)#########
##############ADDITION PI2 ATTENTION#########
##############USE Fusion_Cross_Conv_Gate(ADD)#########
##############ADDITION PI2 ATTENTION#########
##############USE Fusion_Cross_Conv_Gate(ADD)#########
##############ADDITION PI2 ATTENTION#########
2023-11-04 13:42:55,602 INFO Start training
epochs: 0%| | 0/50 [12:10<?, ?it/s]
train: 0%| | 0/204 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train_rcnn.py", line 276, in
lr_scheduler_each_iter = (cfg.TRAIN.OPTIMIZER == 'adam_onecycle')
File "/home/bitcqic/EPNetV2/EPNetV2/tools/../tools/train_utils/train_utils.py", line 199, in train
loss, tb_dict, disp_dict = self._train_it(batch)
File "/home/bitcqic/EPNetV2/EPNetV2/tools/../tools/train_utils/train_utils.py", line 132, in _train_it
loss, tb_dict, disp_dict = self.model_fn(self.model, batch)
File "/home/bitcqic/EPNetV2/EPNetV2/tools/../lib/net/train_functions.py", line 68, in model_fn
ret_dict = model(input_data)
File "/home/bitcqic/anaconda3/envs/EPNetV2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/bitcqic/anaconda3/envs/EPNetV2/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/bitcqic/anaconda3/envs/EPNetV2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/bitcqic/EPNetV2/EPNetV2/tools/../lib/net/point_rcnn.py", line 53, in forward
rpn_output = self.rpn(input_data)
File "/home/bitcqic/anaconda3/envs/EPNetV2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/bitcqic/EPNetV2/EPNetV2/tools/../lib/net/rpn.py", line 126, in forward
backbone_xyz, backbone_features, img_feature, l_xy_cor = self.backbone_net(pts_input, img_input, xy_input) # (B, N, 3), (B, C, N)
File "/home/bitcqic/anaconda3/envs/EPNetV2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/bitcqic/EPNetV2/EPNetV2/tools/../lib/net/pointnet2_msg.py", line 341, in forward
image = self.Cross_Fusion[i](li_features, first_img_gather_feature, li_xy_cor, image)
File "/home/bitcqic/anaconda3/envs/EPNetV2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/bitcqic/EPNetV2/EPNetV2/tools/../lib/net/pointnet2_msg.py", line 116, in forward
point_features = self.P2IA_Layer(img_features, point_features)
File "/home/bitcqic/anaconda3/envs/EPNetV2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/bitcqic/EPNetV2/EPNetV2/tools/../lib/net/pointnet2_msg.py", line 91, in forward
ri = self.fc1(img_feas_f)
File "/home/bitcqic/anaconda3/envs/EPNetV2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/bitcqic/anaconda3/envs/EPNetV2/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/home/bitcqic/anaconda3/envs/EPNetV2/lib/python3.7/site-packages/torch/nn/functional.py", line 1369, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
/home/bitcqic/EPNetV2/EPNetV2/tools/../lib/config.py:250: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
yaml_cfg = edict(yaml.load(f))

RuntimeError: Error compiling objects for extension

I followed all these steps:

The Environment:

Linux (tested on Ubuntu 16.04)
Python 3.7.6
PyTorch 1.2.0 + CUDA 10.0/10.1
a. Clone the EPNet++ repository.

git clone https://github.com/happinesslz/EPNetV2.git
b. Create conda environment.

conda create -n epnet_plus_plus_open python==3.7.6
conda activate epnet_plus_plus_open
conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch
pip install -r requirements.txt
c. Build and install the pointnet2_lib, iou3d, roipool3d libraries by executing the following command:

sh build_and_install.sh (when I run this command, I get the following error):

FAILED: /home/fazal/Downloads/EPNetV2/lib/utils/sample2grid/build/temp.linux-x86_64-cpython-37/Voxel_gpu.o
/usr/bin/nvcc -I/home/fazal/.local/lib/python3.7/site-packages/torch/include -I/home/fazal/.local/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/fazal/.local/lib/python3.7/site-packages/torch/include/TH -I/home/fazal/.local/lib/python3.7/site-packages/torch/include/THC -I/home/fazal/Downloads/EPNetV2/lib/utils/sample2grid -I/home/fazal/anaconda3/envs/epnet/include/python3.7m -c -c /home/fazal/Downloads/EPNetV2/lib/utils/sample2grid/Voxel_gpu.cu -o /home/fazal/Downloads/EPNetV2/lib/utils/sample2grid/build/temp.linux-x86_64-cpython-37/Voxel_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O2 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=gridvoxel_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
/home/fazal/Downloads/EPNetV2/lib/utils/sample2grid/Voxel_gpu.cu(29): warning #177-D: variable "grid_H" was declared but never referenced

/home/fazal/Downloads/EPNetV2/lib/utils/sample2grid/Voxel_gpu.cu(30): warning #177-D: variable "grid_Coor" was declared but never referenced

/home/fazal/Downloads/EPNetV2/lib/utils/sample2grid/Voxel_gpu.cu(106): warning #177-D: variable "grid_H" was declared but never referenced

/usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’:
435 | function(_Functor&& __f)
| ^
/usr/include/c++/11/bits/std_function.h:435:145: note: ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
530 | operator=(_Functor&& __f)
| ^
/usr/include/c++/11/bits/std_function.h:530:146: note: ‘_ArgTypes’
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/fazal/.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1906, in _run_ninja_build
env=env)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "setup.py", line 18, in
, include_dirs = ['./'],
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/init.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/command/install.py", line 74, in run
self.do_egg_install()
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/command/install.py", line 123, in do_egg_install
self.run_command('bdist_egg')
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 165, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 151, in call_command
self.run_command(cmdname)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/command/install_lib.py", line 112, in build
self.run_command('build_ext')
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 84, in run
_build_ext.run(self)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
self.build_extensions()
File "/home/fazal/.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 843, in build_extensions
build_ext.build_extensions(self)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/command/build_ext.py", line 468, in build_extensions
self._build_extensions_serial()
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/command/build_ext.py", line 494, in _build_extensions_serial
self.build_extension(ext)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 246, in build_extension
_build_ext.build_extension(self, ext)
File "/home/fazal/anaconda3/envs/epnet/lib/python3.7/site-packages/setuptools/_distutils/command/build_ext.py", line 556, in build_extension
depends=ext.depends,
File "/home/fazal/.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 668, in unix_wrap_ninja_compile
with_cuda=with_cuda)
File "/home/fazal/.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1578, in _write_ninja_file_and_compile_objects
error_prefix='Error compiling objects for extension')
File "/home/fazal/.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1916, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
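
Not a definitive diagnosis, but the build log above shows the system toolchain being used (/usr/bin/nvcc, GCC 11 headers, -gencode=arch=compute_86) rather than the CUDA 10.x toolkit the repository targets, and the "parameter packs not expanded" errors from std_function.h are a known symptom of an nvcc that is too old for GCC 11's standard library. A small snippet (an illustration, not a guaranteed fix) to check which CUDA toolkit and compilers the installed PyTorch will pick up when building extensions:

import subprocess
import torch
from torch.utils.cpp_extension import CUDA_HOME

print("torch:", torch.__version__)
print("torch built against CUDA:", torch.version.cuda)
print("CUDA_HOME used for extensions:", CUDA_HOME)  # should point at the intended CUDA 10.x toolkit
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)
print(subprocess.run(["g++", "--version"], capture_output=True, text=True).stdout.splitlines()[0])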

The results of evaluating the model are not consistent with the documentation

Hi, @happinesslz.
Thank you very much for your great work.
I have a few questions about the model evaluation part of this work:
First, I downloaded your pre-trained model for evaluation, and the final results are shown below. They are significantly inconsistent with the results you gave in README.md. Why is this? Is it because the evaluation metric (AP vs. AP_R40) or the chosen IoU threshold differs? Can you tell me how to reproduce the results in your paper?
(screenshot of the evaluation results omitted)
Best wishes to you.

error: command '/usr/bin/g++' failed with exit code 1

Hey, what great work!
When I run this repository and execute "cd pointnet2_lib/pointnet2; python setup.py install" to build pointnet2,
an error arises saying that "ball_query.o group_point.o interpolate.o sampling.o" do not exist, followed by "error: command '/usr/bin/g++' failed with exit code 1".
I think this error may be caused by the wrong version of g++, so I want to know which g++ version you used. Thank you!
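
Not the authors' answer, but the missing .o files usually just mean the compilation step before linking failed, and a mismatched g++/PyTorch toolchain is a common cause. A small snippet (assuming a PyTorch version that exposes torch.__config__) to compare the local g++ with the compiler information recorded in the installed torch build:

import subprocess
import torch

# Compiler and CUDA details recorded in the installed torch build (includes the GCC version used).
print(torch.__config__.show())

# The local g++ that setup.py will invoke.
print(subprocess.run(["g++", "--version"], capture_output=True, text=True).stdout.splitlines()[0])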

CUDA error during evaluation on the KITTI dataset

(screenshot of the CUDA error omitted)

2023-08-12 12:11:03,009 INFO ---- EPOCH 44 JOINT EVALUATION ----
2023-08-12 12:11:03,009 INFO ==> Output file: ./epnet_plus_plus_released_trained_models/PED/eval_results/eval/epoch_44/val
eval: 0%| | 0/3769 [00:00<?, ?it/s]
Traceback (most recent call last):
File "eval_rcnn.py", line 1026, in
eval_single_ckpt(root_result_dir, data_path=args.data_path)
File "eval_rcnn.py", line 868, in eval_single_ckpt
eval_one_epoch(model, test_loader, epoch_id, root_result_dir, logger)
File "eval_rcnn.py", line 790, in eval_one_epoch
ret_dict = eval_one_epoch_joint(model, dataloader, epoch_id, result_dir, logger)
File "eval_rcnn.py", line 555, in eval_one_epoch_joint
ret_dict = model(input_data)
File "/home/truartadmin/anaconda3/envs/3DOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/truartadmin/Shivam_Shukla/SHR/CyDAS_Object_Detection/3D_OD/EPNetV2/tools/../lib/net/point_rcnn.py", line 53, in forward
rpn_output = self.rpn(input_data)
File "/home/truartadmin/anaconda3/envs/3DOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/truartadmin/Shivam_Shukla/SHR/CyDAS_Object_Detection/3D_OD/EPNetV2/tools/../lib/net/rpn.py", line 126, in forward
backbone_xyz, backbone_features, img_feature, l_xy_cor = self.backbone_net(pts_input, img_input, xy_input) # (B, N, 3), (B, C, N)
File "/home/truartadmin/anaconda3/envs/3DOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/truartadmin/Shivam_Shukla/SHR/CyDAS_Object_Detection/3D_OD/EPNetV2/tools/../lib/net/pointnet2_msg.py", line 341, in forward
image = self.Cross_Fusion[i](li_features, first_img_gather_feature, li_xy_cor, image)
File "/home/truartadmin/anaconda3/envs/3DOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/truartadmin/Shivam_Shukla/SHR/CyDAS_Object_Detection/3D_OD/EPNetV2/tools/../lib/net/pointnet2_msg.py", line 116, in forward
point_features = self.P2IA_Layer(img_features, point_features)
File "/home/truartadmin/anaconda3/envs/3DOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/truartadmin/Shivam_Shukla/SHR/CyDAS_Object_Detection/3D_OD/EPNetV2/tools/../lib/net/pointnet2_msg.py", line 91, in forward
ri = self.fc1(img_feas_f)
File "/home/truartadmin/anaconda3/envs/3DOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/truartadmin/anaconda3/envs/3DOD/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/home/truartadmin/anaconda3/envs/3DOD/lib/python3.7/site-packages/torch/nn/functional.py", line 1369, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
eval: 0%|
