
yezhen17 / 3dioumatch


[CVPR 2021] PyTorch implementation of 3DIoUMatch: Leveraging IoU Prediction for Semi-Supervised 3D Object Detection.

Python 83.88% C++ 5.66% Cuda 8.84% C 0.79% Shell 0.25% MATLAB 0.58%

3dioumatch's People

Contributors: hughw19, nicolgo, yezhen17


3dioumatch's Issues

Number of points sampled for SUN RGB-D

Hello! Thank you for open-sourcing your codebase.

I wanted to ask: how many points per scene are you sampling for SUN RGB-D?
For general 3D object detection works, sampling 20k points out of ~50k during training and testing is standard, but it seems that 40k is used in SESS and 3DIoUMatch.

Looking at SESS's paper, as well as their code:
https://github.com/Na-Z/sess/blob/f1bbb44ea6ed73bb71bce12f54ca9bc33746dce8/scripts/run_sess_sunrgbd.py#L16-L26
Default num_points is 40k:
https://github.com/Na-Z/sess/blob/f1bbb44ea6ed73bb71bce12f54ca9bc33746dce8/train_sess.py#L34
Input into the SUN RGB-D dataset:
https://github.com/Na-Z/sess/blob/f1bbb44ea6ed73bb71bce12f54ca9bc33746dce8/train_sess.py#L97-L108
40k is used for SESS.

Similarly, in this repo (which I believe is based on SESS):
https://github.com/THU17cyz/3DIoUMatch/blob/ace9b2e783cd4b9d203998fec516d6c880a22e6d/run_train.sh#L7-L8
I think the default is used:
https://github.com/THU17cyz/3DIoUMatch/blob/ace9b2e783cd4b9d203998fec516d6c880a22e6d/train.py#L38
which is then passed into the SUN RGB-D dataset, overriding the dataset's own default of num_points=20000:
https://github.com/THU17cyz/3DIoUMatch/blob/ace9b2e783cd4b9d203998fec516d6c880a22e6d/train.py#L112-L127
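To make sure I am reading it right, here is a rough, simplified sketch of my own (the dataset class is replaced by a stand-in) of how I understand the script-level default propagates and overrides the dataset default:

```python
# My own simplified sketch, not the repo's code: the train.py argparse default
# of 40000 is what reaches the dataset, not the dataset's own default of 20000.
import argparse

def build_dataset(num_points=20000):
    # stand-in for the SUN RGB-D dataset constructor
    return {"num_points": num_points}

parser = argparse.ArgumentParser()
parser.add_argument("--num_point", type=int, default=40000)
args = parser.parse_args([])  # run_train.sh does not seem to override it

dataset = build_dataset(num_points=args.num_point)
print(dataset["num_points"])  # 40000
```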

Please let me know if I overlooked anything.

Also, would it be possible to share training logs so I can see how performance progresses over the iterations?

IoU question

Hi, I have a question about lhs_3d_faster_samecls when running your code.

Is the process of calculating the 3D IoU really this simple? Is there no need to consider the rotation angle?

```python
xx1 = np.maximum(x1[i], x1[I[:last-1]])
yy1 = np.maximum(y1[i], y1[I[:last-1]])
zz1 = np.maximum(z1[i], z1[I[:last-1]])
xx2 = np.minimum(x2[i], x2[I[:last-1]])
yy2 = np.minimum(y2[i], y2[I[:last-1]])
zz2 = np.minimum(z2[i], z2[I[:last-1]])
cls1 = cls[i]
cls2 = cls[I[:last-1]]

l = np.maximum(0, xx2-xx1)
w = np.maximum(0, yy2-yy1)
h = np.maximum(0, zz2-zz1)

if old_type:
    o = (l*w*h)/area[I[:last-1]]
else:
    inter = l*w*h
    o = inter / (area[i] + area[I[:last-1]] - inter)
```

The code process, as I read it: take the 8 corners of Box1 and Box2 and find the minimum and maximum x, y, z over each box's 8 corners.
Then, for the x-minimum of the two boxes, take the larger one (same for y and z).
Then, for the x-maximum of the two boxes, take the smaller one (same for y and z).
Then calculate l, w, h and multiply them to get the intersection volume.

Can you explain the process? It feels like the IoU calculated this way is not correct.
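To be concrete, here is a rough, self-contained sketch of my own (not the repo's code) of the axis-aligned 3D IoU that I believe the snippet above computes; the rotation angle is effectively ignored, so each box is treated as its axis-aligned bounding box:

```python
# My own sketch: axis-aligned 3D IoU of two boxes given their (8, 3) corner sets.
import numpy as np

def axis_aligned_iou_3d(corners_a, corners_b):
    # Each box is replaced by its axis-aligned bounding box; rotation is ignored.
    min_a, max_a = corners_a.min(axis=0), corners_a.max(axis=0)
    min_b, max_b = corners_b.min(axis=0), corners_b.max(axis=0)
    inter_dims = np.maximum(0.0, np.minimum(max_a, max_b) - np.maximum(min_a, min_b))
    inter = inter_dims.prod()
    vol_a = (max_a - min_a).prod()
    vol_b = (max_b - min_b).prod()
    return inter / (vol_a + vol_b - inter)
```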

Error occurred when remove_empty_box.

In the inference phase, there is a step before NMS that removes proposals (out of the 128) containing only a few points.

However, for data like scene 001333, which is special (the scene is basically a single surface and has no GT at all), all of my boxes are removed in this step, which then causes errors. Have you met a similar error?
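For now I am working around it with something like the following guard, which is my own sketch and not code from the repo: if the filter would discard every proposal, keep them all instead.

```python
# Hypothetical workaround (not from the repo): fall back to keeping all proposals
# when the "remove empty boxes" filter would discard every one of them.
import numpy as np

def filter_empty_boxes(boxes, points_per_box, min_points=5):
    # boxes: (K, ...) proposals; points_per_box: (K,) point counts inside each box.
    keep = points_per_box >= min_points
    if not keep.any():  # degenerate scene: nothing would survive the filter
        keep = np.ones(len(boxes), dtype=bool)
    return boxes[keep], keep
```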


Results on outdoor datasets

Hi yezhen, thanks for sharing the code. I just wonder, did you try the method on outdoor datasets such as KITTI?

Test-time IoU optimization

Hello,

Thank you for open-sourcing the code, and for the interesting paper.

I want to ask about test-time IoU optimization.
In your paper, the 3D IoU module is differentiable and can be used to optimize the bounding boxes at test time.


However, I cannot find this implemented in your repository.
Did you release this code, or am I missing something?
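Just to clarify what I am asking about, here is a rough sketch of the kind of test-time optimization I have in mind. It is my own illustration, not your released code, and iou_head is a hypothetical differentiable module mapping (features, box) to a predicted IoU:

```python
# Hypothetical sketch of test-time IoU optimization (not the authors' code):
# treat the predicted box parameters as learnable and ascend the predicted IoU.
import torch

def refine_box(iou_head, features, box_init, steps=10, lr=1e-2):
    # box_init: (7,) tensor [cx, cy, cz, dx, dy, dz, heading]
    box = box_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([box], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred_iou = iou_head(features, box)  # scalar predicted IoU for this box
        (-pred_iou).backward()              # gradient ascent on the predicted IoU
        opt.step()
    return box.detach()
```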

Some questions about your paper

Hello, thank you for your excellent work. I have a question about the IoU threshold. After you generate pseudo-labels, you filter them with three criteria, one of which is IoU. But what do you compute the IoU of a generated pseudo-label against? After reading the paper I still could not understand this well; I hope you can help me clarify it.

Environment

When torch_version==1.3.0, OpenPCDet reports an error:

from models.votenet_iou_branch import VoteNet

  File "/home/point/3DIoUMatch/models/votenet_iou_branch.py", line 17, in <module>
    from models.backbone_module import Pointnet2Backbone
  File "/home/point/3DIoUMatch/models/backbone_module.py", line 19, in <module>
    from pointnet2_modules import PointnetSAModuleVotes, PointnetFPModule
  File "/home/point/3DIoUMatch/pointnet2/pointnet2_modules.py", line 26, in <module>
    import pointnet2_utils
  File "/home/point/3DIoUMatch/pointnet2/pointnet2_utils.py", line 31, in <module>
    "Could not import _ext module.\n"
ImportError: Could not import _ext module.
Please see the setup instructions in the README: https://github.com/erikwijmans/Pointnet2_PyTorch/blob/master/README.rst

When torch_version==1.5.1, pointnet2 reports an error:

import pointnet2._ext as _ext

ModuleNotFoundError: No module named 'pointnet2._ext'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/point/3DIoUMatch/pointnet2/pointnet2_utils.py", line 27, in <module>
    import pointnet2_ops._ext as _ext
ModuleNotFoundError: No module named 'pointnet2_ops'

Could you give me some advice? Thanks!

How to visualize

How can I visualize the predicted and ground-truth bounding boxes, as shown in the paper?
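In case it helps, here is a rough sketch of one possible approach, my own and not necessarily what the authors used: draw the scene points together with wireframe line sets built from each box's corners (it assumes (8, 3) corner arrays ordered bottom face first, then top face).

```python
# Hypothetical visualization sketch with Open3D (not necessarily the authors' tooling).
import open3d as o3d

BOX_EDGES = [[0, 1], [1, 2], [2, 3], [3, 0],   # bottom face
             [4, 5], [5, 6], [6, 7], [7, 4],   # top face
             [0, 4], [1, 5], [2, 6], [3, 7]]   # vertical edges

def box_lineset(corners, color):
    ls = o3d.geometry.LineSet()
    ls.points = o3d.utility.Vector3dVector(corners)      # (8, 3) corner array
    ls.lines = o3d.utility.Vector2iVector(BOX_EDGES)
    ls.colors = o3d.utility.Vector3dVector([color] * len(BOX_EDGES))
    return ls

def show(points, pred_corners_list, gt_corners_list):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)       # (N, 3) scene points
    geoms = [pcd]
    geoms += [box_lineset(c, [1, 0, 0]) for c in pred_corners_list]  # predictions in red
    geoms += [box_lineset(c, [0, 1, 0]) for c in gt_corners_list]    # ground truth in green
    o3d.visualization.draw_geometries(geoms)
```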

Compared with sess

Congratulations on a good paper!

I want to ask about the difference between SESS and this paper. Does SESS have no pseudo-label mechanism, while this paper first introduces a filtering mechanism for pseudo-labels and then adds an IoU module?
Is my understanding correct?

Reproducing KITTI results

Hello,

Thank you for publishing the code and your interesting paper!
I'm trying to reproduce your results on the KITTI dataset; however, I found no code or configuration for that in your repo.
It seems that the OpenPCDet directory is untouched (with respect to the original repo) and has no modifications adapting it to your framework.
Can you please explain how the current code supports KITTI via OpenPCDet?

Thanks

filtering mechanism on SESS does not achieve good results

Hi, thank you for sharing such great work!
As far as I understand, 3DIoUMatch adds a filtering mechanism on top of SESS.

However, when I try filtering on SESS by thresholding the classification score and objectness (both thresholds set to 0.9, roughly as in the sketch below), the results are not good. Do you know what the reason might be?
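For reference, the filtering I tried is roughly the following sketch; it is my own simplification, and as I understand it 3DIoUMatch additionally thresholds on the predicted IoU:

```python
# My own sketch of threshold-based pseudo-label filtering (not the repo's code).
import numpy as np

def filter_pseudo_labels(boxes, cls_prob, objectness, iou_pred=None,
                         cls_thr=0.9, obj_thr=0.9, iou_thr=0.25):
    # boxes: (K, ...) pseudo-label boxes; cls_prob, objectness, iou_pred: (K,) scores.
    keep = (cls_prob > cls_thr) & (objectness > obj_thr)
    if iou_pred is not None:          # the extra IoU-prediction criterion
        keep &= iou_pred > iou_thr
    return boxes[keep]
```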

some detail of kitti implementation

Hi Yezhen, I'm reproducing the KITTI results, and I have some questions about the details of PV-RCNN on the KITTI dataset.

  1. Do you use the RPN output or the RoI output as the input to the pseudo-label filtering?
  2. In the paper, you say you don't use the LHS module, so do you filter pseudo-labels using only the class probability and the IoU prediction?

About the environment

Thank you for sharing your code.
I followed the instructions in the README and installed all the packages.
When I tried to run the following command:
sh run_pretrain.sh 0 pretrain_scannet scannet scannetv2_train_0.1.txt
I encountered this error:
Traceback (most recent call last):
  File "pretrain.py", line 24, in <module>
    from models.loss_helper_labeled import get_labeled_loss
  File "/home/logic/m2/project/3DIOU/3DIoUMatch/models/loss_helper_labeled.py", line 14, in <module>
    from models.loss_helper_iou import compute_iou_labels
  File "/home/logic/m2/project/3DIOU/3DIoUMatch/models/loss_helper_iou.py", line 11, in <module>
    from utils.box_util import box3d_iou_batch_gpu, box3d_iou_gpu_axis_aligned
  File "/home/logic/m2/project/3DIOU/3DIoUMatch/utils/box_util.py", line 20, in <module>
    raise ImportError("please first install pcdet according to README.md")
ImportError: please first install pcdet according to README.md
Have you encountered a similar problem, or can you suggest a solution?
Looking forward to your reply.

Question on points sampling

Hi,

First of all, thank you for the great work and sharing the code.

I have a doubt regarding the point sampling. The paper says:
"The input point clouds to our teacher network are augmented only by random sub-sampling, while the inputs to the student network further undergo a set of stochastic transformations T, including random flip, random rotation around the upright axis, and a random uniform scaling."
Does this mean that the same set of sampled points is fed into both the teacher and the student, the only difference being that, after sampling, the points and boxes are augmented before being fed into the student model?

However, in the code, it seems like different sets of points are sampled for both Teacher (https://github.com/THU17cyz/3DIoUMatch/blob/1e18d3edae7a223cf6548a15b7e3a8e41d90bbcd/sunrgbd/sunrgbd_ssl_dataset.py#L242) and Student (https://github.com/THU17cyz/3DIoUMatch/blob/1e18d3edae7a223cf6548a15b7e3a8e41d90bbcd/sunrgbd/sunrgbd_ssl_dataset.py#L280). Maybe I am missing something?

Could you also please confirm whether it would have any impact if the same versus different sets are sampled? (A small sketch of the two options is below.)
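To make the question concrete, here is a small sketch of my own (with arbitrary point counts) of the two options:

```python
# My own illustration (not the repo's code) of the two sampling options:
# sampling once and sharing the points vs. sampling independently per branch.
import numpy as np

def subsample(point_cloud, num_points, rng):
    idx = rng.choice(len(point_cloud), num_points,
                     replace=num_points > len(point_cloud))
    return point_cloud[idx]

rng = np.random.default_rng(0)
scene = rng.random((50000, 3))

# Option A: same sub-sampled points for teacher and student
# (the student copy would then be augmented with flip/rotation/scaling).
shared = subsample(scene, 20000, rng)
teacher_in, student_in = shared, shared.copy()

# Option B (what the linked dataset code appears to do): independent sub-samples.
teacher_in_b = subsample(scene, 20000, rng)
student_in_b = subsample(scene, 20000, rng)
```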

Thanks !!

Anuj

mAP on SUN RGB-D with the pretrained model

Greetings!
I run sh run_pretrain.sh 1 pretrain_sunrgbd sunrgbd sunrgbd_v1_train_0.05.txt. After the training is over, I run sh run_eval.sh 0 pretrain_sunrgbd sunrgbd sunrgbd_v1_train_0.05.txt pretrain_sunrgbd/checkpoint.tar and get the following output:

---------- iou_thresh: 0.250000 ----------
eval bed Average Precision: 0.719783
eval table Average Precision: 0.359931
eval sofa Average Precision: 0.414853
eval chair Average Precision: 0.582705
eval toilet Average Precision: 0.747250
eval desk Average Precision: 0.077238
eval dresser Average Precision: 0.031235
eval night_stand Average Precision: 0.235248
eval bookshelf Average Precision: 0.006786
eval bathtub Average Precision: 0.117187
eval mAP: 0.329222
eval bed Recall: 0.874770
eval table Recall: 0.749693
eval sofa Recall: 0.776213
eval chair Recall: 0.755451
eval toilet Recall: 0.927152
eval desk Recall: 0.659616
eval dresser Recall: 0.356481
eval night_stand Recall: 0.681102
eval bookshelf Recall: 0.207358
eval bathtub Recall: 0.615385
eval AR: 0.660322
---------- iou_thresh: 0.500000 ----------
eval bed Average Precision: 0.392357
eval table Average Precision: 0.102301
eval sofa Average Precision: 0.225640
eval chair Average Precision: 0.306473
eval toilet Average Precision: 0.401577
eval desk Average Precision: 0.005982
eval dresser Average Precision: 0.003862
eval night_stand Average Precision: 0.038365
eval bookshelf Average Precision: 0.000176
eval bathtub Average Precision: 0.030140
eval mAP: 0.150687
eval bed Recall: 0.530387
eval table Recall: 0.285072
eval sofa Recall: 0.414710
eval chair Recall: 0.453349
eval toilet Recall: 0.589404
eval desk Recall: 0.143138
eval dresser Recall: 0.069444
eval night_stand Recall: 0.185039
eval bookshelf Recall: 0.013378
eval bathtub Recall: 0.269231
eval AR: 0.295315

Does that mean mAP@0.25 and mAP@0.5 are 0.329222 and 0.150687, respectively?
If so, why does Table 1 report mAP@0.25 and mAP@0.5 as 29.9 and 10.5, respectively?

OpenPCDet installs successfully, but evaluation reports it is not installed

Hi there. Thanks for posting such amazing work.
When I run the installation of OpenPCDet, it installs successfully:

Using /home/fusion/miniconda3/envs/3Dmatch/lib/python3.7/site-packages
Finished processing dependencies for pcdet==0.3.0+0

But when I perform the evaluation, the problem I encounter is:

Traceback (most recent call last):
  File "/home/fusion/project/3DIoUMatch/utils/box_util.py", line 18, in <module>
    from pcdet.ops.iou3d_nms.iou3d_nms_utils import boxes_iou3d_gpu
ModuleNotFoundError: No module named 'pcdet.ops.iou3d_nms.iou3d_nms_utils'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 17, in <module>
    from models.votenet_iou_branch import VoteNet
  File "/home/fusion/project/3DIoUMatch/models/votenet_iou_branch.py", line 18, in <module>
    from models.grid_conv_module import GridConv
  File "/home/fusion/project/3DIoUMatch/models/grid_conv_module.py", line 12, in <module>
    from utils.box_util import rot_gpu
  File "/home/fusion/project/3DIoUMatch/utils/box_util.py", line 20, in <module>
    raise ImportError("please first install pcdet according to README.md")
ImportError: please first install pcdet according to README.md

Could you please tell me if there is something I can do to fix it? Thanks!
