hikvision-research / probabilisticteacher

An official implementation of the ICML 2022 paper "Learning Domain Adaptive Object Detection with Probabilistic Teacher".

License: Apache License 2.0

Python 99.86% Shell 0.14%
domain-adaptation object-detection unsupervised-domain-adaptation domain-adaptive-object-detection


probabilisticteacher's Issues

About the Labeled Regression Loss (Gaussian Probability)

Hello, thank you for sharing this interesting research!

I'm wondering why the Gaussian pdf is used directly to formulate the labeled regression loss as a cross-entropy (negative log-likelihood) loss.
Is this a commonly used formulation?

gaussian = gaussian_dist_pdf(fg_pred_deltas[..., :4], gt_pred_deltas, sigma_xywh)
loss_box_reg_gaussian = - torch.log(gaussian + 1e-9).sum()

and gaussian_dist_pdf is defined as

def gaussian_dist_pdf(val, mean, var, eps=1e-9):
    sigma_constant = 0.3
    return torch.exp(-(val - mean) ** 2.0 / (var + eps) / 2.0) / torch.sqrt(2.0 * np.pi * (var + sigma_constant))

As far as I understand, the target for the predicted mean and variance in this paper is the ground-truth box (x_off, y_off, box_width, box_height) with zero variance, since the ground truth is a Dirac delta distribution with mean = gt_box and var = 0.
In that case, directly using the Gaussian pdf as above would seem to give an infinite loss as the predicted variance approaches zero.
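Numerically, the extra constant inside the square root appears to be exactly what prevents this. The following is a minimal sketch that reimplements the formula with plain Python scalars (not the repo's tensor code) to show the pdf, and hence the negative log-likelihood, stays bounded as the variance goes to zero:

```python
import math

SIGMA_CONSTANT = 0.3  # the constant added to the variance inside the normalizer

def gaussian_dist_pdf(val, mean, var, eps=1e-9):
    # Same formula as in the repo, with scalars instead of tensors.
    return math.exp(-(val - mean) ** 2 / (var + eps) / 2) / math.sqrt(2 * math.pi * (var + SIGMA_CONSTANT))

# Perfect prediction (val == mean) with the predicted variance shrinking to zero:
for var in (1.0, 1e-3, 1e-9, 0.0):
    p = gaussian_dist_pdf(0.0, 0.0, var)
    nll = -math.log(p + 1e-9)
    print(f"var={var:g}  pdf={p:.4f}  nll={nll:.4f}")

# Without the +0.3 term the pdf would diverge as var -> 0; with it, the pdf
# is bounded above by 1/sqrt(2*pi*0.3) ~= 0.7284, so the loss stays finite.
```

So the constant looks like a numerical safeguard, though I may be missing its derivation.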

P.S. How did you choose sigma_constant = 0.3?

Thanks in advance
Joo Young Jang

Issues about Data Preparation

Thanks for your great work and for sharing the code!

I'm trying to reproduce your results on Cityscapes to FoggyCityscapes. However, the data-preparation instructions for this dataset are not very clear:

  • If I do not modify the Detectron2 code, it reads images with a .jpg suffix by default, but the Cityscapes images are in .png format.
  • In Detectron2's evaluation code (https://github.com/facebookresearch/detectron2/blob/v0.5/detectron2/evaluation/pascal_voc_evaluation.py#L132), the parser reads the pose, truncated, and difficult properties, which are not actually produced by your annotation-conversion script.
  • Besides, the expected data folder structure is not clear from the description and code. It takes some time to work out how to unzip the Cityscapes archives and arrange the files so that they match your scripts.
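For reference, Detectron2's VOC parser expects each object entry to carry those three fields. A conversion script would need to emit something like the annotation below (a hypothetical hand-written example with made-up box coordinates, not output from your script):

```xml
<annotation>
  <filename>frankfurt_000000_000294_leftImg8bit.png</filename>
  <size>
    <width>2048</width>
    <height>1024</height>
    <depth>3</depth>
  </size>
  <object>
    <name>car</name>
    <!-- the evaluation parser reads these three fields;
         boxes with difficult=1 are excluded from the AP computation -->
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>100</xmin>
      <ymin>200</ymin>
      <xmax>300</xmax>
      <ymax>400</ymax>
    </bndbox>
  </object>
</annotation>
```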

Would you mind giving more complete instructions on data preparation?

Inference on own data

Hello, I'm interested in using this work for inference on my own data, but I can't find a description of how to do that. Is it possible? If so, could you explain how? Thanks in advance!
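Since the project is built on Detectron2, one generic route is DefaultPredictor. The sketch below is untested against this repo and assumes a standard Detectron2 config and checkpoint; the paths are placeholders, and this repo's configs may define custom keys that need to be registered before merge_from_file() succeeds:

```python
# Minimal single-image inference sketch using Detectron2's DefaultPredictor.
# Not this repo's documented procedure; config/weights paths are placeholders.

def run_inference(config_file, weights_file, image_path):
    # Deferred imports: detectron2/cv2 are only needed when this is called.
    import cv2
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultPredictor

    cfg = get_cfg()
    cfg.merge_from_file(config_file)              # yaml used to train the model
    cfg.MODEL.WEIGHTS = weights_file              # e.g. a saved model_final.pth
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5   # confidence threshold for boxes

    predictor = DefaultPredictor(cfg)
    image = cv2.imread(image_path)                # BGR, as DefaultPredictor expects
    outputs = predictor(image)
    return outputs["instances"].to("cpu")         # predicted boxes, scores, classes
```

Would something along these lines work with your checkpoints, or is extra setup required?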

Cannot get similar AP on K2C

I used the same config files to run Probabilistic Teacher on KITTI to Cityscapes (K2C).
According to your paper, the AP on K2C improves from 40.3 to 60.2.
However, I got only 23.8 for AP50.
By the way, the README seems to provide only the F2C model weights and log files (the K2C link is the same as the F2C one).
Thanks for your kind help!

The log of my training is as follows:
[10/16 02:43:47] d2.evaluation.testing INFO: copypaste: AP,AP50,AP75
[10/16 02:43:47] d2.evaluation.testing INFO: copypaste: 9.6286,23.8045,7.0624
[10/16 02:43:47] d2.utils.events INFO: eta: 0:00:00 iter: 29999 total_loss: 3.475 loss_cls: 0.04451 loss_box_reg: 0.3731 loss_rpn_cls: 0.03362 loss_rpn_loc: 0.324 loss_cls_sup: 0.04147 loss_box_reg_sup: 0.3772 loss_rpn_cls_sup: 0.03775 loss_rpn_loc_sup: 0.3196 loss_cls_unsup: 0.292 loss_box_reg_unsup: 1.025 loss_rpn_cls_unsup: 0.3071 loss_rpn_loc_unsup: 1.081 time: 6.9902 data_time: 1.4791 lr: 0.016 max_mem: 19448M

model weights

Hello, I would like to ask about the model weights. I can't find any model weights at the provided links; only c2f_log.txt is available.

About the results

Hi, thanks for sharing the code. Will you update the links to the weights and log files in "Main Results"?

Results about common datasets?

Hello, glad to see your excellent work! I wonder whether you could provide results on common datasets like COCO or VOC?
Thanks for replying.
