Comments (29)
@JiamingSuen Sorry for the late reply. We have listed several hyper-parameters in our paper, which are borrowed from Mask R-CNN. In other words, we strictly followed the training parameters used by Mask R-CNN.
from panet.
Hi @thangvubk,
I set two images per GPU; the performance reaches 0.388, which still leaves a small gap compared with the paper.
Hi,
I have uploaded the model trained with 2fc. The performance should be 39.6 box AP. Please try with the new model and corresponding config file.
Thanks!
Hi @ShuLiu1993, thanks for this wonderful work! I wonder if a pretrained model on Cityscapes can be provided (or converted) in this PyTorch implementation? Thanks!
Thank you for your response, I will try with your advice. : )
I tried to train PANet on R50-FPN without using any GN, to get a precise evaluation of the influence of the PA structure, and got 37.1 bbox mAP. I find that your new model still uses GN in both the FPN part and the 2fc head. Does this mean that GN is really important for training the PA structure?
@JiamingSuen
Thanks a lot for your interest! Sorry that I really don't have much time on this currently. Maybe you can try this by yourself since the pre-trained model on COCO is already released.
@JrPeng
In our original implementation, we used Sync BN. In this version, we use GN instead. Using this kind of normalization helps the network converge better and achieve better performance.
@ShuLiu1993 I can totally understand, thanks for your reply!
Thanks for your reply. I will train a model with a BN backbone and GN FPN/heads, without the PA structure, to compare against PANet and better evaluate this structure.
@ShuLiu1993 I'm trying to reproduce Cityscapes training with the COCO pretrained model, and I'm currently getting 34.4 mask AP on the Cityscapes validation set. I used the fine-tuning schedule described in the Mask R-CNN paper (4k iterations, reducing the LR at 3k, with initial LR 0.01 and batch size 8). Did you choose the same fine-tuning steps and LR schedule in PANet? Thanks very much.
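For reference, the step schedule described in that comment (base LR 0.01, decayed 10x at iteration 3k of a 4k-iteration fine-tune) can be sketched as follows. The values come from the comment above; any warmup phase is intentionally omitted here, so treat this as a simplification rather than the exact schedule used.

```python
def finetune_lr(it, base_lr=0.01, decay_steps=(3000,), gamma=0.1, max_iter=4000):
    """Step LR schedule: base LR 0.01 over a 4k-iteration fine-tune,
    decayed 10x at iteration 3k (batch size 8 in the comment above).
    Warmup, if any was used, is not modeled."""
    assert 0 <= it < max_iter
    lr = base_lr
    for step in decay_steps:
        if it >= step:
            lr *= gamma  # multiply by 0.1 at each decay step already passed
    return lr
```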
@ShuLiu1993 Any comments, please? I didn't find any reference to the fine-tuning parameters in your paper. Thanks very much!
@JrPeng Hi, any progress on training the model with a BN backbone and GN FPN/heads without the PA structure? Thanks!
@JiamingSuen Can you give me a short tutorial on how to fine-tune the pretrained COCO model? I would like to fine-tune the model for pedestrian detection on the KITTI dataset, but I don't know where to start.
You may start by adapting the Cityscapes loader/model to KITTI, since the KITTI segmentation dataset uses the Cityscapes format. The Cityscapes tools in this codebase would be helpful.
@panxj I find that the baseline of this implementation is higher than the baseline of Detectron.pytorch by Roy. For example, R50-FPN without GN has a 38.3 box AP in this implementation but 37.7 box AP in Roy's implementation. Besides, I added only the PA structure to Roy's code and found it contributes only a 0.3 AP improvement, while GN/adaptive fusion contributes a lot. Maybe there is something wrong with my implementation. You can try it yourself and we can discuss based on your results.
@ShuLiu1993 thank you for sharing your work. I ran the ablation on PANet with the panet_R-50-FPN_1x_det_2fc.yaml config file. I also found that this repo has a better baseline than the original PyTorch repo or the official Detectron in Caffe. Without any GN, bottom-up path, adaptive feature pooling, or multi-scale training, this repo achieves 37.9 mAP, which is higher than official Detectron by 1.2 mAP. It seems your original PANet version using Sync BN has different improvement factors. Could you please clarify why the baseline here is so good? :D
| GN | BU path | ada pool | ms train | Result | Note |
|---|---|---|---|---|---|
| x | x | x | x | 39.6 | Default panet_R-50-FPN_1x_det_2fc.yaml |
| x | x | x | | 39 | SCALES=(1000,) |
| x | x | | | 38.5 | SCALES=(1000,), ROI_BOX_HEAD=fast_rcnn_heads.roi_2mlp_head_gn |
| x | | | | 38.4 | SCALES=(1000,), ROI_BOX_HEAD=fast_rcnn_heads.roi_2mlp_head_gn, CONV_BODY=FPN.fpn_ResNet50_conv5_body |
| | | | | 37.9 | SCALES=(1000,), ROI_BOX_HEAD=fast_rcnn_heads.roi_2mlp_head_gn, CONV_BODY=FPN.fpn_ResNet50_conv5_body, FPN.USE_GN=False |
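For reference, the overrides in the Note column map to Detectron-style YAML config entries roughly like the following. The key paths are approximate and should be checked against this repo's config schema before use:

```yaml
# Ablation overrides applied on top of panet_R-50-FPN_1x_det_2fc.yaml
# (key paths are approximate; verify against the repo's config schema)
TRAIN:
  SCALES: (1000,)                          # single scale, i.e. no ms train
MODEL:
  CONV_BODY: FPN.fpn_ResNet50_conv5_body   # plain FPN body, no bottom-up path
FAST_RCNN:
  ROI_BOX_HEAD: fast_rcnn_heads.roi_2mlp_head_gn  # plain 2mlp head, no ada pool
FPN:
  USE_GN: False                            # remove GN from the FPN
```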
Hi guys, thanks a lot for your interest!
This codebase is heavily based on Detectron.pytorch by Roy. In this codebase and the released configs, I used multi-scale training and a larger testing scale, as noted in the paper. This is probably the main reason the baseline performs better. I also made some minor modifications that may contribute a little bit.
GN and SyncBN do help PANet achieve better performance with other settings kept the same; they help the network converge better. That's why we should try GN or SyncBN first. I haven't compared GN and SyncBN under the same codebase yet, but I think SyncBN will achieve comparable or even better performance.
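For context on why these normalizers matter here: GN computes statistics per sample over groups of channels rather than over the batch, which is why it stays stable at the 1-2 images per GPU mentioned in this thread. A minimal sketch of the normalization step (pure Python, no learnable affine parameters; this is an illustration, not this repo's implementation):

```python
import math

def group_norm(x, num_groups, eps=1e-5):
    """Normalize a (C, L) feature map (list of channel rows) per channel
    group, as in Group Normalization. Statistics are computed within one
    sample, so the result does not depend on batch size."""
    channels = len(x)
    assert channels % num_groups == 0
    group_size = channels // num_groups
    out = []
    for g in range(num_groups):
        rows = x[g * group_size:(g + 1) * group_size]
        vals = [v for row in rows for v in row]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        inv_std = 1.0 / math.sqrt(var + eps)
        # Each group is shifted to zero mean and scaled to unit variance.
        out.extend([[(v - mean) * inv_std for v in row] for row in rows])
    return out
```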
Hi @thangvubk,
I use the BU path, adaptive pooling, and multi-scale training without GN, and test with MIN_SIZE_TEST: 1000, but only get 38.1 mAP. Is there something I missed?
@zimenglan-sysu-512 Just use the panet_R-50-FPN_1x_det_2fc.yaml config file and don't make any modifications.
Hi @thangvubk,
Did you try soft-NMS instead of standard NMS? If so, can you share the results here?
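For readers unfamiliar with the difference: standard NMS discards any box whose IoU with a higher-scoring box exceeds a threshold, while soft-NMS only decays its score. A minimal sketch of the Gaussian variant (an illustration on plain tuples, not the repo's implementation):

```python
import math

def soft_nms(dets, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: instead of discarding a box that overlaps the
    current top-scoring box, decay its score by exp(-iou^2 / sigma).
    `dets` is a list of ((x1, y1, x2, y2), score) pairs."""

    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        if inter <= 0.0:
            return 0.0
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    pool = sorted(dets, key=lambda d: -d[1])
    kept = []
    while pool:
        box, score = pool.pop(0)
        if score < score_thresh:
            break
        kept.append((box, score))
        # Soft suppression: rescale the remaining scores and re-sort.
        pool = sorted(
            ((b, s * math.exp(-iou(box, b) ** 2 / sigma)) for b, s in pool),
            key=lambda d: -d[1],
        )
    return kept
```

With hard NMS at a 0.5 threshold, a heavily overlapping second detection would be removed outright; here it survives with a reduced score, which is why soft-NMS often adds a small AP gain on crowded scenes.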
@zimenglan-sysu-512 For BU path + adaptive pooling + multi-scale training, I didn't make any modifications; I just cloned the network and trained. Did you do the same thing?
I did not use this repo; I just added some PANet code to maskrcnn-benchmark (w/o GN).
@zimenglan-sysu-512 It is hard to say when you implement it in another repo. Usually it doesn't work as expected due to underlying implementation details. You can try adding GN to see if there is any improvement.
Thanks @thangvubk,
I will try to add GN to the FPN and heads.
Hi @thangvubk,
I added GN to the FPN and RoI head, but the performance only reaches 38.3%. BTW, when training, each GPU holds 1 image because of GPU memory limits.
@zimenglan-sysu-512 I'm not sure where you are wrong. BTW, mmdet is planning to release PANet also, see here.
Hi @thangvubk,
I guess the slight performance drop is because I use 2fc instead of Xconv1fc.
@zimenglan-sysu-512 I also re-implemented PANet using maskrcnn-benchmark (2mlp without multi-scale training, GN, or SyncBN), but the performance is worse than this repo's. If convenient, can you share your code with me? I just want to make sure there is nothing wrong with my code.
Related Issues (20)
- How to train Cityscapes dataset
- sh make.sh
- OOM during inference
- AttributeError: module 'modeling.FPN' has no attribute 'fpn_ResNet50_conv5_body_bup'
- Poor test results
- GPU memory grows continuously during inference (about 2 MB per minute, from 6 GB up to 8 GB) and finally OOMs; what is the problem?
- Input and output names
- Model conversion to TRT
- Input names error
- Test error
- Is it possible to compile on a 2080 Ti?
- Training on Cityscapes
- Please point out the location of adaptive feature pooling in the project, thank you very much
- AssertionError: assert len(roidb) == len(cached_roidb)
- The FPN code
- Error about tools
- Extract the features
- Where is the code of PAN?
- Request pretrained weights
- Regarding the dataset storage location