Comments (11)
By the way, I tested the following checkpoints (iteration / mIoU): 20000 / 41.48, 25000 / 43.32, 75000 / 34.34.
@jianlong-yuan
Thanks for the question.
I have to admit that segmentation adaptation has large variance. Although Memory Regularization generally provides better results, the performance can differ between runs.
- You may try running the code again.
- If you have one GPU card with large memory, you could try running on one card, which minimizes the discrepancy caused by BN or other layers across two GPUs (see the SyncBatchNorm sketch after this list).
- Early stopping is one of the keys to segmentation adaptation. Generally speaking, the 25000-th or 30000-th iteration is usually the best choice (see the checkpoint-selection sketch after this list).
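On the multi-GPU BN point, here is a minimal PyTorch sketch (the model is a hypothetical stand-in, not this repo's network): plain BN computes its statistics per card, while SyncBatchNorm restores single-card behavior under distributed training.

```python
# Minimal sketch, not this repo's code: plain BatchNorm computes its
# statistics per GPU, so two cards each see half the effective batch.
import torch
import torch.nn as nn

model = nn.Sequential(                 # hypothetical stand-in for the network
    nn.Conv2d(3, 64, 3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

# SyncBatchNorm shares statistics across processes, mimicking single-card
# training; it only takes effect under torch.distributed (DDP).
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
print(model)
```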
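And a sketch of the checkpoint-selection (early-stopping) routine the last bullet refers to; the path pattern, `evaluate_miou`, `model`, and `val_loader` are hypothetical placeholders, not this repo's API:

```python
# Minimal early-stopping sketch; evaluate each saved snapshot on the
# validation set and keep the best one instead of the final iteration.
import torch

best_iter, best_miou = None, 0.0
for i_iter in range(10000, 55000, 5000):
    ckpt = './snapshots/GTA5_%d.pth' % i_iter      # assumed path pattern
    model.load_state_dict(torch.load(ckpt))        # `model` assumed defined
    miou = evaluate_miou(model, val_loader)        # assumed: returns val mIoU
    if miou > best_miou:
        best_iter, best_miou = i_iter, miou
print('best snapshot: iter %d, mIoU %.2f' % (best_iter, best_miou))
```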
In addition, I also cannot reproduce your performance for stage 2. I used your stage-1 model (45.46 mIoU) to generate pseudo labels, then ran your command:

```bash
python train_ft.py \
    --snapshot-dir ./snapshots/1280x640_restore_ft_GN_batchsize9_512x256_pp_ms_me0_classbalance7_kl0_lr1_drop0.2_seg0.5_BN_80_255_0.8_Noaug \
    --restore-from ./snapshots/SE_GN_batchsize2_1024x512_pp_ms_me0_classbalance7_kl0.1_lr2_drop0.1_seg0.5/GTA5_25000.pth \
    --drop 0.2 --warm-up 5000 --batch-size 9 --learning-rate 1e-4 \
    --crop-size 512,256 --lambda-seg 0.5 \
    --lambda-adv-target1 0 --lambda-adv-target2 0 \
    --lambda-me-target 0 --lambda-kl-target 0 \
    --norm-style gn --class-balance --only-hard-label 80 \
    --max-value 7 --gpu-ids 0,1,2 --often-balance --use-se \
    --input-size 1280,640 --train_bn --autoaug False
```

But the performance is poor.
You may test the models from different iterations. I usually use the model from the 25000-th or the 50000-th iteration.
Because the evaluation metric averages over classes, the performance can be affected by rare classes, such as train and bike.
In my practice, the Stage-II model usually achieves around 49~50 mIoU.
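As a side note, a minimal sketch of why a class-averaged metric is sensitive to rare classes (NumPy only; the 19-class Cityscapes setting and the random confusion matrix are illustrative assumptions):

```python
# Sketch: mIoU from a confusion matrix, to show how rare classes
# (e.g. train/bike) can swing the class-averaged mean.
import numpy as np

def per_class_iou(hist):
    # hist[i, j]: number of pixels of true class i predicted as class j
    inter = np.diag(hist)
    union = hist.sum(axis=0) + hist.sum(axis=1) - inter
    return inter / np.maximum(union, 1)

hist = np.random.randint(0, 1000, (19, 19))   # dummy confusion matrix
iou = per_class_iou(hist)
# A rare class has few pixels, so a handful of mispredictions changes
# its IoU a lot, and every class weighs equally in the mean:
print('mIoU: %.4f' % iou.mean())
```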
For stage 1, I checked the BN layers and found that BN is not trainable. So, is there anything different when running on one GPU?
Hi @jianlong-yuan,
Recently, I re-ran my code with different dropout rates. All runs achieve about 50% mIoU.
The code runs on 3 GPUs.
For stage 2, I loaded your pretrained model
`cityscapes1280x640_restore_ft_GN_batchsize9_512x256_pp_ms_me0_classbalance7_kl0_lr1_drop0.2_seg0.5_BN_80_255_0.8_Noaug`.
I found the drop rate is different from yours, but the README says 0.2, so I tested all the models with drop rate 0.2. I think the results are close to yours:

| Iteration | mIoU |
|-----------|------|
| 10000 | 48.74 |
| 15000 | 48.75 |
| 20000 | 49.68 |
| 25000 | 48.95 |
| 30000 | 49.79 |
| 35000 | 48.52 |
| 40000 | 49.72 |
| 45000 | 49.02 |
| 50000 | 48.52 |
For stage 1, with model
`cityscapesSE_GN_batchsize2_1024x512_pp_ms_me0_classbalance7_kl0.1_lr2_drop0.1_seg0.5`,
I used drop rate 0.1. Is it different from yours? I tested all the models:

| Iteration | mIoU |
|-----------|------|
| 10000 | 34.45 |
| 15000 | 36.33 |
| 20000 | 41.48 |
| 25000 | 43.32 |
| 30000 | 41.33 |
| 35000 | 40.07 |
| 40000 | 40.15 |
| 45000 | 39.93 |
| 50000 | 39.39 |
| 55000 | 36.24 |
| 60000 | 37.57 |
| 65000 | 34.91 |
| 70000 | 33.57 |
| 75000 | 34.34 |
| 80000 | 33.48 |
| 85000 | 33.52 |
| 90000 | 32.0 |
| 100000 | 32.08 |
By the way, I found your baseline model is different from others: your baseline adds SE and GN. Have you compared these differences?
- You may try different drop rates and learning rates. In practice, I use drop=0.2.
- The result of GN is close to the result of BN. Due to the batch-size limitation, I usually set GN as the default normalization layer; the result of GN is more stable.
- The final SE layer is also an alternative option. I used SE in the first version, so I kept it. Actually, SE does not affect the final result significantly. (Minimal sketches of both components follow this list.)
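For reference, minimal PyTorch sketches of the two components discussed above; the channel counts and the reduction ratio are illustrative, not the repo's exact values:

```python
# Sketch: GroupNorm as a batch-size-insensitive substitute for BatchNorm,
# and a minimal Squeeze-and-Excitation (SE) block.
import torch
import torch.nn as nn

# GroupNorm computes statistics per sample over channel groups, so it is
# unaffected by the (small) batch size, unlike BatchNorm.
norm = nn.GroupNorm(num_groups=32, num_channels=256)

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: re-weight channels using global context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze: global average
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # per-channel gate in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(x)                             # excite: rescale channels

x = torch.randn(2, 256, 32, 32)
print(SEBlock(256)(norm(x)).shape)                        # torch.Size([2, 256, 32, 32])
```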
Thank you. I tried again with drop 0.3 and got 45.2 mIoU at 20000 iterations. It is almost the same as yours.

```python
if i_iter < 15000:
    # disable the KL and entropy losses at the start of training
    self.lambda_kl_target_copy = 0
    self.lambda_me_target_copy = 0
else:
    # switch to the configured loss weights after 15000 iterations
    self.lambda_kl_target_copy = self.lambda_kl_target
    self.lambda_me_target_copy = self.lambda_me_target
```

I found that you don't use these losses at the beginning, but only enable them after a period of training. I didn't find an explanation for this in the paper. Could you explain why you did this? Thank you.
Hi @jianlong-yuan,
Sorry for the late response. I was preparing a rebuttal at that time, so I missed your message.
Yes, it is a small trick. The predictions of the main classifier and the auxiliary classifier are not stable at the beginning of training.
Therefore, I enable the loss in the middle of training.
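The hard switch quoted above can be read as a simple weight schedule; a minimal, self-contained sketch (the function name and the `warmup_iter` default are illustrative, not the repo's variables):

```python
# Sketch of the warm-up trick described above: the KL/entropy weight is
# zero early on, then the configured value once predictions stabilize.
def loss_weight(base_weight, i_iter, warmup_iter=15000):
    """Return 0 before warmup_iter, then the configured weight."""
    return 0.0 if i_iter < warmup_iter else base_weight

print(loss_weight(0.1, 1000))    # 0.0 -> loss disabled while classifiers warm up
print(loss_weight(0.1, 20000))   # 0.1 -> loss enabled in the middle of training
```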