mil-tokyo / MCD_DA
License: MIT License
I found that there is no vgg_fcn file, even though it is referenced in models/model_util.py.
I would really appreciate it if you could share the network definition for FCN8s, since I'm not sure how to split the base FCN8s network into SegBase and SegPixelClassifier.
Thank you in advance.
So many great papers! Thank you!
I've found your CVPR 2019 paper, Strong-Weak Distribution Alignment for Adaptive Object Detection, which achieves great performance on adaptive object detection. You said the code is coming soon, but I can't wait any longer. :) I hope it comes soon, thanks. :)
In Table 1 the paper shows very high scores (more than 90%) on the digit classification tasks. My question is whether the testing set is only the target-domain test set or the combination of the source and target test sets. For example, in the unsupervised domain adaptation task SVHN -> MNIST, the classification score reaches as high as 96.2%. Is the test set used to get the 96.2% result ONLY the MNIST test set, or the combination of the SVHN and MNIST test sets? Thanks in advance.
Very nice work! During my training, I found that the loss can become negative:
Train Epoch: 198 [0/100 (0%)] Loss1: 0.024868 Loss2: 0.022132 Discrepancy: 0.018226
Test set: Average loss: -0.0588, Accuracy C1: 9449/10000 (94%) Accuracy C2: 9509/10000 (95%) Accuracy Ensemble: 9554/10000 (96%)
recording record/usps_mnist_k_4_alluse_no_onestep_False_1_test.txt
Train Epoch: 199 [0/100 (0%)] Loss1: 0.012343 Loss2: 0.020431 Discrepancy: 0.030520
Test set: Average loss: -0.0581, Accuracy C1: 9419/10000 (94%) Accuracy C2: 9518/10000 (95%) Accuracy Ensemble: 9537/10000 (95%)
recording record/usps_mnist_k_4_alluse_no_onestep_False_1_test.txt
Do you think this is normal?
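For context on the numbers above: as I understand the MCD paper, the printed Discrepancy is the mean absolute (L1) difference between the two classifiers' softmax outputs, and the adversarial step subtracts this term from the objective, so a summed loss that includes a negative discrepancy term can legitimately dip below zero. A minimal NumPy sketch of that quantity (the function names and toy values below are my own, not the repo's code):

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def discrepancy(p1, p2):
    """Mean absolute difference between two class-probability outputs,
    the L1 discrepancy described in the MCD paper."""
    return np.abs(p1 - p2).mean()

# Toy example: two classifiers, a batch of 4 samples, 10 classes.
rng = np.random.default_rng(0)
p1 = softmax(rng.normal(size=(4, 10)))
p2 = softmax(rng.normal(size=(4, 10)))

d = discrepancy(p1, p2)
assert d >= 0.0  # the discrepancy itself is non-negative
# An objective of the form loss = cls_loss - d can go below zero once
# the classification loss becomes small late in training.
```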
Hello, would you please explain the pros and cons between two types of implementations?
Did they achieve similar performance?
ReLabel(255, args.n_class - 1), # Last Class is "Void" or "Background" class
Actually, args.n_class - 1 means 'bicycle' in Cityscapes.
So you will map all unsure pixels to bicycle?
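For reference, my reading is that ReLabel(255, args.n_class - 1) simply remaps the ignore value 255 onto the last class index. A minimal NumPy sketch of what such a transform does (the class below is my reconstruction of its apparent behavior, not the repo's exact code):

```python
import numpy as np

class ReLabel:
    """Replace one label value with another in a segmentation mask.
    ReLabel(255, n_class - 1) sends every ignore pixel (255) to the
    last class index."""
    def __init__(self, old_label, new_label):
        self.old_label = old_label
        self.new_label = new_label

    def __call__(self, mask):
        mask = mask.copy()
        mask[mask == self.old_label] = self.new_label
        return mask

n_class = 20
mask = np.array([[0, 255, 18],
                 [255, 1, 255]])
relabeled = ReLabel(255, n_class - 1)(mask)
# Every 255 (ignore) pixel becomes 19 -- which is exactly why, if index
# 19 were 'bicycle' rather than a void/background class, all unsure
# pixels would collapse into bicycle, as this issue points out.
```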
(pytorch1.1) zgm@zgm-icv:~/Lufei/MCD_DA/segmentation$ python adapt_trainer.py gta city --net drn_d_105
Downloading: "https://tigress-web.princeton.edu/~fy/drn/models/drn_d_105-12b40979.pth" to /home/zgm/.cache/torch/checkpoints/drn_d_105-12b40979.pth
Traceback (most recent call last):
  File "adapt_trainer.py", line 69, in <module>
    is_data_parallel=args.is_data_parallel)
  File "/home/zgm/Lufei/MCD_DA/segmentation/models/model_util.py", line 51, in get_models
    model_list = get_MCD_model_list()
  File "/home/zgm/Lufei/MCD_DA/segmentation/models/model_util.py", line 41, in get_MCD_model_list
    model_g = DRNSegBase(model_name=net_name, n_class=n_class, input_ch=input_ch)
  File "/home/zgm/Lufei/MCD_DA/segmentation/models/dilated_fcn.py", line 109, in __init__
    pretrained=pretrained, num_classes=1000, input_ch=input_ch)
  File "/home/zgm/Lufei/MCD_DA/segmentation/models/drn.py", line 343, in drn_d_105
    model.load_state_dict(model_zoo.load_url(model_urls['drn-d-105']))
  File "/home/zgm/.conda/envs/pytorch1.1/lib/python3.6/site-packages/torch/hub.py", line 439, in load_state_dict_from_url
    _download_url_to_file(url, cached_file, hash_prefix, progress=progress)
  File "/home/zgm/.conda/envs/pytorch1.1/lib/python3.6/site-packages/torch/hub.py", line 354, in _download_url_to_file
    u = urllib.request.urlopen(req).read()
  File "/home/zgm/.conda/envs/pytorch1.1/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/home/zgm/.conda/envs/pytorch1.1/lib/python3.6/urllib/request.py", line 532, in open
    response = meth(req, response)
  File "/home/zgm/.conda/envs/pytorch1.1/lib/python3.6/urllib/request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "/home/zgm/.conda/envs/pytorch1.1/lib/python3.6/urllib/request.py", line 570, in error
    return self._call_chain(*args)
  File "/home/zgm/.conda/envs/pytorch1.1/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/home/zgm/.conda/envs/pytorch1.1/lib/python3.6/urllib/request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
So could you provide drn_d_105-12b40979.pth? Thanks so much!
Where can I find the code for the toy dataset experiment shown in Figure 4 of the paper?
Hi, could you share the files of the synth traffic dataset, or tell me where I can download it? I can't find this dataset on the Internet. Thanks.
I have tried to rerun the segmentation part of the code in python 2.7
and I was unable to reproduce the result in the paper.
I have noticed several differences.
I would love to know if anyone was able to reproduce the segmentation result, and I would really appreciate it if you could share that piece of code so that I could see where I went wrong.
Thanks a lot!
It is an unsupervised domain adaptation method; however, you also need labels for the target domain in the segmentation task. I am a little confused here.
Could you give me the dataset named "misnt_data.mat" in your paper? I can't find it on the Internet.
Thanks for your code! I am trying to run the segmentation task, but there is an error:
File "eval.py", line 223, in eval_city
with open(join(devkit_dir, 'data', dset, 'info.json'), 'r') as fp:
IOError: [Errno 2] No such file or directory: '/data/ugui0/dataset/adaptation/taskcv-2017-public/segmentation/data/cityscapes/info.json'
Can you help me?
The classification code seems to be running "test" on every epoch and is printing the test accuracy.
Which of these accuracies do you report?
How is the model selection done?
I understand each experiment is repeated 5 times. But for each run, is the last-epoch accuracy used for the mean, or the maximum accuracy across epochs?
Just wondering if you could provide the additional train/val .txt files for GTA.
The code needs to read "train.txt" and "val.txt" in
MCD_DA/segmentation/datasets.py
Line 244 in 37cb1bc
Are they the ids stored in split.mat for GTA, saved externally as .txt files?
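If that guess is right, regenerating the .txt files should be mechanical. A sketch under those assumptions (the split.mat key names are a guess; inspect the file with scipy.io.loadmat first):

```python
import os
import tempfile

def write_split(ids, path):
    """Write one image id per line -- the plain-text format that
    datasets.py appears to expect for train.txt / val.txt."""
    with open(path, "w") as f:
        for i in ids:
            f.write("%s\n" % i)

# If the GTA ids live in split.mat (key names below are hypothetical --
# check with scipy.io.loadmat("split.mat").keys() first):
#
#   from scipy.io import loadmat
#   split = loadmat("split.mat")
#   train_ids = split["trainIds"].ravel()
#   val_ids = split["valIds"].ravel()

# Toy demonstration with made-up ids:
tmpdir = tempfile.mkdtemp()
write_split(["00001", "00002", "00003"], os.path.join(tmpdir, "train.txt"))
with open(os.path.join(tmpdir, "train.txt")) as f:
    lines = f.read().splitlines()
```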
Thank you for sharing the code! I am trying to run the segmentation task, and I am really confused about the segmentation labels.
What code do you run to transfer the original colored ground truth images of GTAV into the grayscale images with the 20 classes that you report on?
The same question about Cityscapes: the dataset has 33 classes. How do you turn them into 20?
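For what it's worth, the usual recipe (from the official cityscapesScripts label definitions) maps the 33 raw label ids to 19 train ids and sends everything else to an ignore value; with the extra void/background class that gives 20. Whether this repo uses exactly this table is my assumption, but a NumPy sketch looks like:

```python
import numpy as np

# Standard Cityscapes labelId -> trainId table (cityscapesScripts);
# every labelId not listed is treated as ignore (255 here).
ID_TO_TRAINID = {
    7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 19: 6, 20: 7, 21: 8,
    22: 9, 23: 10, 24: 11, 25: 12, 26: 13, 27: 14, 28: 15, 31: 16,
    32: 17, 33: 18,
}
IGNORE = 255

# Build a lookup table so a whole mask is remapped in one indexing op.
lut = np.full(256, IGNORE, dtype=np.uint8)
for label_id, train_id in ID_TO_TRAINID.items():
    lut[label_id] = train_id

def encode_labels(mask):
    """Convert a raw Cityscapes labelId mask to trainIds."""
    return lut[mask]

raw = np.array([[7, 26, 0],
                [33, 23, 4]])
train = encode_labels(raw)
# road->0, car->13, unlabeled->255, bicycle->18, sky->10, static->255
```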
The paper says that the source-only accuracy of USPS -> MNIST is 0.634, but I can only get an accuracy of roughly 0.2-0.3 on USPS -> MNIST using the same network. Does anyone get the same source-only result as me? How can this be solved?
Hello, can you share your SVHN -> MNIST accuracy?
I set SVHN as the source and MNIST as the target. Is it normal that the target accuracy is higher?
It seems that when I use lr=1e-3, the validation mIoU is very low, such as 4%.
Any suggestions?
Thank you very much for the code you provided. Recently, I have been studying the dataset you provided for classification: MNIST serves as the source domain data and USPS serves as the target domain data. However, when I build my own data to replace the MNIST and USPS data, there is always an error. I hope to get your help. The error is shown as follows:
IndexError: index 149 is out of bounds for axis 0 with size 1
Actually, I am not familiar with PyTorch, but I think loss_dis is not applied if you comment out
loss_dis.backward().
Could you please explain this?
Thank you in advance.
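To illustrate the point being asked: in PyTorch a loss term only influences the parameters if its backward() is actually called (or if it is summed into a loss whose backward() is called). A tiny self-contained example, unrelated to the repo's actual models:

```python
import torch

w = torch.ones(3, requires_grad=True)
loss_a = (w * 2).sum()   # this loss will be back-propagated
loss_b = (w * 10).sum()  # this loss is computed but never used

loss_a.backward()
# Only loss_a contributed gradients; loss_b has no effect on w.grad --
# just as a commented-out loss_dis.backward() would leave loss_dis
# without any influence on training.
print(w.grad)  # tensor([2., 2., 2.])
```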
Hi, this link http://crcv.ucf.edu/data/adaptationseg/ICCV_dataset.zip is dead. Do you know where I can download this dataset instead?
Thanks for the code!
I have a question about the classifiers. What is the difference between the F1 and F2 classifiers?
I can't find any difference in the code in the classification folder.
Can you tell me the difference in the code implementation?
Thank you for sharing your code. Could you release the code for the VisDA dataset? I find it somewhat difficult to re-implement the reported accuracy. Thanks.