
TDo-Dif

idea.png

This is a PyTorch implementation of TDo-Dif.

English | 中文

Installation

There are two installation options you can choose from:

  1. Install according to the env.yml provided in the repository, assuming you have anaconda3 installed

    conda env create -f env.yml
  2. Install the required libraries manually. The dependencies are listed below (the list may include libraries that are not strictly necessary):

    conda:
        python=3.7.10
        pytorch=1.7.1
        torchaudio=0.7.2
        torchvision=0.8.2
        numpy=1.20.2
    pip:
        imageio==2.9.0
        matplotlib==3.4.2
        natsort==7.1.1
        opencv-python==4.5.2.54
        scipy==1.7.0
        visdom==0.1.8.9
        tqdm==4.61.1
        scikit-learn==0.24.2
        scikit-image==0.18.2

Dataset Preparation

  • Cityscapes Dataset

    You can download this dataset from the official website and unzip it. The structure of the dataset is:

    /cityscapes
        /gtFine
        /leftImg8bit
    
  • Foggy Zurich and Foggy Driving datasets

    Both datasets can be downloaded from the authors' website. The structure of Foggy Zurich is:

    /Foggy_Zurich
        /gt_color
        /gt_labelIds
        /gt_labelTrainIds
        /lists_file_names
        /RGB
    

    The structure of Foggy Driving is:

    /Foggy_Driving
        /gtCoarse
        /gtFine
        /leftImg8bit
        /lists_file_names
        /scripts
    
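To catch path mistakes early, the directory layouts above can be sanity-checked with a short script (`check_layout` is a hypothetical helper for illustration, not part of this repository):

```python
import os

# Expected top-level subdirectories for each dataset root,
# taken from the layouts listed above.
EXPECTED = {
    "cityscapes": ["gtFine", "leftImg8bit"],
    "Foggy_Zurich": ["gt_color", "gt_labelIds", "gt_labelTrainIds",
                     "lists_file_names", "RGB"],
    "Foggy_Driving": ["gtCoarse", "gtFine", "leftImg8bit",
                      "lists_file_names", "scripts"],
}

def check_layout(root, dataset):
    """Return the expected subdirectories that are missing under `root`."""
    return [d for d in EXPECTED[dataset]
            if not os.path.isdir(os.path.join(root, d))]
```

An empty return value means the dataset root looks complete; otherwise the missing folders are listed.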

Test

Model           Network    Dataset       mIoU   BaiduYun
base model      RefineNet  Foggy Zurich  40.02  link (code: zz6z)
TDo-Dif-zurich  RefineNet  Foggy Zurich  51.84  link (code: mbb5)
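The mIoU values in the table are the mean of per-class intersection-over-union scores. For reference, a minimal pure-Python sketch of the metric (a simplified illustration, not the repository's evaluation code):

```python
def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Mean IoU over flat per-pixel class-id sequences.

    Assumes `pred` contains valid class ids in [0, num_classes);
    ground-truth pixels equal to `ignore_index` are skipped.
    """
    inter = [0] * num_classes   # true positives per class
    union = [0] * num_classes   # TP + FP + FN per class
    for p, g in zip(pred, gt):
        if g == ignore_index:
            continue
        if p == g:
            inter[g] += 1
            union[g] += 1
        else:
            union[g] += 1   # false negative for the true class
            union[p] += 1   # false positive for the predicted class
    ious = [inter[c] / union[c] for c in range(num_classes) if union[c] > 0]
    return sum(ious) / len(ious)
```

Classes that never appear in either prediction or ground truth are excluded from the mean.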

After downloading the dataset and the model from the table above, run the following command. Note that --target_dataset must be one of the three choices, --source_data_root and --target_data_root are the paths to your datasets, and --ckpt is the path to your model:

python main.py --source_dataset cityscapes --source_data_root your_cityscapes_dataset_path --target_dataset FoggyDriving|FoggyZurich|ACDC --target_data_root your_target_dataset_path --gpu_id 0 --batch_size 1 --val_batch_size 1 --ckpt checkpoints/xxx.pth --save_val_results --model refineNet --test_only --usegpu

Training

Download the appropriate dataset, along with the model, and run the following command:

python main.py --source_dataset cityscapes --source_data_root your_cityscapes_dataset_path --target_dataset FoggyDriving|FoggyZurich|ACDC --target_data_root your_target_dataset_path --gpu_id 0 --batch_size 1 --val_batch_size 1 --ckpt checkpoints/xxx.pth --epoch_one_round 10 --save_val_results --model refineNet --usegpu --save_model_prefix none --train_type CRST_sp_with_loss_lp_constract --seg_num 500 --init_target_portion 0.2 --round_idx 0

In fact the training process is divided into two phases: the first generates and processes the pseudo-labels, and the second performs the training. If you only want to run the first phase, add --only_generate; if you have already completed the first phase, you can skip it by adding --skip_thresh_gen --skip_p_gen --skip_sp_extend.
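The --init_target_portion flag (0.2 in the example commands) controls what fraction of target predictions is kept as pseudo-labels in the first round. A minimal sketch of CRST-style class-balanced selection, where only the top-portion most confident pixels of each predicted class survive (a simplified illustration under that assumption, not the repository's implementation):

```python
def select_pseudo_labels(confs, labels, num_classes, portion, ignore_index=255):
    """Keep the top-`portion` most confident predictions per class;
    everything else becomes `ignore_index`."""
    # Collect confidences per predicted class.
    by_class = {c: [] for c in range(num_classes)}
    for conf, lab in zip(confs, labels):
        by_class[lab].append(conf)
    # Per-class confidence threshold: the confidence of the k-th most
    # confident pixel, where k = portion * class size.
    thresh = {}
    for c, vals in by_class.items():
        if not vals:
            continue
        vals = sorted(vals, reverse=True)
        k = max(1, int(len(vals) * portion))
        thresh[c] = vals[k - 1]
    return [lab if conf >= thresh.get(lab, float("inf")) else ignore_index
            for conf, lab in zip(confs, labels)]
```

Pixels below their class threshold are set to the ignore index, so unreliable regions contribute no loss during the second phase.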

The Foggy Zurich dataset is a bit special in that it needs to be trained on two fog densities separately. The example command for training on light fog is:

python main.py --source_dataset cityscapes --source_data_root your_cityscapes_dataset_path --target_dataset FoggyZurich --target_data_root your_target_dataset_path --gpu_id 0 --batch_size 1 --val_batch_size 1 --ckpt checkpoints/xxx.pth --epoch_one_round 10 --save_val_results --model refineNet --usegpu --save_model_prefix none --train_type CRST_sp_with_loss_lp_constract --seg_num 500 --init_target_portion 0.2 --round_idx 0 --train_dataset_type light

When training on medium fog, the pseudo-labels from the light fog also participate in the training, so you need to generate the light-fog pseudo-labels first:

python main.py --source_dataset cityscapes --source_data_root your_cityscapes_dataset_path --target_dataset FoggyZurich --target_data_root your_target_dataset_path --gpu_id 0 --batch_size 1 --val_batch_size 1 --ckpt checkpoints/xxx.pth --epoch_one_round 10 --save_val_results --model refineNet --usegpu --save_model_prefix none --train_type CRST_sp_with_loss_lp_constract --seg_num 500 --init_target_portion 0.2 --round_idx 0 --train_dataset_type light --only_generate

Then run:

python main.py --source_dataset cityscapes --source_data_root your_cityscapes_dataset_path --target_dataset FoggyZurich --target_data_root your_target_dataset_path --gpu_id 0 --batch_size 1 --val_batch_size 1 --ckpt checkpoints/xxx.pth --epoch_one_round 10 --save_val_results --model refineNet --usegpu --save_model_prefix none --train_type CRST_sp_with_loss_lp_constract --seg_num 500 --init_target_portion 0.2 --round_idx 0 --train_dataset_type medium --light_pseudo_label_path results/foggyzurich_prefix_round_0_light_CRST_sp/500_muti_views_labels_intra

Results

The results on the three datasets (including per-class results) are as follows:

zurich_sota.png

zurich_.png

driving_.png


Issues

gt_labelIds under FoggyZurich

How can the images in this folder be modified? I would like to edit the label annotations, but I could not find a corresponding method. Could you advise?

opts.xymap_dir

Hello.

How do we get the xymap?

/media/user/storeDisk2/data/cwy/DenseMatching/xymap_720p/1508041975.0_frame_001064_xymap_back.npy

Thank you

Test results

Hello author, I am a fourth-year software engineering student at Northwestern Polytechnical University. I used your open-source code for comparison experiments in my thesis, and I would also like to use the comparison methods from your paper, such as CMAda and CuDA-Net. These methods are not open source, so I cannot test them myself. I only need the image results produced by those models on Foggy Zurich. Could you provide them? My email is [email protected]

About the second-phase training

Is my understanding of the second-phase training correct: are the pseudo-labels produced in the first phase used as a new dataset for training in the second phase? In other words, is the second phase simply the training of an ordinary semantic segmentation network?

About FoggyZurichScripts

The FoggyZurichScripts tools can no longer be found on GitHub. How can I modify the classes of interest in the Foggy Zurich dataset?
