
Iterative Loop Learning Combining Self-Training and Active Learning for Domain Adaptive Semantic Segmentation

by Licong Guan, Xue Yuan

Paper

This repository provides the official code for the paper Iterative Loop Learning Combining Self-Training and Active Learning for Domain Adaptive Semantic Segmentation.

Abstract. Although data-driven methods have achieved great success in many tasks, ensuring good generalization across different domain scenarios remains a significant challenge. Recently, self-training and active learning have been proposed to alleviate this problem. Self-training can improve model accuracy with massive unlabeled data, but with limited or imbalanced training data some of the generated pseudo labels contain noise, and without human guidance the model may converge to a suboptimal solution. Active learning can select more effective data for human intervention, but it leaves the massive unlabeled data unused, so model accuracy cannot benefit from them; moreover, when the domain gap is large, the probability of querying suboptimal samples increases, raising the annotation cost. This paper proposes an iterative loop learning method combining Self-Training and Active Learning (STAL) for domain adaptive semantic segmentation. The method first uses self-training on massive unlabeled data to improve model accuracy and to provide a more accurate selection model for active learning. Second, guided by the sample selection strategy of active learning, manual intervention corrects the self-training. The loop iterates to reach the best performance at minimal labeling cost. Extensive experiments show that our method establishes state-of-the-art performance on the GTAV→Cityscapes and SYNTHIA→Cityscapes tasks, improving over the previous best method by 4.9% mIoU and 5.2% mIoU, respectively.

For more information on STAL, please check our [Paper].
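
The overall procedure can be summarized as follows. This is a minimal, self-contained sketch of the loop described in the abstract; every name in it is an illustrative placeholder, not the actual API of this repository.

# Illustrative sketch of the STAL iterative loop (placeholders only).
def pseudo_label(model, pool):
    return list(pool)                    # stub: predict pseudo labels for unlabeled images

def self_train(model, labeled, pseudo):
    return model                         # stub: train on labeled + pseudo-labeled data

def select_samples(model, pool, budget):
    return pool[:budget]                 # stub: active-learning acquisition (e.g. uncertainty)

model, labeled, pool = None, [], list(range(100))    # toy unlabeled target pool
for _ in range(3):                                   # iterative loop
    pseudo = pseudo_label(model, pool)               # 1) self-training on unlabeled data
    model = self_train(model, labeled, pseudo)
    queried = select_samples(model, pool, budget=5)  # 2) active-learning query
    labeled += queried                               # a human annotates these samples
    pool = [x for x in pool if x not in queried]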

Usage

Prerequisites

  • Python 3.6.9
  • Pytorch 1.8.1
  • torchvision 0.9.1

Step-by-step installation

git clone https://github.com/licongguan/STAL.git && cd STAL
conda create -n stal python=3.6.9
conda activate stal
pip install -r requirements.txt
pip install torch==1.8.1+cu102 torchvision==0.9.1+cu102 -f https://download.pytorch.org/whl/torch_stable.html
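
To verify the environment, a quick check (the cu102 wheels above assume a CUDA 10.2 machine):

python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"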

Data Preparation

For Cityscapes

Unzip the files into the data folder and arrange the directory structure as follows:

data/dataset
├── gtFine
│   ├── test
│   ├── train
│   └── val
└── leftImg8bit
    ├── test
    ├── train
    └── val

For GTAV and SYNTHIA

Unzip the files into the data folder and rename the image/label files of the GTAV/SYNTHIA datasets by running:

python datasets/rename_gta5.py
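
The sketch below only illustrates the kind of renaming involved, i.e. making image and label file names pair up; the actual logic lives in datasets/rename_gta5.py.

import os

# Illustrative only -- zero-pad GTAV file names so images and labels match.
# The folder paths are placeholders; see datasets/rename_gta5.py for the real rules.
for folder in ("data/gta5/images", "data/gta5/labels"):
    for name in sorted(os.listdir(folder)):
        stem, ext = os.path.splitext(name)
        if stem.isdigit():
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, stem.zfill(5) + ext))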

Next, move the data to match the following structure:

data/dataset
├── gtFine
│   ├── test
│   ├── train
│   │   ├── gta5
│   │   ├── synthia
│   └── val
└── leftImg8bit
    ├── test
    ├── train
    │   ├── gta5
    │   ├── synthia
    └── val
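
A quick sanity check that the layout is in place (a small helper sketch; the paths follow the tree above):

import os

# Verify the expected directory layout under data/dataset.
root = "data/dataset"
for sub in ("gtFine/train/gta5", "gtFine/train/synthia", "gtFine/val",
            "leftImg8bit/train/gta5", "leftImg8bit/train/synthia", "leftImg8bit/val"):
    path = os.path.join(root, sub)
    print(path, "OK" if os.path.isdir(path) else "MISSING")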

Prepare Pretrained Backbone

Before training, please download ResNet-101 pretrained on ImageNet-1K.

After that, modify model_urls in stal/models/resnet.py to point to </path/to/resnet101.pth>
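
For reference, the edit typically looks like the following; this is a sketch, and the actual dictionary in stal/models/resnet.py may be structured differently.

# stal/models/resnet.py -- point the ResNet-101 entry at the local weights.
model_urls = {
    "resnet101": "/path/to/resnet101.pth",
}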

Model Zoo

GTAV to Cityscapes

We have put our model checkpoints here: [Google Drive] [Baidu Netdisk] (extraction code: STAL).

Method  Net  Budget  mIoU  Checkpoint            Where in Our Paper
STAL    V3+  1%      70.0  Google Drive / Baidu  Table 1
STAL    V3+  2.2%    75.0  Google Drive / Baidu  Table 1
STAL    V3+  5.0%    76.1  Google Drive / Baidu  Table 1

SYNTHIA to Cityscapes

Method  Net  Budget  mIoU  Checkpoint            Where in Our Paper
STAL    V3+  1%      73.2  Google Drive / Baidu  Table 2
STAL    V3+  2.2%    76.0  Google Drive / Baidu  Table 2
STAL    V3+  5.0%    76.6  Google Drive / Baidu  Table 2

STAL Training

We can train a model with 2000 labeled images from the GTAV dataset and 30 labeled images (1%) from the Cityscapes dataset:

cd  experiments/splits/gtav2cityscapes/1.0%
# use torch.distributed.launch
sh train.sh <num_gpu> <port>
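
For example, to launch on 4 GPUs with port 29500 (both values are placeholders; any free port works):

sh train.sh 4 29500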

STAL Testing

sh eval.sh

Acknowledgement

This project is based on the following open-source projects: U2PL and RIPU. We thank their authors for making the source code publicly available.

Citation

If you find this project useful in your research, please consider citing:

@article{guan2023iterative,
  title={Iterative Loop Learning Combining Self-Training and Active Learning for Domain Adaptive Semantic Segmentation},
  author={Guan, Licong and Yuan, Xue},
  journal={arXiv preprint arXiv:2301.13361},
  year={2023}
}

Contact

If you have any problem with our code, feel free to contact the authors or describe your problem in Issues.
