This project was forked from theo2021/onda.

License: GNU General Public License v2.0



🌊 OnDA

Online Domain Adaptation for Semantic Segmentation in Ever-Changing Conditions

Theodoros Panagiotakopoulos1* Pier Luigi Dovesi2 Linus Härenstam-Nielsen3,4* Matteo Poggi5

1 King 2 Univrses 3 Kudan 4 Technical University of Munich 5 University of Bologna

* Part of the work carried out while at Univrses.

📜 Source code for Online Unsupervised Domain Adaptation for Semantic Segmentation in Ever-Changing Conditions, ECCV 2022.

📽️ Check out our project page and video.

OnDA (literally "wave" in Italian) allows adapting across a flow of domains while avoiding catastrophic forgetting.

This code performs training and evaluation of UDA approaches in continuous scenarios. The library is implemented in PyTorch 1.7.1; newer versions should work as well.

Method Cover

All assets to run a simple inference can be found here.

Moreover, runs are recorded and tracked through wandb; a wandb account is necessary to track the adaptation.

Citation

If you find this repo useful for your work, please cite our paper:

@inproceedings{Panagiotakopoulos_ECCV_2022,
  title     = {Online Domain Adaptation for Semantic Segmentation in Ever-Changing Conditions},
  author    = {Panagiotakopoulos, Theodoros and
               Dovesi, Pier Luigi and
               H{\"a}renstam-Nielsen, Linus and
               Poggi, Matteo},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2022}
}

Requirements

We advise using conda or miniconda to run the package. Run the following command to install the necessary modules:

conda env create -f environment.yml

After creating the environment, load it using conda activate ouda.

You will then need to log in to wandb to record the experiments; simply type wandb login.
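The setup steps above can be summarised in a short shell session (this assumes conda is installed and that environment.yml defines an environment named ouda, as the activate command suggests):

```shell
# create and activate the environment, then authenticate with wandb
conda env create -f environment.yml
conda activate ouda
wandb login
```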

Creating the rainy dataset

First, download the Cityscapes dataset from here. To add rain to the Cityscapes dataset, follow the steps shown here. The authors provide a rain mask for each image; with their dev-kit one can create the rainy images.

Download the pretrained source model and prototypes

Download the files precomputed_prototypes.pickle and pretrained_resnet50_miou645.pth and save them into a folder named pretrained.
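With the repository root as the working directory, the expected layout can be prepared as follows (where you download the two files from is described above; the exact destination paths are an assumption based on the config keys):

```shell
# create the folder the configs expect, then place the downloaded files inside:
#   pretrained/precomputed_prototypes.pickle
#   pretrained/pretrained_resnet50_miou645.pth
mkdir -p pretrained
```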

Edit configuration

Open the file configs/hybrid_switch.yml and edit the PATH variable with the location of the dataset. The path should point to the leftImg8bit and gtFine folders. Make sure that the paths for the pretrained models at METHOD.ADAPTATION.PROTO_ONLINE_HYBRIDSWITCH.LOAD_PROTO and MODEL.LOAD are correct. The paths should point to the pretrained source and prototypes downloaded in the previous steps.
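The relevant entries look roughly like the following sketch. Only the key names come from the text above; the nesting and the example values are assumptions, so check them against the actual configs/hybrid_switch.yml:

```yaml
# Hypothetical sketch of configs/hybrid_switch.yml (nesting and values assumed)
PATH: /data/cityscapes            # folder containing leftImg8bit/ and gtFine/
MODEL:
  LOAD: pretrained/pretrained_resnet50_miou645.pth
METHOD:
  ADAPTATION:
    PROTO_ONLINE_HYBRIDSWITCH:
      LOAD_PROTO: pretrained/precomputed_prototypes.pickle
```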

Run

We recommend a powerful graphics card with at least 16 GB of VRAM. A full run takes a bit over one day on an RTX 3090. If necessary, one can play around with the batch size and resolution in the configuration file to test the approach, but the results will not be replicated.

To run, first initialise wandb with wandb login, then simply run python train_ouda.py --cfg=configs/hybrid_switch.yml.

The run performs evaluation across domains from the start and after each pass through the data. We demonstrated how to run the hybrid switch, but by selecting other configuration files one can use different switches or approaches. By default, the approach creates folders to save predictions.


Code library

The approaches can be found under framework/domain_adaptation/methods:

  • Prototype handling: framework/domain_adaptation/methods/prototype_handler.py

  • Switching approach: framework/domain_adaptation/methods/prototypes.py

  • Confidence Switch (and Soft): framework/domain_adaptation/methods/prototypes_hswitch.py

  • Confidence Derivative Switch: framework/domain_adaptation/methods/prototypes_vswitch.py

  • Hybrid Switch: framework/domain_adaptation/methods/prototypes_hybrid_switch.py

  • AdvEnt implementation: framework/domain_adaptation/methods/advent_da.py
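As a rough illustration of the switching idea (this is not the repository's implementation; the function name, thresholds, and the hysteresis rule are our own simplification), a confidence-based switch can be sketched as:

```python
def confidence_switch(conf_history, low=0.6, high=0.7):
    """Toy hysteresis switch: start adapting when mean prediction
    confidence drops below `low`, stop once it recovers above `high`.
    Thresholds are illustrative, not the paper's values."""
    adapting = False
    states = []
    for c in conf_history:
        if not adapting and c < low:
            adapting = True      # domain shift suspected: turn adaptation on
        elif adapting and c > high:
            adapting = False     # confidence recovered: turn adaptation off
        states.append(adapting)
    return states
```

The hybrid switch in the repository combines confidence- and derivative-based criteria; this sketch only conveys the general on/off mechanism.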

Regards

Don't hesitate to contact us if you have questions about the code or about the different options in the cfg file. Thank you!

Contributors

theo2021, mattpoggi, pierluigidovesi
