
This project forked from westlake-ai/openmixup


CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark

Home Page: https://openmixup.readthedocs.io

License: Apache License 2.0



OpenMixup


📘Documentation | 🛠️Installation | 🚀Model Zoo | 👀Awesome Mixup | 🔍Awesome MIM | 🆕News

Introduction

The main branch works with PyTorch 1.8 (required by some self-supervised methods) or higher (we recommend PyTorch 1.12). You can still use PyTorch 1.6 for supervised classification methods.

OpenMixup is an open-source toolbox for supervised, self-, and semi-supervised visual representation learning with mixup based on PyTorch, especially for mixup-related methods.

Major Features
  • Modular Design. OpenMixup follows a code architecture similar to that of OpenMMLab projects, decomposing the framework into various components so that users can easily build a customized model by combining different modules (see the configuration sketch after this list). OpenMixup modules can also be transplanted to OpenMMLab projects (e.g., MMSelfSup).

  • All in One. OpenMixup provides popular backbones, mixup methods, semi-supervised, and self-supervised algorithms. Users can perform image classification (CNN & Transformer) and self-supervised pre-training (contrastive and autoregressive) under the same framework.

  • Standard Benchmarks. OpenMixup supports standard benchmarks of image classification, mixup classification, self-supervised evaluation, and provides smooth evaluation on downstream tasks with open-source projects (e.g., object detection and segmentation on Detectron2 and MMSegmentation).

  • State-of-the-art Methods. OpenMixup provides awesome lists of popular mixup and self-supervised methods, and is continuously updated to support more state-of-the-art image classification and self-supervised methods.
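
As an illustration of the modular, config-driven design mentioned above, the snippet below sketches an OpenMMLab-style Python config that assembles a model from interchangeable parts. It is a hypothetical sketch: the type strings and field names (MixUpClassification, ClsMixupHead, alpha, mix_mode) are assumptions for illustration and may not match the exact names registered in OpenMixup.

# Hypothetical OpenMMLab-style config sketch; registry names and fields are
# assumptions for illustration, not verified OpenMixup keys.
model = dict(
    type='MixUpClassification',      # assumed wrapper that applies mixup during training
    alpha=0.2,                       # assumed Beta(alpha, alpha) interpolation parameter
    mix_mode='mixup',                # assumed switch for the mixup variant (e.g., 'mixup', 'cutmix')
    backbone=dict(type='ResNet', depth=50, out_indices=(3,)),
    head=dict(type='ClsMixupHead',   # assumed head that consumes mixed soft labels
              num_classes=1000, in_channels=2048),
)

In this scheme, swapping the backbone entry or the mix_mode value would reconfigure the model without touching the training code.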

Table of Contents
  1. Introduction
  2. News and Updates
  3. Installation
  4. Getting Started
  5. Overview of Model Zoo
  6. Change Log
  7. License
  8. Acknowledgement
  9. Citation
  10. Contributors and Contact

News and Updates

[2022-12-16] OpenMixup v0.2.7 is released (issue #35).

[2022-12-02] Updated new features and documentation of OpenMixup v0.2.6 (issue #24, issue #25, issue #31, and issue #33). Updated the official implementation of MogaNet.

[2022-09-14] OpenMixup v0.2.6 is released (issue #20).

Installation

OpenMixup is compatible with Python 3.6/3.7/3.8/3.9 and PyTorch >= 1.6. Here are quick installation steps for development:

conda create -n openmixup python=3.8 pytorch=1.12 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate openmixup
pip install openmim
mim install mmcv-full
git clone https://github.com/Westlake-AI/openmixup.git
cd openmixup
python setup.py develop
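
To quickly verify the environment, you can run a short Python check such as the one below (a minimal sketch; it assumes the installed package exposes a __version__ attribute, which may vary across releases):

# Sanity check after installation; assumes openmixup exposes __version__ (may differ by release).
import torch
import openmixup
print('PyTorch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('OpenMixup:', openmixup.__version__)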

Please refer to install.md for more detailed installation and dataset preparation.

Getting Started

OpenMixup supports Linux and macOS. It enables easy implementation and extension of mixup data augmentation methods in existing supervised, self-, and semi-supervised visual recognition models. Please see get_started.md for the basic usage of OpenMixup.
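
To make the core idea concrete, below is a minimal, framework-agnostic PyTorch sketch of mixup data augmentation; it is illustrative only and is not OpenMixup's own API (the function name mixup_batch is hypothetical):

# Minimal PyTorch sketch of mixup augmentation (illustrative only; not the OpenMixup API).
import torch

def mixup_batch(images, labels_onehot, alpha=0.2):
    # Sample a mixing ratio from Beta(alpha, alpha) and blend the batch with a shuffled copy of itself.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(images.size(0))
    mixed_images = lam * images + (1.0 - lam) * images[index]
    mixed_labels = lam * labels_onehot + (1.0 - lam) * labels_onehot[index]
    return mixed_images, mixed_labels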

Training and Evaluation Scripts

Here, we provide scripts for starting a quick end-to-end training with multiple GPUs and the specified CONFIG_FILE.

bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS} [optional arguments]

For example, you can run the script below to train a ResNet-50 classifier on ImageNet with 4 GPUs:

CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 bash tools/dist_train.sh configs/classification/imagenet/resnet/resnet50_4xb64_cos_ep100.py 4

After training, you can test the trained models with the corresponding evaluation script:

bash tools/dist_test.sh ${CONFIG_FILE} ${GPUS} ${PATH_TO_MODEL} [optional arguments]

Development

Please see Tutorials for more development examples and technical details.

(back to top)

Overview of Model Zoo

Please refer to Mixup Benchmarks for the benchmarking results of existing mixup methods, and to Model Zoos for a comprehensive collection of mainstream backbones and self-supervised algorithms. We also provide the paper lists of Awesome Mixups for your reference. Checkpoints and training logs will be updated soon!

(back to top)

Change Log

Please refer to changelog.md for more details and release history.

License

This project is released under the Apache 2.0 license. See LICENSE for more information.

Acknowledgement

  • OpenMixup is an open-source project for mixup methods created by researchers in CAIRI AI Lab. We encourage researchers interested in visual representation learning and mixup methods to contribute to OpenMixup!
  • This repo borrows the architecture design and part of the code from MMSelfSup and MMClassification.

(back to top)

Citation

If you find this project useful in your research, please consider starring our GitHub repo and citing the tech report:

@misc{2022openmixup,
    title = {{OpenMixup}: Open Mixup Toolbox and Benchmark for Visual Representation Learning},
    author = {Siyuan Li and Zicheng Liu and Zedong Wang and Di Wu and Stan Z. Li},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/Westlake-AI/openmixup}},
    year = {2022}
}
@article{li2022openmixup,
  title = {OpenMixup: Open Mixup Toolbox and Benchmark for Visual Representation Learning},
  author = {Siyuan Li and Zedong Wang and Zicheng Liu and Di Wu and Stan Z. Li},
  journal = {ArXiv},
  year = {2022},
  volume = {abs/2209.04851}
}

(back to top)

Contributors and Contact

For help, new features, or reporting bugs associated with OpenMixup, please open a GitHub issue or pull request with the tag "help wanted" or "enhancement". For now, the direct contributors include Siyuan Li (@Lupin1998), Zedong Wang (@Jacky1128), and Zicheng Liu (@pone7). We thank all public contributors and contributors from MMSelfSup and MMClassification!

This repo is currently maintained by:

(back to top)
