📘Documentation | 🛠️Installation | 🚀Model Zoo | 👀Awesome Mixup | 🆕News
The main branch works with PyTorch 1.8 (required by some self-supervised methods) or higher (we recommend PyTorch 1.10). You can still use PyTorch 1.6 for supervised classification methods.
OpenMixup
is an open-source toolbox for supervised, self-supervised, and semi-supervised visual representation learning built on PyTorch, with a particular focus on mixup-based methods.
Major Features
- Modular Design. OpenMixup follows a code architecture similar to OpenMMLab projects, which decomposes the framework into modular components, so users can easily build a customized model by combining different modules. OpenMixup is also transplantable to OpenMMLab projects (e.g., MMSelfSup).
- All in One. OpenMixup provides popular backbones, mixup methods, and self-supervised algorithms, so users can perform supervised training and self-supervised pre-training under the same settings.
- Standard Benchmarks. OpenMixup supports standard benchmarks for image classification, mixup classification, and self-supervised evaluation, and provides smooth evaluation on downstream tasks with open-source projects (e.g., object detection and segmentation with Detectron2 and MMSegmentation).
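To illustrate the modular, config-driven design described above, here is a minimal sketch of an OpenMMLab-style config file. The field names below (`model`, `backbone`, `head`, etc.) follow common mmcv conventions; the exact keys and component names used by OpenMixup may differ, so treat this as a hypothetical example and consult the `configs/` directory for real configs.

```python
# Hypothetical OpenMMLab-style config: a model is assembled from
# interchangeable components referenced by their registered type name.
model = dict(
    type='MixUpClassification',  # hypothetical algorithm wrapper name
    backbone=dict(type='ResNet', depth=50, num_stages=4),
    head=dict(type='ClsHead', num_classes=1000),
)
data = dict(samples_per_gpu=64, workers_per_gpu=4)
optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=1e-4)
```

Swapping the backbone (e.g., `ResNet` for `ConvNeXt`) or the mixup method is then a one-line config change rather than a code change.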
[2022-07-07] OpenMixup v0.2.4 is released (issue #7), which fixes bugs in weight initialization and fine-tuning, updates the docs, etc.
Below are the quick installation steps for development:
```shell
conda create -n openmixup python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate openmixup
pip3 install openmim
mim install mmcv-full
git clone https://github.com/Westlake-AI/openmixup.git
cd openmixup
python setup.py develop
```
Please refer to install.md for more detailed installation and dataset preparation.
Please see get_started.md for the basic usage of OpenMixup. You can start multi-GPU training with a CONFIG_FILE using the following script. For example,

```shell
bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS} [optional arguments]
```
Then, please see Tutorials for more technical details:
- config files
- add new dataset
- data pipeline
- add new modules
- customize schedules
- customize runtime
- benchmarks
Please refer to Model Zoo for various backbones, mixup methods, and self-supervised algorithms. We also provide the paper lists of Awesome Mixups for your reference. Checkpoints and training logs will be updated soon!
- Backbone architectures for supervised image classification on ImageNet.
Currently supported backbones
- VGG (ICLR'2015) [readme]
- ResNet (CVPR'2016) [readme]
- ResNeXt (CVPR'2017) [readme]
- SE-ResNet (CVPR'2018) [readme]
- SE-ResNeXt (CVPR'2018) [readme]
- ShuffleNetV2 (ECCV'2018) [readme]
- MobileNetV2 (CVPR'2018) [readme]
- MobileNetV3 (ICCV'2019)
- EfficientNet (ICML'2019) [readme]
- Swin-Transformer (ICCV'2021) [readme]
- RepVGG (CVPR'2021)
- Vision-Transformer (ICLR'2021) [readme]
- MLP-Mixer (NIPS'2021) [readme]
- DeiT (ICML'2021)
- ConvMixer (Openreview'2021) [readme]
- UniFormer (ICLR'2022) [readme]
- PoolFormer (CVPR'2022) [readme]
- ConvNeXt (CVPR'2022) [readme]
- VAN (ArXiv'2022) [readme]
- Mixup methods for supervised image classification.
Currently supported mixup methods
- Mixup (ICLR'2018)
- CutMix (ICCV'2019)
- ManifoldMix (ICML'2019)
- FMix (ArXiv'2020)
- AttentiveMix (ICASSP'2020)
- SmoothMix (CVPRW'2020)
- SaliencyMix (ICLR'2021)
- PuzzleMix (ICML'2020)
- GridMix (Pattern Recognition'2021)
- ResizeMix (ArXiv'2020)
- AutoMix (ECCV'2022) [readme]
- SAMix (ArXiv'2021) [readme]
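As a reference point for the methods listed above, here is a minimal NumPy sketch of vanilla Mixup (Zhang et al., ICLR'18), the simplest member of this family: inputs and one-hot labels are convexly combined pairwise with a Beta-sampled coefficient. This is an illustration of the core idea only, not OpenMixup's implementation (which operates on PyTorch tensors inside the training loop).

```python
import numpy as np

def mixup(x, y, alpha=1.0, rng=None):
    """Vanilla Mixup: convex-combine pairs of inputs and one-hot labels
    with a mixing coefficient lam ~ Beta(alpha, alpha)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)           # mixing ratio in [0, 1]
    perm = rng.permutation(len(x))         # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix, lam

# Toy batch: 4 "images" of shape (3, 8, 8) and one-hot labels for 2 classes.
x = np.random.default_rng(1).standard_normal((4, 3, 8, 8))
y = np.eye(2)[[0, 1, 0, 1]]
x_mix, y_mix, lam = mixup(x, y)
```

Methods such as CutMix and PuzzleMix replace the pixel-wise interpolation with region-level mixing, but keep the same label-mixing rule.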
Currently supported datasets for mixups
- Self-supervised algorithms for visual representation learning.
Currently supported self-supervised algorithms
- Relative Location (ICCV'2015) [readme]
- Rotation Prediction (ICLR'2018) [readme]
- DeepCluster (ECCV'2018) [readme]
- NPID (CVPR'2018) [readme]
- ODC (CVPR'2020) [readme]
- MoCov1 (CVPR'2020) [readme]
- SimCLR (ICML'2020) [readme]
- MoCoV2 (ArXiv'2020) [readme]
- BYOL (NIPS'2020) [readme]
- SwAV (NIPS'2020) [readme]
- DenseCL (CVPR'2021) [readme]
- SimSiam (CVPR'2021) [readme]
- Barlow Twins (ICML'2021) [readme]
- MoCoV3 (ICCV'2021) [readme]
- MAE (CVPR'2022) [readme]
- SimMIM (CVPR'2022) [readme]
- CAE (ArXiv'2022)
- A2MIM (ArXiv'2022) [readme]
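To give a flavor of the contrastive family listed above (SimCLR, MoCo, etc.), here is a minimal NumPy sketch of an NT-Xent-style loss over two augmented views of the same batch. This is a simplified illustration under my own variable names, not code from OpenMixup or the original papers.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent-style contrastive loss: each embedding's positive is the
    other view of the same image; all other embeddings are negatives.
    z1, z2: (N, D) embeddings of two augmented views of N images."""
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # cosine similarity
    sim = z @ z.T / tau                                 # temperature-scaled
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy with the positive pair as the target class.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return np.mean(logsumexp - sim[np.arange(2 * n), pos])

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
loss = nt_xent(rng.standard_normal((8, 16)), rng.standard_normal((8, 16)))
```

Masked image modeling methods (MAE, SimMIM, A2MIM) follow a different recipe, reconstructing masked patches instead of contrasting views.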
Please refer to changelog.md for details and release history.
This project is released under the Apache 2.0 license.
- OpenMixup is an open-source project for mixup methods created by researchers in CAIRI AI LAB. We encourage researchers interested in visual representation learning and mixup methods to contribute to OpenMixup!
- This repo borrows the architecture design and part of the code from MMSelfSup and MMClassification.
If you find this project useful in your research, please consider citing our repo:
@misc{2022openmixup,
title={{OpenMixup}: Open Mixup Toolbox and Benchmark for Visual Representation Learning},
author={Li, Siyuan and Liu, Zicheng and Wu, Di and Li, Stan Z.},
howpublished = {\url{https://github.com/Westlake-AI/openmixup}},
year={2022}
}
For now, the direct contributors include Siyuan Li (@Lupin1998), Zicheng Liu (@pone7), and Di Wu (@wudi-bu). We thank the contributors of MMSelfSup and MMClassification.
This repo is currently maintained by Siyuan Li ([email protected]) and Zicheng Liu ([email protected]).