Convolutional Neural Network classification models in PyTorch (to be continued)
In the past few years, Convolutional Neural Networks (CNNs) have developed rapidly in computer vision, especially for the image classification task. This project implements various convolutional neural networks in PyTorch, based on papers from recent top computer vision conferences (CVPR, ICCV, ECCV) and other excellent work. Some models use special training, regularization, or test-time procedures. Since the author of this project is not a professional researcher, there may be minor problems. The project will continue to be updated.
This is my experiment environment:
1. hardware:
- Intel Core i9-9900K CPU @ 3.60GHz x 16
- GeForce RTX 2080 Ti x 1
- 32 GB DDR4
2. software:
- Python 3.7.3
- PyTorch 1.1.0
- CUDA 10
1. dataset
By default, the code trains on the CIFAR-10 dataset from torchvision; you can replace it with your own dataset.
2. train
a. Specify the network to train with the -net arg
b. Specify the number of class labels with the -num_class arg
c. Specify whether to initialize the weights with the -initialize arg
d. Specify the learning rate with the -lr arg
For example:
$ python train.py -net resnet18 -num_class 10 -initialize True -lr 0.001
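The four flags above could be parsed with `argparse` along these lines (a minimal sketch; the repository's actual `train.py` may differ, and `str2bool` is a hypothetical helper added because `type=bool` would treat any non-empty string, including "False", as true):

```python
import argparse

def str2bool(v):
    # argparse's type=bool treats every non-empty string as True,
    # so parse 'True'/'False' strings explicitly.
    return str(v).lower() in ('true', '1', 'yes')

parser = argparse.ArgumentParser(description='Train a classification model')
parser.add_argument('-net', type=str, required=True,
                    help='model name, e.g. resnet18')
parser.add_argument('-num_class', type=int, default=10,
                    help='number of class labels')
parser.add_argument('-initialize', type=str2bool, default=True,
                    help='whether to initialize the weights')
parser.add_argument('-lr', type=float, default=0.001,
                    help='initial learning rate')

# Mirrors the example command above.
args = parser.parse_args(['-net', 'resnet18', '-num_class', '10',
                          '-initialize', 'True', '-lr', '0.001'])
```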
3. models and papers
The convolutional neural network models and papers included in this project are as follows:
- VGG 11
- VGG 11 LRN
- VGG 13
- VGG 16C
- VGG 16D
- VGG 19
- Paper: Very Deep Convolutional Networks for Large-Scale Image Recognition
- GoogLeNet
- Paper: Going Deeper with Convolutions
- ResNet 18
- ResNet 34
- ResNet 50
- ResNet 101
- ResNet 152
- Paper: Deep Residual Learning for Image Recognition
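The core idea of the ResNet paper is the residual block, y = relu(F(x) + shortcut(x)). A minimal illustrative block (not the repository's exact implementation) looks like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Illustrative ResNet basic block: y = relu(F(x) + shortcut(x))."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # 1x1 projection shortcut when x and F(x) have different shapes.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))

block = BasicBlock(64, 128, stride=2)
y = block(torch.randn(1, 64, 32, 32))  # downsamples 32x32 -> 16x16
```

ResNet 50/101/152 use a three-convolution "bottleneck" variant instead of this two-convolution block, but the shortcut structure is the same.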
- FractalNet 34
- Paper: FractalNet: Ultra-Deep Neural Networks without Residuals
- Inception v3
- Paper: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
- Paper: Rethinking the Inception Architecture for Computer Vision
- MobileNet v1
- Paper: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- PreActResNet 18
- PreActResNet 34
- PreActResNet 50
- PreActResNet 101
- PreActResNet 152
- Paper: Identity Mappings in Deep Residual Networks
- SENet 50
- SENet 101
- SENet 152
- Paper: Squeeze-and-Excitation Networks
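SENet adds a squeeze-and-excitation module to each residual block: globally pool each channel, then learn per-channel gates that reweight the feature map. A minimal sketch of the idea (not the repository's exact module):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Illustrative squeeze-and-excitation: reweight channels by global context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, 1), # bottleneck FC (as 1x1 conv)
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # excitation: gates in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)  # broadcast per-channel gates over H x W

se = SEBlock(64)
y = se(torch.randn(2, 64, 8, 8))
```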
- ShuffleNet 0.25x g1
- ShuffleNet 0.25x g2
- ShuffleNet 0.25x g3
- ShuffleNet 0.25x g4
- ShuffleNet 0.5x g1
- ShuffleNet 0.5x g2
- ShuffleNet 0.5x g3
- ShuffleNet 0.5x g4
- ShuffleNet 1x g1
- ShuffleNet 1x g2
- ShuffleNet 1x g3
- ShuffleNet 1x g4
- Paper: ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
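The "g" in the variants above is the number of groups in the grouped 1x1 convolutions; ShuffleNet's key operation is the channel shuffle that lets information cross those groups. It reduces to a reshape and transpose (an illustrative sketch, not the repository's exact code):

```python
import torch

def channel_shuffle(x, groups):
    """Interleave channels across groups so grouped convs can mix information."""
    n, c, h, w = x.size()
    # (n, c, h, w) -> (n, groups, c // groups, h, w), swap the two group
    # axes, then flatten back to (n, c, h, w).
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

# With 8 channels and 2 groups, channels 0-3 and 4-7 get interleaved.
x = torch.arange(8).float().view(1, 8, 1, 1)
y = channel_shuffle(x, groups=2)
```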
- StochasticDepth 18
- StochasticDepth 34
- StochasticDepth 50
- StochasticDepth 101
- StochasticDepth 152
- Paper: Deep Networks with Stochastic Depth
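Stochastic depth trains a ResNet by randomly dropping entire residual branches per mini-batch and scaling them by their survival probability at test time. A minimal per-block wrapper illustrating the idea (the paper also decays the survival probability linearly with depth, which this sketch omits):

```python
import torch
import torch.nn as nn

class StochasticDepthWrapper(nn.Module):
    """Illustrative stochastic depth around one residual branch."""
    def __init__(self, branch, survival_prob=0.8):
        super().__init__()
        self.branch = branch
        self.p = survival_prob

    def forward(self, x):
        if self.training:
            if torch.rand(1).item() < self.p:
                return x + self.branch(x)
            return x  # skip the branch entirely for this mini-batch
        # At test time, scale the branch by its survival probability.
        return x + self.p * self.branch(x)

# In eval mode the output is deterministic: x + p * branch(x).
wrapper = StochasticDepthWrapper(nn.Identity(), survival_prob=0.5).eval()
y = wrapper(torch.ones(2, 3))
```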
- Wide ResNet 40 4
- Wide ResNet 16 8
- Wide ResNet 28 10
- Paper: Wide Residual Networks
- to be continued