
Unsupervised Representation Learning by Predicting Image Rotations

Introduction

The current code implements in PyTorch the following ICLR 2018 paper:
Title: "Unsupervised Representation Learning by Predicting Image Rotations"
Authors: Spyros Gidaris, Praveer Singh, Nikos Komodakis
Institution: Universite Paris Est, Ecole des Ponts ParisTech
Code: https://github.com/gidariss/FeatureLearningRotNet
Link: https://openreview.net/forum?id=S1v4N2l0-

Abstract:
Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high-level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in the PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4%, which is only 2.4 points lower than the supervised case. We get similarly striking results when we transfer our unsupervised learned features to various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.

Citing FeatureLearningRotNet

If you find the code useful in your research, please consider citing our ICLR2018 paper:

@inproceedings{
  gidaris2018unsupervised,
  title={Unsupervised Representation Learning by Predicting Image Rotations},
  author={Spyros Gidaris and Praveer Singh and Nikos Komodakis},
  booktitle={International Conference on Learning Representations},
  year={2018},
  url={https://openreview.net/forum?id=S1v4N2l0-},
}

Requirements

The code was developed and tested with PyTorch version 0.2.0_4.

License

This code is released under the MIT License (refer to the LICENSE file for details).

Before running the experiments

  • Inside the FeatureLearningRotNet directory of the downloaded code, you must create a directory named experiments where the experiment-related data will be stored: mkdir experiments.
  • You must download the desired datasets and set the paths in dataloader.py to where the datasets reside on your machine. We recommend creating a datasets directory (mkdir datasets) and placing the downloaded datasets there.
  • Note that all the experiment configuration files are placed in the ./config directory.

CIFAR-10 experiments

  • In order to train (in an unsupervised way) the RotNet model on the CIFAR-10 training images and then evaluate object classifiers on top of the RotNet-based learned features, see the run_cifar10_based_unsupervised_experiments.sh script. A pre-trained model (in PyTorch format) is provided here (note that it is not exactly the same model as used in the paper).
  • In order to run the semi-supervised experiments on CIFAR-10 see the run_cifar10_semi_supervised_experiments.sh script.

ImageNet and Places205 experiments

  • In order to train (in an unsupervised way) a RotNet model with AlexNet-like architecture on the ImageNet training images and then evaluate object classifiers (for the ImageNet and Places205 classification tasks) on top of the RotNet-based learned features see the run_imagenet_based_unsupervised_feature_experiments.sh script.
  • In order to train (in an unsupervised way) a RotNet model with AlexNet-like architecture on the Places205 training images and then evaluate object classifiers (for the ImageNet and Places205 classification tasks) on top of the RotNet-based learned features, see the run_places205_based_unsupervised_feature_experiments.sh script.

Download the already trained RotNet model

  • In order to download the RotNet model (with AlexNet architecture) trained on the ImageNet training images using the current code, go to: ImageNet_RotNet_AlexNet_pytorch. Note that:

    1. The model is saved in pytorch format.
    2. It is not the same as the one used in the paper and will probably give (slightly) different outcomes (on the ImageNet and Places205 classification tasks on which it was tested, it gave better results than the paper's model).
    3. It expects RGB images whose pixel values are normalized with mean RGB values mean_rgb = [0.485, 0.456, 0.406] and std RGB values std_rgb = [0.229, 0.224, 0.225]. Prior to normalization, the image values must be in the range [0.0, 1.0] (see the preprocessing sketch after this list).
  • In order to download the RotNet model (with AlexNet architecture) trained on the ImageNet training images using the current code and converted to Caffe format, go to: ImageNet_RotNet_AlexNet_caffe. Note that:

    1. The model is saved in caffe format.
    2. It is not the same as the one used in the paper and will probably give (slightly) different outcomes (on the PASCAL segmentation task it gives slightly better results than the paper's model).
    3. It expects BGR images whose pixel values are mean-normalized with mean BGR values mean_bgr = [0.406*255.0, 0.456*255.0, 0.485*255.0]. Prior to normalization, the image values must be in the range [0.0, 255.0] (see the preprocessing sketch after this list).
    4. The weights of the model are rescaled with the approach of Krähenbühl et al., ICLR 2016.
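
A minimal preprocessing sketch for the two releases above, assuming torchvision is available; the Caffe-format path is expressed with PyTorch tensors purely for illustration, and the helper name caffe_preprocess is hypothetical:

    import torch
    from torchvision import transforms

    # PyTorch-format model: RGB input in [0.0, 1.0], then channel-wise normalization.
    pytorch_preprocess = transforms.Compose([
        transforms.ToTensor(),  # RGB float tensor in [0.0, 1.0]
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Caffe-format model: BGR input in [0.0, 255.0], mean subtraction only.
    def caffe_preprocess(rgb_tensor):
        # rgb_tensor: float tensor in [0.0, 1.0], shape (3, H, W), RGB channel order.
        bgr = rgb_tensor[[2, 1, 0], :, :] * 255.0  # reorder RGB -> BGR and rescale
        mean_bgr = torch.tensor([0.406, 0.456, 0.485]).view(3, 1, 1) * 255.0
        return bgr - mean_bgr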


Issues

Problem about weight rescaling technique

I am wondering what parameters you used when you rescaled the model. There are many parameters in magic_init.py, such as -t, -nit, and -d. Could you please give more details? Thanks.

question about learning rate and rotate strategy

Hi,

Thanks for providing this awesome work to us!!!

After reading the code, I am not sure whether I have fully understood it, so I thought I had better open an issue to ask:

  1. The original CIFAR-10 model is trained with a learning rate of 0.1 when the batch size is 128. With the RotNet method, the batch size is amplified to 512 (128x4), but the learning rate is still kept at 0.1. Is that right?

  2. I see in the paper that the strategy of "simultaneously rotate the input image into all 4 orientations and enlarge the batch size 4 times" outperforms "randomly choose one rotation and keep the batch size unchanged". Does the "randomly choose" method give significantly worse results, or is it only slightly outperformed by the proposed "4 rotations" method?

I would be very happy to have your reply. Would you please share your thoughts on these details?
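
A sketch of the batch construction being asked about (an illustration, not the repo's exact code; rotate_img here is a hypothetical stand-in built on np.rot90): each image expands into its four rotated copies with rotation labels 0-3, which is how a base batch of 128 becomes 512 inputs per step.

    import numpy as np
    import torch

    def rotate_img(img, angle):
        # img: HxWxC numpy array; angle in {0, 90, 180, 270} degrees.
        return np.ascontiguousarray(np.rot90(img, angle // 90))

    def make_rotation_batch(images):
        # images: list of B HxWxC arrays -> (4B, C, H, W) tensor plus 4B labels.
        rotated, labels = [], []
        for img in images:
            for label, angle in enumerate([0, 90, 180, 270]):
                chw = torch.from_numpy(rotate_img(img, angle)).permute(2, 0, 1)
                rotated.append(chw.float())
                labels.append(label)
        return torch.stack(rotated), torch.tensor(labels)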

About accuracy

I ran the script and it began to train.
What I want to know is the accuracy. What should I do?

Question regarding fine-tuning phase of ImageNet classification task

The paper published here explains how the pretext task training is conducted, but not how the transfer learning is conducted. I had some questions regarding the procedure for transfer learning for the ImageNet classification task.

The entire procedure can be described as:
a) Train an AlexNet using the rotation prediction pretext task on the entire ImageNet dataset.
b) Freeze all layers except the fully connected layers.
c) Train the AlexNet on the ImageNet dataset using the ImageNet labels. (A sketch of step (b) appears after the questions below.)

  1. During phase (c), is the entire ImageNet dataset used, or only a fraction of it? I would expect self-supervised learning to fine-tune using a relatively small dataset.
  2. What hyper-parameters are used during phase (c), such as the learning rate and weight decay?

Thank you.
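
A minimal sketch of step (b), assuming torchvision's AlexNet layout (a features trunk and a classifier head) as a stand-in for the repo's AlexNet-like model; the optimizer settings below are placeholders, not the paper's values:

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.alexnet()  # stand-in for the RotNet-pretrained AlexNet

    # Freeze the convolutional trunk; only the fully connected head stays trainable.
    for param in model.features.parameters():
        param.requires_grad = False

    # Placeholder hyper-parameters; the actual values used may differ.
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad),
        lr=0.01, momentum=0.9, weight_decay=5e-4,
    )
    criterion = nn.CrossEntropyLoss()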

Accuracy on the ImageNet validation set with conv4 features

Hi,
I just re-implemented your idea myself by following the details in this repository.
The experiments on CIFAR10 obtained about 1% lower accuracy than published results.
However, the experiments on ImageNet with AlexNet achieved about 3% higher accuracy on the ImageNet validation set than the published results.

supervised: 59.48 (59.70 in the paper)

conv4: 52.92 (50 in the paper)

conv5: 46.06 (43.8 in the paper)

Could you give more details about the training?

conv2d() arguments error

Hello!
While executing the code as given for training using algorithm.solve(), I encountered the following error. Can someone provide clarity regarding this?

Reproducing the results in the Paper

Dear Spyros Gidaris, Praveer Singh and Nikos Komodakis,

I have read your paper "Unsupervised Representation Learning by Predicting Image Rotations" and was impressed by your work and the astonishing results achieved by pretraining a "RotNet" on the rotation task and later training classifiers on top of the feature maps.

I have downloaded your code from GitHub and tried to reproduce the values in Table 1 for a RotNet with 4 conv. blocks. However, running "run_cifar10_based_unsupervised_experiments.sh" and altering line 33 (and, for 'conv1', also line 31) in the config file "CIFAR10_MultLayerClassifier_on_RotNet_NIN4blocks_Conv2_feats.py", I obtain slightly lower values than in the paper, especially for the fourth block:

Rotation task: 93.65 (running your code) / --- (paper)
ConvBlock1: 84.65 (running your code) / 85.07 (paper)
ConvBlock2: 88.89 (running your code) / 89.06 (paper)
ConvBlock3: 85.88 (running your code) / 86.21 (paper)
ConvBlock4: 54.04 (running your code) / 61.73 (paper)

Are there further things I need to consider before running the code to achieve the results in the paper? I used a GeForce GTX 1070 to run the experiment.

How can I generate the log file that is needed during RotNet training?

I was training a RotNet on CIFAR10; when I ran the command "python main.py --exp=CIFAR10_RotNet_NIN4bloc", an error occurred.

The error occurs at algorithm = getattr(alg, config['algorithm_type'])(config) in main.py. It says:
OSError: [Errno 22] Invalid argument: 'E:\FeatureLearningRotNet-master\experiments\CIFAR10_RotNet_NIN4blocks\logs\LOG_INFO_2020-08-06_17:02:54.623166.txt'

I suppose a log file is needed here, but I didn't see it in the logs folder. Should I add some lines to generate the log?
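
One plausible cause, offered as an assumption rather than a confirmed diagnosis: the timestamp in the log filename contains ':' characters, which Windows forbids in file names, so the file cannot be created in the first place. A sketch of a colon-free timestamp:

    import datetime

    # Windows forbids ':' in filenames; using '-' as the time separator
    # produces a log name that is valid on all platforms.
    now = datetime.datetime.now()
    log_name = 'LOG_INFO_{}.txt'.format(now.strftime('%Y-%m-%d_%H-%M-%S.%f'))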

Problem in data transformations

In dataloader.py, when preparing the rotated images:

    rotated_imgs = [
        self.transform(img0),
        self.transform(rotate_img(img0,  90)),
        self.transform(rotate_img(img0, 180)),
        self.transform(rotate_img(img0, 270)),
    ]

the following error arises.
ValueError: some of the strides of a given numpy array are negative. This is currently not supported, but will be added in future releases.

How to solve this error?
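
A sketch of a common fix, offered as an assumption about the cause: np.rot90 and np.flipud return views with negative strides, which torch.from_numpy (used inside ToTensor-style transforms) rejected in older PyTorch versions; copying the array into contiguous memory before the transform avoids the error.

    import numpy as np
    import torch

    img = np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3)
    view = np.rot90(img, 1)             # a view with negative strides
    fixed = np.ascontiguousarray(view)  # equivalently: view.copy()

    tensor = torch.from_numpy(fixed)    # succeeds; from_numpy(view) may raise

Wrapping the output of rotate_img in np.ascontiguousarray before self.transform should therefore resolve the ValueError.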

How do you create attention map?

I read your paper; it says:
"In order to generate the attention map of a conv. layer we first compute the feature maps of this layer, then we raise each feature activation on the power p, and finally we sum the activations at each location of the feature map. For the conv. layers 1, 2, and 3 we used the powers p = 1, p = 2, and p = 4 respectively."

What do you do after summing up each neuron's powered activations in these layers? I guess some backpropagation or deconvolution is needed to generate such an attention map.
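
A minimal sketch of the quoted description, assuming post-ReLU (non-negative) activations; per the quote, no backpropagation or deconvolution is needed, since summing over channels already yields one spatial map per image. The upsampling step is an assumption about the visualization, not something the paper states:

    import torch
    import torch.nn.functional as F

    def attention_map(features, p, out_size):
        # features: (B, C, H, W) activations of a conv. layer.
        # Raise each activation to the power p, then sum over the channel
        # dimension at each spatial location -> one (H, W) map per image.
        amap = features.pow(p).sum(dim=1, keepdim=True)  # (B, 1, H, W)
        # Upsample to the input resolution for overlaying on the image.
        return F.interpolate(amap, size=out_size, mode='bilinear',
                             align_corners=False).squeeze(1)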

Horizontal flip augmentation changes angle

Hi all, this might be a basic question, but in this line, and in other lines in the dataloader, RandomHorizontalFlip() is one of the augmentations. Would that not change the rotation angle of the image, and consequently the ground truth? Is this handled anywhere (i.e., is the ground-truth label changed when the RandomHorizontalFlip() augmentation happens)?
Thank you!

Unable to reproduce the ImageNet Linear Classification Results

Hello, I am really attracted by your work, which is precise and effective. But when I cloned your code and ran the run_imagenet_based_unsupervised_feature_experiments.sh script (I used your pretrained model, so I commented out the training command) to test linear classification, I got very low results after running around 25 epochs, especially for conv3, conv4, and conv5. Maybe I have omitted some important details. Can you give me some advice?

CIFAR10 self.data does not have attributes test_labels, test_data

When running CIFAR10_ConvClassifier_on_RotNet_NIN4blocks_Conv2_feats_K1000.py, there are some errors in dataloader.py:

    if self.dataset_name == 'cifar10':
        labels = self.data.test_labels if (self.split == 'test') else self.data.train_labels
        data = self.data.test_data if (self.split == 'test') else self.data.train_data

I found that self.data does not have the attributes test_labels, test_data, train_labels, or train_data. How can I solve this? Thanks.
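
A hedged guess at the cause: recent torchvision versions expose the CIFAR10 data through merged data and targets attributes for both splits, removing the old split-specific ones. A version-tolerant replacement for the snippet above, keeping the issue's own variable names:

    if self.dataset_name == 'cifar10':
        if hasattr(self.data, 'targets'):  # newer torchvision: merged attributes
            labels, data = self.data.targets, self.data.data
        elif self.split == 'test':         # older torchvision: per-split attributes
            labels, data = self.data.test_labels, self.data.test_data
        else:
            labels, data = self.data.train_labels, self.data.train_data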
