
junlinhan / yoco


Code for You Only Cut Once: Boosting Data Augmentation with a Single Cut, ICML 2022.

Python 100.00%
cifar10 cifar100 classification computer-vision contrastive-learning data-augmentation data-augmentation-strategies data-augmentations icml imagenet imagenet-classifier instance-segmentation object-detection pytorch rain-removal super-resolution

yoco's People

Contributors

junlinhan


yoco's Issues

Missing Test top-1 error rate

Dear @JunlinHan

Thank you for your work. I am reproducing the results given in the paper and have trained a model with your CIFAR-10 configuration, but I cannot find an error-rate file. How can we evaluate the top-1 error rate?
Like this:

print('Test\t  Prec@1: {top1.avg:.3f} (Err: {error:.3f} )\n'
          .format(top1=top1, error=100 - top1.avg))
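
For reference, a minimal sketch of computing the top-1 error rate on the CIFAR-10 test set (assuming a trained PyTorch classifier model and a test_loader; the names here are illustrative and not taken from this repository):

import torch

@torch.no_grad()
def top1_error(model, test_loader, device="cuda"):
    # Count correct top-1 predictions over the whole test set.
    model.eval()
    correct, total = 0, 0
    for images, targets in test_loader:
        images, targets = images.to(device), targets.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == targets).sum().item()
        total += targets.size(0)
    top1_acc = 100.0 * correct / total
    return 100.0 - top1_acc  # top-1 error rate in percent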

1% label and 10% label in Table 4 (Results of contrastive learning)

Dear @JunlinHan

I hope you are doing well. You mention ImageNet classification results under the linear protocol and with 1% and 10% labels. I have found main_lincls.py for the linear-protocol experiment and main_moco.py for the label experiment, but I cannot find where the 1% and 10% label splits are specified in your code. Could you please guide me on this? A prompt response would greatly assist my ongoing work. (A rough sketch of how such subsets are commonly built follows this post.)

Regards,
Khawar
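
For context (not from this repository): semi-supervised evaluations of this kind typically fine-tune on fixed 1% / 10% subsets of the ImageNet labels. A rough, illustrative sketch of building such a subset by sampling a fraction of the images of each class (all names here are hypothetical):

import random
from collections import defaultdict
from torch.utils.data import Subset
from torchvision import datasets

def labeled_subset(imagenet_root, fraction=0.01, seed=0):
    # Keep `fraction` of the images of each class (e.g. 0.01 for a 1% label split).
    dataset = datasets.ImageFolder(imagenet_root)
    per_class = defaultdict(list)
    for idx, (_, label) in enumerate(dataset.samples):
        per_class[label].append(idx)
    rng = random.Random(seed)
    keep = []
    for indices in per_class.values():
        rng.shuffle(indices)
        keep.extend(indices[: max(1, int(len(indices) * fraction))])
    return Subset(dataset, keep)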

YOCO generates a single image when we have a single image

Hi @JunlinHan

I have a dataset containing a single image, and I am simply applying your YOCO technique to visualize the image it generates. I only get a single output; sometimes the output is the same as the input and sometimes it is flipped + cut. Is that correct?

Code

import torch
from torchvision.utils import save_image
import torchvision.transforms as transforms
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader

training_data = datasets.ImageFolder(root="/media/cvpr/CM_22/dataset/train", transform=ToTensor())
train_loader = DataLoader(training_data, batch_size=64, shuffle=True)


def YOCO(images, aug, h, w):
    # With probability 0.5, cut the batch in half along the width; otherwise along
    # the height. Each half is augmented independently and the halves are re-joined.
    if torch.rand(1) > 0.5:
        images = torch.cat((aug(images[:, :, :, 0:int(w / 2)]),
                            aug(images[:, :, :, int(w / 2):w])), dim=3)
    else:
        images = torch.cat((aug(images[:, :, 0:int(h / 2), :]),
                            aug(images[:, :, int(h / 2):h, :])), dim=2)
    return images


for i, (images, target) in enumerate(train_loader):
    aug = torch.nn.Sequential(
        transforms.RandomHorizontalFlip(), )
    _, _, h, w = images.shape
    # perform augmentations with YOCO
    images = YOCO(images, aug, h, w)
    save_image(images, 'img' + str(i) + '.png')

Input image: aeroplane_+_tiger_google_0017

Output image: img0

Same data augmentation but different instance

Very interesting work, simple and effective!

A small question: why is the same type of data augmentation applied to both cuts? Looking at the ablations, they seem to cover the cut position and the number of cuts.

Would it be better to use different augmentations for different cuts?

For example: a color transformation on one part and rotation/flipping on the other.
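
For illustration only (this is not the method proposed in the paper): applying a different augmentation to each cut would be a small change to the YOCO function shown in the issue above, for example:

import torch

def yoco_two_augs(images, aug_a, aug_b, h, w):
    # Variant of YOCO for the question above: each half gets a *different* transform.
    if torch.rand(1) > 0.5:
        # cut along the width
        left, right = images[:, :, :, :w // 2], images[:, :, :, w // 2:]
        return torch.cat((aug_a(left), aug_b(right)), dim=3)
    else:
        # cut along the height
        top, bottom = images[:, :, :h // 2, :], images[:, :, h // 2:, :]
        return torch.cat((aug_a(top), aug_b(bottom)), dim=2)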

Did you split the training dataset into train and val?

Hi, thanks for your inspiring work! I have a question.
Did you split the training dataset into train and val? From the code you shared, it seems that the test dataset is used directly for validation, and the model with the best performance on it is picked:

val_loader = torch.utils.data.DataLoader(
    datasets.CIFAR10('../data', train=False, transform=transform_test),
    batch_size=args.batch_size, shuffle=True,
    num_workers=args.workers, pin_memory=True)

I am not sure whether this is OK, and I am confused because some researchers make this split while others do not.
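
For reference, one common way to hold out a validation split from the CIFAR-10 training set is torch.utils.data.random_split (a sketch, not the procedure used in this repository):

import torch
from torchvision import datasets, transforms

train_full = datasets.CIFAR10('../data', train=True, download=True,
                              transform=transforms.ToTensor())
# Hold out 5,000 of the 50,000 training images for validation.
train_set, val_set = torch.utils.data.random_split(
    train_full, [45000, 5000],
    generator=torch.Generator().manual_seed(42))
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=128, shuffle=False)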

Calculating accuracy on CIFAR-10 over multiple runs or a single run

Dear @JunlinHan,

Did you run all experiments multiple times and then average the results reported in the table? For example, training ResNet-18 five times, averaging the accuracy, and putting that number into the table. I am asking because every time I train ResNet-18 I get a different accuracy and error rate (roughly 1% fluctuation).

Regards,
Khawar
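
For reference, reporting the mean and standard deviation over several runs is one common way to handle this roughly 1% run-to-run fluctuation (a sketch; the accuracy values below are placeholders, not results from the paper):

import statistics

# Top-1 accuracies from, e.g., five independent ResNet-18 runs (placeholder values).
accuracies = [95.1, 95.3, 94.9, 95.2, 95.0]
mean_acc = statistics.mean(accuracies)
std_acc = statistics.stdev(accuracies)
print(f"top-1 accuracy: {mean_acc:.2f} ± {std_acc:.2f} (error: {100 - mean_acc:.2f})")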
