deepvoltaire / autoaugment
Unofficial implementation of the ImageNet, CIFAR-10, and SVHN augmentation policies learned by AutoAugment, using Pillow.
License: MIT License
A link to your TDS blog post can be added to the readme.
This repo only implements the policies found by the paper, not the RL controller that searches for them.
Hi,
Thanks a lot for kindly publishing the code base!
Line 177 in d708e21
I noticed that, in this line, the magnitude of the translate operation is divided by 331. Could you please explain why it is implemented this way?
By the way, do the magnitudes and their explanations in the paper on learning augmentations for object detection have the same meaning as in this paper? Could I simply modify the combination of policies in this codebase to implement the augmentation method proposed in that paper?
I am looking for your generous guidance :)
Magnitudes are in the range [1, 10], but based on https://pillow.readthedocs.io/en/3.0.x/reference/ImageOps.html#PIL.ImageOps.posterize:
solarize: range is [0, 256]
posterize: range is [1, 8]
Shouldn't the magnitudes be normalized in https://github.com/DeepVoltaire/AutoAugment/blob/master/autoaugment.py#L210 ? For example, line 211 would be
"solarize": lambda img, magnitude: ImageOps.solarize(img, 256.0 / magnitude)
AttributeError: Can't pickle local object 'SubPolicy.__init__.<locals>.<lambda>' when used in PyTorch distributed training
When I used AutoAugment with my custom NN architecture on the CIFAR-10 dataset, I got higher test accuracy and lower training accuracy. I tried many things, and it only happens when I enable AutoAugment: the model underfits. Can you please explain why this happens?
Just looking for answers...
TypeError: transform() got an unexpected keyword argument 'fillcolor'
Hello, how can I solve this problem?
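For what it's worth, Image.transform only accepts the fillcolor keyword from Pillow 5.0 on, so this error usually means an older Pillow is installed. One hedged workaround (a sketch, not code from this repo; the version string is taken as a parameter so the helper stays self-contained) is to build the keyword dict conditionally:

```python
# Hypothetical workaround: pass fillcolor to Image.transform only when the
# installed Pillow is new enough (5.0+) to accept it.
def transform_kwargs(pillow_version, fill=(128, 128, 128)):
    major = int(pillow_version.split(".")[0])
    return {"fillcolor": fill} if major >= 5 else {}

# usage would be something like:
#   img.transform(size, method, data, **transform_kwargs(PIL.__version__))
```

Upgrading Pillow (`pip install -U pillow`) is the simpler fix when possible.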
Line 103 in 00e838e
Sub-policy 24: (Equalize, 0.8, 8), (Equalize, 0.6, 3)
Hi,
Would it be possible to release the pretrained ImageNet AutoAugment models? In particular, the ResNet-50 one would be very helpful. I am looking to use it for further downstream work.
thanks
Why is the test accuracy very low when I train a model directly with the CIFAR-10 policy given in the paper?
Hello, I want to know how to use autoaugment.py. Thanks!
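For anyone else looking: the policies transform PIL images, so in a torchvision pipeline they must come before ToTensor(). A minimal sketch of that ordering, mirrored here with a tiny stand-in Compose so it runs without torchvision; in real code you would use torchvision.transforms.Compose with ImageNetPolicy() from autoaugment.py:

```python
# Stand-in for torchvision.transforms.Compose, to make the ordering explicit.
class Compose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, img):
        for t in self.transforms:
            img = t(img)
        return img

# Stand-in transforms that just record the order in which they ran.
pipeline = Compose([
    lambda img: img + ["crop"],    # e.g. transforms.RandomResizedCrop(224)
    lambda img: img + ["policy"],  # e.g. ImageNetPolicy() from autoaugment.py
    lambda img: img + ["tensor"],  # e.g. transforms.ToTensor()
])
```

The key point is the middle slot: the policy sees a PIL image, never a tensor.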
Hi, @DeepVoltaire
I tried your AutoAugment policies in my image-classification training (fine-tuning an ImageNet-pretrained model).
But the accuracy is worse than with the default augmentation.
What am I doing wrong?
Is AutoAugment not suitable for transfer learning or fine-tuning?
Thanks at any rate.
Line 95 in d708e21
I want to apply the augmentation on tensors instead of augmenting first and then converting to a tensor. Can this be done?
Downgrading to Pillow 6.1 fixed the problem.
You may want to note in the readme that 7.0.0 doesn't work.
There are one or more calendar dates in the readme. They're written in a locale-specific style, and this can lead to needless confusion. For example, July 11 can be confused with November 7. Please rewrite the dates in a neutral format, e.g. as in this comment.
The readme says the following for RandomCrop with cifar10:
[transforms.RandomCrop(32, padding=4, fill=128), # gray fill value is important bc of the color operations
However, torchvision.transforms.RandomCrop(size, padding=0, pad_if_needed=False) does not support fill as a valid parameter.
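Until your torchvision version supports fill on RandomCrop, one workaround is to pad with the gray value first and then crop without padding (in torchvision that would be transforms.Pad(4, fill=128) followed by transforms.RandomCrop(32)). A pure-Python sketch of the same idea, assuming a 2-D grayscale image as a list of lists, so the equivalence is easy to see:

```python
import random

def pad_then_crop(img, pad=4, size=32, fill=128):
    """Stand-in for Pad(pad, fill=fill) + RandomCrop(size): pad every side
    with the gray fill value, then take a random size x size crop."""
    h, w = len(img), len(img[0])
    padded = [[fill] * (w + 2 * pad) for _ in range(h + 2 * pad)]
    for y in range(h):
        for x in range(w):
            padded[y + pad][x + pad] = img[y][x]
    top = random.randint(0, h + 2 * pad - size)
    left = random.randint(0, w + 2 * pad - size)
    return [row[left:left + size] for row in padded[top:top + size]]
```

Newer torchvision releases do accept fill (and padding_mode) on RandomCrop directly, so upgrading also resolves this.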
Hi, has anyone reproduced the reported performance on ImageNet with the provided AutoAugment?
Here are my results with AutoAugment using the official implementation; compared to the official results, no impressive improvements were obtained.
Results of ResNet50,101,152 in terms of top1/5 accuracy:
official without autoaugment: 76.15/92.87, 77.37/93.56, 78.31/94.06.
mine with autoaugment: 75.33/92.45, 77.57/93.78, 78.51/94.07.
Update: all of the above results were obtained with 90 training epochs; a longer schedule, such as the 270 epochs used in the paper, may help reach the reported numbers.
I noticed that the autocontrast operation doesn't pass the cutoff parameter. Is this normal? Without this parameter, it can only make a small change to the picture.
Line 218 in 1002a40
When I test the demo AutoAugment_Exploration.ipynb, I always get the same error: "TypeError: transform() got an unexpected keyword argument 'fillcolor'". Does anyone know why this happens?
How do we use the policy given in the paper to train our model? What should the training process look like? The test accuracy of the model I trained following my own understanding is very low. Please advise.
Line 218 in 17d7182
Hi,
I notice that for autocontrast the magnitude is not used, and that ImageOps.autocontrast has an argument named cutoff, which controls the degree of contrast. If we set cutoff=0, the original image would be returned by this function. Does this mean that ImageOps.autocontrast is not used at all in both the AutoML search phase and the verification phase (experiments on the ImageNet dataset)?
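For what it's worth, cutoff=0 is not quite a no-op: autocontrast still stretches the observed min/max to the full [0, 255] range, so the input only comes back unchanged when it already spans that range. A simplified pure-Python sketch of the per-channel behavior (ignoring PIL's histogram edge cases, so an approximation rather than PIL's exact algorithm):

```python
def autocontrast(pixels, cutoff=0):
    """Clip off `cutoff` percent of the darkest/lightest pixels, then
    stretch the remaining range linearly to [0, 255]."""
    ordered = sorted(pixels)
    n = len(ordered)
    k = int(n * cutoff / 100)
    lo, hi = ordered[k], ordered[n - 1 - k]
    if hi <= lo:
        return list(pixels)
    scale = 255.0 / (hi - lo)
    return [min(255, max(0, round((p - lo) * scale))) for p in pixels]
```

So with cutoff=0 the operation still remaps a low-contrast image; it only returns the original when the pixel values already cover [0, 255].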
Did you run full training with the transformations?
In autoaugment.py, how are the parameters of the best sub-policies determined? Do these parameters also apply to my own data?
Hi, I meet a problem when I run AutoAugment and I can't find any solution by google.
My environment is different from the tested one. Well, I will feel lucky if it is not a complex compatibility issue.
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/site-packages/torch/utils/data/dataloader.py",
line 469, in init
w.start()
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32,
in init
super().init(process_obj)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/popen_fork.py", line 20, in init
self._launch(process_obj)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47,
in _launch
reduction.dump(process_obj, fp)
File "/home/dc2-user/anaconda3/envs/kaidi_env/lib/python3.7/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object "SubPolicy.init.<locals>.<lambda>"
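The usual fix for this class of error: lambdas defined inside SubPolicy.__init__ can't be pickled when DataLoader workers are spawned, so they need to become module-level callables. A hedged sketch with a hypothetical operation class (the name and return value are illustrative, not from the repo; real code would call PIL.ImageOps.solarize inside __call__):

```python
import pickle

# Module-level callable class: picklable, unlike a lambda closed over in
# SubPolicy.__init__. One such class per operation would replace the lambdas.
class SolarizeOp:
    def __init__(self, threshold):
        self.threshold = threshold

    def __call__(self, img):
        # Stand-in for PIL.ImageOps.solarize(img, self.threshold)
        return ("solarized", img, self.threshold)

# Round-trips through pickle, so multiprocessing workers can receive it.
op = pickle.loads(pickle.dumps(SolarizeOp(128)))
```

The same pattern (or plain module-level functions plus functools.partial) fixes both the distributed-training and num_workers>0 variants of this error.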
This is a great tool, but I'm wondering how to install it to run with my PyTorch models?
Hi,
Thank you for the wonderful work!
Just want to share a tiny concern. It seems to me that here the magnitude should be magnitude * random.choice([-1, 1]), otherwise the rotation will always be in one direction.
Line 208 in 17d7182
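The reporter's suggested fix can be sketched as below (a sketch of the proposal, not code from the repo); the same random sign would apply to any signed operation such as shear or translate:

```python
import random

# Draw a random sign so the operation can go in either direction,
# e.g. rotate by +magnitude or -magnitude with equal probability.
def signed_magnitude(magnitude):
    return magnitude * random.choice([-1, 1])
```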
How can I train my own AutoAugment for my custom dataset ?
Hi,
I tried to train EfficientNet-B0 with the configuration from 'https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/main.py', but only obtained a top-1 accuracy of 75%. The AutoAugment paper reports a top-1 accuracy of 77.3%. The model script I used is from 'https://github.com/kakaobrain/fast-autoaugment/tree/master/FastAutoAugment/networks/efficientnet_pytorch'. Has anyone tried to train EfficientNet? I don't know what I did wrong.
Thanks