ptran1203 / pytorch-animeGAN
PyTorch implementation of AnimeGAN for fast photo animation
Since the v2 training code is unavailable from https://github.com/bryandlee/animegan2-pytorch, do you have any suggestions for improving your excellent PyTorch v1 implementation here, for those who want the faster, smaller v2 model? The v1 model is ~15 MB, while the v2 pretrained models are ~8 MB. Are there tweaks I can make to your model to approach the v2 model?
Please tell me how to create the smoothed images from the source images, and how the source images should be resized?
Is there any python file to use?
The code runs well and the results are very good. I don't understand why so few people have starred this project; I hope more and more people will pay attention to it.
Would you mind sharing your training dataset? The dataset I downloaded with
wget -O anime-gan.zip https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.0/dataset_v1.zip
cannot be decompressed:
unzip: cannot find zipfile directory in one of anime-gan.zip or anime-gan.zip.zip, and cannot find anime-gan.zip.ZIP, period.
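A truncated download is the usual cause of this unzip error (GitHub release URLs redirect, and a partial transfer leaves a file without a central directory). A minimal sketch for checking archive integrity before unzipping, assuming the file was saved as anime-gan.zip:

```python
import zipfile

def check_zip(path):
    """Return True if `path` is a complete, uncorrupted zip archive."""
    if not zipfile.is_zipfile(path):
        return False  # not a zip at all (likely a truncated download)
    with zipfile.ZipFile(path) as zf:
        # testzip() returns the first corrupt member's name, or None if clean
        return zf.testzip() is None

# check_zip("anime-gan.zip") returning False suggests re-downloading,
# e.g. with wget -c to resume or curl -L to follow redirects.
```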
Thank you for your amazing work! If your team could upload the full model (weights plus architecture) so that it can be loaded via torch.hub.load, the model could then be converted to a mobile version. Or do you already have a file with the pretrained weights plus the architecture? I can see that your .pth contains only an OrderedDict of weights.
I have run 100 epochs on the Shinkai dataset, and the adversarial loss keeps growing larger. The results do not really look like the Shinkai style. Did I do something wrong?
After reading the original paper and analyzing your code, I found that the generator network in your code is not the same as described in the paper, and your learning rates for G and D do not match the paper's either, nor do the loss weights. Why? Is this based on your practical experience? Thank you!
Init models...
Compute mean (R, G, B) from 1792 images
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1792/1792 [00:04<00:00, 373.49it/s]
Mean(B, G, R) of Hayao are [-4.4346958 -8.66591597 13.10061177]
Dataset: real 6656 style 1792, smooth 1792
G weight loaded
Could not load checkpoint, train from scratch [Errno 2] No such file or directory: '{ckp_dir}/discriminator_Hayao.pth'
Epoch 13/100
0%| | 0/1110 [00:06<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 247, in
main(args)
File "train.py", line 191, in main
fake_img = G(img).detach()
File "/root/miniconda3/envs/AGAN/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/autodl-tmp/pytorch-animeGAN-master/modeling/anime_gan.py", line 55, in forward
out = self.encode_blocks(x)
File "/root/miniconda3/envs/AGAN/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/miniconda3/envs/AGAN/lib/python3.6/site-packages/torch/nn/modules/container.py", line 119, in forward
input = module(input)
File "/root/miniconda3/envs/AGAN/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/autodl-tmp/pytorch-animeGAN-master/modeling/conv_blocks.py", line 73, in forward
out = self.ins_norm(out)
File "/root/miniconda3/envs/AGAN/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/miniconda3/envs/AGAN/lib/python3.6/site-packages/torch/nn/modules/instancenorm.py", line 59, in forward
self.training or not self.track_running_stats, self.momentum, self.eps)
File "/root/miniconda3/envs/AGAN/lib/python3.6/site-packages/torch/nn/functional.py", line 2183, in instance_norm
input, weight, bias, running_mean, running_var, use_input_stats, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: CUDA error: no kernel image is available for execution on the device
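This "no kernel image is available" error usually means the installed PyTorch wheel was not compiled for the GPU's compute capability (e.g. a newer sm_86 card with a wheel built only up to sm_75). A quick diagnostic sketch, assuming a CUDA-enabled torch install:

```python
import torch

# Compare the GPU's compute capability against the architectures the
# installed torch wheel was actually built for.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU compute capability: sm_{major}{minor}")
    print(f"Wheel built for: {torch.cuda.get_arch_list()}")
    # If sm_<major><minor> is missing from the list, install a torch
    # build that matches your GPU generation / CUDA version.
```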
Hi!
I'm facing an issue when I resume training: even after 80+ epochs, the transformation has almost no effect on the images with the trained weights. I tried Hayao and other custom datasets; images at 15 epochs (without resume) and at 30, 40, 60, and 80 epochs with the Hayao dataset show barely visible changes, if any. I'm using a copy of the Google Colab notebook.
Is there anything I'm doing wrong with the process? Here are the parameters used for training:
!python3 train.py --dataset 'Hayao' \
  --batch 6 \
  --debug-samples 0 \
  --init-epochs 10 \
  --epochs 100 \
  --checkpoint-dir {ckp_dir} \
  --save-image-dir {save_img_dir} \
  --save-interval 1 \
  --gan-loss lsgan \
  --init-lr 0.0001 \
  --lr-g 0.00002 \
  --lr-d 0.00004 \
  --wadvd 10.0 \
  --wadvg 10.0 \
  --wcon 1.5 \
  --wgra 3.0 \
  --wcol 70.0 \
  --resume GD \
  --use_sn
(--epochs was increased at each training resume.)
Init models...
Compute mean (R, G, B) from 1793 images
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▊| 1791/1793 [00:06<00:00, 281.65it/s]
Traceback (most recent call last):
File "train.py", line 247, in
main(args)
File "train.py", line 124, in main
AnimeDataSet(args),
File "/root/autodl-tmp/pytorch-animeGAN-master/dataset.py", line 32, in __init__
self.mean = compute_data_mean(os.path.join(anime_dir, 'style'))
File "/root/autodl-tmp/pytorch-animeGAN-master/utils/image_processing.py", line 114, in compute_data_mean
total += image.mean(axis=(0, 1))
AttributeError: 'NoneType' object has no attribute 'mean'
Hello. Thank you for the good documentation.
May I ask what this repository's license is? Does it follow AnimeGAN's?
pytorch-animeGAN/modeling/losses.py
Line 111 in a1642db
Should the G_loss label be torch.ones_like(pred) or torch.zeros_like(pred)?
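For an LSGAN-style adversarial loss, the generator's target is the "real" label, so torch.ones_like(pred) is the conventional choice: the generator wants the discriminator to score its fakes as real. A minimal sketch (not the repo's exact code):

```python
import torch
import torch.nn.functional as F

def lsgan_g_loss(pred_fake: torch.Tensor) -> torch.Tensor:
    # Generator pushes D(fake) toward the "real" label -> ones_like
    return F.mse_loss(pred_fake, torch.ones_like(pred_fake))

def lsgan_d_loss(pred_real: torch.Tensor, pred_fake: torch.Tensor) -> torch.Tensor:
    # Discriminator uses ones for real samples and zeros for fakes
    return 0.5 * (F.mse_loss(pred_real, torch.ones_like(pred_real))
                  + F.mse_loss(pred_fake, torch.zeros_like(pred_fake)))
```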
Init models...
Compute mean (R, G, B) from 1792 images
100%|
Mean(B, G, R) of Hayao are [-4.4346958 -8.66591597 13.10061177]
Dataset: real 9 style 1792, smooth 1793
Epoch 0/100
0%|
Traceback (most recent call last):
File "train.py", line 247, in
main(args)
File "train.py", line 162, in main
for img, *_ in bar:
File "/root/miniconda3/envs/AGAN/lib/python3.8/site-packages/tqdm/std.py", line 919, in __iter__
for obj in iterable:
File "/root/miniconda3/envs/AGAN/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/root/miniconda3/envs/AGAN/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
return self._process_data(data)
File "/root/miniconda3/envs/AGAN/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
File "/root/miniconda3/envs/AGAN/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/root/miniconda3/envs/AGAN/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/root/miniconda3/envs/AGAN/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/root/miniconda3/envs/AGAN/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/root/autodl-tmp/pytorch-animeGAN-master/dataset.py", line 67, in __getitem__
image = self.load_photo(index)
File "/root/autodl-tmp/pytorch-animeGAN-master/dataset.py", line 79, in load_photo
image = cv2.imread(fpath)[:, :, ::-1]
TypeError: 'NoneType' object is not subscriptable
Training triggers a CUDA out-of-memory error; is there any workaround?
murugan86@murugan86-IdeaPad-Gaming3-15ARH05D:~/anime/AnimeGenDir/animagen-pytorch-mur/pytorch-animeGAN$ python3 train.py --dataset Kimetsu --batch 6 --init-epochs 4 --checkpoint-dir {ckp_dir} --save-image-dir {save_img_dir} --save-interval 1 --gan-loss lsgan --init-lr 0.0001 --lr-g 0.00002 --lr-d 0.00004 --wadvd 10.0 --wadvg 10.0 --wcon 1.5 --wgra 3.0 --wcol 30.0
import os

def video():
    # Shell out to the repo's video inference script; the checkpoint and
    # source/destination paths here are local examples.
    os.system("python inference_video.py "
              "--checkpoint generator_hayao.pth "
              "--src C:/Zero/Python/AnimeGAN/test.mp4 "
              "--dest res.mp4 "
              "--batch 2")
When I run this, I get the error above. I have printed the value of img and it looks correct; I think something is wrong with the writer.
Why don't you use a gradient penalty? Could it make training faster?
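For reference, a WGAN-GP-style gradient penalty can be sketched as below. Note that it is not part of this repo, and it adds an extra backward pass per discriminator step, so it generally makes each iteration slower rather than faster, though it can stabilize training:

```python
import torch

def gradient_penalty(D, real, fake, device="cpu"):
    """Penalize ||grad D(x_hat)||_2 deviating from 1 on random
    interpolates between real and fake batches (WGAN-GP style)."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=device)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_out = D(x_hat)
    grads = torch.autograd.grad(
        outputs=d_out, inputs=x_hat,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True, retain_graph=True,
    )[0]
    grads = grads.view(grads.size(0), -1)
    # Mean squared distance of the per-sample gradient norm from 1
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```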
I would like to know whether you have released the pretrained model for inference?