
xiaowei-hu / cyclegan-tensorflow

714 stars, 31 watchers, 299 forks, 12.32 MB

Tensorflow implementation for learning an image-to-image translation without input-output pairs. https://arxiv.org/pdf/1703.10593.pdf

Languages: Shell 2.69%, Python 97.31%
Topics: cyclegan, tensorflow, image-translation

cyclegan-tensorflow's People

Contributors

cyberj0g, mathandy


cyclegan-tensorflow's Issues

'module' object has no attribute 'imread'

discriminatorA/d_h3_conv/Conv/weights:0
discriminatorA/d_bn3/scale:0
discriminatorA/d_bn3/offset:0
discriminatorA/d_h3_pred/Conv/weights:0
[*] Reading checkpoint...
[!] Load failed...
Processing image: ./datasets/horse2zebra/testA/n02381460_530.jpg
Traceback (most recent call last):
File "main.py", line 55, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "main.py", line 52, in main
else model.test(args)
File "/root/CycleGAN-tensorflow/model.py", line 254, in test
sample_image = [load_test_data(sample_file)]
File "/root/CycleGAN-tensorflow/utils.py", line 41, in load_test_data
img = imread(image_path)
File "/root/CycleGAN-tensorflow/utils.py", line 93, in imread
return scipy.misc.imread(path, mode='RGB').astype(np.float)
AttributeError: 'module' object has no attribute 'imread'
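
This error means scipy.misc.imread is no longer available; it was deprecated in SciPy 1.0 and removed in SciPy 1.2. A minimal sketch of a drop-in replacement for the imread helper in utils.py, assuming imageio is installed (pip install imageio); pinning SciPy below 1.2 together with Pillow is another workaround:

    # utils.py -- replacement for the removed scipy.misc.imread (sketch).
    import imageio
    import numpy as np

    def imread(path, is_grayscale=False):
        if is_grayscale:
            return np.asarray(imageio.imread(path, as_gray=True), dtype=np.float64)
        return np.asarray(imageio.imread(path, pilmode='RGB'), dtype=np.float64)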

typo in model.py

line 175 : % (epoch, idx, batch_idxs, time.time() - start_time))) should be % (epoch+1, idx+1, batch_idxs, time.time() - start_time)))

when training G

Hi, when I was reading model.py I found that the Generator is trained without freezing the Discriminator.
Does that mean they are trained together?
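
As far as I can tell, the discriminator does not need to be frozen explicitly here: in the usual TF1 pattern (which this code appears to follow; the exact attribute names below are assumptions), each optimizer is built with a var_list restricted to its own network's variables, so the generator step never touches the discriminator weights.

    # Minimal sketch of the var_list pattern.
    t_vars = tf.trainable_variables()
    self.d_vars = [v for v in t_vars if 'discriminator' in v.name]
    self.g_vars = [v for v in t_vars if 'generator' in v.name]
    self.d_optim = tf.train.AdamOptimizer(args.lr, beta1=args.beta1) \
        .minimize(self.d_loss, var_list=self.d_vars)
    self.g_optim = tf.train.AdamOptimizer(args.lr, beta1=args.beta1) \
        .minimize(self.g_loss, var_list=self.g_vars)
    # Running g_optim only updates g_vars; the discriminator weights stay fixed
    # during that step even though D is never explicitly "frozen".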

What is the purpose of the U-Net generator?

Hello,
I am currently implementing CycleGAN with the ResNet generator for a medical application, and it yields very good results!
However, what is the purpose of the U-Net generator in the code? Does it work for image-translation tasks, and if so, how do I get it to work? I tried it, and it does not work at all for my case. As far as I know, the U-Net is mainly used for segmentation tasks.

Example code for deploying on Google Cloud

Hi,

Can you please provide some sample code for deploying on Google Cloud? I can't seem to make it work.

Thank you!

===============================
Log:
INFO 2019-01-14 22:37:36 -0500 master-replica-0 Running task with arguments: --cluster={"master": ["127.0.0.1:2222"]} --task={"type": "master", "index": 0} --job={ "scale_tier": "BASIC_GPU", "package_uris": ["gs://cyclegan/cyclegan_2019011422235/480d21ee7d352f60e443e2fbad629f32b1aaee5de205d8fd1fc762e0aed776fa/CycleGAN-tensorflow-0.0.0.tar.gz"], "python_module": "main.py", "args": ["--dataset_dir\u003dgs://cyclegan/Resized2/", "--epoch\u003d10", "--batch_size\u003d", "4", "--phase\u003dtrain", "--checkpoint_dir\u003dgs://cyclegan/logs3/", "--test_dir\u003dgs://karmacyclegan/logs3/"], "region": "asia-east1", "runtime_version": "1.2", "run_on_raw_vm": true}
INFO 2019-01-14 22:37:43 -0500 master-replica-0 Running module main.py.
INFO 2019-01-14 22:37:43 -0500 master-replica-0 Downloading the package: gs://cyclegan/cyclegan_2019011422235/480d21ee7d352f60e443e2fbad629f32b1aaee5de205d8fd1fc762e0aed776fa/CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:43 -0500 master-replica-0 Running command: gsutil -q cp gs://cyclegan/cyclegan_2019011422235/480d21ee7d352f60e443e2fbad629f32b1aaee5de205d8fd1fc762e0aed776fa/CycleGAN-tensorflow-0.0.0.tar.gz CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:45 -0500 master-replica-0 Installing the package: gs://cyclegan/cyclegan_2019011422235/480d21ee7d352f60e443e2fbad629f32b1aaee5de205d8fd1fc762e0aed776fa/CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:45 -0500 master-replica-0 Running command: pip install --user --upgrade --force-reinstall --no-deps CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:46 -0500 master-replica-0 Processing ./CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:46 -0500 master-replica-0 Building wheels for collected packages: CycleGAN-tensorflow
INFO 2019-01-14 22:37:46 -0500 master-replica-0 Running setup.py bdist_wheel for CycleGAN-tensorflow: started
INFO 2019-01-14 22:37:47 -0500 master-replica-0 creating '/tmp/pip-wheel-wfXg5U/CycleGAN_tensorflow-0.0.0-cp27-none-any.whl' and adding '.' to it
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/ops.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/__init__.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/model.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/module.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/utils.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/main.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/DESCRIPTION.rst'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/metadata.json'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/top_level.txt'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/WHEEL'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/METADATA'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/RECORD'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Running setup.py bdist_wheel for CycleGAN-tensorflow: finished with status 'done'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Stored in directory: /root/.cache/pip/wheels/8e/c5/0e/065505a9bea2174b701147091945c0aa410ea831750a81b3aa
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Successfully built CycleGAN-tensorflow
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Installing collected packages: CycleGAN-tensorflow
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Successfully installed CycleGAN-tensorflow-0.0.0
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Running command: pip install --user CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Processing ./CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Building wheels for collected packages: CycleGAN-tensorflow
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Running setup.py bdist_wheel for CycleGAN-tensorflow: started
INFO 2019-01-14 22:37:48 -0500 master-replica-0 creating '/tmp/pip-wheel-4XE7L2/CycleGAN_tensorflow-0.0.0-cp27-none-any.whl' and adding '.' to it
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/ops.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/__init__.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/model.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/module.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/utils.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/main.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/DESCRIPTION.rst'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/metadata.json'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/top_level.txt'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/WHEEL'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/METADATA'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/RECORD'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Running setup.py bdist_wheel for CycleGAN-tensorflow: finished with status 'done'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Stored in directory: /root/.cache/pip/wheels/8e/c5/0e/065505a9bea2174b701147091945c0aa410ea831750a81b3aa
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Successfully built CycleGAN-tensorflow
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Installing collected packages: CycleGAN-tensorflow
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Found existing installation: CycleGAN-tensorflow 0.0.0
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Uninstalling CycleGAN-tensorflow-0.0.0:
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Successfully uninstalled CycleGAN-tensorflow-0.0.0
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Successfully installed CycleGAN-tensorflow-0.0.0
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Running command: python -m main.py --dataset_dir=gs://cyclegan/Resized2/ --epoch=10 --batch_size= 4 --phase=train --checkpoint_dir=gs://cyclegan/logs3/ --test_dir=gs://cyclegan/logs3/
ERROR 2019-01-14 22:37:49 -0500 master-replica-0 /usr/bin/python: No module named main
ERROR 2019-01-14 22:37:49 -0500 master-replica-0 Command '['python', '-m', u'main.py', u'--dataset_dir=gs://cyclegan/Resized2/', u'--epoch=10', u'--batch_size=', u'4', u'--phase=train', u'--checkpoint_dir=gs://cyclegan/logs3/', u'--test_dir=gs://cyclegan/logs3/']' returned non-zero exit status 1
INFO 2019-01-14 22:37:49 -0500 master-replica-0 Module completed; cleaning up.
INFO 2019-01-14 22:37:49 -0500 master-replica-0 Clean up finished.
ERROR 2019-01-14 22:37:59 -0500 service The replica master 0 exited with a non-zero status of 1.
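
Two problems are visible in the log itself. The job runs python -m main.py, but -m expects an importable module name (no .py suffix, and the package directory must be a valid Python identifier, which the hyphenated CycleGAN-tensorflow is not), so the module is never found. In addition, --batch_size= 4 is split into two arguments, leaving the flag with an empty value. A hedged sketch of the corrected pieces (values copied from the log; the trainer.main module name is an assumption about how the code gets repackaged):

    # Corrected training-job "args" (sketch): each flag and its value must be a
    # single element; '--batch_size=' followed by '4' leaves the flag empty.
    job_args = [
        '--dataset_dir=gs://cyclegan/Resized2/',
        '--epoch=10',
        '--batch_size=4',
        '--phase=train',
        '--checkpoint_dir=gs://cyclegan/logs3/',
        '--test_dir=gs://cyclegan/logs3/',
    ]
    # "python_module" should name an importable module, e.g. trainer.main after
    # moving the sources into a trainer/ package with an __init__.py.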

Unable to run this on Windows 10?

I tried to run this on Windows 10, but I get the following in the console. I was wondering if anyone knows how to fix it:

C:\Users\Reece\Reverse>python main.py --dataset_dir=horse2zebra
C:\Users\Reece\Miniconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
2018-06-01 06:36:35.521182: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 960 major: 5 minor: 2 memoryClockRate(GHz): 1.1775
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 1.64GiB
2018-06-01 06:36:35.553624: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-01 06:36:38.678984: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-01 06:36:38.682738: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0
2018-06-01 06:36:38.685304: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N
2018-06-01 06:36:38.688370: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1404 MB memory) -> physical GPU (device: 0, name: GeForce GTX 960, pci bus id: 0000:01:00.0, compute capability: 5.2)
generatorA2B/g_e1_c/Conv/weights:0
generatorA2B/g_e1_bn/scale:0
generatorA2B/g_e1_bn/offset:0
generatorA2B/g_e2_c/Conv/weights:0
generatorA2B/g_e2_bn/scale:0
generatorA2B/g_e2_bn/offset:0
generatorA2B/g_e3_c/Conv/weights:0
generatorA2B/g_e3_bn/scale:0
generatorA2B/g_e3_bn/offset:0
generatorA2B/g_r1_c1/Conv/weights:0
generatorA2B/g_r1_bn1/scale:0
generatorA2B/g_r1_bn1/offset:0
generatorA2B/g_r1_c2/Conv/weights:0
generatorA2B/g_r1_bn2/scale:0
generatorA2B/g_r1_bn2/offset:0
generatorA2B/g_r2_c1/Conv/weights:0
generatorA2B/g_r2_bn1/scale:0
generatorA2B/g_r2_bn1/offset:0
generatorA2B/g_r2_c2/Conv/weights:0
generatorA2B/g_r2_bn2/scale:0
generatorA2B/g_r2_bn2/offset:0
generatorA2B/g_r3_c1/Conv/weights:0
generatorA2B/g_r3_bn1/scale:0
generatorA2B/g_r3_bn1/offset:0
generatorA2B/g_r3_c2/Conv/weights:0
generatorA2B/g_r3_bn2/scale:0
generatorA2B/g_r3_bn2/offset:0
generatorA2B/g_r4_c1/Conv/weights:0
generatorA2B/g_r4_bn1/scale:0
generatorA2B/g_r4_bn1/offset:0
generatorA2B/g_r4_c2/Conv/weights:0
generatorA2B/g_r4_bn2/scale:0
generatorA2B/g_r4_bn2/offset:0
generatorA2B/g_r5_c1/Conv/weights:0
generatorA2B/g_r5_bn1/scale:0
generatorA2B/g_r5_bn1/offset:0
generatorA2B/g_r5_c2/Conv/weights:0
generatorA2B/g_r5_bn2/scale:0
generatorA2B/g_r5_bn2/offset:0
generatorA2B/g_r6_c1/Conv/weights:0
generatorA2B/g_r6_bn1/scale:0
generatorA2B/g_r6_bn1/offset:0
generatorA2B/g_r6_c2/Conv/weights:0
generatorA2B/g_r6_bn2/scale:0
generatorA2B/g_r6_bn2/offset:0
generatorA2B/g_r7_c1/Conv/weights:0
generatorA2B/g_r7_bn1/scale:0
generatorA2B/g_r7_bn1/offset:0
generatorA2B/g_r7_c2/Conv/weights:0
generatorA2B/g_r7_bn2/scale:0
generatorA2B/g_r7_bn2/offset:0
generatorA2B/g_r8_c1/Conv/weights:0
generatorA2B/g_r8_bn1/scale:0
generatorA2B/g_r8_bn1/offset:0
generatorA2B/g_r8_c2/Conv/weights:0
generatorA2B/g_r8_bn2/scale:0
generatorA2B/g_r8_bn2/offset:0
generatorA2B/g_r9_c1/Conv/weights:0
generatorA2B/g_r9_bn1/scale:0
generatorA2B/g_r9_bn1/offset:0
generatorA2B/g_r9_c2/Conv/weights:0
generatorA2B/g_r9_bn2/scale:0
generatorA2B/g_r9_bn2/offset:0
generatorA2B/g_d1_dc/Conv2d_transpose/weights:0
generatorA2B/g_d1_bn/scale:0
generatorA2B/g_d1_bn/offset:0
generatorA2B/g_d2_dc/Conv2d_transpose/weights:0
generatorA2B/g_d2_bn/scale:0
generatorA2B/g_d2_bn/offset:0
generatorA2B/g_pred_c/Conv/weights:0
generatorB2A/g_e1_c/Conv/weights:0
generatorB2A/g_e1_bn/scale:0
generatorB2A/g_e1_bn/offset:0
generatorB2A/g_e2_c/Conv/weights:0
generatorB2A/g_e2_bn/scale:0
generatorB2A/g_e2_bn/offset:0
generatorB2A/g_e3_c/Conv/weights:0
generatorB2A/g_e3_bn/scale:0
generatorB2A/g_e3_bn/offset:0
generatorB2A/g_r1_c1/Conv/weights:0
generatorB2A/g_r1_bn1/scale:0
generatorB2A/g_r1_bn1/offset:0
generatorB2A/g_r1_c2/Conv/weights:0
generatorB2A/g_r1_bn2/scale:0
generatorB2A/g_r1_bn2/offset:0
generatorB2A/g_r2_c1/Conv/weights:0
generatorB2A/g_r2_bn1/scale:0
generatorB2A/g_r2_bn1/offset:0
generatorB2A/g_r2_c2/Conv/weights:0
generatorB2A/g_r2_bn2/scale:0
generatorB2A/g_r2_bn2/offset:0
generatorB2A/g_r3_c1/Conv/weights:0
generatorB2A/g_r3_bn1/scale:0
generatorB2A/g_r3_bn1/offset:0
generatorB2A/g_r3_c2/Conv/weights:0
generatorB2A/g_r3_bn2/scale:0
generatorB2A/g_r3_bn2/offset:0
generatorB2A/g_r4_c1/Conv/weights:0
generatorB2A/g_r4_bn1/scale:0
generatorB2A/g_r4_bn1/offset:0
generatorB2A/g_r4_c2/Conv/weights:0
generatorB2A/g_r4_bn2/scale:0
generatorB2A/g_r4_bn2/offset:0
generatorB2A/g_r5_c1/Conv/weights:0
generatorB2A/g_r5_bn1/scale:0
generatorB2A/g_r5_bn1/offset:0
generatorB2A/g_r5_c2/Conv/weights:0
generatorB2A/g_r5_bn2/scale:0
generatorB2A/g_r5_bn2/offset:0
generatorB2A/g_r6_c1/Conv/weights:0
generatorB2A/g_r6_bn1/scale:0
generatorB2A/g_r6_bn1/offset:0
generatorB2A/g_r6_c2/Conv/weights:0
generatorB2A/g_r6_bn2/scale:0
generatorB2A/g_r6_bn2/offset:0
generatorB2A/g_r7_c1/Conv/weights:0
generatorB2A/g_r7_bn1/scale:0
generatorB2A/g_r7_bn1/offset:0
generatorB2A/g_r7_c2/Conv/weights:0
generatorB2A/g_r7_bn2/scale:0
generatorB2A/g_r7_bn2/offset:0
generatorB2A/g_r8_c1/Conv/weights:0
generatorB2A/g_r8_bn1/scale:0
generatorB2A/g_r8_bn1/offset:0
generatorB2A/g_r8_c2/Conv/weights:0
generatorB2A/g_r8_bn2/scale:0
generatorB2A/g_r8_bn2/offset:0
generatorB2A/g_r9_c1/Conv/weights:0
generatorB2A/g_r9_bn1/scale:0
generatorB2A/g_r9_bn1/offset:0
generatorB2A/g_r9_c2/Conv/weights:0
generatorB2A/g_r9_bn2/scale:0
generatorB2A/g_r9_bn2/offset:0
generatorB2A/g_d1_dc/Conv2d_transpose/weights:0
generatorB2A/g_d1_bn/scale:0
generatorB2A/g_d1_bn/offset:0
generatorB2A/g_d2_dc/Conv2d_transpose/weights:0
generatorB2A/g_d2_bn/scale:0
generatorB2A/g_d2_bn/offset:0
generatorB2A/g_pred_c/Conv/weights:0
discriminatorB/d_h0_conv/Conv/weights:0
discriminatorB/d_h1_conv/Conv/weights:0
discriminatorB/d_bn1/scale:0
discriminatorB/d_bn1/offset:0
discriminatorB/d_h2_conv/Conv/weights:0
discriminatorB/d_bn2/scale:0
discriminatorB/d_bn2/offset:0
discriminatorB/d_h3_conv/Conv/weights:0
discriminatorB/d_bn3/scale:0
discriminatorB/d_bn3/offset:0
discriminatorB/d_h3_pred/Conv/weights:0
discriminatorA/d_h0_conv/Conv/weights:0
discriminatorA/d_h1_conv/Conv/weights:0
discriminatorA/d_bn1/scale:0
discriminatorA/d_bn1/offset:0
discriminatorA/d_h2_conv/Conv/weights:0
discriminatorA/d_bn2/scale:0
discriminatorA/d_bn2/offset:0
discriminatorA/d_h3_conv/Conv/weights:0
discriminatorA/d_bn3/scale:0
discriminatorA/d_bn3/offset:0
discriminatorA/d_h3_pred/Conv/weights:0

C:\Users\Reece\Reverse>eedIt
'eedIt' is not recognized as an internal or external command,
operable program or batch file.

How to change the image channels?

How could I change the code to handle input images with 3 channels and output images with 1 channel?

Or what if both the input and output have 1 channel?
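
A hedged pointer: many copies of main.py expose the channel counts as argparse flags (look for --input_nc and --output_nc; treat the names as an assumption), and the loader in utils.py must also read grayscale files into an (H, W, 1) array rather than (H, W). Roughly:

    # Command line, if the flags exist in your main.py:
    #   python main.py --dataset_dir=my_dataset --input_nc=3 --output_nc=1
    # Loading a 1-channel image in utils.py (sketch): keep the trailing axis so
    # concatenation along the channel dimension still works.
    img = imread(image_path, is_grayscale=True)[..., np.newaxis]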

Swap out generator for another one

Say I train my model on the horse2zebra dataset. I get a checkpoint, a model, etc.; let's call it BobSession.

Then I change the trainB zebra folder to contain an entirely new dataset.
I begin to train this new model... AliceSession.

Can I somehow transfer the generator of Bob-trainB into Alice-trainB?

I'm imagining some sort of tensorflow pruning / surgery operation of connecting up the model parts.
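
One way to do this in TF1 is a scoped restore: build the new (Alice) graph as usual, then restore only the generator's variables from Bob's checkpoint before training continues. A minimal sketch, assuming the generator lives under the generatorA2B variable scope (as in the printouts elsewhere in this thread) and that the checkpoint path is adjusted to your own run:

    # Restore only generatorA2B/* from an existing checkpoint into a new session.
    g_vars = [v for v in tf.global_variables() if v.name.startswith('generatorA2B')]
    g_saver = tf.train.Saver(var_list=g_vars)
    g_saver.restore(sess, tf.train.latest_checkpoint('./checkpoint/bob_session'))
    # All other variables (the other generator, both discriminators) keep their
    # fresh initialization and are trained on the new data.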

One-sided Label Smoothing

Hi, thanks for the wonderful explanation. I am new to GANs and I have a question.
I would like to know how to modify the data labels of the model; I want to replace them with smoother labels. Thank you!
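
One-sided label smoothing only changes the "real" target of the discriminator loss; the fake targets stay at zero and the generator loss is untouched. A minimal sketch against this repo's loss style (the attribute names are assumptions; the idea is simply to replace the all-ones target with, say, 0.9):

    # Discriminator loss with one-sided label smoothing (sketch).
    real_target = 0.9 * tf.ones_like(self.DB_real)
    self.db_loss_real = self.criterionGAN(self.DB_real, real_target)
    self.db_loss_fake = self.criterionGAN(self.DB_fake_sample,
                                          tf.zeros_like(self.DB_fake_sample))
    self.db_loss = (self.db_loss_real + self.db_loss_fake) / 2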

Results are not good enough

I trained the network on apple2orange dataset.
First of all, thanks! The output object is adapting the texture.
There are a few things about the result which are not plausible:

  1. In apple-to-orange, the shape of the generated orange is the same as that of the apple in the input image. In reality, it is not.
  2. The background gets changed completely.

Could you please suggest a solution to both problems, or another GAN that could be helpful?

Thanks!!

Excuse me, is there any Caffe code?

Is there any CycleGAN code that can run in Caffe? There seems to be only code that runs in TensorFlow. :)
Also, when I run your code no loss is printed; would it be reasonable to print ga2b_loss and gb2a_loss?

Load Failed

When I try to run the script main.py for the horse2zebra dataset, which was downloaded as described in the README, I get an error saying "Load failed" once the graph is built. No further information is given as to why this happens. Is there a bug in the script, or are some dependencies missing?
I am using the TensorFlow 1.3 GPU version.

Results Good?

Hi
Thanks for sharing your code!
Have you tested the results and compared them with the original version?

How to train with parameter `--load_size=512 --fine_size=512` ?

When I try to run CUDA_VISIBLE_DEVICES=0 python main.py --dataset_dir=my_dataset --load_size=512 --fine_size=512, an error appears:

Traceback (most recent call last):
  File "main.py", line 53, in <module>
    tf.app.run()
  File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "main.py", line 49, in main
    model.train(args) if args.phase == 'train' \
  File "/home/vsocr/workplace/CycleGAN-tensorflow/model.py", line 158, in train
    feed_dict={self.real_data: batch_images, self.lr: lr})
  File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1100, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 511, 511, 6) for Tensor 'real_A_and_B_images:0', which has shape '(?, 512, 512, 6)'

And when running CUDA_VISIBLE_DEVICES=0 python main.py --dataset_dir=2k_v1 --load_size=513 --fine_size=512, the error is:

    tf.app.run()
  File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "main.py", line 49, in main
    model.train(args) if args.phase == 'train' \
  File "/home/vsocr/workplace/CycleGAN-tensorflow/model.py", line 176, in train
    self.sample_model(args.sample_dir, epoch, idx)
  File "/home/vsocr/workplace/CycleGAN-tensorflow/model.py", line 218, in sample_model
    feed_dict={self.real_data: sample_images}
  File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1100, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 256, 256, 6) for Tensor 'real_A_and_B_images:0', which has shape '(?, 512, 512, 6)'

My input images have shape (512, 512, 3).

It seems to be a bug in the code. Can you show me the right way to set these parameters?
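
For reference on the two failures above: the first one (511 vs. 512) comes from the random crop when load_size equals fine_size; the crop offset becomes 1 and only 511 pixels remain, so load_size should be strictly larger than fine_size (e.g. --load_size=572 --fine_size=512, mirroring the default 286/256 ratio). The second one (256 vs. 512) happens because the sampling/test loaders fall back to their built-in 256 default instead of receiving fine_size. A hedged sketch of the kind of change needed (exact loader signatures in utils.py may differ):

    # model.py, sample_model / test (sketch): pass the configured sizes through
    # instead of relying on the loaders' 286/256 defaults.
    sample_images = [load_train_data(f, args.load_size, args.fine_size,
                                     is_testing=True) for f in batch_files]
    sample_image = [load_test_data(sample_file, args.fine_size)]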

max_size argument that controls the image_pool

I recently started using this TensorFlow implementation of CycleGAN. Does anybody know the function of the ImagePool (controlled by the max_size argument)? It is a kind of buffer that stores fake_A's and fake_B's to be fed to the discriminator, and when the buffer is full it randomly returns either an old image or the new one. But the old ones never seem to get replaced during training, which seems odd. Any clues, anyone?

max_size parameter, does it impact training

Not really an issue; I'm just puzzled about what max_size is used for. It serves as the size of an ImagePool and is used to store 'fake outputs'. It is used here:

    # Update G network and record fake outputs
    fake_A, fake_B, _, summary_str = self.sess.run(
        [self.fake_A, self.fake_B, self.g_optim, self.g_sum],
        feed_dict={self.real_data: batch_images, self.lr: lr})
    self.writer.add_summary(summary_str, counter)
    [fake_A, fake_B] = self.pool([fake_A, fake_B])

    # Update D network
    _, summary_str = self.sess.run(
        [self.d_optim, self.d_sum],
        feed_dict={self.real_data: batch_images,
                   self.fake_A_sample: fake_A,
                   self.fake_B_sample: fake_B,
                   self.lr: lr})
    self.writer.add_summary(summary_str, counter)
Does the size influence training? By default it is set to 50. Any ideas on this?
Thanks!

dataset

Hello @xhujoy, I am trying to run my own data with this model. Is that possible, since it only needs a dataset to be fed in? Thank you, I hope you can help me.
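
For anyone with the same question: custom data generally just needs to follow the same folder layout as the bundled datasets under ./datasets (the split into four subfolders is an assumption based on how the loaders glob trainA/trainB/testA/testB), after which training is started with python main.py --dataset_dir=my_dataset:

    ./datasets/my_dataset/
        trainA/    # images from domain A
        trainB/    # images from domain B
        testA/
        testB/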

CUDA

Does it run on an NVIDIA GeForce GPU if we just set CUDA=1 in the terminal while running the file?
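
A note in case it helps: TensorFlow uses the GPU automatically when the GPU build is installed; the environment variable used elsewhere in this thread to pick a device is CUDA_VISIBLE_DEVICES, not CUDA=1, for example:

    CUDA_VISIBLE_DEVICES=0 python main.py --dataset_dir=horse2zebra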

CycleGAN convergence

Hi, thanks for the wonderful explanation. I am new to GANs and I have a question.
When I train CycleGAN, I find that the discriminator loss decreases and converges, but the generator loss does not change much. How can I judge whether CycleGAN has converged?

About the dimension of the output of the discriminator.

Hello!

I find that the output of the discriminator has dimension 32 x 32 x 1 (the "# h4 is (32 x 32 x 1)" comment in the code), and then the code calculates the loss:

    # losses
    self.g_loss = self.criterionGAN(self.DA_fake, tf.ones_like(self.DA_fake))

I am confused, as I think the dimension of the output of the discriminator should be 1.

Could you please give some hints?

THX
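
For anyone else puzzled by this: CycleGAN uses a PatchGAN discriminator, so the output is a 32 x 32 map of per-patch real/fake scores rather than a single scalar, and the criterion compares that whole map against a same-shaped target before averaging. A minimal sketch of the least-squares criterion this code appears to use (treat the function name as an assumption):

    # Least-squares GAN criterion over the full 32x32x1 patch map (sketch).
    def mae_criterion(pred, target):
        # Mean over all patches and the batch, so the loss is still a scalar.
        return tf.reduce_mean((pred - target) ** 2)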

Parameters not used

The main.py script offers some parameters that one can change, e.g. enabling/disabling flipping, the image size, etc. Some of these parameters (at least the ones listed below) are not passed on to the actual image-loading functions; a sketch of the fixes follows the list.

  • model.py, line 147: load_data is called without flip parameter so even if you pass "--flip=0" to the script, it will still flip the images for training (since default is True)
  • model.py, line 254: load_test_data is called without image size parameter and hence will always be the default size
  • utils.py, line 48: preprocess_A_and_B is called without parameters for load_size and fine_size, they are always set to the default values
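
A hedged sketch of the corresponding one-line fixes (exact signatures in utils.py may differ; the fine_size fix is the same one sketched under the 512x512 issue above):

    # model.py, line 147 (sketch): forward the CLI options to the training loader.
    load_train_data(batch_file, args.load_size, args.fine_size, flip=args.flip)
    # model.py, line 254 (sketch): same idea for the test-time loader.
    load_test_data(sample_file, args.fine_size)
    # utils.py, line 48 (sketch): pass load_size/fine_size on to
    # preprocess_A_and_B instead of letting it use its defaults.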

Quantitative results of code

After running this code, I only got qualitative results. I would also like some quantitative results, such as the generator and discriminator losses or accuracy. How can I get these? Please help!
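
Two low-effort options, sketched against the training loop quoted earlier in this thread (the loss attribute names are assumptions): fetch the loss tensors in the existing sess.run calls and print them, or point TensorBoard at the summaries the writer is already recording (tensorboard --logdir aimed at the repo's log directory).

    # Sketch: also fetch the scalar losses while running the update ops.
    fake_A, fake_B, _, summary_str, g_loss_val = self.sess.run(
        [self.fake_A, self.fake_B, self.g_optim, self.g_sum, self.g_loss],
        feed_dict={self.real_data: batch_images, self.lr: lr})
    _, summary_str, d_loss_val = self.sess.run(
        [self.d_optim, self.d_sum, self.d_loss],
        feed_dict={self.real_data: batch_images,
                   self.fake_A_sample: fake_A,
                   self.fake_B_sample: fake_B,
                   self.lr: lr})
    print("g_loss: %.4f  d_loss: %.4f" % (g_loss_val, d_loss_val))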

Pre-trained model

Could you please upload a pre-trained model for apple2orange?
Thanks!!

Discriminator

Hi very nice work and thanks for sharing!
I noticed that there is no normalization after the first convolution of the discriminator.
In my opinion this shouldn't matter much,
but are there any theoretical reasons or empirical results that made you choose this?
I'd be glad if you could reply~
And thank you again!

won't generate results

In the train stage, using
python main.py --dataset_dir=/horse2zebra

will generate no results:
/sample, /checkpoints and /tests are created but contain 0 files.
In run Window
...
discriminatorA/d_bn2/offset:0
discriminatorA/d_h3_conv/Conv/weights:0
discriminatorA/d_bn3/scale:0
discriminatorA/d_bn3/offset:0
discriminatorA/d_h3_pred/Conv/weights:0
...

Did I miss anything?
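
One thing worth checking from the command above: --dataset_dir expects just the dataset name under ./datasets (the test paths elsewhere in this thread look like ./datasets/horse2zebra/testA/...), so the leading slash turns it into an absolute path at the filesystem root and the image glob comes back empty. Also, samples and checkpoints are only written every print_freq / save_freq steps (flag names per main.py's argparse; treat them as assumptions if your copy differs), so the folders stay empty for a while. A corrected invocation:

    python main.py --dataset_dir=horse2zebra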

I have tried training 1024x1024 model but got OOM. How do I approach this?

Hello there!
As stated in the subject, I tried modifying some config settings to let the higher resolution fit on the GTX 1080 Ti that I have, with no luck. This is what I tried:

    config = tf.ConfigProto(allow_soft_placement=True)
    config.gpu_options.allocator_type = 'BFC'
    config.gpu_options.per_process_gpu_memory_fraction = 0.40

Unfortunately, the tensors get smaller but I still get OOM. Is there anything else I can try?

Thanks a lot in advance!
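
For what it's worth, per_process_gpu_memory_fraction = 0.40 actually caps TensorFlow at 40% of the card, which makes OOM more likely rather than less. The usual knobs here are letting the allocator grow on demand and shrinking the activations (batch size 1, a smaller crop, or fewer residual blocks), since even 11 GB is tight for 1024x1024 with this generator. A minimal sketch:

    # Let TensorFlow allocate GPU memory on demand instead of a fixed fraction.
    config = tf.ConfigProto(allow_soft_placement=True)
    config.gpu_options.allow_growth = True
    # Combine with --batch_size=1 and, if still needed, a smaller --fine_size;
    # activation memory grows roughly with the square of the image side.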

Multi-GPU support

Great that this is on TensorFlow! I was wondering if it would be possible to add multi-GPU support?

Load Failed

@xhujoy
Hello @xhujoy, I'm a newbie and I ran into a problem while running this model; I'm not sure how to fix it. I get a "Load failed" error message, yet the epochs seem to keep running even after the load failed. Thank you if you can get me out of this problem...

identity mapping loss

Hi,
thank you for sharing this implementation.
Do you plan on adding support for the identity mapping loss as described in the original paper?
Thanks
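
In case it is useful to others, a minimal sketch of the identity term from the paper (attribute names follow this repo's printed scopes but are assumptions; the paper suggests weighting it at about half the cycle-consistency weight):

    # Identity mapping loss (sketch): feeding a real B into G_A2B (and a real A
    # into G_B2A) should leave the image unchanged.
    idt_B = self.generator(self.real_B, self.options, True, name="generatorA2B")
    idt_A = self.generator(self.real_A, self.options, True, name="generatorB2A")
    idt_loss = 0.5 * self.L1_lambda * (
        tf.reduce_mean(tf.abs(idt_B - self.real_B)) +
        tf.reduce_mean(tf.abs(idt_A - self.real_A)))
    self.g_loss = self.g_loss + idt_loss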

discriminatorB/d_h3_pred/Conv/weights:0

When I use CUDA_VISIBLE_DEVICES=0 python main.py --dataset_dir=horse2zebra, there are many problems:
2018-05-29 09:49:48.450923: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-05-29 09:49:48.528092: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-05-29 09:49:48.528340: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:01:00.0
totalMemory: 10.91GiB freeMemory: 6.17GiB
2018-05-29 09:49:48.528354: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0
2018-05-29 09:49:48.991358: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5950 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
generatorA2B/g_e1_c/Conv/weights:0
generatorA2B/g_e1_bn/scale:0
generatorA2B/g_e1_bn/offset:0
generatorA2B/g_e2_c/Conv/weights:0
generatorA2B/g_e2_bn/scale:0
generatorA2B/g_e2_bn/offset:0
generatorA2B/g_e3_c/Conv/weights:0
generatorA2B/g_e3_bn/scale:0
generatorA2B/g_e3_bn/offset:0
generatorA2B/g_r1_c1/Conv/weights:0
generatorA2B/g_r1_bn1/scale:0
generatorA2B/g_r1_bn1/offset:0
generatorA2B/g_r1_c2/Conv/weights:0
generatorA2B/g_r1_bn2/scale:0
generatorA2B/g_r1_bn2/offset:0
generatorA2B/g_r2_c1/Conv/weights:0
generatorA2B/g_r2_bn1/scale:0
generatorA2B/g_r2_bn1/offset:0
generatorA2B/g_r2_c2/Conv/weights:0
generatorA2B/g_r2_bn2/scale:0
generatorA2B/g_r2_bn2/offset:0
generatorA2B/g_r3_c1/Conv/weights:0
generatorA2B/g_r3_bn1/scale:0
generatorA2B/g_r3_bn1/offset:0
generatorA2B/g_r3_c2/Conv/weights:0
generatorA2B/g_r3_bn2/scale:0
generatorA2B/g_r3_bn2/offset:0
generatorA2B/g_r4_c1/Conv/weights:0
generatorA2B/g_r4_bn1/scale:0
generatorA2B/g_r4_bn1/offset:0
generatorA2B/g_r4_c2/Conv/weights:0
generatorA2B/g_r4_bn2/scale:0
generatorA2B/g_r4_bn2/offset:0
generatorA2B/g_r5_c1/Conv/weights:0
generatorA2B/g_r5_bn1/scale:0
generatorA2B/g_r5_bn1/offset:0
generatorA2B/g_r5_c2/Conv/weights:0
generatorA2B/g_r5_bn2/scale:0
generatorA2B/g_r5_bn2/offset:0
generatorA2B/g_r6_c1/Conv/weights:0
generatorA2B/g_r6_bn1/scale:0
generatorA2B/g_r6_bn1/offset:0
How can I solve this 'weights:0' problem?
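
For anyone hitting this: nothing in that output is an error. The lines are just the names of the trainable variables, which the script prints at startup, and the trailing ":0" is TensorFlow's output index for the tensor, not a value of zero. A minimal illustration:

    # Printing the trainable variables produces exactly this kind of output; the
    # ":0" suffix means "output 0 of the op", not "weights equal zero".
    for var in tf.trainable_variables():
        print(var.name)   # e.g. generatorA2B/g_e1_c/Conv/weights:0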

Image Generation with resolution 512x256

I am currently working with this repo and it already produces very nice images.
However, I am confused about how to generate rectangular images of size 512x256 (or anything else that is not square).
What do I have to consider? Which lines must be changed?

I appreciate any help/remarks and hints on this topic. Thanks in advance! @xhujoy

Failing to load pre-trained set

Hi,

After downloading the pre-trained set and untarring it into a 'checkpoint' folder, I try a test run.
I get:
"[...]
discriminatorA/d_h3_pred/Conv/weights:0
[*] Reading checkpoint...
[!] Load failed...
Processing image: ./datasets/horse2zebra/testA\n02381460_1000.jpg
"

It is difficult to tell why the model won't load.
I'm on WIN10 if that matters; any advice welcome.
Thanks

Julien
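
A guess at the usual cause, worth checking first: the loader looks for the checkpoint inside a dataset-specific subfolder of --checkpoint_dir, named along the lines of <dataset>_<fine_size> (so horse2zebra_256 with the defaults; treat the exact pattern as an assumption and compare with model.py). If the tarball was extracted one level too high or too deep, tf.train.get_checkpoint_state finds nothing and "[!] Load failed..." is printed. The expected layout is roughly:

    ./checkpoint/
        horse2zebra_256/
            checkpoint                     # index file written by tf.train.Saver
            <model-name>.data-00000-of-00001
            <model-name>.index
            <model-name>.meta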

Possibility of adding other pretrained models to the README (rather than just horse2zebra)

Hi,
Thanks for contributing with this amazing tensorflow implementation.

I trained the horse2zebra model on my home GPU (980 Ti), but after a day I decided to use the pretrained model given by the URL instead. Wow, it was so much more accurate!

Would it be possible to add other pre-trained models to the project, hosted on a static drive somewhere? e.g. photo2monet/cezanne/ukiyoe etc.

I was looking at https://github.com/junyanz/CycleGAN, where many pretrained models are provided, but unfortunately I cannot find a suitable PyTorch-to-TensorFlow model converter.

Thanks,

ImagePool fix

import copy
import numpy as np


class ImagePool(object):
    def __init__(self, maxsize=50):
        self.maxsize = maxsize
        self.num_img = 0
        self.images = []

    def __call__(self, image):
        if self.maxsize == 0:
            return image
        if self.num_img < self.maxsize:
            self.images.append(image)
            self.num_img = self.num_img + 1  # it seems this line was forgotten
            return image
        if np.random.rand() > 0.5:
            # Replace a random stored image and hand the old one to the caller.
            idx = int(np.random.rand() * self.maxsize)
            tmp = copy.copy(self.images[idx])
            self.images[idx] = image
            return tmp
        else:
            return image
