xiaowei-hu / CycleGAN-tensorflow
TensorFlow implementation for learning an image-to-image translation without input-output pairs. https://arxiv.org/pdf/1703.10593.pdf
Thank you for providing your code.
I trained the horse2zebra network using your code and tested it without modification.
However, my test results are quite different from yours.
Could this be because I'm running on Windows 7?
self.fake_A_sample = tf.placeholder(tf.float32,
[None, self.image_size, self.image_size, self.input_c_dim],
name='fake_A_sample')
self.fake_B_sample = tf.placeholder(tf.float32,
[None, self.image_size, self.image_size, self.output_c_dim],
name='fake_B_sample')
Why exactly is this needed?
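One plausible answer, sketched below with numpy stand-ins (this is not the repo's actual graph): the discriminator is updated with fakes drawn from a history pool rather than with the generator's current outputs, so those fakes arrive as plain arrays and need their own placeholders to be fed back in.

```python
import numpy as np

pool = []  # history of generated images, as in the repo's ImagePool

def generator_step(real_batch):
    # stand-in for sess.run([fake_A, fake_B, g_optim, ...])
    return real_batch * 0.5  # pretend this is G(real)

def discriminator_step(real_batch, fake_sample):
    # stand-in for sess.run(d_optim, feed_dict={self.fake_A_sample: fake_sample, ...});
    # fake_sample arrives as plain data, which is exactly what a placeholder receives
    return float(np.mean(fake_sample ** 2 + (real_batch - 1.0) ** 2))

real = np.ones((1, 256, 256, 3), dtype=np.float32)
fake = generator_step(real)
pool.append(fake)                             # store the current fake
sampled = pool[np.random.randint(len(pool))]  # D may be fed an older fake
d_loss = discriminator_step(real, sampled)
```

Because the pooled fake may come from an earlier step, it cannot be taken from the current graph forward pass, which is why a separate feed path exists.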
discriminatorA/d_h3_conv/Conv/weights:0
discriminatorA/d_bn3/scale:0
discriminatorA/d_bn3/offset:0
discriminatorA/d_h3_pred/Conv/weights:0
[*] Reading checkpoint...
[!] Load failed...
Processing image: ./datasets/horse2zebra/testA/n02381460_530.jpg
Traceback (most recent call last):
File "main.py", line 55, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "main.py", line 52, in main
else model.test(args)
File "/root/CycleGAN-tensorflow/model.py", line 254, in test
sample_image = [load_test_data(sample_file)]
File "/root/CycleGAN-tensorflow/utils.py", line 41, in load_test_data
img = imread(image_path)
File "/root/CycleGAN-tensorflow/utils.py", line 93, in imread
return scipy.misc.imread(path, mode='RGB').astype(np.float)
AttributeError: 'module' object has no attribute 'imread'
Line 175: % (epoch, idx, batch_idxs, time.time() - start_time)))
should be % (epoch+1, idx+1, batch_idxs, time.time() - start_time))) so that the printed epoch and batch numbers start from 1.
Hi, when I was reading model.py I found that the generator is trained without freezing the discriminator.
Does this mean they are trained together?
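A likely explanation, shown as a plain-Python sketch (the names are illustrative, not the repo's): in TF1-style training each optimizer is given an explicit var_list, so the G step only ever updates generator variables and D never needs to be frozen.

```python
# Hypothetical generator/discriminator weights standing in for the
# variables selected by optimizer.minimize(loss, var_list=...).
params = {"g_w": 1.0, "d_w": 1.0}

def update(var_list, grads, lr=0.1):
    # analogue of a minimize() call restricted to var_list
    for name in var_list:
        params[name] -= lr * grads[name]

update(["g_w"], {"g_w": 1.0, "d_w": 1.0})  # "G step": d_w is untouched
assert params["d_w"] == 1.0
update(["d_w"], {"g_w": 1.0, "d_w": 1.0})  # "D step": g_w is untouched
```

So the networks alternate updates within one loop iteration, but each step only moves its own variables.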
Hello,
currently I am implementing the CycleGAN with the Resnet generator for a medical application, and it yields very good results!
However, what is the purpose of the U-Net generator in the code? Does it work for image translation tasks, and if so, how do I get it to work? I tried it out, and it does not work at all for my case. As far as I know, the U-Net is only used for segmentation tasks.
Hi,
Can you please provide some sample code for deploying on Google cloud? I can't seem to make it work.
Thank you!
===============================
Log:
INFO 2019-01-14 22:37:36 -0500 master-replica-0 Running task with arguments: --cluster={"master": ["127.0.0.1:2222"]} --task={"type": "master", "index": 0} --job={ "scale_tier": "BASIC_GPU", "package_uris": ["gs://cyclegan/cyclegan_2019011422235/480d21ee7d352f60e443e2fbad629f32b1aaee5de205d8fd1fc762e0aed776fa/CycleGAN-tensorflow-0.0.0.tar.gz"], "python_module": "main.py", "args": ["--dataset_dir\u003dgs://cyclegan/Resized2/", "--epoch\u003d10", "--batch_size\u003d", "4", "--phase\u003dtrain", "--checkpoint_dir\u003dgs://cyclegan/logs3/", "--test_dir\u003dgs://karmacyclegan/logs3/"], "region": "asia-east1", "runtime_version": "1.2", "run_on_raw_vm": true}
INFO 2019-01-14 22:37:43 -0500 master-replica-0 Running module main.py.
INFO 2019-01-14 22:37:43 -0500 master-replica-0 Downloading the package: gs://cyclegan/cyclegan_2019011422235/480d21ee7d352f60e443e2fbad629f32b1aaee5de205d8fd1fc762e0aed776fa/CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:43 -0500 master-replica-0 Running command: gsutil -q cp gs://cyclegan/cyclegan_2019011422235/480d21ee7d352f60e443e2fbad629f32b1aaee5de205d8fd1fc762e0aed776fa/CycleGAN-tensorflow-0.0.0.tar.gz CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:45 -0500 master-replica-0 Installing the package: gs://cyclegan/cyclegan_2019011422235/480d21ee7d352f60e443e2fbad629f32b1aaee5de205d8fd1fc762e0aed776fa/CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:45 -0500 master-replica-0 Running command: pip install --user --upgrade --force-reinstall --no-deps CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:46 -0500 master-replica-0 Processing ./CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:46 -0500 master-replica-0 Building wheels for collected packages: CycleGAN-tensorflow
INFO 2019-01-14 22:37:46 -0500 master-replica-0 Running setup.py bdist_wheel for CycleGAN-tensorflow: started
INFO 2019-01-14 22:37:47 -0500 master-replica-0 creating '/tmp/pip-wheel-wfXg5U/CycleGAN_tensorflow-0.0.0-cp27-none-any.whl' and adding '.' to it
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/ops.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/init.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/model.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/module.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/utils.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN-tensorflow/main.py'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/DESCRIPTION.rst'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/metadata.json'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/top_level.txt'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/WHEEL'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/METADATA'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/RECORD'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Running setup.py bdist_wheel for CycleGAN-tensorflow: finished with status 'done'
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Stored in directory: /root/.cache/pip/wheels/8e/c5/0e/065505a9bea2174b701147091945c0aa410ea831750a81b3aa
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Successfully built CycleGAN-tensorflow
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Installing collected packages: CycleGAN-tensorflow
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Successfully installed CycleGAN-tensorflow-0.0.0
INFO 2019-01-14 22:37:47 -0500 master-replica-0 Running command: pip install --user CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Processing ./CycleGAN-tensorflow-0.0.0.tar.gz
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Building wheels for collected packages: CycleGAN-tensorflow
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Running setup.py bdist_wheel for CycleGAN-tensorflow: started
INFO 2019-01-14 22:37:48 -0500 master-replica-0 creating '/tmp/pip-wheel-4XE7L2/CycleGAN_tensorflow-0.0.0-cp27-none-any.whl' and adding '.' to it
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/ops.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/init.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/model.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/module.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/utils.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN-tensorflow/main.py'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/DESCRIPTION.rst'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/metadata.json'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/top_level.txt'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/WHEEL'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/METADATA'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 adding 'CycleGAN_tensorflow-0.0.0.dist-info/RECORD'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Running setup.py bdist_wheel for CycleGAN-tensorflow: finished with status 'done'
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Stored in directory: /root/.cache/pip/wheels/8e/c5/0e/065505a9bea2174b701147091945c0aa410ea831750a81b3aa
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Successfully built CycleGAN-tensorflow
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Installing collected packages: CycleGAN-tensorflow
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Found existing installation: CycleGAN-tensorflow 0.0.0
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Uninstalling CycleGAN-tensorflow-0.0.0:
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Successfully uninstalled CycleGAN-tensorflow-0.0.0
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Successfully installed CycleGAN-tensorflow-0.0.0
INFO 2019-01-14 22:37:48 -0500 master-replica-0 Running command: python -m main.py --dataset_dir=gs://cyclegan/Resized2/ --epoch=10 --batch_size= 4 --phase=train --checkpoint_dir=gs://cyclegan/logs3/ --test_dir=gs://cyclegan/logs3/
ERROR 2019-01-14 22:37:49 -0500 master-replica-0 /usr/bin/python: No module named main
ERROR 2019-01-14 22:37:49 -0500 master-replica-0 Command '['python', '-m', u'main.py', u'--dataset_dir=gs://cyclegan/Resized2/', u'--epoch=10', u'--batch_size=', u'4', u'--phase=train', u'--checkpoint_dir=gs://cyclegan/logs3/', u'--test_dir=gs://cyclegan/logs3/']' returned non-zero exit status 1
INFO 2019-01-14 22:37:49 -0500 master-replica-0 Module completed; cleaning up.
INFO 2019-01-14 22:37:49 -0500 master-replica-0 Clean up finished.
ERROR 2019-01-14 22:37:59 -0500 service The replica master 0 exited with a non-zero status of 1.
I tried to run this on Windows 10, but I get the following in the console. I was wondering if anyone knows how to fix this:
C:\Users\Reece\Reverse>python main.py --dataset_dir=horse2zebra
C:\Users\Reece\Miniconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
2018-06-01 06:36:35.521182: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 960 major: 5 minor: 2 memoryClockRate(GHz): 1.1775
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 1.64GiB
2018-06-01 06:36:35.553624: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-01 06:36:38.678984: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-01 06:36:38.682738: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0
2018-06-01 06:36:38.685304: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N
2018-06-01 06:36:38.688370: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1404 MB memory) -> physical GPU (device: 0, name: GeForce GTX 960, pci bus id: 0000:01:00.0, compute capability: 5.2)
generatorA2B/g_e1_c/Conv/weights:0
generatorA2B/g_e1_bn/scale:0
generatorA2B/g_e1_bn/offset:0
generatorA2B/g_e2_c/Conv/weights:0
generatorA2B/g_e2_bn/scale:0
generatorA2B/g_e2_bn/offset:0
generatorA2B/g_e3_c/Conv/weights:0
generatorA2B/g_e3_bn/scale:0
generatorA2B/g_e3_bn/offset:0
generatorA2B/g_r1_c1/Conv/weights:0
generatorA2B/g_r1_bn1/scale:0
generatorA2B/g_r1_bn1/offset:0
generatorA2B/g_r1_c2/Conv/weights:0
generatorA2B/g_r1_bn2/scale:0
generatorA2B/g_r1_bn2/offset:0
generatorA2B/g_r2_c1/Conv/weights:0
generatorA2B/g_r2_bn1/scale:0
generatorA2B/g_r2_bn1/offset:0
generatorA2B/g_r2_c2/Conv/weights:0
generatorA2B/g_r2_bn2/scale:0
generatorA2B/g_r2_bn2/offset:0
generatorA2B/g_r3_c1/Conv/weights:0
generatorA2B/g_r3_bn1/scale:0
generatorA2B/g_r3_bn1/offset:0
generatorA2B/g_r3_c2/Conv/weights:0
generatorA2B/g_r3_bn2/scale:0
generatorA2B/g_r3_bn2/offset:0
generatorA2B/g_r4_c1/Conv/weights:0
generatorA2B/g_r4_bn1/scale:0
generatorA2B/g_r4_bn1/offset:0
generatorA2B/g_r4_c2/Conv/weights:0
generatorA2B/g_r4_bn2/scale:0
generatorA2B/g_r4_bn2/offset:0
generatorA2B/g_r5_c1/Conv/weights:0
generatorA2B/g_r5_bn1/scale:0
generatorA2B/g_r5_bn1/offset:0
generatorA2B/g_r5_c2/Conv/weights:0
generatorA2B/g_r5_bn2/scale:0
generatorA2B/g_r5_bn2/offset:0
generatorA2B/g_r6_c1/Conv/weights:0
generatorA2B/g_r6_bn1/scale:0
generatorA2B/g_r6_bn1/offset:0
generatorA2B/g_r6_c2/Conv/weights:0
generatorA2B/g_r6_bn2/scale:0
generatorA2B/g_r6_bn2/offset:0
generatorA2B/g_r7_c1/Conv/weights:0
generatorA2B/g_r7_bn1/scale:0
generatorA2B/g_r7_bn1/offset:0
generatorA2B/g_r7_c2/Conv/weights:0
generatorA2B/g_r7_bn2/scale:0
generatorA2B/g_r7_bn2/offset:0
generatorA2B/g_r8_c1/Conv/weights:0
generatorA2B/g_r8_bn1/scale:0
generatorA2B/g_r8_bn1/offset:0
generatorA2B/g_r8_c2/Conv/weights:0
generatorA2B/g_r8_bn2/scale:0
generatorA2B/g_r8_bn2/offset:0
generatorA2B/g_r9_c1/Conv/weights:0
generatorA2B/g_r9_bn1/scale:0
generatorA2B/g_r9_bn1/offset:0
generatorA2B/g_r9_c2/Conv/weights:0
generatorA2B/g_r9_bn2/scale:0
generatorA2B/g_r9_bn2/offset:0
generatorA2B/g_d1_dc/Conv2d_transpose/weights:0
generatorA2B/g_d1_bn/scale:0
generatorA2B/g_d1_bn/offset:0
generatorA2B/g_d2_dc/Conv2d_transpose/weights:0
generatorA2B/g_d2_bn/scale:0
generatorA2B/g_d2_bn/offset:0
generatorA2B/g_pred_c/Conv/weights:0
generatorB2A/g_e1_c/Conv/weights:0
generatorB2A/g_e1_bn/scale:0
generatorB2A/g_e1_bn/offset:0
generatorB2A/g_e2_c/Conv/weights:0
generatorB2A/g_e2_bn/scale:0
generatorB2A/g_e2_bn/offset:0
generatorB2A/g_e3_c/Conv/weights:0
generatorB2A/g_e3_bn/scale:0
generatorB2A/g_e3_bn/offset:0
generatorB2A/g_r1_c1/Conv/weights:0
generatorB2A/g_r1_bn1/scale:0
generatorB2A/g_r1_bn1/offset:0
generatorB2A/g_r1_c2/Conv/weights:0
generatorB2A/g_r1_bn2/scale:0
generatorB2A/g_r1_bn2/offset:0
generatorB2A/g_r2_c1/Conv/weights:0
generatorB2A/g_r2_bn1/scale:0
generatorB2A/g_r2_bn1/offset:0
generatorB2A/g_r2_c2/Conv/weights:0
generatorB2A/g_r2_bn2/scale:0
generatorB2A/g_r2_bn2/offset:0
generatorB2A/g_r3_c1/Conv/weights:0
generatorB2A/g_r3_bn1/scale:0
generatorB2A/g_r3_bn1/offset:0
generatorB2A/g_r3_c2/Conv/weights:0
generatorB2A/g_r3_bn2/scale:0
generatorB2A/g_r3_bn2/offset:0
generatorB2A/g_r4_c1/Conv/weights:0
generatorB2A/g_r4_bn1/scale:0
generatorB2A/g_r4_bn1/offset:0
generatorB2A/g_r4_c2/Conv/weights:0
generatorB2A/g_r4_bn2/scale:0
generatorB2A/g_r4_bn2/offset:0
generatorB2A/g_r5_c1/Conv/weights:0
generatorB2A/g_r5_bn1/scale:0
generatorB2A/g_r5_bn1/offset:0
generatorB2A/g_r5_c2/Conv/weights:0
generatorB2A/g_r5_bn2/scale:0
generatorB2A/g_r5_bn2/offset:0
generatorB2A/g_r6_c1/Conv/weights:0
generatorB2A/g_r6_bn1/scale:0
generatorB2A/g_r6_bn1/offset:0
generatorB2A/g_r6_c2/Conv/weights:0
generatorB2A/g_r6_bn2/scale:0
generatorB2A/g_r6_bn2/offset:0
generatorB2A/g_r7_c1/Conv/weights:0
generatorB2A/g_r7_bn1/scale:0
generatorB2A/g_r7_bn1/offset:0
generatorB2A/g_r7_c2/Conv/weights:0
generatorB2A/g_r7_bn2/scale:0
generatorB2A/g_r7_bn2/offset:0
generatorB2A/g_r8_c1/Conv/weights:0
generatorB2A/g_r8_bn1/scale:0
generatorB2A/g_r8_bn1/offset:0
generatorB2A/g_r8_c2/Conv/weights:0
generatorB2A/g_r8_bn2/scale:0
generatorB2A/g_r8_bn2/offset:0
generatorB2A/g_r9_c1/Conv/weights:0
generatorB2A/g_r9_bn1/scale:0
generatorB2A/g_r9_bn1/offset:0
generatorB2A/g_r9_c2/Conv/weights:0
generatorB2A/g_r9_bn2/scale:0
generatorB2A/g_r9_bn2/offset:0
generatorB2A/g_d1_dc/Conv2d_transpose/weights:0
generatorB2A/g_d1_bn/scale:0
generatorB2A/g_d1_bn/offset:0
generatorB2A/g_d2_dc/Conv2d_transpose/weights:0
generatorB2A/g_d2_bn/scale:0
generatorB2A/g_d2_bn/offset:0
generatorB2A/g_pred_c/Conv/weights:0
discriminatorB/d_h0_conv/Conv/weights:0
discriminatorB/d_h1_conv/Conv/weights:0
discriminatorB/d_bn1/scale:0
discriminatorB/d_bn1/offset:0
discriminatorB/d_h2_conv/Conv/weights:0
discriminatorB/d_bn2/scale:0
discriminatorB/d_bn2/offset:0
discriminatorB/d_h3_conv/Conv/weights:0
discriminatorB/d_bn3/scale:0
discriminatorB/d_bn3/offset:0
discriminatorB/d_h3_pred/Conv/weights:0
discriminatorA/d_h0_conv/Conv/weights:0
discriminatorA/d_h1_conv/Conv/weights:0
discriminatorA/d_bn1/scale:0
discriminatorA/d_bn1/offset:0
discriminatorA/d_h2_conv/Conv/weights:0
discriminatorA/d_bn2/scale:0
discriminatorA/d_bn2/offset:0
discriminatorA/d_h3_conv/Conv/weights:0
discriminatorA/d_bn3/scale:0
discriminatorA/d_bn3/offset:0
discriminatorA/d_h3_pred/Conv/weights:0

C:\Users\Reece\Reverse>eedIt
'eedIt' is not recognized as an internal or external command,
operable program or batch file.
How could I change the code to adapt it to input images with three channels and output images with one channel?
Or to the case where both the input and output have one channel?
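A hedged sketch of what this change means for the data: the placeholder shapes in the snippet quoted earlier use input_c_dim and output_c_dim, so a 3-channel-to-1-channel setup amounts to building those with different last dimensions. The conversion below is illustrative, not the repo's actual loader.

```python
import numpy as np

input_c_dim, output_c_dim = 3, 1  # hypothetical channel settings

rgb = np.random.rand(256, 256, 3).astype(np.float32)  # 3-channel input
gray = rgb.mean(axis=-1, keepdims=True)               # 1-channel target
assert rgb.shape[-1] == input_c_dim
assert gray.shape[-1] == output_c_dim
# The placeholders would then be shaped
# [None, image_size, image_size, input_c_dim] and
# [None, image_size, image_size, output_c_dim].
```

For the 1-channel-to-1-channel case, both dims would simply be 1 and the loader would have to read grayscale images.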
Say I train my model on the horse2zebra dataset. I get a checkpoint, model, etc.; let's call it BobSession.
Then I change the trainB zebra folder to contain an entirely new dataset.
I begin to train this new model... AliceSession.
Can I somehow transfer the generator of Bob-trainB into Alice-trainB?
I'm imagining some sort of tensorflow pruning / surgery operation of connecting up the model parts.
My program runs, but there is a problem with the path. I get: FileNotFoundError:
[Errno 2] No such file or directory: 'datasets/personReid/trainA_mask/trainA
Is everyone's path like this?
Hi, thanks for the wonderful explanation. I am new to GANs, and I have a question.
I want to know how to modify the data labels of the model; I want to replace them with smoother labels. Thank you!
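One common reading of "smoother labels" is one-sided label smoothing: instead of tf.ones_like(logits) as the real-label target, use a softened constant such as 0.9. A minimal numpy sketch of the least-squares criterion with and without smoothing (the 0.9 value is an assumption, not something in this repo):

```python
import numpy as np

def criterion_lsgan(pred, target):
    # analogue of the repo's mean-squared criterionGAN
    return float(np.mean((pred - target) ** 2))

d_real = np.full((1, 32, 32, 1), 0.8)  # pretend discriminator output
hard = criterion_lsgan(d_real, np.ones_like(d_real))          # target 1.0
smooth = criterion_lsgan(d_real, 0.9 * np.ones_like(d_real))  # target 0.9
assert smooth < hard  # the smoothed target penalizes this output less
```

In the TF1 code this would mean replacing the ones_like target with 0.9 * tf.ones_like(...) in the real-label terms of the discriminator loss.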
I trained the network on apple2orange dataset.
First of all, thanks! The output object does adopt the texture.
There are a few things about the result which are not plausible.
Could you please suggest a solution to these problems, or any other GAN which could be helpful?
Thanks!!
Is there any CycleGAN code that can run in Caffe? There only seems to be code that runs in TensorFlow. :)
Also, when I run your code, no loss is printed; would it be reasonable to print ga2b_loss and gb2a_loss?
In the code, the image width and height seem to be set to a single number. If my images are 100x32, what should I do to change the image size?
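The repo does use a single image_size for both dimensions. One workaround, if you don't want to edit the placeholders, is to pad non-square images up to a square before feeding them; the helper below is a hypothetical sketch, not part of the repo.

```python
import numpy as np

def pad_to_square(img):
    # zero-pad an HxWxC image to max(H, W) on each side, centered
    h, w, c = img.shape
    side = max(h, w)
    out = np.zeros((side, side, c), dtype=img.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    out[top:top + h, left:left + w] = img
    return out

img = np.random.rand(32, 100, 3).astype(np.float32)  # a 100x32 image
sq = pad_to_square(img)
assert sq.shape == (100, 100, 3)
```

The alternative is to replace image_size with separate height and width options everywhere the placeholders and resize calls use it.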
When I try to run main.py for the horse2zebra dataset, downloaded as described in the readme, I get an error saying "Load failed" once the graph is built. No further information is given as to why this happens. Is there a bug in the script, or are some dependencies missing?
I am using TensorFlow 1.3, GPU version.
Hi
Thanks for sharing your code!
Have you tested the results and compared them with the original version?
Hi, thank you for sharing your code.
I was training, but due to a computer error the training was stopped.
So I tried to resume training, but it becomes fine-tuning instead.
In https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/tips.md,
they use the '--epoch_count' option, but I can't find it in your code.
How do I resume training?
When I try to run: CUDA_VISIBLE_DEVICES=0 python main.py --dataset_dir=my_dataset --load_size=512 --fine_size=512
an error appears:
Traceback (most recent call last):
File "main.py", line 53, in <module>
tf.app.run()
File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "main.py", line 49, in main
model.train(args) if args.phase == 'train' \
File "/home/vsocr/workplace/CycleGAN-tensorflow/model.py", line 158, in train
feed_dict={self.real_data: batch_images, self.lr: lr})
File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1100, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 511, 511, 6) for Tensor 'real_A_and_B_images:0', which has shape '(?, 512, 512, 6)'
And when I run: CUDA_VISIBLE_DEVICES=0 python main.py --dataset_dir=2k_v1 --load_size=513 --fine_size=512
the error is:
tf.app.run()
File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "main.py", line 49, in main
model.train(args) if args.phase == 'train' \
File "/home/vsocr/workplace/CycleGAN-tensorflow/model.py", line 176, in train
self.sample_model(args.sample_dir, epoch, idx)
File "/home/vsocr/workplace/CycleGAN-tensorflow/model.py", line 218, in sample_model
feed_dict={self.real_data: sample_images}
File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/home/vsocr/py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1100, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 256, 256, 6) for Tensor 'real_A_and_B_images:0', which has shape '(?, 512, 512, 6)'
My input images have shape (512, 512, 3).
It seems to be a bug in the code. Can you show me the right way to set these parameters?
I recently started using this TensorFlow implementation of CycleGAN. Does anybody know the function of the image pool (controlled by the max_size argument)? It is a kind of buffer that stores fake_A's and fake_B's to be fed to the discriminator, and once the buffer is full it randomly returns either an old image or the new one. But the old ones never seem to get replaced during training, which seems odd. Any clues on this, anyone?
Why are the results of CycleGAN-tensorflow so poor?
They are far from the original CycleGAN.
I ran it for 200 epochs.
Not really an issue; I'm just puzzled about what max_size is used for. It serves as the size of an ImagePool and is used to store 'fake outputs'. It is used here:
# Update G network and record fake outputs
fake_A, fake_B, _, summary_str = self.sess.run(
    [self.fake_A, self.fake_B, self.g_optim, self.g_sum],
    feed_dict={self.real_data: batch_images, self.lr: lr})
self.writer.add_summary(summary_str, counter)
[fake_A, fake_B] = self.pool([fake_A, fake_B])

# Update D network
_, summary_str = self.sess.run(
    [self.d_optim, self.d_sum],
    feed_dict={self.real_data: batch_images,
               self.fake_A_sample: fake_A,
               self.fake_B_sample: fake_B,
               self.lr: lr})
self.writer.add_summary(summary_str, counter)
Does the size influence training? By default it is set to 50. Any ideas on this?
Thanks!
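For what it's worth, the pool logic quoted above does replace old entries: once the pool is full, with probability 0.5 a random stored image is returned and swapped out for the new one. A standalone simulation of that branch (integers stand in for images):

```python
import copy
import numpy as np

np.random.seed(0)
maxsize, images = 3, []

def pool(image):
    # same structure as the repo's ImagePool.__call__
    if len(images) < maxsize:
        images.append(image)
        return image
    if np.random.rand() > 0.5:
        idx = int(np.random.rand() * maxsize)
        tmp = copy.copy(images[idx])
        images[idx] = image  # an old entry is replaced here
        return tmp
    return image

for i in range(100):
    pool(i)
# after many calls the pool no longer holds only the first 3 items
assert max(images) > 2
```

The pool size trades off how stale the fakes fed to D can be; the paper's default of 50 is what the code uses.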
Hello @xhujoy, I am trying to run my own data with this model. Is that possible, given how it feeds the dataset? Thank you, I hope you can help me.
Does it run on an Nvidia GeForce GPU if we just set CUDA=1 in the terminal while running the file?
Line 32 of utils.py multiplies by np.random.rand. It should probably be np.random.rand().
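The report looks right: np.random.rand is the function object itself, so multiplying by it raises a TypeError, while calling it returns a sample in [0, 1). A small demonstration (the margin value is illustrative):

```python
import numpy as np

margin = 30
try:
    offset = int(margin * np.random.rand)    # buggy: multiplying by a function
except TypeError:
    offset = int(margin * np.random.rand())  # fixed: call it
assert 0 <= offset < margin
```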
Hi, thanks for the wonderful explanation. I am new to GANs, and I have a question.
When I train CycleGAN, I find that the discriminator loss decreases and converges, but the generator loss never changes much. How can I judge whether CycleGAN has converged?
Hello!
I find that the dimension of the output of the discriminator is 32 x 32 x 1 (h4), and then the code calculates the loss:
# losses
self.g_loss = self.criterionGAN(self.DA_fake, tf.ones_like(self.DA_fake))
I am confused, as I thought the dimension of the discriminator's output should be 1.
Could you please give some hints?
Thanks!
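A likely answer: this is the PatchGAN design from the paper. Each element of the 32x32x1 map classifies one receptive-field patch of the input, and the criterion averages a per-element squared error against a same-shaped map of ones (hence tf.ones_like), so no scalar output is needed. A numpy stand-in for that loss:

```python
import numpy as np

def criterion_gan(pred, target):
    # mean squared error over the whole patch map, as criterionGAN does
    return float(np.mean((pred - target) ** 2))

DA_fake = np.random.rand(1, 32, 32, 1)  # stand-in discriminator output map
g_loss = criterion_gan(DA_fake, np.ones_like(DA_fake))
assert g_loss >= 0.0
```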
The main.py script offers some parameters that one can change, e.g. enabling/disabling flipping, image size, etc. Some of these parameters (at least the ones mentioned) are not passed to the actual image-loading functions.
How do I specify the dataset's path?
After running this code, I only get qualitative results. I would also like some quantitative results, such as the generator and discriminator losses or accuracy. How can I get these? Please help!
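One way to get numbers out, sketched generically (in the TF1 code this would mean adding the loss tensors to the sess.run fetch list): record each step's losses and report per-epoch averages. The step values below are stand-ins, not real training output.

```python
history = []  # (epoch, g_loss, d_loss) tuples

def log_step(epoch, g_loss, d_loss):
    history.append((epoch, g_loss, d_loss))

# pretend these came back from sess.run([..., g_loss, d_loss], ...)
for g, d in [(2.0, 1.5), (1.5, 1.2), (1.2, 0.6)]:
    log_step(0, g, d)

avg_g = sum(g for _, g, _ in history) / len(history)
avg_d = sum(d for _, _, d in history) / len(history)
print("epoch 0: avg g_loss=%.3f avg d_loss=%.3f" % (avg_g, avg_d))
```

The same per-step fetches could also be written to TensorBoard via the summary writer the code already creates.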
Could you please upload a pre-trained model for apple2orange?
Thanks!!
Hi very nice work and thanks for sharing!
I noticed that there is no normalization after the first convolution of the discriminator.
In my opinion this shouldn't matter much,
but are there any theoretical reasons or empirical results behind this choice?
I'd be glad if you could reply.
And thank you again!
In the train stage, running
python main.py --dataset_dir=/horse2zebra
generates no results:
/sample, /checkpoints and /tests are created but contain 0 files.
The run window shows:
...
discriminatorA/d_bn2/offset:0
discriminatorA/d_h3_conv/Conv/weights:0
discriminatorA/d_bn3/scale:0
discriminatorA/d_bn3/offset:0
discriminatorA/d_h3_pred/Conv/weights:0
...
Did I miss anything?
Hello there!
As stated in the subject, I tried modifying some config settings to allow a higher resolution to run on the GTX 1080 Ti that I have, with no luck. This is what I tried:
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allocator_type = 'BFC'
config.gpu_options.per_process_gpu_memory_fraction = 0.40
Unfortunately, the tensor gets smaller but I still get OOM. Is there anything else I can try?
Thanks a lot in advance!
Great that this is on TensorFlow! I was wondering if it'd be possible to have multi-GPU support?
The generator loss and discriminator loss become NaN after some epochs, and CycleGAN starts to generate black images.
When I execute the file, the program terminates after printing the variable names. Has anybody faced the same issue?
Hi,
thank you for sharing this implementation.
Do you plan on adding support for the identity mapping loss as described in the original paper?
Thanks!
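For reference, the identity mapping loss from the paper penalizes G (which maps A to B) for changing an image that is already in domain B: lambda_idt * mean(|G(y) - y|). A minimal numpy sketch (G here is a stand-in function, and lambda_idt = 0.5 mirrors the paper's setting of half the cycle weight):

```python
import numpy as np

def identity_loss(G, y, lambda_idt=0.5):
    # L1 penalty for G altering a target-domain image y
    return lambda_idt * float(np.mean(np.abs(G(y) - y)))

y = np.random.rand(1, 256, 256, 3).astype(np.float32)
loss = identity_loss(lambda img: img, y)  # a perfect identity gives 0 loss
assert loss == 0.0
```

Adding it to this repo would mean running real_B through generatorA2B (and real_A through generatorB2A, with variable reuse) and adding the two L1 terms to the generator loss.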
When I run 'CUDA_VISIBLE_DEVICES=0 python main.py --dataset_dir=horse2zebra', I see the following:
2018-05-29 09:49:48.450923: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-05-29 09:49:48.528092: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-05-29 09:49:48.528340: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:01:00.0
totalMemory: 10.91GiB freeMemory: 6.17GiB
2018-05-29 09:49:48.528354: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0
2018-05-29 09:49:48.991358: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5950 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
generatorA2B/g_e1_c/Conv/weights:0
generatorA2B/g_e1_bn/scale:0
generatorA2B/g_e1_bn/offset:0
...
How can I solve this problem with 'weights:0'?
I am currently working with this repo and it already produces very nice images.
However, I am confused about how to generate rectangular images, e.g. of size 512x256 (or anything else that is not square).
What do I have to consider? Which lines must be changed?
I appreciate any help/remarks and hints on this topic. Thanks in advance! @xhujoy
Hi,
After downloading the pre-trained set and untar-ing to the a 'checkpoint' folder, I try test run.
I get:
"[...]
discriminatorA/d_h3_pred/Conv/weights:0
[*] Reading checkpoint...
[!] Load failed...
Processing image: ./datasets/horse2zebra/testA\n02381460_1000.jpg
"
It's difficult to tell why the model won't load.
I'm on Windows 10, if that matters; any advice welcome.
Thanks
Julien
Hi,
Thanks for contributing with this amazing tensorflow implementation.
I trained the horse2zebra model on my home GPU (980 Ti), but after a day I decided to use the pretrained model from the given URL instead. Wow, it was so much more accurate!
Would it be possible to add other pre-trained models to the project? Hosting them on a static drive somewhere? e.g. photo2monet/cezanne/ukiyoe etc...
I was looking at https://github.com/junyanz/CycleGAN and many pretrained models are provided but unfortunately I cannot find a suitable pytorch --> tensorflow model converter.
Thanks,
class ImagePool(object):
    def __init__(self, maxsize=50):
        self.maxsize = maxsize
        self.num_img = 0
        self.images = []

    def __call__(self, image):
        if self.maxsize == 0:
            return image
        if self.num_img < self.maxsize:
            self.images.append(image)
            self.num_img = self.num_img + 1  # It seems you forgot this line
            return image
        if np.random.rand() > 0.5:
            idx = int(np.random.rand() * self.maxsize)
            tmp = copy.copy(self.images[idx])
            self.images[idx] = image
            return tmp
        else:
            return image
Why is the variable reuse flag set only for the B-A-B conversion but not for A-B-A?
-Madhu