
style2paints's Introduction

News

See also the Preview of Style2Paints V5.

Note that below are previous versions of style2paints. If you came here from an introduction to V5 and are only interested in V5, you do not need to download V4 or V4.5. Note that V5 is still in preview and has not been released yet.

Download Style2Paints V4.5

You can directly download the software (windows x64) at:

Again, this is style2paints V4.5, NOT style2paints V5!

Google Drive:

https://drive.google.com/open?id=1gmg2wwNIp4qMzxqP12SbcmVAHsLt1iRE

Baidu Drive (百度网盘):

https://pan.baidu.com/s/15xCm1jRVeHipHkiB3n1vzA

You do NOT need to install anything complicated like CUDA or Python. Just download the package and double-click it, as if you were launching a normal video game.

Never hesitate to let me know if you have any suggestions or ideas. You may directly send emails to my private address [[email protected]] or [[email protected]].

Welcome to style2paints V4!

logo

Style2paints V4 is an AI-driven lineart colorization tool.

Different from previous end-to-end image-to-image translation methods, style2paints V4 is the first system to colorize linearts following the real-life human workflow, and its outputs are layered.

Inputs:

● Linearts
● (with or without) Human hints
● (with or without) Color style reference images
● (with or without) Light location and color

Outputs:

● Automatic color flattening without lines (solid/flat/inherent/固有色/底色 color layer)
● Automatic color flattening with black lines
● Automatic colorization without lines
● Automatic colorization with black lines
● Automatic colorization with colored lines
● Automatic rendering (separated layer)
● Automatic rendered colorization

Style2paints V4 gives you results of the current highest quality. You can get separated layers from our system, and these layers can be used directly in your painting workflow. Different from all previous AI-driven colorization tools, our results are not single 'JPG/PNG' images; they are layered 'PSD' files.

User Instruction: https://style2paints.github.io/

And we also have an official Twitter account.

Helping humans in their standard coloring workflow!

Most human artists are familiar with this workflow:

sketching -> color filling/flattening -> gradients/details adding -> shading

And the corresponding layers are:

lineart layers + flat color layers + gradient layers + shading layers

Style2paints V4 is designed for this standard coloring workflow! In style2paints V4, you can automatically get separated results from each step!
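The per-step layers above eventually recombine into one final image. As a rough illustration only, here is a generic compositing sketch in pure numpy; the multiply blends and the layer order are illustrative assumptions, not the actual blend modes used in the tool's PSD output:

```python
import numpy as np

def composite(flat, gradient, shading, lineart):
    """Combine the four workflow layers into one image.

    All inputs are float arrays in [0, 1] of shape (H, W, 3).
    Multiply blending is assumed for every layer here.
    """
    out = flat * gradient   # add gradients/details over flat colors
    out = out * shading     # darken shaded regions
    return out * lineart    # draw dark lines on top (white = no line)

h, w = 4, 4
flat = np.full((h, w, 3), 0.8)      # flat color layer
gradient = np.full((h, w, 3), 0.9)  # gradient layer
shading = np.full((h, w, 3), 0.7)   # shading layer
lineart = np.ones((h, w, 3))        # blank lineart layer
img = composite(flat, gradient, shading, lineart)
```

With real PSD layers, each factor above would be one layer of the exported file.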

Examples

logo

Here we present some results in this ABCD format. Users only need to upload their sketch, select a style, and put a light source.

When a result is achieved immediately without any human color correction, we regard it as a fully automatic result. When a result needs some color correction, a human can easily put some color hints on the canvas to guide the AI coloring process; we regard such results as semi-automatic. If a result is semi-automatic but needs fewer than 10 human color hint points, we regard it as almost automatic. In this section, about half of the presented results are fully automatic, and the others are all almost automatic. Note that all the results below can be achieved with fewer than 15 clicks!

logo

logo

Real-life results


Know more about us!

User Instruction: https://style2paints.github.io/

And we also have an official Twitter account.

Acknowledgement

Thanks a lot to TaiZan. This project could not be achieved without his great help.

License

All codes are released in Apache-2.0 License.

We preserve all rights on all pretrained deep learning models and binary releases.

Your colorized images are yours, and we do not add any extra licenses to colorized results. You may use your colorized images in any commercial or non-commercial case.

Chinese Community (中文社区)

We have a QQ group that is nominally for technical discussion, although we chat about everything except technology. If joining fails the first time, you can try again: 816096787.

Previous Publications

Style2paints V1:

ACPR 2017:

@InProceedings{ACPR2017ZLM,
  author    = {LvMin Zhang and Yi Ji and ChunPing Liu},
  title     = {Style Transfer for Anime Sketches with Enhanced Residual U-net and Auxiliary Classifier GAN},
  booktitle = {Asian Conference on Pattern Recognition (ACPR)},
  year      = {2017},
}

paper

Style2paints V2:

No Publications.

Style2paints V3:

TOG 2018:

@Article{ACMTOGTSC2018,
  author  = {LvMin Zhang and Chengze Li and Tien-Tsin Wong and Yi Ji and ChunPing Liu},
  title   = {Two-stage Sketch Colorization},
  journal = {ACM Transactions on Graphics},
  year    = {2018},
  volume  = {37},
  number  = {6},
  month   = nov,
  doi     = {10.1145/3272127.3275090},
}

paper

Style2paints V4:

No Publications.

Style2paints V5 (Project SEPA, not released yet):

CVPR 2021:

@InProceedings{Filling2021zhang,
  author={Lvmin Zhang and Chengze Li and Edgar Simo-Serra and Yi Ji and Tien-Tsin Wong and Chunping Liu}, 
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, 
  title={User-Guided Line Art Flat Filling with Split Filling Mechanism}, 
  year={2021}, 
}

style2paints's People

Contributors

abbychau, alantian, dexhunter, hynor, lllyasviel, lrisviel, pelya, weichiachang


style2paints's Issues

User Guidelines 使用说明

image

In short: first upload a reference image, then upload a lineart, then click colorize.

image

Note that after you upload a reference image, it becomes a color palette: you can click on it and then use the picked color to place hint points on the lineart.

image

Specifically, click a color on the reference image, then click the corresponding spot on the lineart; this gives the AI a color reference.

image

Through such clicks you can give the AI all kinds of hints.

image

Note that the AI is very sensitive to these points. A light click on a very small point already has a large effect, so we recommend clicking the mouse only a few times. (The image above shows one small point; a point this small is usually enough.)

image

These four options are very important. To use them, switch the tab first and then click colorize.

image

Sometimes switching the option produces a huge change. For example:

image
image

In short, we recommend trying all of V1, V2, V3, and V4 on every lineart.

image
This option also denoises the image; we recommend trying it.

It should improve the result.

Suggest to auto lower midtone for inputs

Using the following sketch and reference:
sep19_03120010262 sketch

sep19_03120010262 reference

will result in this image:

sep19_03120010262 fin

which does not look bad, but it seems to be limited by the certainty level of the sketch input, due to the sketch's low contrast/blackness.

This is the grayscale input used:

sep19_02025290662 sketch_greyscale

Actually, we can simply lower the midtone to give a stronger global hint while sacrificing only very minor local details.
sep19_02025290662 sketch_greyscale_midtone

The result could be better:
sep19_03113994174 fin

Midtone suppression could also work on original inputs that already have quite good clarity.

It is seldom harmful for a sketch:
587321_orig
587321_origmid
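For reference, "lowering the midtone" can be approximated with a simple gamma curve, which keeps pure black and pure white fixed while darkening mid-gray values. A minimal numpy sketch (the gamma value is an arbitrary choice of mine, not one used by style2paints):

```python
import numpy as np

def lower_midtone(gray, gamma=1.8):
    """Darken midtones of a uint8 grayscale image.

    A power curve with gamma > 1 leaves the endpoints 0 and 255
    unchanged while pushing midtones darker, which increases the
    effective contrast of faint sketch lines.
    """
    x = gray.astype(np.float32) / 255.0
    return (np.power(x, gamma) * 255.0 + 0.5).astype(np.uint8)

# A faint stroke around mid-gray gets noticeably darker:
faint = np.array([[0, 100, 180, 255]], dtype=np.uint8)
print(lower_midtone(faint, gamma=1.8))
```

The same curve could be applied as an automatic preprocessing step before feeding low-contrast sketches to the model.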


WebGL: CONTEXT_LOST_WEBGL: loseContext: context lost

hello

I've encountered an issue, possibly a browser-side error.
The Chrome console logged "WebGL: CONTEXT_LOST_WEBGL: loseContext: context lost".

OS: Windows10 Pro x64 (Build 15063)
Chrome: 61.0.3163.100 (Official Build) 64-bit

This happens with style2paints running in the browser on both machines.

Any workarounds?

thanks!

custom color hints?

This might not be possible, but who knows. I learned from the nice video you made what the pen tool is for. However there are some restrictions on colors:

  • loading a new reference resets color hints
  • one cannot set color hints for colors that aren't in the reference

I was hoping that it might be possible to set a white sheet of reference, put color hints on the sketch, and have the NN operate only off the hints, for more targeted selection of colors.

Would this be possible?

Windows Bash

Hi, I was following the instructions on the Readme to get the server running (on Windows Bash). When I tried running python server.py I just got the following error...
All Python packages are up to date.

Traceback (most recent call last):
  File "server.py", line 14, in <module>
    import cv2
  File "/home/ResT_n/.local/lib/python2.7/site-packages/gevent/builtins.py", line 93, in __import__
    result = _import(*args, **kwargs)
  File "/home/ResT_n/.local/lib/python2.7/site-packages/cv2/__init__.py", line 9, in <module>
    from .cv2 import *
  File "/home/ResT_n/.local/lib/python2.7/site-packages/gevent/builtins.py", line 93, in __import__
    result = _import(*args, **kwargs)
ImportError: libSM.so.6: cannot open shared object file: No such file or directory

The "Guide decoder 1" and "Guide decoder 2"

As depicted in the paper, you implemented two additional losses, for "Guide decoder 1" and "Guide decoder 2". But I cannot figure out the structure of these guide decoders. Could you tell me how the "Guide decoder 1" and "Guide decoder 2" losses are computed?
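For what it's worth, my reading (a guess at the design, not the authors' code) is that each guide decoder is a small decoder attached to an intermediate feature map, trained with its own reconstruction loss against the same target, i.e. a form of deep supervision. The combined loss would then look roughly like this, where the decoder outputs and the weights w1/w2 are hypothetical:

```python
import numpy as np

def l1(a, b):
    # Mean absolute error between two images.
    return np.abs(a - b).mean()

def total_loss(final_out, guide1_out, guide2_out, target, w1=0.3, w2=0.3):
    # Main reconstruction loss plus two auxiliary guide-decoder losses.
    # guide1_out / guide2_out are images decoded from intermediate
    # feature maps; w1 and w2 are assumed weighting factors.
    return (l1(final_out, target)
            + w1 * l1(guide1_out, target)
            + w2 * l1(guide2_out, target))
```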

Color hint mobile support?

It is currently impossible to touch up the drawings using the color hint system on mobile. Is there a chance this could be resolved?

Input Images

I wonder: have you tried first decolorizing an anime image and then using your method to color it again, to see what the result looks like?
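(Not the authors, but for anyone wanting to try this experiment: the crudest way to decolorize is plain luminance conversion with Rec. 601 weights. A real sketch-extraction filter such as XDoG would be much closer to an actual lineart; this is only the simplest baseline.)

```python
import numpy as np

def decolorize(rgb):
    # Weighted sum of R, G, B channels (Rec. 601 luma coefficients).
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb[..., :3].astype(np.float32) @ weights).astype(np.uint8)
```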

Why paired image for sketch is ground truth in loss function?

Hi!

The loss function in the paper is defined as follows:
loss

I was wondering why y is the paired painting domain. From my understanding, the ground truth should be the style image, so that the output will be closer to the style you want transferred.

Does it make more sense to use something like V(x) - G(x, y) in this case?

Input size vs. Output size

This is a dream come true for me, as I wanted something like this for years. Thank you so much.

From what I have done with it so far, it seems that the output image is dependent on my monitor resolution. Is there any way to make the output image the same size as the input image? (I wonder if the program automatically downsizes the input image.)

Thank you for reading my post.

Empty Graph Error

raise RuntimeError('The Session graph is empty. Add operations to the '
RuntimeError: The Session graph is empty. Add operations to the graph before calling run

Also, how do I train the model? What algorithm is used, and is code for training provided in the repository?

Running error

Hello, I tried to install all the packages and run python server.py cpu, but it returns the message 'No training configuration found in save file: '. In fact, I have no idea how the program functions. Could you help me? :)

paintstransfer.com down

I was going to provide examples of #28 occurring, but it seems the site is down and I don't have any examples downloaded beforehand. The site has been down all day; not sure if you were aware.

File missing?

python server.py

Using TensorFlow backend.
Traceback (most recent call last):
File "server.py", line 178, in
chainer.serializers.load_npz('google_net.net', google_net)
File "/usr/local/lib/python2.7/dist-packages/chainer/serializers/npz.py", line 132, in load_npz
with numpy.load(filename) as f:
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 370, in load
fid = open(file, "rb")
IOError: [Errno 2] No such file or directory: 'google_net.net'

colorizing with green heavy images distorts quality?

I've been messing with this for the last week and it's great. One major problem: any reference with a lot of green, such as nature scenery, causes the colors to mess up pretty badly, even making most of the image a dark black. Even with touch-ups, the neural network seems to fight it tooth and nail. Perhaps it is a relative lack of training data compared to other color distributions?
(Sorry if this isn't the right place.)

ResourceExhaustedError: Any way to optimize it?

Since the website is down, I tried running this on my laptop, but I get this error every time:
OOM when allocating tensor with shape[1,256,518,902]
I am also using the bfc allocator, but no luck.
GPU: Nvidia GTX 960M (4 GB)
Is there any way to optimize it and make it run on my laptop?
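(Not a maintainer, but one generic workaround for this class of OOM is to shrink the input before inference: activation memory for a tensor like [1, 256, 518, 902] grows roughly with height × width, so halving each side cuts it by about 4x. A minimal nearest-neighbor downscale sketch; the 512-pixel budget is an arbitrary assumption, not a value from style2paints.)

```python
import numpy as np

def downscale_for_memory(img, max_side=512):
    # Nearest-neighbor downscale so that max(height, width) <= max_side.
    h, w = img.shape[:2]
    s = max(h, w) / float(max_side)
    if s <= 1.0:
        return img  # already small enough
    ys = (np.arange(int(h / s)) * s).astype(int)
    xs = (np.arange(int(w / s)) * s).astype(int)
    return img[ys][:, xs]
```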

Here is complete output:
C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
2018-03-09 23:55:55.434317: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2018-03-09 23:55:56.353643: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 960M major: 5 minor: 0 memoryClockRate(GHz): 1.176
pciBusID: 0000:02:00.0
totalMemory: 4.00GiB freeMemory: 3.35GiB
2018-03-09 23:55:56.362316: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 960M, pci bus id: 0000:02:00.0, compute capability: 5.0)
C:\ProgramData\Anaconda3\lib\site-packages\keras\models.py:255: UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
Bottle v0.12.13 server starting up (using WSGIRefServer())...
Listening on http://0.0.0.0:8000/
Hit Ctrl-C to quit.

127.0.0.1 - - [09/Mar/2018 23:57:57] "GET / HTTP/1.1" 200 1926
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /style-mobile.css HTTP/1.1" 200 2574
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /src/settings.js HTTP/1.1" 200 2376
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /main.js HTTP/1.1" 200 7358
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /splash.png HTTP/1.1" 200 16188
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /src/project.js HTTP/1.1" 200 15412
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/import/01/01a59e074.json HTTP/1.1" 200 56857
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/texture/w.png HTTP/1.1" 200 131
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/texture/circle.png HTTP/1.1" 200 1695
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/texture/pencil.png HTTP/1.1" 200 2347
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/texture/eraser.png HTTP/1.1" 200 1628
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/texture/clear.png HTTP/1.1" 200 1065
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-internal/image/default_radio_button_off.png HTTP/1.1" 200 631
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-internal/image/default_radio_button_on.png HTTP/1.1" 200 847
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/texture/github.png HTTP/1.1" 200 2742
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-internal/image/default_toggle_normal.png HTTP/1.1" 200 1174
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-internal/image/default_toggle_checkmark.png HTTP/1.1" 200 493
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/texture/sketch.png HTTP/1.1" 200 91
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/texture/hint.png HTTP/1.1" 200 91
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/texture/result.png HTTP/1.1" 200 91
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/texture/loading.png HTTP/1.1" 200 2552
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/texture/right-arrow.png HTTP/1.1" 200 2065
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/001.png HTTP/1.1" 200 49474
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/002.png HTTP/1.1" 200 47019
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/003.png HTTP/1.1" 200 50878
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/004.png HTTP/1.1" 200 31390
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/005.png HTTP/1.1" 200 102248
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/006.png HTTP/1.1" 200 125063
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/007.png HTTP/1.1" 200 123590
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/008.png HTTP/1.1" 200 124480
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/009.png HTTP/1.1" 200 132188
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/010.png HTTP/1.1" 200 58897
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/011.png HTTP/1.1" 200 73685
127.0.0.1 - - [09/Mar/2018 23:57:57] "GET /res/raw-assets/icons/012.png HTTP/1.1" 200 69262
received
sketchID: new
referenceID: no
sketchDenoise: true
resultDenoise: true
algrithom: quality
method: colorize
2018-03-09 23:58:43.186862: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.31GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-03-09 23:58:44.312520: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.07GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-03-09 23:58:44.340284: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.04GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-03-09 23:58:44.371173: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.03GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-03-09 23:58:44.411097: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.04GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-03-09 23:58:44.609860: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.02GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
process: 3.984015464782715
2018-03-09 23:58:45.710617: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.53GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-03-09 23:58:47.760293: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.75GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-03-09 23:58:48.047717: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.63GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-03-09 23:58:48.317449: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.68GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
paint: 4.139033794403076
2018-03-09 23:58:59.490450: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:273] Allocator (GPU_0_bfc) ran out of memory trying to allocate 456.29MiB. Current allocation summary follows.
2018-03-09 23:58:59.497622: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (256): Total Chunks: 869, Chunks in use: 869. 217.3KiB allocated for chunks. 217.3KiB in use in bin. 55.8KiB client-requested in use in bin.
2018-03-09 23:58:59.505725: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (512): Total Chunks: 316, Chunks in use: 316. 179.0KiB allocated for chunks. 179.0KiB in use in bin. 169.0KiB client-requested in use in bin.
2018-03-09 23:58:59.511378: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (1024): Total Chunks: 210, Chunks in use: 210. 231.5KiB allocated for chunks. 231.5KiB in use in bin. 224.0KiB client-requested in use in bin.
2018-03-09 23:58:59.520834: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (2048): Total Chunks: 154, Chunks in use: 154. 342.0KiB allocated for chunks. 342.0KiB in use in bin. 338.3KiB client-requested in use in bin.
2018-03-09 23:58:59.530064: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (4096): Total Chunks: 36, Chunks in use: 36. 178.5KiB allocated for chunks. 178.5KiB in use in bin. 176.1KiB client-requested in use in bin.
2018-03-09 23:58:59.539367: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (8192): Total Chunks: 3, Chunks in use: 3. 35.0KiB allocated for chunks. 35.0KiB in use in bin. 22.8KiB client-requested in use in bin.
2018-03-09 23:58:59.548509: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (16384): Total Chunks: 10, Chunks in use: 10. 220.0KiB allocated for chunks. 220.0KiB in use in bin. 204.8KiB client-requested in use in bin.
2018-03-09 23:58:59.557916: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (32768): Total Chunks: 7, Chunks in use: 7. 289.5KiB allocated for chunks. 289.5KiB in use in bin. 276.8KiB client-requested in use in bin.
2018-03-09 23:58:59.569923: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (65536): Total Chunks: 12, Chunks in use: 11. 939.0KiB allocated for chunks. 861.5KiB in use in bin. 720.0KiB client-requested in use in bin.
2018-03-09 23:58:59.578047: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (131072): Total Chunks: 30, Chunks in use: 29. 5.23MiB allocated for chunks. 5.02MiB in use in bin. 4.73MiB client-requested in use in bin.
2018-03-09 23:58:59.586689: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (262144): Total Chunks: 31, Chunks in use: 30. 10.29MiB allocated for chunks. 9.93MiB in use in bin. 9.10MiB client-requested in use in bin.
2018-03-09 23:58:59.594471: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (524288): Total Chunks: 44, Chunks in use: 43. 29.12MiB allocated for chunks. 28.28MiB in use in bin. 25.94MiB client-requested in use in bin.
2018-03-09 23:58:59.604122: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (1048576): Total Chunks: 47, Chunks in use: 45. 74.42MiB allocated for chunks. 71.39MiB in use in bin. 66.47MiB client-requested in use in bin.
2018-03-09 23:58:59.613319: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (2097152): Total Chunks: 58, Chunks in use: 57. 177.84MiB allocated for chunks. 175.42MiB in use in bin. 167.36MiB client-requested in use in bin.
2018-03-09 23:58:59.622843: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (4194304): Total Chunks: 18, Chunks in use: 18. 100.89MiB allocated for chunks. 100.89MiB in use in bin. 99.19MiB client-requested in use in bin.
2018-03-09 23:58:59.632966: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (8388608): Total Chunks: 32, Chunks in use: 29. 335.33MiB allocated for chunks. 297.21MiB in use in bin. 278.88MiB client-requested in use in bin.
2018-03-09 23:58:59.642260: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (16777216): Total Chunks: 4, Chunks in use: 3. 73.79MiB allocated for chunks. 55.28MiB in use in bin. 47.00MiB client-requested in use in bin.
2018-03-09 23:58:59.649994: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (33554432): Total Chunks: 9, Chunks in use: 9. 344.50MiB allocated for chunks. 344.50MiB in use in bin. 344.50MiB client-requested in use in bin.
2018-03-09 23:58:59.658693: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (67108864): Total Chunks: 2, Chunks in use: 0. 203.39MiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2018-03-09 23:58:59.669691: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (134217728): Total Chunks: 3, Chunks in use: 1. 582.39MiB allocated for chunks. 228.14MiB in use in bin. 228.14MiB client-requested in use in bin.
2018-03-09 23:58:59.677440: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:627] Bin (268435456): Total Chunks: 2, Chunks in use: 2. 1.16GiB allocated for chunks. 1.16GiB in use in bin. 684.43MiB client-requested in use in bin.
2018-03-09 23:58:59.686447: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:643] Bin for 456.29MiB was 256.00MiB, Chunk State:
2018-03-09 23:58:59.692998: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840000 of size 1280
2018-03-09 23:58:59.697217: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840500 of size 256
2018-03-09 23:58:59.702969: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840600 of size 256
2018-03-09 23:58:59.707814: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840700 of size 256
2018-03-09 23:58:59.712697: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840800 of size 256
2018-03-09 23:58:59.718130: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840900 of size 256
2018-03-09 23:58:59.723223: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840A00 of size 256
2018-03-09 23:58:59.729522: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840B00 of size 256
2018-03-09 23:58:59.734023: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840C00 of size 256
2018-03-09 23:58:59.738907: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840D00 of size 256
2018-03-09 23:58:59.743119: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840E00 of size 256
2018-03-09 23:58:59.747841: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:661] Chunk at 0000000501840F00 of size 256
[... several hundred more near-identical "Chunk at ... of size ..." allocator lines omitted ...]
2018-03-09 23:59:10.673386: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 12032 totalling 23.5KiB
2018-03-09 23:59:10.677302: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 18432 totalling 36.0KiB
2018-03-09 23:59:10.682140: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 18944 totalling 37.0KiB
2018-03-09 23:59:10.686014: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 5 Chunks of size 24576 totalling 120.0KiB
2018-03-09 23:59:10.692148: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 27648 totalling 27.0KiB
2018-03-09 23:59:10.699235: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 36864 totalling 72.0KiB
2018-03-09 23:59:10.704364: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 37632 totalling 73.5KiB
2018-03-09 23:59:10.708749: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 3 Chunks of size 49152 totalling 144.0KiB
2018-03-09 23:59:10.714688: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 4 Chunks of size 65536 totalling 256.0KiB
2018-03-09 23:59:10.719015: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 4 Chunks of size 73728 totalling 288.0KiB
2018-03-09 23:59:10.724131: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 92672 totalling 90.5KiB
2018-03-09 23:59:10.728415: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 108288 totalling 105.8KiB
2018-03-09 23:59:10.733435: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 124160 totalling 121.3KiB
2018-03-09 23:59:10.737712: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 5 Chunks of size 131072 totalling 640.0KiB
2018-03-09 23:59:10.742721: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 140544 totalling 137.3KiB
2018-03-09 23:59:10.747357: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 7 Chunks of size 147456 totalling 1008.0KiB
2018-03-09 23:59:10.752373: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 163840 totalling 320.0KiB
2018-03-09 23:59:10.756592: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 202752 totalling 198.0KiB
2018-03-09 23:59:10.761599: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 214528 totalling 209.5KiB
2018-03-09 23:59:10.765902: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 11 Chunks of size 221184 totalling 2.32MiB
2018-03-09 23:59:10.770826: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 258048 totalling 252.0KiB
2018-03-09 23:59:10.774975: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 3 Chunks of size 262144 totalling 768.0KiB
2018-03-09 23:59:10.780785: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 12 Chunks of size 294912 totalling 3.38MiB
2018-03-09 23:59:10.785237: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 3 Chunks of size 331776 totalling 972.0KiB
2018-03-09 23:59:10.790770: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 5 Chunks of size 393216 totalling 1.88MiB
2018-03-09 23:59:10.796213: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 434176 totalling 424.0KiB
2018-03-09 23:59:10.801052: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 5 Chunks of size 442368 totalling 2.11MiB
2018-03-09 23:59:10.807833: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 483328 totalling 472.0KiB
2018-03-09 23:59:10.812837: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 10 Chunks of size 524288 totalling 5.00MiB
2018-03-09 23:59:10.816810: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 8 Chunks of size 589824 totalling 4.50MiB
2018-03-09 23:59:10.820827: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 3 Chunks of size 614400 totalling 1.76MiB
2018-03-09 23:59:10.824720: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 655360 totalling 1.25MiB
2018-03-09 23:59:10.828423: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 704512 totalling 688.0KiB
2018-03-09 23:59:10.834018: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 3 Chunks of size 737280 totalling 2.11MiB
2018-03-09 23:59:10.839377: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 10 Chunks of size 786432 totalling 7.50MiB
2018-03-09 23:59:10.846921: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 884736 totalling 864.0KiB
2018-03-09 23:59:10.856114: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 921600 totalling 900.0KiB
2018-03-09 23:59:10.860966: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 3 Chunks of size 983040 totalling 2.81MiB
2018-03-09 23:59:10.873627: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 999424 totalling 976.0KiB
2018-03-09 23:59:10.884121: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 3 Chunks of size 1048576 totalling 3.00MiB
2018-03-09 23:59:10.896078: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 1105920 totalling 1.05MiB
2018-03-09 23:59:10.911837: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 3 Chunks of size 1179648 totalling 3.38MiB
2018-03-09 23:59:10.918942: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 1196032 totalling 1.14MiB
2018-03-09 23:59:10.924117: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 6 Chunks of size 1228800 totalling 7.03MiB
2018-03-09 23:59:10.928004: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 1327104 totalling 2.53MiB
2018-03-09 23:59:10.940812: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 1441792 totalling 1.38MiB
2018-03-09 23:59:10.946805: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 1548288 totalling 1.48MiB
2018-03-09 23:59:10.955775: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 1695744 totalling 1.62MiB
2018-03-09 23:59:10.962037: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 1769472 totalling 1.69MiB
2018-03-09 23:59:10.976069: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 1806336 totalling 3.45MiB
2018-03-09 23:59:10.986550: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 1880064 totalling 1.79MiB
2018-03-09 23:59:10.994738: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 21 Chunks of size 1990656 totalling 39.87MiB
2018-03-09 23:59:11.007173: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 2095616 totalling 2.00MiB
2018-03-09 23:59:11.013847: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 9 Chunks of size 2097152 totalling 18.00MiB
2018-03-09 23:59:11.025849: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 6 Chunks of size 2211840 totalling 12.66MiB
2018-03-09 23:59:11.037319: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 5 Chunks of size 2359296 totalling 11.25MiB
2018-03-09 23:59:11.045656: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 2457600 totalling 4.69MiB
2018-03-09 23:59:11.057011: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 2654208 totalling 2.53MiB
2018-03-09 23:59:11.063466: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 3112960 totalling 2.97MiB
2018-03-09 23:59:11.068127: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 3244032 totalling 3.09MiB
2018-03-09 23:59:11.072244: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 3538944 totalling 3.38MiB
2018-03-09 23:59:11.077723: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 3 Chunks of size 3686400 totalling 10.55MiB
2018-03-09 23:59:11.083995: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 28 Chunks of size 3981312 totalling 106.31MiB
2018-03-09 23:59:11.090131: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 5 Chunks of size 4718592 totalling 22.50MiB
2018-03-09 23:59:11.094342: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 5029888 totalling 4.80MiB
2018-03-09 23:59:11.099689: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 11 Chunks of size 6291456 totalling 66.00MiB
2018-03-09 23:59:11.104311: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 7962624 totalling 7.59MiB
2018-03-09 23:59:11.109324: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 6 Chunks of size 8388608 totalling 48.00MiB
2018-03-09 23:59:11.114862: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 6 Chunks of size 9437184 totalling 54.00MiB
2018-03-09 23:59:11.120205: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 9693440 totalling 9.24MiB
2018-03-09 23:59:11.126771: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 11534336 totalling 11.00MiB
2018-03-09 23:59:11.131637: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 5 Chunks of size 11796480 totalling 56.25MiB
2018-03-09 23:59:11.137180: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 11943936 totalling 22.78MiB
2018-03-09 23:59:11.143718: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 12515072 totalling 11.93MiB
2018-03-09 23:59:11.148261: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 7 Chunks of size 12582912 totalling 84.00MiB
2018-03-09 23:59:11.153382: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 18350080 totalling 35.00MiB
2018-03-09 23:59:11.157861: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 21261056 totalling 20.28MiB
2018-03-09 23:59:11.163508: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 5 Chunks of size 33554432 totalling 160.00MiB
2018-03-09 23:59:11.168811: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 37748736 totalling 72.00MiB
2018-03-09 23:59:11.173804: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 2 Chunks of size 58982400 totalling 112.50MiB
2018-03-09 23:59:11.181218: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 239224832 totalling 228.14MiB
2018-03-09 23:59:11.185733: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 424181760 totalling 404.53MiB
2018-03-09 23:59:11.191057: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:679] 1 Chunks of size 824147712 totalling 785.97MiB
2018-03-09 23:59:11.195922: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:683] Sum Total of in-use chunks: 2.45GiB
2018-03-09 23:59:11.201292: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:685] Stats:
Limit: 3282324684
InUse: 2630940672
MaxInUse: 2791483136
NumAllocs: 6116
MaxAllocSize: 1063372544

2018-03-09 23:59:11.215161: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:277] *******************************************xxxx____***********************xxxxxxxxxx
2018-03-09 23:59:11.221729: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\framework\op_kernel.cc:1192] Resource exhausted: OOM when allocating tensor with shape[1,256,518,902]
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1323, in _do_call
return fn(*args)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1302, in _run_fn
status, run_metadata)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1,256,518,902]
[[Node: model_1_2/conv4/convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](model_1_2/leaky_re_lu_4/LeakyRelu/Maximum, conv4/kernel/read)]]
[[Node: mul_20/_6359 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_124_mul_20", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\bottle.py", line 862, in _handle
return route.call(**args)
File "C:\ProgramData\Anaconda3\lib\site-packages\bottle.py", line 1740, in wrapper
rv = callback(*a, **ka)
File "server.py", line 145, in do_paint
fin = go_tail(painting, noisy=(resultDenoise == 'true'))
File "C:\Users\Abdul Rahman\Desktop\style2paints-master\server\ai.py", line 140, in go_tail
ip3: x[None, :, :, :]
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 889, in run
run_metadata_ptr)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _do_run
options, run_metadata)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1,256,518,902]
[[Node: model_1_2/conv4/convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](model_1_2/leaky_re_lu_4/LeakyRelu/Maximum, conv4/kernel/read)]]
[[Node: mul_20/_6359 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_124_mul_20", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Caused by op 'model_1_2/conv4/convolution', defined at:
File "server.py", line 4, in <module>
from ai import *
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\Abdul Rahman\Desktop\style2paints-master\server\ai.py", line 79, in <module>
noise_tail_op = noise_tail(tf.pad(ip3 / 255.0, [[0, 0], [3, 3], [3, 3], [0, 0]], 'REFLECT'))[:, 3:-3, 3:-3, :] * 255.0
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 619, in __call__
output = self.call(inputs, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2085, in call
output_tensors, _, _ = self.run_internal_graph(inputs, masks)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2236, in run_internal_graph
output_tensors = _to_list(layer.call(computed_tensor, **kwargs))
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\convolutional.py", line 168, in call
dilation_rate=self.dilation_rate)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 3335, in conv2d
data_format=tf_data_format)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 751, in convolution
return op(input, filter)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 835, in __call__
return self.conv_op(inp, filter)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 499, in __call__
return self.call(inp, filter)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 187, in __call__
name=self.name)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 630, in conv2d
data_format=data_format, name=name)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2956, in create_op
op_def=op_def)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[1,256,518,902]
[[Node: model_1_2/conv4/convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](model_1_2/leaky_re_lu_4/LeakyRelu/Maximum, conv4/kernel/read)]]
[[Node: mul_20/_6359 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_124_mul_20", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

127.0.0.1 - - [09/Mar/2018 23:59:11] "POST /paint HTTP/1.1" 500 746
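
The failing allocation above is a `[1, 256, 518, 902]` float tensor, i.e. feature maps for a roughly 518×902 input, so the sketch being colorized is simply too large for the available GPU memory. One workaround that assumes nothing about the project's internals is to downscale the sketch before sending it to the server. A minimal sketch of that idea (the function name and the 512-pixel threshold are illustrative, not part of style2paints):

```python
import numpy as np

def downscale_to_fit(img, max_side=512):
    """Nearest-neighbour downscale so the longest side is <= max_side.

    `img` is an H x W x C uint8 array; returns a (possibly) smaller copy.
    Smaller inputs mean smaller intermediate conv feature maps, which is
    what runs out of GPU memory in the traceback above.
    """
    h, w = img.shape[:2]
    scale = max(h, w) / max_side
    if scale <= 1.0:
        return img  # already small enough, nothing to do
    nh, nw = int(h / scale), int(w / scale)
    # Nearest-neighbour index mapping keeps this dependency-free.
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return img[rows][:, cols]
```

With a photo-scan-sized sketch of, say, 1036×1804 pixels, this brings the longest side down to 512 before the network ever sees it.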

Ability to zoom in on sketches

This would make it easier to precisely place points from the hint system, and would be useful for highly detailed sketches.

Using Docker

For anyone interested in using Docker: I found it pretty straightforward, so I wanted to share my minimal setup. Let me know if you would eventually fancy a pull request with all the required resources.

The following is for a CPU server, but it can easily be adapted for GPU.

Dockerfile

FROM gcr.io/tensorflow/tensorflow:latest-py3
# FROM gcr.io/tensorflow/tensorflow:latest-gpu-py3   # GPU variant

WORKDIR /style2paints

ADD . /style2paints

RUN chmod -R 777 /style2paints

RUN pip install --trusted-host pypi.python.org -r /style2paints/requirements.txt

RUN apt-get update && apt-get install -y \
	libsm6 \
	libxrender1 \
	libfontconfig1 \
	libxext6

# Optional: start the server automatically.
# WORKDIR /style2paints/server    # must run from this directory, otherwise the code won't find the models
# CMD ["python", "server.py", "-cpu"]

requirements.txt

tensorflow
# tensorflow_gpu
keras
chainer
cupy
bottle
gevent
h5py
opencv-python

Instructions

Copy Dockerfile and requirements.txt inside the style2paints directory.
Make sure to have the models inside the server directory as required by the project.

From style2paints run

docker build -t style2paint-cpu .                  # build the Docker image
docker run -p 8000:8000 -it style2paint-cpu bash   # start a container in interactive mode

From inside the container run (unless automatically started in Dockerfile)

cd server    # first move into directory, otherwise the code won't be able to find the models
python server.py -cpu    # start server in cpu mode

From the host machine go to http://localhost:8000/

A palette beside the reference picture is needed

Sometimes the colors of the reference image can't fully satisfy me, and I need to add some other colors. Although "limiter removal" can bring up the color palette and I can modify some colors locally, the overall coloring ends up totally different from the reference image.

What do `k_resize`, `d_resize`, `s_resize`, and `m_resize` do?

I'm trying to go through the code, but I can't figure out what these functions do. Obviously they perform resizing based on some parameter, but it's not clear what. As far as I've been able to tell:

- `k_resize`: resize based on `k`, a semi-arbitrary value
- `m_resize`: resize to the minimum value of
- `s_resize`: resize parameter A to the shape of parameter B
- `d_resize`: no clue
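
As a toy illustration of the "resize A to the shape of B" reading, here is roughly what such a helper might do, assuming numpy arrays (`s_resize_like` is an illustrative name, not the repository's actual function):

```python
import numpy as np

# Hypothetical sketch: resize `x` to the spatial shape of reference array `s`,
# using nearest-neighbour index mapping so it stays dependency-free.
def s_resize_like(x, s):
    h, w = s.shape[:2]
    rows = np.arange(h) * x.shape[0] // h
    cols = np.arange(w) * x.shape[1] // w
    return x[rows][:, cols]
```

The real code likely uses `cv2.resize` with interpolation instead of index mapping, but the shape contract would be the same.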

Black screen and stops working

In Chrome, after some actions, perhaps when something like a "limit" of custom coloring instructions is reached, the page just shows a full black screen and stops working.

Training: Algorithm and code

I have the data to train style2paints, but how do I train it? Can you explain the method for transferring style in detail?

About cleaning the training dataset

Thanks for sharing your brilliant work. I plan to implement something similar to your work, and I noticed that you mention the nico-opendata dataset.

I got the dataset, but I found that it is really a mess. Only a few of the images are illustrations of anime characters; most of them are scribbles. I guess you must have cleaned the dataset. Would you mind sharing your advice on how to do this?
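
Not the authors' method, but as a crude, assumption-heavy first pass one could filter by image size and colourfulness, since scribbles tend to be small and near-monochrome (both thresholds below are arbitrary guesses):

```python
import numpy as np

def looks_like_illustration(img, min_side=512, min_saturation=0.05):
    """Crude heuristic filter for an H x W x 3 uint8 image.

    Rejects images that are too small or whose channels barely differ
    (i.e. near-grayscale scribbles). Thresholds are arbitrary guesses
    and would need tuning against a hand-labelled sample.
    """
    h, w = img.shape[:2]
    if min(h, w) < min_side:
        return False
    rgb = img[:, :, :3].astype(np.float64) / 255.0
    # Per-pixel saturation proxy: spread between max and min channel.
    sat = (rgb.max(axis=2) - rgb.min(axis=2)).mean()
    return sat >= min_saturation
```

Anything passing a filter like this would still need manual spot-checking; it only thins the obvious junk.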

A question about the last layer in encoder

May I ask why the size of the final layer is 256 × 256 × 64 rather than 256 × 256 × 16? From my perspective, the layers should become thinner and thinner through the decoder. Also, the figure in the paper shows the final layer as thinner than the previous one.

Error if the sketch is not reloaded before every colorization

If I press "colorize" without having loaded a new sketch, I get this Python error in the console:

127.0.0.1 - - [2017-09-27 14:22:48] "GET /results/Sep_27_14_22_42__658.jpg?t=0.5077750082463517 HTTP/1.1" 200 49476
0.003000
received
Traceback (most recent call last):
  File "E:\Python35\lib\site-packages\bottle.py", line 862, in _handle
    return route.call(**args)
  File "E:\Python35\lib\site-packages\bottle.py", line 1740, in wrapper
    rv = callback(*a, **ka)
  File "server.py", line 258, in do_paint
    raw_sketch = from_png_to_jpg(sketchDataURL)
  File "server.py", line 395, in from_png_to_jpg
    color = map[:, :, 0:3].astype(np.float) / 255.0
TypeError: 'NoneType' object is not subscriptable
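The traceback suggests that the image decode returned None: OpenCV's cv2.imdecode returns None rather than raising when it cannot decode its input, and the next line then subscripts it. A guard like the following would turn the crash into a clear error. This is a hypothetical sketch, not the actual server code; the decoder is injected as a parameter so the example stays self-contained:

```python
import numpy as np

def decode_sketch(png_bytes, decoder):
    """Hypothetical guard around the decode step inside from_png_to_jpg.

    `decoder` stands in for cv2.imdecode, which returns None on bad or
    empty input instead of raising.
    """
    img = decoder(png_bytes)
    if img is None:
        raise ValueError(
            "no sketch data received; reload a sketch before pressing 'colorize'")
    # np.float was removed in NumPy 1.24; use np.float32 (or float) instead
    return img[:, :, 0:3].astype(np.float32) / 255.0
```

Raising a descriptive ValueError (or returning an HTTP 400 from the bottle handler) would tell the user to reload the sketch instead of dumping a TypeError into the console.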

Two questions

Hello, I'd like to ask two questions:
1. Where do the line-art sketches in the training data come from?
2. The diagrams of your model are amazing. What software did you use to draw these 3D figures? Was it Visio?

saveable/editable hint file?

This would be useful for not losing progress when WebGL is dropped or the page otherwise crashes, as well as for other reasons (like seeing what happens when the hint input of one image is applied to another).
I'm not sure whether a transparent image or a CSV/JSON text file would be better; that's up to your discretion.

Lambda value & training

Hello, this repository is so cool!! thank you for your effort.

I have two questions,

  1. In the paper, you specify the alpha and beta values as 0.3 and 0.9.

However, you did not give the lambda value in Eq. 5.

image

What lambda value did you use?

  2. Your L1 loss consists of three terms, as below.

image

How are the generator and guide decoders 1 and 2 trained?

I think the generator, guide decoder 1, and guide decoder 2 weights are updated independently, each using the loss from its own branch.

That is, the first term (image) is used to update the generator,
the second term (image) is used to update guide decoder 1 (including the first through mid-level layers of the generator), and
the third term (image) is used to update guide decoder 2 (likewise).
Is this right?

However, from the paper I understand that the sum of all L1 losses (the result of Eq. 1) is used to update either only the generator, or the generator together with guide decoders 1 and 2.

This is very confusing.
