apchenstu / facial_details_synthesis

[ICCV2019 Oral] Photo-Realistic Facial Details Synthesis from Single Image

License: MIT License

Python 1.94% GLSL 0.06% C++ 63.92% PowerShell 0.10% CMake 0.58% C 31.81% Objective-C 1.59%

facial_details_synthesis's Introduction

This is the code repo for Facial Details Synthesis From Single Input Image. [Paper] [Supplemental Material] [Video]

This repository consists of 5 individual parts: DFDN, emotionNet, landmarkDetector, proxyEstimator and faceRender.

  • DFDN is used to estimate the displacement map; its network architecture is based on junyanz's pix2pix
  • For the landmark detector and the FACS-based expression detector (you can choose between this and emotionNet), we use a simplified version of OpenFace
  • proxyEstimator generates the proxy mesh using an expression/emotion prior. It is adapted from patrikhuber's fantastic work eos
  • faceRender is used for interactive rendering

We would like to thank each of the related projects for their great work.

Facial Details Synthesis

We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis. On proxy generation, we conduct emotion prediction to determine a new expression-informed proxy. On detail synthesis, we present a Deep Facial Detail Net (DFDN) based on Conditional Generative Adversarial Net (CGAN) that employs both geometry and appearance loss functions. For geometry, we capture 366 high-quality 3D scans from 122 different subjects under 3 facial expressions. For appearance, we use additional 163K in-the-wild face images and apply image-based rendering to accommodate lighting variations. Comprehensive experiments demonstrate that our framework can produce high-quality 3D faces with realistic details under challenging facial expressions.

Features

  • Functionality
    • Proxy estimation with expression/emotion prior
    • Facial details prediction, e.g. wrinkles
    • Renderer for results (proxy mesh + normalMap/displacementMap)
  • Input: Single image or image folder
  • Output: Proxy mesh & texture, detailed displacementMap and normalMap
  • OS: Windows 10

Set up environment

  1. Install the Windows version of Anaconda (Python 3.7) and PyTorch (a setup sketch follows this list)
  2. [Optional] Install TensorFlow and Keras if you want to use the emotion prior (emotionNet)
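
For reference, a minimal setup might look like the following sketch; the environment name is arbitrary and the package versions are unpinned assumptions (the repo does not specify exact versions):

    conda create -n facialdetails python=3.7
    conda activate facialdetails
    conda install pytorch torchvision -c pytorch
    pip install tensorflow keras   # [Optional] only needed for the emotion prior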

Released version

  1. Download the released package.

    Release v0.1.0 [Google Drive, OneDrive]

  2. Download models and pre-trained weights.

    DFDN checkpoints [Google Drive, OneDrive] unzip to ./DFDN/checkpoints

    landmark models [Google Drive, OneDrive] unzip to ./landmarkDetector

    [Optional] emotionNet checkpoints [Google Drive, OneDrive] unzip to ./emotionNet/checkpoints

  3. Install BFM2017

    • Install eos-py by pip install --force-reinstall eos-py==0.16.1

    • Download BFM2017 and copy model2017-1_bfm_nomouth.h5 to ./proxyEstimator/bfm2017/

    • Run python convert-bfm2017-to-eos.py to generate bfm2017-1_bfm_nomouth.bin in the ./proxyEstimator/bfm2017/ folder

  4. Have fun! (A sketch of the expected folder layout follows.)
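
If everything is unpacked as described above, the release folder should look roughly like this sketch (only the paths mentioned in the steps are shown):

    released_v0.1/
        DFDN/checkpoints/                  # DFDN pre-trained weights (step 2)
        landmarkDetector/                  # landmark models (step 2)
        emotionNet/checkpoints/            # [Optional] emotionNet weights (step 2)
        proxyEstimator/bfm2017/
            model2017-1_bfm_nomouth.h5     # downloaded BFM2017 model (step 3)
            bfm2017-1_bfm_nomouth.bin      # generated by convert-bfm2017-to-eos.py (step 3)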

Usage

  • For proxy estimation,

    python proxyPredictor.py -i path/to/input/image -o path/to/output/folder [--FAC 1][--emotion 1]
    
    • For batch processing, you can set -i to an image folder.

    • For prior features, you can optionally choose one of two priors: the FACS-based expression prior (--FAC 1) or the emotion prior (--emotion 1).

    example: python proxyPredictor.py -i ./samples/proxy -o ./results

  • For facial details estimation,

    python facialDetails.py -i path/to/input/image -o path/to/output/folder
    

    example:

    python facialDetails.py -i ./samples/details/019615.jpg -o ./results

    python facialDetails.py -i ./samples/details -o ./results

  • Note: we highly suggest cropping the input image to a square before processing (a minimal crop sketch follows).
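
A minimal center-crop sketch with Pillow (the file names here are placeholders):

    from PIL import Image

    img = Image.open('input.jpg')
    side = min(img.size)                     # length of the shorter edge
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img.crop((left, top, left + side, top + side)).save('input_square.jpg')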

Compiling

We suggest you directly download the released package for convenience. If you are interested in compiling the source code, please go through the following guidelines.

  1. First, clone the source code,

    git clone https://github.com/apchenstu/Facial_Details_Synthesis.git --recursive

  2. cd to the root of each individual part, then start compiling:

    landmarkDetector

    • Execute download_libraries.ps1 and download_models.ps1 with PowerShell.

    • Open OpenFace.sln using Visual Studio and compile the code.

      After compiling, the executable is located at /x64/Release/FaceLandmarkImg.exe

    textureRender

    • Install with

       mkdir build && cd build
       cmake -A X64 -D CMAKE_PREFIX_PATH=../thirds ../src
      
    • Open textureRender.sln using Visual Studio and compile the code.

      After compiling, the executable is located at Release/textureRender.exe

    proxyEstimator

    • Install vcpkg

    • Install the required packages from the vcpkg folder: ./vcpkg install opencv boost --triplet x64-windows

    • Install with,

      mkdir build && cd build
      cmake .. -A X64 -DCMAKE_TOOLCHAIN_FILE=[vcpkg root]\scripts\buildsystems\vcpkg.cmake
      
    • Open eos.sln using Visual Studio and compile the code.

      After compiling, the executable is located at Release/eos.exe

      For more details, please refer to this repo.

    faceRender

    • Install with

       mkdir build && cd build
       cmake -A X64 -D CMAKE_PREFIX_PATH=../thirds ../src
      
    • Open hmrenderer.sln using Visual Studio and compile the code.

      After compiling, the executable is located in build\Release

      Note: The visualizer currently only supports mesh + normalMap, but will also support displacementMap in the near future.

    After compiling, please download the DFDN checkpoints and unzip them to ./DFDN/checkpoints. Then you are ready to go.

Others

On the way .....

Q & A

  1. Why is the proxy result different from what is shown in the paper?

    The released version uses a lower-resolution input and a different expression dictionary, which are more robust in the general case. Please try this if you want to obtain results similar to those in the paper.

Citation

If you find this code useful to your research, please consider citing:

@inproceedings{chen2019photo,
  title={Photo-Realistic Facial Details Synthesis from Single Image},
  author={Chen, Anpei and Chen, Zhang and Zhang, Guli and Mitchell, Kenny and Yu, Jingyi},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={9429--9439},
  year={2019}
}

facial_details_synthesis's People

Contributors

apchenstu, gg-z, lansburych, msurguy


facial_details_synthesis's Issues

error with pip install eos-py==0.16.1

Hello author, thank you very much for sharing. My system is Win10. Since your code requires eos-py version 0.16.1, pip installing that version (or any earlier version) fails with the following error:
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'Release', '--', '/m']' returned non-zero exit status 1
Versions of eos-py after 0.16.1 install fine via pip, but then I run into the problem in #14. Is there any way to solve this?

Value range of shape displacement

Hello Anpei,

Thanks a lot for the great work, very impressive!
I am trying to generate the final detailed mesh from 'result.displacementmap.png' and 'result.obj'. I can see that your network predicts the shape displacement in the range of [-1,1], and you finally scale them into [0,255].
I wonder what is the correct scale factor if I want to apply this displacement to the proxy mesh? Obviously, none of the aforementioned ranges is valid.

Many thanks,
Shiyang
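
For reference, the released facialDetails.py (quoted in a later issue below) encodes the map as displacementMap = (displacementMap + 1) / 2 * 65535 into a 16-bit PNG, so a sketch of the decode back to the network's [-1, 1] range would be as follows; the physical scale factor asked about here is not documented in the repo:

    import numpy as np
    from PIL import Image

    px = np.asarray(Image.open('result.displacementmap.png'), dtype=np.float32)
    d = px / 65535.0 * 2.0 - 1.0   # inverts (d + 1) / 2 * 65535; real-world units remain unspecified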

CUDA out of memory

Hi, Anpei.
I ran "python facialDetails.py -i ./samples/details/019615.jpg -o ./results --emotion 1"
and the error came out: CUDA out of memory.
My GPU is a GTX 1080 with 8 GB. How much memory is needed?
Can I set any parameters, like batch size, to reduce memory use?

render and nose problems

Hi, could I just run render_texture? I find that it only depends on textureRender.exe and the image path. But when I execute the render_texture function, I only get result.isomap, which is not right. Could you please tell me the reason? On the other hand, the nose in the result is not like what is shown in the paper. I know there are occlusions; how do you handle them to get results like the paper's? One image is result.isomap from running only render_texture, and the other shows the nose problem, as follows. Looking forward to hearing from you! @apchenstu
result isomap
image

Model loading error while running facialDetails.py

I'm sorry to disturb you. There is an error when I run facialDetails.py. I wonder if it's related to the PyTorch version I use, which is 0.4.0 with cuda91. Here is the error:

Traceback (most recent call last):
File "facialDetails.py", line 307, in
main(args)
File "facialDetails.py", line 233, in main
DFDN = {'forehead': create_model(args)}
File "F:\pycharmProgram\released_v0.1\DFDN\models\models.py", line 10, in create_model
model.initialize(opt)
File "F:\pycharmProgram\released_v0.1\DFDN\models\pix2pix_model.py", line 79, in initialize
self.load_network(self.netG, 'G', opt.which_epoch)
File "F:\pycharmProgram\released_v0.1\DFDN\models\base_model.py", line 53, in load_network
network.load_state_dict(torch.load(save_path))
File "D:\Anaconda3\envs\py3.6\lib\site-packages\torch\nn\modules\module.py", line 721, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for UnetGenerator:
Unexpected key(s) in state_dict: "model.model.1.model.2.num_batches_tracked", "model.model.1.model.3.model.2.num_batches_tracked", "model.model.1.model.3.model.3.model.2.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.2.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.3.model.2.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.2.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.3.model.4.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.6.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.3.model.6.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.6.num_batches_tracked", "model.model.1.model.3.model.3.model.6.num_batches_tracked", "model.model.1.model.3.model.6.num_batches_tracked", "model.model.1.model.6.num_batches_tracked".

Waiting for your answer, thank you

.box file not created

Hello Anpei,

I am trying to follow the instructions and run the sample file; however, I got the following errors. I recompiled FaceLandmarkImg.exe and moved everything from its output folder to replace all the same-named files in the landmarkDetector folder.

This is the result I got after trying to run the sample command for facialDetails.py. Where could I get or make the .box file?

Thank you very much,
Linkun

(base) C:\Users\LINK\OneDrive - University of Waterloo\SYDE 671 project\released_v0.1>python facialDetails.py -i ./samples/details/019615.jpg -o ./results
Load preTrain model done.
Load preTrain model done.
Loading the model
Reading the landmark detector/tracker from: model/main_ceclm_general.txt
Reading the landmark detector module from: model\cen_general.txt
Reading the PDM module from: model\pdms/In-the-wild_aligned_PDM_68.txt....Done
Reading the Triangulations module from: model\tris_68.txt....Done
Reading the intensity CEN patch experts from: model\patch_experts/cen_patches_0.25_of.dat....Done
Reading the intensity CEN patch experts from: model\patch_experts/cen_patches_0.35_of.dat....Done
Reading the intensity CEN patch experts from: model\patch_experts/cen_patches_0.50_of.dat....Done
Reading the intensity CEN patch experts from: model\patch_experts/cen_patches_1.00_of.dat....Done
Reading part based module....left_eye_28
Reading the landmark detector/tracker from: model\model_eye/main_clnf_synth_left.txt
Reading the landmark detector module from: model\model_eye\clnf_left_synth.txt
Reading the PDM module from: model\model_eye\pdms/pdm_28_l_eye_3D_closed.txt....Done
Reading the intensity CCNF patch experts from: model\model_eye\patch_experts/left_ccnf_patches_1.00_synth_lid_.txt....Done
Reading the intensity CCNF patch experts from: model\model_eye\patch_experts/left_ccnf_patches_1.50_synth_lid_.txt....Done
Done
Reading part based module....right_eye_28
Reading the landmark detector/tracker from: model\model_eye/main_clnf_synth_right.txt
Reading the landmark detector module from: model\model_eye\clnf_right_synth.txt
Reading the PDM module from: model\model_eye\pdms/pdm_28_eye_3D_closed.txt....Done
Reading the intensity CCNF patch experts from: model\model_eye\patch_experts/ccnf_patches_1.00_synth_lid_.txt....Done
Reading the intensity CCNF patch experts from: model\model_eye\patch_experts/ccnf_patches_1.50_synth_lid_.txt....Done
Done
Reading the landmark validation module....Done
Model loaded
Reading the AU analysis module from: AU_predictors/main_static_svms.txt
Reading the AU predictors from: AU_predictors\AU_all_static.txt... Done
Reading the PDM from: AU_predictors\In-the-wild_aligned_PDM_68.txt... Done
Reading the triangulation from:AU_predictors\tris_68_full.txt... Done
Reading the MTCNN face detector from: model/mtcnn_detector/MTCNN_detector.txt
Reading the PNet module from: model/mtcnn_detector\PNet.dat
Reading the RNet module from: model/mtcnn_detector\RNet.dat
Reading the ONet module from: model/mtcnn_detector\ONet.dat
Error: Could not open the image: C:\Users\LINK\OneDrive
===> Landmarks detection done.

===> estimating proxy of C:\Users\LINK\OneDrive - University of Waterloo\SYDE 671 project\released_v0.1\samples\details\019615.jpg
Traceback (most recent call last):
File "C:\Users\LINK\Anaconda3\lib\shutil.py", line 550, in move
os.rename(src, real_dst)
FileNotFoundError: [WinError 2] The system cannot find the file specified: './landmarkDetector\processed\019615.box' -> 'C:\Users\LINK\OneDrive - University of Waterloo\SYDE 671 project\released_v0.1\results\019615\019615.box'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "facialDetails.py", line 303, in
main(args)
File "facialDetails.py", line 250, in main
move_landmark(args.landmark_exe_path,save_path,base_name[0])
File "facialDetails.py", line 51, in move_landmark
shutil.move(os.path.join(landmark_exe_path,'processed',name+'.box'),os.path.join(save_path,name+'.box'))
File "C:\Users\LINK\Anaconda3\lib\shutil.py", line 564, in move
copy_function(src, real_dst)
File "C:\Users\LINK\Anaconda3\lib\shutil.py", line 263, in copy2
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "C:\Users\LINK\Anaconda3\lib\shutil.py", line 120, in copyfile
with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: './landmarkDetector\processed\019615.box'

(base) C:\Users\LINK\OneDrive - University of Waterloo\SYDE 671 project\released_v0.1>

Regarding the environment setup

Hi Anpei,

Thank you so much for releasing the code of your great work!

I am currently setting up the environment for your project and am not so sure about the versions of the required packages. Would you kindly specify the package versions in the 'Set up environment' section of the README (e.g., the exact versions of PyTorch/TF you tested on)? It would be extremely helpful for followers re-running the code.

Thank you again for sharing the material!

Question regarding code

Hello,
first of all thanks for publishing this awesome project @apchenstu.

I was looking through the code of the facialDetails.py file and have a question regarding this python script.
During creation of the patches, you stitch them together using the weights and accumulate the weights in the count array. But why are you dividing each pixel by the weights in these two lines?

mask = count > 0
img[mask] /= count[mask]

My guess is that the result is somehow "normalized", but how exactly is it normalized? Such that it ends up in a range between 0 and 255?
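
For context, this is the standard weighted-average stitching pattern. A self-contained sketch (the sizes, patches, and weights below are made up and may differ from what DFDN actually uses):

    import numpy as np

    H, W, P = 256, 256, 128                      # image and patch sizes (placeholders)
    img = np.zeros((H, W), dtype=np.float32)     # weighted sum of patch values
    count = np.zeros((H, W), dtype=np.float32)   # sum of weights per pixel

    # two hypothetical overlapping patches: (values, weight, top-left corner)
    patches = [(np.full((P, P), 10.0), 1.0, (0, 0)),
               (np.full((P, P), 20.0), 1.0, (64, 64))]
    for patch, weight, (y, x) in patches:
        img[y:y+P, x:x+P] += patch * weight
        count[y:y+P, x:x+P] += weight

    mask = count > 0
    img[mask] /= count[mask]   # overlapping pixels become a weighted average

So the division is not a rescaling to [0, 255]; it turns the accumulated sums back into averages, which keeps each pixel in whatever intensity range the patches already had.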

No obj files are generated

Hello Anpei,

I couldn't generate any obj files under the result directory when running a snapshot of the code downloaded on 11/19/2019. All pretrained models and binaries were downloaded following the README.md.

Is the obj file generated inside fit-model.exe?

Thanks,
Mike

textureRender does not output the result, and no error is shown

Thank you for sharing your great work, first of all.

I'm trying to set up the code and have successfully obtained the results of "landmarkDetector" and "proxyEstimator".

But textureRenderer doesn't return any result, and there are no errors.

From print statements (sorry, I'm not familiar with OpenGL), it seems that the code does not proceed past the glFinish() call on line 36 of face.cpp.

I checked all the arguments of textureRender.exe but couldn't find any problem with them (the obj seems fine when I display it in MeshLab).

I built the source with Visual Studio 2017 and ran it with:
textureRender.exe "F:\\Dev\\PyTorchProjects\\2019_ICCV_A. Chen\\Facial_Details_Synthesis\\imgs_p\\000001\\result.obj" 3 "F:\\Dev\\PyTorchProjects\\2019_ICCV_A. Chen\\Facial_Details_Synthesis\\imgs_p\\000001\\result.isomap.png" 0 "F:\\Dev\\PyTorchProjects\\2019_ICCV_A. Chen\\Facial_Details_Synthesis\\imgs\\000001.jpg" "F:\\Dev\\PyTorchProjects\\2019_ICCV_A. Chen\\Facial_Details_Synthesis\\imgs_p\\000001\\result.affine_from_ortho.txt" ./src/shaders

Any suggestions about this?
Thanks.

How can I get the albedo map and specular map?

I am really sorry to raise this question. This is not my research area, but I am really interested in your work. Can you tell me how to get the albedo map and the specular map from this code? There are too many files to search through. Thanks very much!

textureRender.exe errors.

After downloading the requested files, I was able to successfully run convert-bfm2017-to-eos.py and create the respective .bin file.

But when trying to run both facialDetails.py and proxyPredictor.py, as per the provided examples, I keep getting errors from textureRender.exe.

In facialDetails.py, in loadimage, at img = Image.open(dataroot): I can't get the requested result.isomap.png file to be generated, and no other error message is displayed.
When running textureRender.exe, I can't get the savePath + '.isomap.png' file to be generated. The output of os.system(cmd) is -1. I've tried different paths for the executable, save name, and shaders folder, but still can't get the output image to be generated.

I'm running this code through a Visual Studio 2019 solution and had to make only one modification to run it this far, which was adding these lines to fit_model:

base_path = os.path.dirname(os.path.abspath(imagePath)) + "\\..\\..\\proxyEstimator\\bfm2017\\"
file_path = ' -p ' + base_path + 'ibug_to_bfm2017-1_bfm_nomouth.txt -c ' + base_path + 'bfm2017-1_bfm_nomouth_model_contours.json' + \
            ' -e ' + base_path + 'bfm2017-1_bfm_nomouth_edge_topology.json -m ' + base_path + 'bfm2017-1_bfm_nomouth.bin'

Any clue how to solve this?
I also tried running it in a regular Anaconda environment, with both Python 3.7 and 3.6, with no success.

These are my requirements.txt:

cycler==0.10.0
decorator==4.4.2
eos-py==0.16.1
h5py==2.10.0
imageio==2.8.0
kiwisolver==1.2.0
matplotlib==3.2.1
networkx==2.4
numpy==1.18.2
Pillow==7.1.1
pip==20.0.2
pyparsing==2.4.7
python-dateutil==2.8.1
PyWavelets==1.1.1
PyYAML==5.3.1
scikit-image==0.16.2
scipy==1.4.1
setuptools==39.0.1
six==1.14.0
torch==1.2.0+cu92
torchvision==0.4.0+cu92

AttributeError: 'Image' object has no attribute 'ndim'

Thank you for your work. I ran facialDetails.py and got the following error:

Traceback (most recent call last):
File "facialDetails.py", line 307, in
main(args)
File "facialDetails.py", line 284, in main
displacementMap, normalMap = predict_details(save_texture_path, args)
File "facialDetails.py", line 171, in predict_details
testset, areas = loadimage(image_root,args)
File "facialDetails.py", line 134, in loadimage
hsv = np.array(rgb2hsv(img))
File "D:\python37\lib\site-packages\skimage\color\colorconv.py", line 249, in
input_is_one_pixel = rgb.ndim == 1
AttributeError: 'Image' object has no attribute 'ndim'

What is the cause of this error, and how can I solve it? Thank you.

google colab compile error

I tried to compile the FaceLandmarkImg.exe file in Google Colab, but after running the command below I got this error:

!/usr/local/cuda/bin/nvcc -o FaceLandmarkImg.exe ./main.cpp

nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
./main.cpp:37:10: fatal error: dlib/image_processing/frontal_face_detector.h: No such file or directory
 #include <dlib/image_processing/frontal_face_detector.h>
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

I followed these instructions in order to compile C++ in colab
https://medium.com/@iphoenix179/running-cuda-c-c-in-jupyter-or-how-to-run-nvcc-in-google-colab-663d33f53772

The output of textureRender.exe is missing

Thanks for sharing the code, but I cannot get the result from textureRender. The following image is the result I got after executing proxyPredictor.py, but result.isomap.png is missing:

image

I also copied "glew32.dll" and "opencv_world_330.dll" to the folder where textureRender resides in, but I still cannot get the result.
I tested the code in anaconda environment, and the versions of dependent packages are as follows:
Python 3.6.9
eos-py 0.16.1
torch 1.3.1
torchvision 0.2.2
tensorflow-gpu 2.0.0
And my graphics card is RTX 2080. My operating system is Windows 10.

Any help will be appreciated. Thanks in advance!

LOG (Cannot open file [D:\Folder_1\init_facial_fidelity_capture\results\019615\result.obj] )

When I am executing the following command in the PyCharm terminal

python facialDetails.py -i ./samples/details/019615.jpg -o ./results

then in the output, I am getting the following error:

Load preTrain model done.
Load preTrain model done.
Loading the model
Model loaded
Starting tracking
D:\PycharmProjects\init_facial_fidelity_capture\samples\details\019615.jpg
===> Landmarks detection done.

===> estimating proxy of D:\PycharmProjects\init_facial_fidelity_capture\samples\details\019615.jpg
Using expression prior from Facial Action Coding features.
Error loading the Morphable Model: Failed to read 16 bytes from input stream! Read 9
===> predicting details of D:\PycharmProjects\init_facial_fidelity_capture\samples\details\019615.jpg

LOG (Cannot open file [D:\PycharmProjects\init_facial_fidelity_capture\results\019615\result.obj])
~In file (e:\face2face\code\release_v1\src\face_rendering\src\includes\ind\loaders.cc) , line (81)
ERROR ()
~In file (e:\face2face\code\release_v1\src\face_rendering\src\includes\ind\loaders.cc) , line (83)

Please help me in solving this.

Why does it always run out of GPU memory?

I downloaded your project.
My environment list:
win 10
i5 9400f
rtx2060 6GB
When I run your project, it always reports running out of GPU memory.
What platform can run your project? Please tell me!

Question about generating the proxy?

(Pytorch) D:\FacialDetailsSynthesis\released_v0.1>python proxyPredictor.py -i ./samples/proxy -o ./results
Attempting to read from directory: D:\FacialDetailsSynthesis\released_v0.1\samples\proxy
Loading the model
Model loaded
Starting tracking
D:\FacialDetailsSynthesis\released_v0.1\samples\proxy\000001.jpg
D:\FacialDetailsSynthesis\released_v0.1\samples\proxy\000005.jpg
D:\FacialDetailsSynthesis\released_v0.1\samples\proxy\000007.jpg
D:\FacialDetailsSynthesis\released_v0.1\samples\proxy\000008.jpg
D:\FacialDetailsSynthesis\released_v0.1\samples\proxy\000011.jpg
===> Landmarks detection done.

===> estimating proxy of D:\FacialDetailsSynthesis\released_v0.1\samples\proxy\000001.jpg
Error loading the Morphable Model: Failed to read 16 bytes from input stream! Read 9
===> estimating proxy of D:\FacialDetailsSynthesis\released_v0.1\samples\proxy\000005.jpg
Error loading the Morphable Model: Failed to read 16 bytes from input stream! Read 9
===> estimating proxy of D:\FacialDetailsSynthesis\released_v0.1\samples\proxy\000007.jpg
Error loading the Morphable Model: Failed to read 16 bytes from input stream! Read 9
===> estimating proxy of D:\FacialDetailsSynthesis\released_v0.1\samples\proxy\000008.jpg
Error loading the Morphable Model: Failed to read 16 bytes from input stream! Read 9
===> estimating proxy of D:\FacialDetailsSynthesis\released_v0.1\samples\proxy\000011.jpg
Error loading the Morphable Model: Failed to read 16 bytes from input stream! Read 9

Hi, apchenstu:
What's the problem? I don't know how to solve it.
Looking forward to your reply!

proxy Predictor Using Emotion Issues

Thanks for sharing your Facial_Details_Synthesis code.
I have a problem with the proxy predictor when using the emotion prior.


Error Information:

D:\file_Qian\environment\released_v0.1>python proxyPredictor.py -i ../imagefolder -o ../results --emotion 1
Using TensorFlow backend.
2020-05-05 08:38:40.403957: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-05-05 08:38:43.591496: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-05-05 08:38:43.640138: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1650 computeCapability: 7.5
coreClock: 1.56GHz coreCount: 16 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 119.24GiB/s
2020-05-05 08:38:43.649424: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-05-05 08:38:43.685206: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-05-05 08:38:43.723567: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-05-05 08:38:43.744489: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-05-05 08:38:43.798967: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-05-05 08:38:43.827650: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-05-05 08:38:43.904791: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-05-05 08:38:43.910319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-05-05 08:38:43.915798: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-05-05 08:38:43.922650: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1650 computeCapability: 7.5
coreClock: 1.56GHz coreCount: 16 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 119.24GiB/s
2020-05-05 08:38:43.933750: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-05-05 08:38:43.938871: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-05-05 08:38:43.944149: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-05-05 08:38:43.950168: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-05-05 08:38:43.955576: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-05-05 08:38:43.961239: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-05-05 08:38:43.967510: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-05-05 08:38:43.973514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-05-05 08:38:46.827408: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-05 08:38:46.833201: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-05-05 08:38:46.835981: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-05-05 08:38:46.842118: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2915 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1650, pci bus id: 0000:01:00.0, compute capability: 7.5)
Loaded emotion model from disk
Attempting to read from directory: D:\file_Qian\environment\imagefolder
Loading the model
Model loaded
Starting tracking
D:\file_Qian\environment\imagefolder\sample2.jpg
===> Landmarks detection done.

===> estimating proxy of D:\file_Qian\environment\imagefolder\sample2.jpg
Using expression piror from emotion features.

Traceback (most recent call last):
File "proxyPredictor.py", line 198, in
main(args)
File "proxyPredictor.py", line 170, in main
features_emotion = inferanceOneImg(args, img_name, feature_model)
File "proxyPredictor.py", line 45, in inferanceOneImg
return feature_model.predict(img).reshape((-1))
File "C:\Users\LEGION\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py", line 1441, in predict
x, _, _ = self._standardize_user_data(x)
File "C:\Users\LEGION\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py", line 579, in _standardize_user_data
exception_prefix='input')
File "C:\Users\LEGION\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training_utils.py", line 135, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (1, 64, 64, 2, 1)



Issue:1

ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (1, 64, 64, 2, 1)

It was as if the input somehow lost or gained a dimension.

I used a 700 x 700 jpg as input. At first I thought I had set up the environment incorrectly, so I reinstalled it, but that didn't help.

Issue:2

I have another question: in another issue I saw that you can skip downsampling to get a finer output, but I'm not familiar with the code. Which part should I change so it does not downsample?

Cannot estimate normal map and displacement map and cannot check the output

Thank you for sharing the code.

I tried it and fixed some bugs, shown below, because the code didn't work for me.

Fixed bugs

facialDetails.py: around line 39

def load_image(img_path, target_size):
    img = Image.open(img_path).convert('LA')
    if img is None:
        print(img_path)
    im = np.asarray(img.resize(target_size, Image.ANTIALIAS))
    # im = np.asarray(img.resize(img, target_size,, Image.ANTIALIAS))  # original, broken

facialDetails.py: around line 128

def loadimage(dataroot, args):
   img = Image.open(dataroot)
   img = img.resize((args.imageW, args.imageW))
   # img = Image.resize(Image.open(dataroot), (args.imageW, args.imageW))  # original, broken
   ...

facialDetails.py: around line 280

   displacementMap = (displacementMap+1)/2*65535
   displacementMap = displacementMap.astype("uint16")
   array_buffer = displacementMap.tobytes()
   img = Image.new("I", displacementMap.T.shape)
   img.frombytes(array_buffer, "raw", "I;16")
   save_path = os.path.join(args.output_path, base_name[0], 'result.displacementmap.png')
   img.save(save_path)
   # Image.fromarray(displacementMap).save(save_path)

After that, I added dlls not included in your released package to ./textureRender

  • glew32.dll
  • opencv_world330.dll

Finally, I ran the command below, and the code worked well apart from the estimation and rendering parts.

python facialDetails.py -i ./samples/details/019615.jpg -o ./results

Error in the estimation parts

I think the weight files of DFDN are wrong, because the output displacement map and normal map contain many noisy patches, as if the networks were untrained.
Could you check them?

Error in the rendering parts

Also, I cannot check the output with hmrenderer.exe, because the material file is wrong:
it doesn't contain the normal map or the displacement map.

result.mtl

newmtl FaceTexture
map_Kd result.isomap.png

Moreover, the following part of facialDetails.py doesn't work well.

 if args.visualize:
    args.face_render_path = os.path.abspath(args.face_render_path)
    cmd = '%s/hmrenderer.exe %s %s %s'%(args.face_render_path,save_obj_path+'.obj',save_path,args.face_render_path+'/shaders')

It showed the warning.

LOG (WARN: Material file [ result.mtl ] not found.
WARN: Failed to load material file(s). Use default material.
)
~In file (e:\face2face\code\release_v1\src\face_rendering\src\includes\ind\loaders.cc) , line (81)

I think the path is wrong.

Could you check this as well?

Best Regards,

Read Morphable Model ERROR.

I created bfm2017-1_bfm_nomouth.bin successfully by running python convert-bfm2017-to-eos.py, but it then fails when running:
python .\proxyPredictor.py -i .\samples\proxy\000001.jpg -o .\results\

Error message:
[Error loading the Morphable Model: Failed to read 16 bytes from input stream! Read 9]

Any possible help would be greatly appreciated!

Cannot fit landmarks

Hello,

First of all, thanks for sharing your work.
It seems that I cannot estimate the proxy mesh no matter what the input image is (even your paper's sample in Figure 1).
Do you have any idea what may be wrong?
The log I'm getting is the following:

Load preTrain model done.
Loading the model
Model loaded
Starting tracking
"img_path"
===> Landmarks detection done.

===> estimating proxy of "img_path"
! ! Couldn't detect faces in the : "img_path"

Best regards,
Antonis

fit-model.exe error

It needs jpeg.dll, glew32.dll, msvcr90.dll, msvcp90.dll, tiff.dll, and cudart64_100.dll.
It crashes at the start until I copy them from the internet.

Could you put your copies of these dlls in a release?

A problem in running facialDetails.py

Thanks for publishing the project!
But some problems occur when I run the code on my computer.

When I try
python facialDetails.py -i ./samples/details/019615.jpg -o ./results --batchSize 10
it returns
RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM
How can I solve this problem?


3D face reconstruction

Hi Anpei,

Thank you for sharing your great work. I am interested in the process of reconstructing the 3D faces.

Could you share how you produced these amazing 3D face models for training?
To my knowledge, you used a light-stage technique, but with far fewer cameras than usual.
So how could you produce high-quality 3D face models? Do the models have holes or bubble-like noise on the face?
Hoping for your answers.

Alex

An Error about Emotion Net.

I get an error when I use this command python .\proxyPredictor.py -i ".\results\test_01.png" -o .\results\ --emotion 1 :

ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (1, 64, 64, 2, 1)

Facial details estimation

I appreciate your project!
I tried to use the "Released version" to reconstruct a face with details. I followed the guidance and ran python facialDetails.py -i ./samples/details/019615.jpg -o ./results as in the example. Everything seemed to go right except that
LOG (WARN: Material file [ result.mtl ] not found. WARN: Failed to load material file(s). Use default material.
was returned.
However, I still got a reconstructed face shown in a render window, but the reconstructed face had no details, such as on the forehead.
When I ran python facialDetails.py -i ./samples/details -o ./results, the returned logs were as follows:
[screenshot]

I wonder whether I did something wrong, or whether for some reason, such as privacy or commercial concerns, this released version can't reconstruct a face with details. I want to compare our project with state-of-the-art work; I hope to get your answer.

DFDN checkpoints cannot be downloaded

Thank you for sharing your code. When I try to download the DFDN checkpoint from OneDrive, I get a message that says it is not available. Could the address for the DFDN model be updated?

Proxy model fitting results not looking good?

Hi, apchenstu.
This is my proxy estimation result for a sample in the paper; I don't know what went wrong. Could you help me find the mistake? Looking forward to hearing from you! Thank you.
result isomap
result

Found bugs including GPU out of memory

Very nice contribution from you! Thanks. I found:
[1] Need to switch from LA to L mode at line 35 of proxyPredictor.py:
img = Image.open(img_path).convert('L')
[2] It cannot run under TensorFlow 2.0 currently.
[3] Proxy estimation runs with both priors (--FAC 1 and --emotion 1), but I cannot get the displacementMap and normalMap when I run facialDetails.py; I encounter the issue below:
File "G:\Facial_Details_Synthesis\released_v0.1\DFDN\models\networks.py", line 317, in forward
return torch.cat([self.model(x), x], 1)
RuntimeError: CUDA out of memory. Tried to allocate 800.00 MiB (GPU 0; 6.00 GiB total capacity; 3.65 GiB already allocated; 798.37 MiB free; 4.00 KiB cached)
What's the minimum GPU VRAM needed for it?

CPU support?

Dear authors, thank you for your great work. I would like to try the code on my computer; however, I am not sure if the code can be run in CPU mode. Thank you.

CUDNN_STATUS_BAD_PARAM

When I run python facialDetails.py -i ./samples/details/019615.jpg -o ./results, it returns RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM. What should I do?

Some details about the code implementation?

At first I did not trust the results in your paper, but when I read your code I found that you just use BFM2017 and a smart vertex-fitting algorithm. Now I know how you can do such amazing work!
Awesome!!!!!

facial detail not smooth

Hi, apchenstu.
Thank you for sharing this code.

I compiled the source code and generated the mesh geometry with the built tools by running facialDetails.py. But the surface seems strange, as in the following figure: it is not smooth. Is there something I did wrong? Because my machine doesn't have 6 GB of GPU memory, I changed the test batch_size to 16.

The output mesh geometry:
image

The rendered model, texture, and original file are as follows:
image

hmrenderer.exe got an error

I appreciate this awesome project!
I got an error when I run "python facialDetails.py -i ./samples/details -o ./results", and it's caused by a failure to create the obj file:

LOG (Cannot open file [F:\wfx\wfx\results\019615\result.obj]
)
~In file (e:\face2face\code\release_v1\src\face_rendering\src\includes\ind\loaders.cc) , line (81)

ERROR ()
~In file (e:\face2face\code\release_v1\src\face_rendering\src\includes\ind\loaders.cc) , line (83)

A question about the proxy estimation

I followed the steps you show in the Released version.
But when I follow the Usage part for proxy estimation and enter the following command on my computer, I get the following error.
The command is: python proxyPredictor.py -i ./samples/proxy -o ./results
The error warnings are the following:
===> estimating proxy of D:\facialreconstruction\exp1\released_v0.1\samples\proxy\000011.jpg
Error loading the Morphable Model: Failed to read 16 bytes from input stream! Read 9

Every picture in the proxy folder has the same problem. I want to know how to avoid this and get the proxy estimation results.

Could you please help me? Waiting for your answer~ XD

Why does faceRender.exe not support the displacementMap?

I found this part of facialDetails.py:

    #######################  predict details  ##################################
    print('===> predicting details of %s '%img_name)
    save_texture_path = os.path.join(args.output_path, base_name[0], 'result.isomap.png')
    displacementMap, normalMap = predict_details(save_texture_path, args)

    displacementMap = (displacementMap+1)/2*65535
    displacementMap = displacementMap.astype("uint16")
    array_buffer = displacementMap.tobytes()
    img = Image.new("I", displacementMap.T.shape)
    img.frombytes(array_buffer, "raw", "I;16")
    save_path = os.path.join(args.output_path, base_name[0], 'result.displacementmap.png')
    img.save(save_path)

    normalMap = (normalMap+1)/2*255
    save_path = os.path.join(args.output_path, base_name[0], 'result.normalmap.png')
    normalMap = normalMap.astype('uint8')
    Image.fromarray(normalMap).save(save_path)

    if args.visualize:
        args.face_render_path = os.path.abspath(args.face_render_path)
        cmd = '%s/hmrenderer.exe %s %s %s'%(args.face_render_path,save_obj_path+'.obj',save_path,args.face_render_path+'/shaders')
        os.system(cmd)
    print('\n')

I also found that the source of faceRender.exe does not use the displacementMap either; the code follows:

void parseArgs(int argc, char** argv) {
    kShaderDir_ = "../src/shaders";
    kObjPath = std::string(argv[1]);
    kNrmPath = std::string(argv[2]);
    if (argc > 3)
        kShaderDir_ = argv[3];
}

So the command only passes the normal map, not the displacement map. This does not correspond to the paper's Section 4, Deep Facial Detail Synthesis, where the estimated displacementMap is used to correct the proxy.
Is there some way to use the displacementMap in faceRender.exe, or a new version of faceRender.exe that does?
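
In the meantime, a rough sketch of what baking the displacement into the mesh could look like (this is not part of the released code; the 16-bit decoding mirrors the encoding in the facialDetails.py snippet above, while the UV convention and the scale factor are assumptions):

    import numpy as np
    from PIL import Image

    def displace_vertices(vertices, normals, uvs, disp_path, scale=1.0):
        # vertices (N, 3), normals (N, 3), uvs (N, 2): hypothetical per-vertex arrays
        disp = np.asarray(Image.open(disp_path), dtype=np.float32)
        disp = disp / 65535.0 * 2.0 - 1.0    # invert the (d + 1) / 2 * 65535 encoding
        h, w = disp.shape
        u = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
        v = np.clip(((1.0 - uvs[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
        # offset each vertex along its normal by the sampled displacement
        return vertices + normals * disp[v, u][:, None] * scale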

cannot create processed/019615.box

./landmarkDetector
Load preTrain model done.
Load preTrain model done.
sh: 1: FaceLandmarkImg.exe: not found
===> Landmarks detection done.

===> estimating proxy of /home/work/workspace/work2022/study/3dFace/Facial_Details_Synthesis/src/samples/details/019615.jpg
Traceback (most recent call last):
File "/usr/lib/python3.6/shutil.py", line 550, in move
os.rename(src, real_dst)
FileNotFoundError: [Errno 2] No such file or directory: './landmarkDetector/processed/019615.box' -> '/home/work/workspace/work2022/study/3dFace/Facial_Details_Synthesis/src/results/019615/019615.box'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "facialDetails.py", line 338, in
main(args)
File "facialDetails.py", line 280, in main
move_landmark(args.landmark_exe_path, save_path, base_name[0])
File "facialDetails.py", line 58, in move_landmark
name+'.box'), os.path.join(save_path, name+'.box'))
File "/usr/lib/python3.6/shutil.py", line 564, in move
copy_function(src, real_dst)
File "/usr/lib/python3.6/shutil.py", line 263, in copy2
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "/usr/lib/python3.6/shutil.py", line 120, in copyfile
with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: './landmarkDetector/processed/019615.box'

If your CMake version is recent, the command in section "Compiling/proxyEstimator/Step 3, cmake .." should change.

I don't know the reason, but when I used CMake 3.22.4 to compile proxyEstimator, step 3's "-DCMAKE_TOOLCHAIN_FILE" command always showed an error. Finally, I found the problem: "-DCMAKE_TOOLCHAIN_FILE" should be wrapped in double quotes. Using this command in Compiling/proxyEstimator/Step 3 works:

mkdir build && cd build
cmake .. -A X64 "-DCMAKE_TOOLCHAIN_FILE=[vcpkg root]\scripts\buildsystems\vcpkg.cmake"

Tip: [vcpkg root] is not a literal path; it is the location on disk where you installed vcpkg, e.g. E:\vcpkg\scripts\buildsystems\vcpkg.cmake.

Does the final result.obj include details?

Hi Anpei, thanks for your amazing work!
When I run "python facialDetails.py -i ./samples/details/019615.jpg -o ./results"
I get
image
which seems right, but the result.obj shows some problems in MeshLab:
image
image
1. There is a mismatch between the texture and the model around the nose;
2. The model doesn't have wrinkles;
3. The model doesn't look like the result in your paper. Can you tell me what parameters you used?
Should I use faceRender to overlay the result? Does hmrenderer.exe provide any args to save the obj with wrinkles?

About textureRender.exe, the hmrenderer.exe warning about result.mtl, and --emotion

Thank you for sharing your wonderful work. I still have some questions; any help will be greatly appreciated.
When I run:
python facialDetails.py -i samples\details\013578.jpg -o samples\out_details
a window pops up and shuts down (I don't know what it is; textureRender.exe?), then hmrenderer.exe pops up with a warning.
[screenshot]

But I got files like yours, and the warning:
LOG (WARN: Material file [ result.mtl ] not found. WARN: Failed to load material file(s). Use default material. ) ~In file (e:\face2face\code\release_v1\src\face_rendering\src\includes\ind\loaders.cc) , line (81)
(#10). Those files can be opened in Paint 3D with texture (very slowly) [screenshot],
and in MeshLab (I don't know how to apply the texture, but it opens files quickly), #12
[screenshot]

Does the warning matter? I changed the PIL lib to PIL 6.0.0; the warning is still there, and result.mtl is there next to the .obj.

Running only textureRender.exe does not work in the conda prompt (it just breaks down with the error "textureRender.exe stopped working"):
in folder released_v0.1, run:
textureRender\textureRender.exe samples\out_details\000008\result.obj samples\out_details\000008\result.isomap.png 0 samples\proxy\000008.jpg samples\out_details\000008\result.affine_from_ortho.txt textureRender\shaders
or in folder released_v0.1\textureRender, run:
textureRender.exe ..\samples\out_details\000008\result.obj ..\samples\out_details\000008\result.isomap.png 0 ..\samples\proxy\000008.jpg ..\samples\out_details\000008\result.affine_from_ortho.txt shaders

Running only hmrenderer.exe in the conda prompt gives the same warning:
faceRender\hmrenderer.exe samples\out_details\000008\result.obj samples\out_details\000008\result.normalmap.png faceRender\shaders

When I run:
python proxyPredictor.py -i samples\proxy -o samples\out_proxy+emotion --emotion 1
it breaks down with the error "python stopped working" (like textureRender.exe).
My Python version is 3.6.9, because I can't install eos-py==0.16.1 with Python 3.7.3 (#5).
Output in the conda prompt:

Using TensorFlow backend.
WARNING: Logging before flag parsing goes to stderr.
W0915 12:09:47.706274 14808 deprecation_wrapper.py:119] From d:\Miniconda3\envs\python36\lib\site-packages\keras\backend\tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

W0915 12:09:47.985552 14808 deprecation_wrapper.py:119] From d:\Miniconda3\envs\python36\lib\site-packages\keras\backend\tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

W0915 12:09:48.102215 14808 deprecation_wrapper.py:119] From d:\Miniconda3\envs\python36\lib\site-packages\keras\backend\tensorflow_backend.py:245: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

W0915 12:09:48.105208 14808 deprecation_wrapper.py:119] From d:\Miniconda3\envs\python36\lib\site-packages\keras\backend\tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.

W0915 12:09:48.107202 14808 deprecation_wrapper.py:119] From d:\Miniconda3\envs\python36\lib\site-packages\keras\backend\tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2019-09-15 12:09:48.211278: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2019-09-15 12:09:48.231910: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library nvcuda.dll
2019-09-15 12:09:48.648618: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1070 with Max-Q Design major: 6 minor: 1 memoryClockRate(GHz): 1.2655
pciBusID: 0000:01:00.0
2019-09-15 12:09:48.659840: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-09-15 12:09:48.677600: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-09-15 12:09:53.927988: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-15 12:09:53.935198: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0
2019-09-15 12:09:53.939248: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N
2019-09-15 12:09:53.959566: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6376 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 with Max-Q Design, pci bus id: 0000:01:00.0, compute capability: 6.1)
W0915 12:09:54.951899 14808 deprecation_wrapper.py:119] From d:\Miniconda3\envs\python36\lib\site-packages\keras\backend\tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.

W0915 12:09:55.019749 14808 deprecation_wrapper.py:119] From d:\Miniconda3\envs\python36\lib\site-packages\keras\backend\tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

W0915 12:09:55.026697 14808 deprecation.py:506] From d:\Miniconda3\envs\python36\lib\site-packages\keras\backend\tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

Details about the training code

First of all, thank you very much for sharing the code. I would like to ask about some details of the training code, as well as specifics of the training data and training label generation. Can you give me some guidance? Email address: [email protected]
Thanks
