mrtornado24 / fenerf
[CVPR 2022] FENeRF: Face Editing in Neural Radiance Fields
License: MIT License
Could you please tell me whether CHANNELS_SEG=18 in the training code is determined by the number of categories in the segmentation mask, or by some other criterion? Thanks for your answer!
Dear Author:
Great work!!! But how can I open the .mrc file?
You have done very meaningful work, and I would like to learn from it. Could you provide the source code?
When I run inversion on a new portrait, the rendered result is very poor:
python inverse_render_double_semantic.py exp_name ./checkpoint/315000_generator.pth --image_path /dataset/cnn/0/com_imgs/0.jpg --seg_path /dataset/cnn/0/parsing_celebahq/masks1024x1024/0.jpg --background_mask --image_size 128 --latent_normalize --lambda_seg 1. --lambda_img 0.2 --lambda_percept 1. --lock_view_dependence True --recon
The image and mask are shown below:
But this is the rendered result:
When optimizing the inverse latent code, is the input always a single frontal face image?
If I have several images of the same person from different angles, would using them for inversion give a better latent code and reconstruction?
If so, how exactly should I do that in inverse_render_double_semantic?
When I try to run run_UI.py, something goes wrong:
QObject::moveToThread: Current thread (0x5555f1280ce0) is not the object's thread (0x5555f3a25330).
Cannot move to target thread (0x5555f1280ce0)
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/junshen/anaconda3/envs/fenerf/lib/python3.7/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: xcb, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl.
Thanks for your excellent work! I am curious about the details of the learnable feature grid (e_coord): what is its dimension, and do you keep the dimension the same when rendering images at different resolutions?
Looking forward to your reply.
Hi,
Do you plan to release the code soon?
Best
Thanks for your impressive work! Will you release the code soon?
Thank you for your great work, I have a few questions about FID:
I used eval_metrics.py to compute FID with the pre-trained model, but something went wrong. I then used pytorch-fid to compute FID between the CelebA-Mask dataset and images generated by the pre-trained model, but the resulting value of about 80 looks wrong. Could you please give suggestions on computing FID?
Hi, great work! Could you specify the curriculum for FFHQ, or is it the same as for CelebA-HQ? Thank you.
Amazing work! Please create a Colab notebook for inference.
Hi!
I really thank you for sharing the code and the .pth files.
However, the pretrained discriminator files do not exist.
Do you have plans to upload these files?
When I run the script to invert an image, nothing seems to happen:
!python inverse_render_double_semantic.py exp_name /content/FENeRF/wo_latent_grid/200000_generator.pth --image_path data/examples/image.jpg --seg_path data/examples/mask_edit.png --background_mask --image_size 128 --latent_normalize --lambda_seg 1. --lambda_img 0 --lambda_percept 0 --lock_view_dependence True --recon --load_checkpoint True --checkpoint_path freq_phase_offset_$exp_name.pth
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.0], spatial [off]
Loading model from: /usr/local/lib/python3.7/dist-packages/lpips/weights/v0.0/vgg.pth
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:258: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
"Argument interpolation should be of type InterpolationMode instead of int. "
Are the provided models trained on CelebA?
If so, will you release the models trained on FFHQ?
Thank you for your time. Very amazing work !!
D:\anaconda\lib\site-packages\torch\functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Traceback (most recent call last):
File "D:/work/vgan/FENeRF-main/inverse_render_double_semantic.py", line 582, in
checkpoint_path = run_inverse_render(opt, opt.image_path, opt.seg_path)
File "D:/work/vgan/FENeRF-main/inverse_render_double_semantic.py", line 404, in run_inverse_render
loss += opt.lambda_norm * norm_loss
TypeError: can't multiply sequence by non-int of type 'float'
On your project website, you show a demo of "Style Mixing of latent space", where you provide source 1 and source 2 image (i.e. the Figure 7 of your paper).
Could you please guide me on how to reproduce that with your code, using my own images?
Hello author, I don't quite understand your readme.md.
Where is ./checkpoints?
I can't find this directory. Can I render with the model directly, without training it? Can it run on the CPU?
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/x_fahkh/.conda/envs/gmpi/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1071, in _call_impl
result = forward_call(*input, **kwargs)
File "/proj/cvl/users/x_fahkh/mn/debug-dmpi/gmpi/prepsem/Bisnet.py", line 23, in forward
x = F.relu(self.bn(x))
File "/home/x_fahkh/.conda/envs/gmpi/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1071, in _call_impl
result = forward_call(*input, **kwargs)
File "/home/x_fahkh/.conda/envs/gmpi/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 178, in forward
self.eps,
File "/home/x_fahkh/.conda/envs/gmpi/lib/python3.7/site-packages/torch/nn/functional.py", line 2279, in batch_norm
_verify_batch_size(input.size())
File "/home/x_fahkh/.conda/envs/gmpi/lib/python3.7/site-packages/torch/nn/functional.py", line 2247, in _verify_batch_size
raise ValueError("Expected more than 1 value per channel when training, got input size {}".format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 128, 1, 1])
I get this error when I provide the segmentation map for FFHQ.
Could you please take a look?