zju3dv / neumesh

Code for "NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing", ECCV 2022 Oral

Home Page: https://zju3dv.github.io/neumesh/

License: MIT License

Language: Python 100.00%
Topics: 3d-vision, nerf, neural-rendering

neumesh's Introduction

NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing

Bangbang Yang, Chong Bao (co-first authors), Junyi Zeng, Hujun Bao, Yinda Zhang, Zhaopeng Cui, Guofeng Zhang.

ECCV 2022 Oral

Installation

We have tested the code with Python 3.8.0 and PyTorch 1.8.1, though newer versions of PyTorch should also work. The installation steps are as follows:

  • create the virtual environment: conda env create --file environment.yml
  • install PyTorch 1.8.1: pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
  • install the open3d development version: pip install [open3d development package url]
  • install FRNN, a fixed-radius nearest-neighbor search implemented in CUDA (a sanity check is sketched after this list).
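
After installing FRNN, the following optional sanity check (not part of the repo) verifies that its CUDA extension imports and answers a small fixed-radius query; the frnn_grid_points signature follows the FRNN README and should be treated as an assumption.

import torch
import frnn

pts_query = torch.rand(1, 1024, 3, device="cuda")   # query points
pts_ref = torch.rand(1, 4096, 3, device="cuda")     # reference points
lengths_q = torch.full((1,), 1024, dtype=torch.long, device="cuda")
lengths_r = torch.full((1,), 4096, dtype=torch.long, device="cuda")

# K nearest neighbors within radius r of each query point
dists, idxs, _, _ = frnn.frnn_grid_points(
    pts_query, pts_ref, lengths_q, lengths_r, K=8, r=0.1
)
print(dists.shape, idxs.shape)  # expect torch.Size([1, 1024, 8]) for both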

Data

We use the NeuS version of the DTU data and the NeRF synthetic data.

P.S. Please set intrinsic_from_cammat: True for hotdog, chair, and mic if you use the provided NeRF synthetic dataset.

Train

Here we show how to run our code on one example scene. Note that data_dir should be specified in configs/*.yaml; a hypothetical snippet for this follows the steps below.

  1. Train the teacher network (NeuS) from multi-view images.
python train.py --config configs/neus_dtu_scan63.yaml
  2. Extract a triangle mesh from the trained teacher network.
python extract_mesh.py --config configs/neus_dtu_scan63.yaml --ckpt_path logs/neus_dtuscan63/ckpts/latest.pt --output_dir out/neus_dtuscan63/mesh
  3. Train NeuMesh from multi-view images and the teacher network. Note that prior_mesh, teacher_ckpt, and teacher_config should be specified in neumesh*.yaml.
python train.py --config configs/neumesh_dtu_scan63.yaml
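
If you prefer to set data_dir programmatically rather than editing the YAML by hand, a hypothetical one-off helper is sketched below; the key layout ("data" / "data_dir") is an assumption based on this README and may differ from the repo's actual config schema.

import yaml

cfg_path = "configs/neumesh_dtu_scan63.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)
# Point the dataset at your local copy (key layout assumed, not verified).
cfg.setdefault("data", {})["data_dir"] = "/path/to/dtu_scan63"
with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)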

Evaluation

Here we provide all pre-trained models for the DTU and NeRF synthetic datasets.

You can evaluate images with the trained models.

python -m render --config configs/neumesh_dtu_scan63.yaml --load_pt logs/neumesh_dtuscan63/ckpts/latest.pt --camera_path spiral --background 1 --test_frame 24 --spiral_rad 1.2

P.S. If inference takes too long, --downscale can be used for acceleration.
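
For example, assuming the flag takes a scale factor (the value here is illustrative, not taken from the repo):

python -m render --config configs/neumesh_dtu_scan63.yaml --load_pt logs/neumesh_dtuscan63/ckpts/latest.pt --camera_path spiral --background 1 --test_frame 24 --spiral_rad 1.2 --downscale 2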

Manipulation

Please refer to editing/README.md.

Citing

@inproceedings{neumesh,
    title={NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing},
    author={{Chong Bao and Bangbang Yang} and Zeng Junyi and Bao Hujun and Zhang Yinda and Cui Zhaopeng and Zhang Guofeng},
    booktitle={European Conference on Computer Vision (ECCV)},
    year={2022}
}

Note: joint first-authorship is not really supported in BibTeX; you may need to modify the above if you are not using CVPR's format. For the SIGGRAPH (or ACM) format, you can try the following:

@inproceedings{neumesh,
    title={NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing},
    author={{Bao and Yang} and Zeng Junyi and Bao Hujun and Zhang Yinda and Cui Zhaopeng and Zhang Guofeng},
    booktitle={European Conference on Computer Vision (ECCV)},
    year={2022}
}

Acknowledgement

In this project we use parts of the implementations of other open-source works. We thank the respective authors for open sourcing their methods.

neumesh's People

Contributors

chobao


neumesh's Issues

Texture swapping problem

Hi, thank you so much for providing such amazing work!

I want to ask some questions about Texture swapping.

If I want to customize the content of the texture swapping, how should I do the point annotation in Blender and export the data?
Would it be possible to provide the Blender script you use?

Does the texture_swapping_dtu_scan63.json file contain all the information needed to perform texture swapping? Sorry, I didn't understand the full flow of texture swapping from reading the code, so I'm also curious about the respective meanings of the parameters in the JSON file (e.g., "T_s_m" and "corr").

About the benefits of NeuMesh over the extracted geometry from neural fields

Hi, thanks for the great work and code.

I have a question about this work. To support texture editing or geometry editing, a naive solution is to train a NeuS on the captured multi-view images, extract the mesh from the NeuS, and perform UV unwrapping. This way, we can import the assets into a 3D CG software, e.g. Blender, and perform all kinds of geometry and texture editing shown in the NeuMesh paper. Does NeuMesh achieve better rendering quality than the naive solution, or does it have other benefits?

vertex color when extracting mesh

Hi, this is amazing work!
I am a little confused about why [0,0,0] is fed as the view direction to obtain the vertex colors when we extract the mesh (see the linked code).
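
For readers skimming the thread, a minimal sketch of what the snippet in question does (method and attribute names are hypothetical, not the repo's API): query the color branch once per vertex with a zero view direction, since a bare mesh vertex has no associated ray.

import torch

def vertex_colors(model, vertices):
    # vertices: (N, 3) mesh vertex positions
    view_dirs = torch.zeros_like(vertices)  # [0, 0, 0] stands in for "no view"
    with torch.no_grad():
        rgb = model.query_color(vertices, view_dirs)  # hypothetical method
    return rgb.clamp(0.0, 1.0)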

Question for rendering

You have the mesh of the object; why not just sample points near the mesh, as in Tri-MipRF (ICCV 2023)?
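
A rough sketch of the alternative the question suggests (not the paper's method): cast each ray against the extracted mesh and sample only within a thin band around the first hit. This uses trimesh's ray casting; the band width is an assumed margin, and ray directions are assumed unit-norm.

import numpy as np
import trimesh

def near_mesh_samples(mesh, origins, dirs, n=32, band=0.05):
    # First intersection of each ray with the mesh
    locs, ray_idx, _ = mesh.ray.intersects_location(
        origins, dirs, multiple_hits=False
    )
    t_hit = np.linalg.norm(locs - origins[ray_idx], axis=-1)
    # Uniform depths inside [t_hit - band, t_hit + band] along each hit ray
    ts = t_hit[:, None] + np.linspace(-band, band, n)[None, :]
    return origins[ray_idx, None] + ts[..., None] * dirs[ray_idx, None]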

Coordinate frame and other questions

Hiya,

You have released code that deals with the DTU dataset. Could you please clarify the global coordinate frame and the camera conventions that the code expects?
This would be a great boon for training on our own datasets, especially the synthetic NeRF dataset.

So the questions would be: in world coordinates, is z up?
In camera coordinates, what are the coordinate ranges after the perspective transform (-1 to 1, or 0 to 1)?
In camera space, is +z the forward direction or is it -z?

Other questions are:

  1. Why is there no object_bounding_radius? It is set for neumesh, but get_neumesh_model expects it under data.
  2. Why do you set perturb to False?
  3. What is white_background?

Hope you can answer.
Cheers,
Wamiq

SIREN activations

I noticed that you chose to use SIREN activations in the SDF network; are there any more insights into how SIREN improves performance?
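
For context, a minimal sketch of a SIREN layer (Sitzmann et al., 2020), for readers unfamiliar with the sinusoidal activations the question refers to; this is not the repo's code.

import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_dim, out_dim, w0=30.0, is_first=False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_dim, out_dim)
        # SIREN's initialization keeps pre-activations well-distributed
        # so that sin() does not collapse or saturate at depth.
        bound = 1.0 / in_dim if is_first else math.sqrt(6.0 / in_dim) / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))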

May I ask when the full code will be released

First of all, thank you for your amazing work. I found that some files seem to be missing, such as mesh_utils.py in utils, which means the current code cannot extract a mesh by itself, right? May I ask when you will release the full code? Thank you again!

Rendering time

Hello,

I was wondering how long rendering takes. I am using your render.py script with the default params, and it takes about 24 hours to render the final video on an RTX 3090.

Is this the expected time, or is FRNN falling back to a brute-force CPU-based method?
How exactly would I test whether the FRNN library has been compiled properly?

There are a few tests inside the repo, but they are based on meshes and data that do not exist in the repo.

ETA on training code

Hiya,

Apologies for being too demanding, but do you have an estimated timeline for when the training code will be released?

Failing that, is there a way for me to contact you directly so that I can write a training script that matches yours (if the release is too far away)?

Regards,

Release of Training Code

This is very impressive work, and thanks for sharing the code! I wonder when the training code will be released? Thanks very much!

Questions about Texture Painting.

Hi there. Thanks for your great research work on NeRF editing. I am currently trying to implement the texture painting application according to the paper but got unsatisfying results. It seems that the optimization gets stuck in a local minimum. Here are some results.
Painted image: [image]

Selected vertices: [image]

Optimization process (about 10,000 steps): [image]

Are there any thoughts to improve it?
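
As a point of reference for the discussion, here is a hedged sketch of the painting optimization as described in the thread: geometry and network weights stay frozen, and only the texture codes of the selected vertices receive updates. All names (texture_codes, render) are assumptions, not the repo's API.

import torch

def paint(model, selected_idx, rays, target_rgb, steps=10000, lr=1e-2):
    codes = model.texture_codes            # assumed (V, C) learnable per-vertex codes
    mask = torch.zeros(codes.shape[0], dtype=torch.bool, device=codes.device)
    mask[selected_idx] = True
    opt = torch.optim.Adam([codes], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rgb = model.render(rays)           # assumed differentiable renderer using codes
        loss = ((rgb - target_rgb) ** 2).mean()
        loss.backward()
        codes.grad[~mask] = 0.0            # keep unselected vertices fixed
        opt.step()
    return loss.item()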

Basic Usage

Hey!

Thanks for the code release. I was trying to get the code to run on an Ubuntu 20.04 machine with CUDA 11.1, Python 3.8, and PyTorch 1.8.0.

I compiled FRNN and built the environment. I downloaded the data from the NeuS Google Drive. But I get the following error, and I am not sure what I need to do:

python -m render --config configs/neumesh_dtu_scan63.yaml --load_pt ./checkpoints/dtu_scan63/latest.pt --camera_path spiral --num_views 90 --background 1 --dataset_split entire --test_frame 24 --spiral_rad 1.2
=> Parse extra configs: []
=> Use cuda devices: [0]
NeuMesh distance method frnn
geometry_dim: 32, input_ch_fg: 160
color_dim: 32, input_ch_ft: 160
input_d_dim: 1, input_view_dim: 3, input_ch_d: 17, input_ch_view: 27, input_ch_pts: 177, input_ch_color: 204
2022-08-08 16:11:31,608-rk0-render.py#285:=> Use ckpt:./checkpoints/dtu_scan63/latest.pt
Traceback (most recent call last):
  File "/home/parawr/anaconda3/envs/neumesh/lib/python3.8/runpy.py", line 192, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/parawr/anaconda3/envs/neumesh/lib/python3.8/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/parawr/Projects/adobe/neumesh/render.py", line 327, in <module>
    main_function(config)
  File "/home/parawr/Projects/adobe/neumesh/render.py", line 289, in main_function
    render_function(args, model, render_kwargs_test, render_fn, ckpt_file)
  File "/home/parawr/Projects/adobe/neumesh/render.py", line 102, in render_function
    dataset = get_data(args, downscale=args.downscale)
  File "/home/parawr/Projects/adobe/neumesh/dataio/__init__.py", line 36, in get_data
    dataset = SceneDataset(**cfgs)
  File "/home/parawr/Projects/adobe/neumesh/dataio/DTU.py", line 26, in __init__
    assert os.path.exists(data_dir), "Data directory is empty"
AssertionError: Data directory is empty

Any clue as to what might be wrong?
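
Since the assertion at the bottom of the traceback is os.path.exists(data_dir), a quick check of the configured path usually pinpoints the problem; the key layout below is an assumption (see the Train section), not the verified config schema.

import os
import yaml

with open("configs/neumesh_dtu_scan63.yaml") as f:
    cfg = yaml.safe_load(f)
data_dir = cfg["data"]["data_dir"]  # assumed key layout
print(data_dir, "exists:", os.path.exists(data_dir))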

Texture painting problem

Hi, thank you so much for such awesome work.

I am currently running experiments on texture painting. I am confused that I could not get a satisfying result like the dtu_scan105 example with the white glasses; I only changed the white glasses in the painted image to yellow ones.
Below is the painted image: [image]
And here is the painting result: [image]

Although all the parameters are the same as those in the white-glasses example, it seems that the image loss fails to decrease adequately. I am wondering if there are any settings specific to texture painting, such as hyper-parameters.

Question for the mesh

Each vertex contains an SDF feature and a texture feature. But since the shape of the object can already be represented by the mesh, why do we need the SDF feature?

Thx

Deforming Mesh

Hi! I have a query about the deformation of meshes with the current inference-only codebase.

Since we have a scan63 mesh in the repository: if we deform it externally while keeping the vertex count the same, do the trained weights account for the deformation, provided it is relatively small? Or do we need to do anything additional?
