henzler / neuraltexture

Learning a Neural 3D Texture Space from 2D Exemplars [CVPR 2020]

Home Page: https://geometry.cs.ucl.ac.uk/projects/2020/neuraltexture/

License: MIT License

Languages: Python 89.09%, C++ 1.33%, Cuda 9.32%, Shell 0.26%
Topics: computer-vision, computer-graphics, machine-learning, texture-synthesis, deep-learning

neuraltexture's Introduction

Neural Texture

Official code repository for the paper:

Learning a Neural 3D Texture Space from 2D Exemplars [CVPR, 2020]

Philipp Henzler, Niloy J. Mitra, Tobias Ritschel

[Paper] [Project page]

Data

We downloaded all our textures from https://www.textures.com/. For licensing reasons we cannot provide the training data; however, we provide pre-trained models under trained_models for the classes wood, grass, marble, and rust_paint.

Inference

To evaluate a texture, add it to the corresponding folder under datasets/<class_name>/test, pick one of the pre-trained models under trained_models/, and run the evaluation (see instructions below). We already provide some exemplars.

Training

For training, provide a dataset under datasets/<your_folder> with two subdirectories: train and test. We provide test exemplars for wood, grass, marble, and rust_paint; if you would like to train on these classes, add a train folder containing training data.
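
For reference, the expected layout looks like this (the class folder name is an example):

datasets/
  wood/
    train/   # training exemplars you supply
    test/    # evaluation exemplars (some provided)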

Prerequisites

  • Ubuntu 18.04
  • cuDNN 7
  • CUDA 10.1
  • Python 3+
  • PyTorch 1.4
  • Download pretrained models (optional)

Install dependencies

cd code/
pip install -r requirements.txt

cd custom_ops/noise
# build cuda code for noise sampler
TORCH_CUDA_ARCH_LIST=<desired version> python setup.py install
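
The value selects the CUDA compute capability to build for. As a usage example ("7.5" is an assumption for an RTX 20-series card; pick the value matching your GPU):

TORCH_CUDA_ARCH_LIST="7.5" python setup.py install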

Download pre-trained models

sh download_pretrained_models.sh

Logs

To visualise the training logs of the pre-trained models, run the following:

tensorboard --logdir=./trained_models

Usage

Config file

The config files are located in code/configs/neural_texture. The most important variables are explained below:

dim: 2 # choose between 2 and 3 for 2D and 3D.
dataset:
  path: '../datasets/wood' # set path 
  use_single: -1 # -1 = train entire data set | 0,1,2,... = for single training
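
The configs are plain YAML, so they can also be inspected programmatically. A minimal sketch using PyYAML (illustrative only, not the repo's own loading code):

import yaml  # PyYAML

# load the default config and read the fields explained above
with open('configs/neural_texture/config_default.yaml') as f:
    cfg = yaml.safe_load(f)

print(cfg['dim'])                    # 2 for 2D, 3 for 3D
print(cfg['dataset']['path'])        # e.g. '../datasets/wood'
print(cfg['dataset']['use_single'])  # -1 trains on the entire data set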

Training

cd code/
python train_neural_texture.py --config_path=<path/to/config> --job_id=<your_id>

The default config_path is set to configs/neural_texture/config_default.yaml. The default job_id is set to 1.

Inference

cd code/
python test_neural_texture.py --trained_model_path=path/to/models

The default trained_model_path is set to ../trained_models. The results are saved under trained_model_path/{model}/results.

Bibtex

If you use the code, please cite our paper:

@inproceedings{henzler2020neuraltexture,
    title={Learning a Neural 3D Texture Space from 2D Exemplars},
    author={Henzler, Philipp and Mitra, Niloy J and Ritschel, Tobias},
    booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month={June},
    year={2020}
}

Side Note

Unlike reported in the paper, the encoder network in this implementation uses a ResNet architecture, as this stabilises training.

neuraltexture's People

Contributors: henzler, madhawav

neuraltexture's Issues

Compatibility with PyTorch 1.5 and later

Hi @henzler,

Have you tested training new models on PyTorch versions newer than 1.4?

I am doing follow-up work based on this project, and I have noticed that model training performance degrades noticeably when I train on PyTorch 1.5 or later. Inference with recent versions of PyTorch on models trained using PyTorch 1.4 works just fine. I am guessing this issue is related to the noise operator implementation I obtained from this codebase.
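
Until this is resolved, a minimal guard (my sketch, not code from this repository) makes the version dependency explicit at startup:

import torch

# fail fast on PyTorch versions where training quality reportedly degrades
# when using the custom noise operator
if not torch.__version__.startswith('1.4'):
    raise RuntimeError(
        f'PyTorch 1.4.x expected, found {torch.__version__}; '
        'training with the custom noise op may degrade on 1.5+.')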

The direction of training in 3D

Hello, I am wondering if it is possible to train with separate directions in 3D, i.e. a 3D texture that differs along the three major axes, with a different 2D picture corresponding to each direction. Is there any way I can do that?

Generation of 3D volume

Hi, I read the paper, and it says you can generate 3D textures.
But in the code I only found the normal 2D image, stripe, zoom, and interpolation outputs. Is the 3D texture part not implemented yet?

Training Fails on Validation Sanity Check

Hi,
When I try to train on a new dataset, it fails with the following error.

[PYTHON_ENV_PATH]/neuraltexture/bin/python -u [PROJECT_ROOT]/code/train_neural_texture.py
[PYTHON_ENV_PATH]/lib/python3.8/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
[PYTHON_ENV_PATH]/lib/python3.8/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
[PYTHON_ENV_PATH]/lib/python3.8/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
[PYTHON_ENV_PATH]/lib/python3.8/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
[PYTHON_ENV_PATH]/lib/python3.8/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
[PYTHON_ENV_PATH]/lib/python3.8/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Use pytorch 1.4.0
Load config: configs/neural_texture/config_default.yaml
INFO:lightning:GPU available: True, used: True
INFO:lightning:CUDA_VISIBLE_DEVICES: [0]
[PYTHON_ENV_PATH]/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:23: RuntimeWarning: You have defined a `val_dataloader()` and have defined a `validation_step()`, you may also want to define `validation_epoch_end()` for accumulating stats.
  warnings.warn(*args, **kwargs)
[PYTHON_ENV_PATH]/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:23: RuntimeWarning: You have defined a `test_dataloader()` and have defined a `test_step()`, you may also want to define `test_epoch_end()` for accumulating stats.
  warnings.warn(*args, **kwargs)
Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):
  File "[PROJECT_ROOT]/neuraltexture/code/train_neural_texture.py", line 47, in <module>
    trainer.fit(system)
  File "[PYTHON_ENV_PATH]/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 765, in fit
    self.single_gpu_train(model)
  File "[PYTHON_ENV_PATH]/lib/python3.8/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 492, in single_gpu_train
    self.run_pretrain_routine(model)
  File "[PYTHON_ENV_PATH]/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 896, in run_pretrain_routine
    eval_results = self._evaluate(model,
  File "[PYTHON_ENV_PATH]/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 322, in _evaluate
    eval_results = model.validation_end(outputs)
  File "[PROJECT_ROOT]/neuraltexture/code/systems/s_core.py", line 33, in validation_end
    for key in outputs[0].keys():
IndexError: list index out of range

Process finished with exit code 1
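
For context, the crash comes from indexing an empty outputs list. A defensive sketch of validation_end follows (only the outputs[0].keys() loop is confirmed by the traceback; the aggregation body is an assumption):

import torch

def validation_end(self, outputs):
    # the validation sanity check can yield an empty list; returning early
    # avoids the IndexError, though the underlying cause is likely an empty
    # validation dataloader for this dataset
    if not outputs:
        return {}
    results = {}
    for key in outputs[0].keys():
        # average each logged tensor across the collected validation batches
        results[key] = torch.stack([out[key] for out in outputs]).mean()
    return results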

Additional Information

  • My dataset: As a sanity check, I use all the test images provided by you as my dataset. Thus, I have a folder called "all" in the "datasets" directory which has two sub-directories "train" and "test". I have copied all the test images provided by you into both of these directories.
  • The working directory is "[PROJECT_ROOT]/code".
  • My Operating System is Ubuntu 16.04.
  • PyTorch Lightning 0.7.5 is installed.

My "config_default.yml" Is shown below:

version_name: neuraltexture_all_2d_single
device: cuda
n_workers: 8
n_gpus: 1
dim: 2
noise:
  octaves: 8
logger:
  log_files_every_n_iter: 1000
  log_scalars_every_n_iter: 100
  log_validation_every_n_epochs: 1
image:
  image_res: &image_res 128 # (height, width)
texture:
  e: &texture_e 64 # encoding size
dataset:
  name: datasets.images
  path: '../datasets/all'
  use_single: -1 # -1 = all, 0,1,2 for single
system:
  block_main:
    model_texture_encoder:
      model_params:
        name: models.neural_texture.encoder
        type: 'ResNet'
        shape_in:  [[3, *image_res, *image_res]]
        bottleneck_size: 8
    model_texture_mlp:
      model_params:
        name: models.neural_texture.mlp
        type: 'MLP'
        n_max_features: 128
        n_blocks: 4
        dropout_ratio: 0.0
        non_linearity: 'relu'
        bias: True
        encoding: *texture_e
    optimizer_params:
      name: 'adam'
      lr: 0.0001
      weight_decay: 0.0001
    scheduler_params:
      name: 'none'
    loss_params:
      style_weight: 1.0
      style_type: 'mse'
train:
  epochs: 3
  bs: 16
  accumulate_grad_batches: 1
  seed: 41127

Your help is much appreciated.

Tensor shape mismatch on "space" trained models - test_neural_texture.py

Hi,

Whenever I run the script "test_neural_texture.py", it fails on trained models that have the term "space" in their directory name (such as ../trained_models/neural_texture/version_468753_neuraltexture_rust_paint_2d_space/). Thus, I had to remove those directories from the trained_models directory to run the script to completion. I believe these "space" models synthesize textures that encode style statistics from multiple input images.

Log output

Use pytorch 1.4.0
Load config: ../trained_models/neural_texture/version_468753_neuraltexture_rust_paint_2d_space/logs/config.txt
INFO:lightning:GPU available: True, used: True
INFO:lightning:CUDA_VISIBLE_DEVICES: [0]
checkpoint loaded ../trained_models/neural_texture/version_468753_neuraltexture_rust_paint_2d_space/checkpoints/neural_texture_ckpt_epoch_1.ckpt
[PATH TO CONDA ENV]/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:23: UserWarning: Checkpoint directory ../trained_models/neural_texture/version_468753_neuraltexture_rust_paint_2d_space/checkpoints exists and is not empty with save_top_k != 0.All files in this directory will be deleted when a checkpoint is saved!
  warnings.warn(*args, **kwargs)
Testing:  33%|████████████                        | 1/3 [00:01<00:02,  1.49s/it]Traceback (most recent call last):
  File "[PROJECT ROOT]/code/systems/s_neural_texture.py", line 261, in test_step
    image_out_inter = self.forward(z_texture_interpolated, position, seed)
  File "[PROJECT ROOT]/code/systems/s_neural_texture.py", line 66, in forward
    transform_coeff, z_encoding = torch.split(weights, [self.p.texture.t, self.p.texture.e], dim=1)
  File "[PATH TO CONDA ENV]/lib/python3.8/site-packages/torch/functional.py", line 77, in split
    return tensor.split(split_size_or_sections, dim)
  File "[PATH TO CONDA ENV]/lib/python3.8/site-packages/torch/tensor.py", line 377, in split
    return super(Tensor, self).split_with_sizes(split_size, dim)
RuntimeError: start (32) + length (64) exceeds dimension size (94).

I wonder what I should do to overcome this issue.

P.S.: I am using the pre-trained models and test images provided by you; no training images are in place.
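
For reference, the mismatch can be reproduced in isolation. The 94-channel tensor below is taken from the error message, while the split sizes 32/64 (texture.t/texture.e) are inferred assumptions:

import torch

# the "space" checkpoint emits 94 channels, but the split asks for 32 + 64 = 96
weights = torch.randn(4, 94)
try:
    transform_coeff, z_encoding = torch.split(weights, [32, 64], dim=1)
except RuntimeError as err:
    print(err)  # start (32) + length (64) exceeds dimension size (94)

So presumably either texture.e needs to match the checkpoint (32 + 62 = 94 would fit), or the checkpoint/config pair is mismatched.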

downloading pretrained-models

Hi, I was trying to download the pretrained models you provided, but access was denied.
I was wondering whether you have limited downloads on purpose and are no longer providing them.

Thank you

Incorrect Inference Results

Hi there @henzler @madhawav,
I cloned the repo as is and only made the changes mentioned in this pull request.

I also got the weights as mentioned in the shell script.

I ran the inference, but the results are horrible. I think maybe the weights were updated or some structure changed?
I have kept the package versions as specified in requirements.txt.
Input: [input exemplar image]

Outputs: [0_out, 1_out, 2_out (generated results)]

Problem with test routine in interpolation mode

First of all, this is great work, amazing!

While playing around with your code I encountered an issue relating to the synthesis of interpolated samples.

First of all, s_neural_texture.py line 254: z_texture_interpolated = z_texture_interpolated[:, :-2] does not work, since it causes a dimension mismatch later on. I don't know what it is supposed to do, so I commented it out.

Next, s_neural_texture.py lines 68-69 only make sense in non-interpolation mode; I therefore prepended the line if z_encoding.shape[2] == 1: to mitigate this.

Finally, transform_coord() in neural_texture_helper.py is also not handling interpolation mode properly. I replaced the lines 105-107 with the following:

inter = (t_coeff.shape[2] != 1)
if inter:
    t_coeff = t_coeff.reshape(bs, octaves, dim, dim, h, w)
    t_coeff = t_coeff.permute(0, 1, 4, 5, 2, 3)
else:
    t_coeff = t_coeff.reshape(bs, octaves, dim, dim).unsqueeze(2).unsqueeze(2)
    t_coeff = t_coeff.expand(bs, octaves, h, w, dim, dim)

An unrelated question: is it possible to somehow run inference on the CPU? I am a TensorFlow guy and not familiar with PyTorch, but it seems that your custom noise sampler is not CPU-capable. Is there a way to run it on the CPU?

Cheers,
tiziano
