intrinsics-network's People

Contributors

jannerm

intrinsics-network's Issues

Confusing concept of "self-supervised learning"

The paper is titled "Self-Supervised Intrinsic Image Decomposition," but what it actually describes is self-supervised transfer from a fully supervised pre-trained model. I found this misleading, as I wasn't sure about it until I checked the code.

tensor sizes don't match

Hi!
Thank you for the hard work.
I'm trying to run the decomposer, but I get a runtime error at normed = normals / (magnitude + 1e-6) in models/primitives.py: one tensor has size 3 and the other 60. Wouldn't it be better to permute the dimensions first, e.g.
magnitude = magnitude.repeat(3,1,1,1).permute(1, 0, 2, 3)
I'm using Python 3.6 from Anaconda and PyTorch 0.3.0.
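
A minimal sketch of that fix, assuming normals is a (batch, 3, H, W) tensor (the function name and shapes are illustrative, not taken from models/primitives.py):

import torch

def normalize_normals(normals, eps=1e-6):
    # L2 magnitude over the channel dimension -> (batch, H, W)
    magnitude = torch.sqrt((normals ** 2).sum(dim=1))
    # Broadcast back to (batch, 3, H, W). This is what the suggested
    # repeat(3,1,1,1).permute(1, 0, 2, 3) achieves on PyTorch 0.3;
    # unsqueeze/expand is the equivalent, version-agnostic form.
    magnitude = magnitude.unsqueeze(1).expand_as(normals)
    return normals / (magnitude + eps)

normed = normalize_normals(torch.rand(2, 3, 64, 64))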

Argument mismatch

Traceback (most recent call last):
File "shader.py", line 50, in
pipeline.visualize_shader(shader, val_loader, save_path )
File "/home/xuwh/xu/intrinsics-network-master/pipeline/visualization.py", line 52, in visualize_shader
grid = torchvision.utils.make_grid(images, nrow=3, padding=0).cpu().numpy().transpose(1,2,0)
File "/home/xuwh/.local/lib/python2.7/site-packages/torchvision/utils.py", line 35, in make_grid
tensor = torch.stack(tensor, dim=0)
File "/home/xuwh/.local/lib/python2.7/site-packages/torch/functional.py", line 58, in stack
return torch.cat(inputs, dim)
TypeError: cat received an invalid combination of arguments - got (list, int), but expected one of:

  • (sequence[torch.cuda.FloatTensor] seq)
  • (sequence[torch.cuda.FloatTensor] seq, int dim)
    didn't match because some of the arguments have invalid types: (list, int)

Any solution? Thanks in advance. @jannerm @FangYang970206 @Nighteye
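
This usually means the list handed to make_grid mixes tensor types (on the 0.x API, e.g. CUDA tensors alongside CPU tensors, or Variables that were never unwrapped), so the underlying torch.cat rejects the sequence. A hedged sketch of a workaround, with a toy stand-in for the images list from pipeline/visualization.py:

import torch
import torchvision

# Toy stand-in for the list built in visualize_shader.
images = [torch.rand(3, 64, 64) for _ in range(6)]

# Bring every element to one concrete type/device before gridding; on
# PyTorch 0.3, Variables additionally need unwrapping via .data.
images = [im.data.cpu() if hasattr(im, 'data') else im.cpu() for im in images]
grid = torchvision.utils.make_grid(images, nrow=3, padding=0)
grid = grid.numpy().transpose(1, 2, 0)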

Error downloading datasets

The links in download_dataset.sh seem to be broken. I get this error

[screenshot of the download error, 2018-11-04]

Can you please update the links? I am downloading just one dataset, so I'm using this command

./download_data.sh {car}

Thank you

RuntimeError: $ Torch: not enough memory

When running the shader, I get the error shown in the screenshot below.
[screenshot of the error]
I don't have a GPU at the moment, so I run on CPU (I just changed the cuda tensors to plain tensors). To make the model run quickly, I use only one category, motorbike, with num_train=1000 and the same num_val as yours. I get the error in the first epoch. My RAM is 8 GB, which I would think should be enough, so this seems strange.
Any idea where it comes from? Thank you very much!

Albedo and Shading incorrect in Composer output

Hello,

While running the composer with the unmodified code, I get a good reconstruction, but the albedo image has shading artifacts and the shading image is mostly white and flat.
I trained it for 108 epochs, with each epoch following the schedule 10_shape,10_shading,reflectance,20_lighting.

https://drive.google.com/file/d/1CpjuBO7jbZvlkDE3zOcoBVoXfQC5j2ft/view?usp=sharing

I could see overfitting in the albedo, shading, and reconstruction losses, so I added dropout to the decomposer and trained it for 500 epochs. I then trained the composer with dropout added to the decomposer and the shader, and set the lights multiplier to 1.5, using the same schedule as above for 300 epochs. The overfitting was reduced, but the network learned the albedo poorly while doing relatively better than before on shape and shading. The albedo output from the decomposer is good, but not from the composer, as can be seen below:

https://drive.google.com/file/d/1bYfbIxrFp4zPSS02feMNfdzh0Xtrmxsj/view?usp=sharing

So now I'm training the albedo decoder of the composer independently. Could you help me figure out what is going wrong? Ultimately, I aim to perform self-supervised training on images of my own.

For normal .obj files rendering

Hi Michael,

I am using your rendering code to render intrinsic images from ordinary .obj datasets, such as faces. The raw files (.obj, .mtl, .png) are in the attached zip file. I found that common .obj meshes do not have many "Mesh" groups the way ShapeNet models do. So for ShapeNet I can render intrinsic images without problems, but for ordinary .obj files I cannot load the color/texture in the composite/albedo passes with your code. The rendered results are:
Composite:
[rendered composite image]
Albedo:
[rendered albedo image]
Depth:
[rendered depth image]

Could you please take a look, or suggest how to modify your code so that it also works for ordinary .obj files? Thanks so much!

0365489.zip

AssertionError: MaskedSelect can't differentiate the mask

For PyTorch 0.3.0 users: there is a bug related to autograd. Lines 63-64 of pipeline/utils.py, img[mask] /= 2. and img[mask] += .5, trigger "AssertionError: MaskedSelect can't differentiate the mask".
As suggested in https://github.com/yunjey/StarGAN/issues/12, this is a bug in version 0.3.0 that can be fixed by replacing those lines with
img = img * (1. - (mask == 1).float() / 2.)
img = img + (mask == 1).float() * .5

It is also suggested in https://discuss.pytorch.org/t/get-error-message-maskedfill-cant-differentiate-the-mask/9129 to use the .detach() method.
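
A minimal sketch of the .detach() variant, with toy stand-ins for the tensors that pipeline/utils.py actually uses (on modern PyTorch the mask is a plain bool tensor and the bug no longer appears; the detach is what the 0.3.0 workaround amounts to):

import torch

# Toy stand-in; in the pipeline, img comes from the network output.
img = torch.rand(3, 64, 64, requires_grad=True).clone()
mask = (img > 0.5).detach()  # detach: never differentiate through the mask
img[mask] /= 2.
img[mask] += .5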

Expected output of shader.py

Hi,
I was able to run shader.py for 500 epochs. Looking at visualization.py, I think the output should have 3 columns: the first containing the shape images, the second the shading predictions from the network, and the third the ground-truth shading images.

Is that understanding right? In the image below, showing the output after 500 epochs, the second column does not look like a reasonable shading output. It also does not match the lighting spheres, as I can see light coming from the same direction everywhere.

Kindly help me interpret the image and tell me where my understanding goes wrong.
Also, my PyTorch version is 1.1.0.

[shader visualization after epoch 499]
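
For reference, the 3-column layout comes from make_grid(..., nrow=3): with one (shape, prediction, ground truth) triple per example, each example lands on its own row. A minimal sketch with toy tensors (names are illustrative):

import torch
import torchvision

# One triple per example: [shape input, predicted shading, true shading].
shape_img = torch.rand(3, 64, 64)
pred_shading = torch.rand(3, 64, 64)
true_shading = torch.rand(3, 64, 64)

grid = torchvision.utils.make_grid(
    [shape_img, pred_shading, true_shading], nrow=3, padding=0)
# grid has shape (3, 64, 192): one row of three columns, so a batch of
# N examples stacked this way produces N rows.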

Pre-trained models

Hi,

Could you share the trained models so that we can test directly on new images?

RuntimeError, tensor size mismatch

I get the following error when running the decomposer:

Traceback (most recent call last):
File "decomposer.py", line 62, in
train_losses = trainer.train()
File "/home/nietog/Projects/intrinsics-network/pipeline/DecomposerTrainer.py", line 42, in train
err = self.__epoch()
File "/home/nietog/Projects/intrinsics-network/pipeline/DecomposerTrainer.py", line 30, in __epoch
loss.backward()
File "/home/nietog/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/home/nietog/anaconda3/lib/python3.6/site-packages/torch/autograd/init.py", line 99, in backward
variables, grad_variables, retain_graph)
File "/home/nietog/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 91, in apply
return self._forward_cls.backward(self, *args)
File "/home/nietog/anaconda3/lib/python3.6/site-packages/torch/autograd/functions/tensor.py", line 454, in backward
print(grad_output.clone().masked_fill_(mask, 0))
RuntimeError: The expanded size of the tensor (1) must match the existing size (32) at non-singleton dimension 1. at /opt/conda/conda-bld/pytorch_1512387374934/work/torch/lib/THC/generic/THCTensor.c:323

Any idea where it comes from? Thanks in advance.
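
The message points to a mask with a size-1 dimension being implicitly expanded against a 32-wide dimension inside the masked op's backward. A hedged sketch of the usual remedy, expanding the mask explicitly so forward and backward see matching shapes (the (batch, 1, H, W) mask scenario is my assumption, not taken from DecomposerTrainer.py):

import torch

batch, C, H, W = 4, 32, 8, 8
pred = torch.rand(batch, C, H, W, requires_grad=True)
mask = torch.rand(batch, 1, H, W) > 0.5  # single-channel validity mask

# Expand the mask to the prediction's full shape before any masked op,
# rather than relying on implicit broadcasting.
mask = mask.expand(batch, C, H, W)
loss = pred.masked_fill(mask, 0.).sum()
loss.backward()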
