kwea123 / casmvsnet_pl

Cascade Cost Volume for High-Resolution Multi-View Stereo and Stereo Matching using pytorch-lightning

License: GNU General Public License v3.0

Python 21.36% Jupyter Notebook 72.94% MATLAB 5.70%
3d-reconstruction blendedmvs cascade-cost-volume depth-prediction dtu-dataset multi-view-stereo mvsnet pytorch pytorch-lightning tanks-and-temples

casmvsnet_pl's Introduction

Hi 👋, I'm AI葵(AI Aoi), a researcher and VTuber of computer vision!

  • 🔭 I’m currently working on novel view synthesis, multi-view stereo and unity VR/AR/MR integration

  • 💬 Ask me about anything related to deep learning in computer vision, Unity VR

  • In any language: English/中文/français/日本語

  • Chat with me!

  • Sponsor me! (paypal or github sponsor)

My skills

cpp csharp unity java opencv python pytorch tensorflow


casmvsnet_pl's People

Contributors

kwea123

casmvsnet_pl's Issues

debugging with pdb

I wanted to debug this program with pdb, so I ran the command below

python -m pdb train.py \
   --dataset_name dtu \
   --root_dir $DTU_DIR \
   --num_epochs 16 --batch_size 2 \
   --depth_interval 2.65 --n_depths 8 32 48 --interval_ratios 1.0 2.0 4.0 \
   --optimizer adam --lr 1e-3 --lr_scheduler cosine \
   --exp_name exp

and set a breakpoint by

b dataset/dtu.py:148

and continued running the program by typing c.
Normally, the program should pause at dataset/dtu.py:148, but instead the message below was printed and the program restarted.

The program finished and will be restarted

However, when I set a breakpoint in model/mvsnet.py, the program paused as expected.
I am confused about the difference between dataset/dtu.py and model/mvsnet.py when debugging with pdb. Could you please help me out?
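One possible explanation, offered as an assumption rather than a confirmed answer: with num_workers > 0, the dataset's __getitem__ runs in DataLoader worker subprocesses, where breakpoints set in the parent pdb session are never hit, whereas the model code runs in the main process. A tiny illustration:

from torch.utils.data import DataLoader, Dataset

class Dummy(Dataset):
    def __len__(self):
        return 4
    def __getitem__(self, idx):
        # with num_workers > 0 this runs in a worker subprocess, so a breakpoint
        # set here from the parent pdb session is never reached
        return idx

# num_workers=0 keeps the dataset code in the main process, where pdb can stop
loader = DataLoader(Dummy(), batch_size=2, num_workers=0)
for batch in loader:
    pass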

Test on Tanks and Temples

Hi, thanks for your excellent work.

I ran the code of the original CasMVSNet and tested on Tanks and Temples with their pretrained model.

I found that if I don't adjust the number of planes (which is "48,32,8"), the resulting point cloud looks bad. After adjusting the number of planes of the coarsest level to 160 (or 192), the resulting point cloud is much better.

I also noticed that in your eval.py the number of planes is still "48,32,8", and your release mentions "Fusion results for all scans using default parameters". What's more, the point cloud you provide covers a smaller scene range, yet it contains many more points than what I got from the original CasMVSNet (after adjusting the number of planes of the coarsest level).

I wonder how you achieved such a good result. Thank you very much!

Also, I notice that the per-scan depth_interval values you set for Tanks and Temples don't seem to fit the relationship between n_depths (48×4=192) and the interval (according to your comment). Could this be the reason?
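For reference, the relationship hinted at can be checked with the DTU training values that appear in the command elsewhere on this page (--depth_interval 2.65, --n_depths 8 32 48, --interval_ratios 1.0 2.0 4.0): the coarsest level is meant to span the same range as a single 192-plane volume. A quick arithmetic check:

depth_interval = 2.65                       # DTU value from the training command
n_depths_coarsest, ratio_coarsest = 48, 4.0
covered = n_depths_coarsest * ratio_coarsest * depth_interval   # 508.8
assert abs(covered - 192 * depth_interval) < 1e-9               # 48 * 4 = 192 planes' worth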

Confidence map

Hi,
I see that in your code you build the confidence map from the predicted depth probabilities. As I understand it, this probability tensor contains the probability for each sampled depth value along the ray for each pixel, right? Can you explain more about how you compute this confidence map? Thanks!

with torch.no_grad():
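(The with torch.no_grad(): line above is the start of the relevant block in the repo.) For reference, the usual MVSNet-style confidence sums the probability mass of a few hypotheses around the winning depth, so a peaky (confident) distribution gives a value close to 1. A minimal self-contained sketch, not necessarily this repo's exact code:

import torch
import torch.nn.functional as F

# prob_volume: (B, D, H, W), softmax over the D sampled depth hypotheses
B, D, H, W = 2, 48, 4, 5
prob_volume = torch.softmax(torch.rand(B, D, H, W), dim=1)

with torch.no_grad():
    # probability mass of a 4-hypothesis window around every depth index
    padded = F.pad(prob_volume.unsqueeze(1), (0, 0, 0, 0, 1, 2))          # pad the depth dim
    prob_sum4 = 4 * F.avg_pool3d(padded, (4, 1, 1), stride=1).squeeze(1)  # (B, D, H, W)
    # index of the winning depth (the argmax here; the repo regresses the depth instead)
    depth_index = prob_volume.argmax(dim=1, keepdim=True)                 # (B, 1, H, W)
    confidence = torch.gather(prob_sum4, 1, depth_index).squeeze(1)       # (B, H, W)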

ImportError when running eval.py

Hi, I use an RTX 3090 with python==3.8.10, pytorch==1.7.1, cudatoolkit==11.0.3, cudnn==8.0.5, pytorch-lightning==0.7.5, and the rest matching requirements.txt. But I encountered this error:

File "eval.py", line 11, in
from models.mvsnet import CascadeMVSNet
File "/models/mvsnet.py", line 4, in
from .modules import *
File "/models/modules.py", line 4, in
from inplace_abn import InPlaceABN
File "/anaconda3/envs/cascade/lib/python3.8/site-packages/inplace_abn/init.py", line 1, in
from .abn import ABN, InPlaceABN, InPlaceABNSync
File "/anaconda3/envs/cascade/lib/python3.8/site-packages/inplace_abn/abn.py", line 8, in
from .functions import inplace_abn, inplace_abn_sync
File "/anaconda3/envs/cascade/lib/python3.8/site-packages/inplace_abn/functions.py", line 8, in
from . import _backend
ImportError: /anaconda3/envs/cascade/lib/python3.8/site-packages/inplace_abn/_backend.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN2at11leaky_relu_ERNS_6TensorERKN3c106ScalarE

What should I do to run this test on DTU dataset?
Thank you for your time.

Pretrained BlendedMVS model

@kwea123, your results trained with BlendedMVS look enticing -- is your model available for download? Perhaps I'm missing something in plain sight...

Also, if you have a moment can you confirm that you used Yao Yao's large image set when training? Any training notes would be wonderful.

Thanks again,

-Kevin

Choosing depth interval for Tanks and Temples dataset

Hi,

  1. I wonder if you have trained your model on the Tanks and Temples dataset. I have read your dataloader for this dataset (used for evaluation), and I see that you hand-tuned the depth interval for each scene, presumably for the best results. However, I am writing my own dataloader for this dataset and I wonder how to choose an optimal depth_interval value for the whole dataset for training (see the sketch after this list).
    I know it is possible to use a different depth_interval for each scene, but does that work in practice?

  2. I see that in the BlendedMVS dataloader you scale the depth_min value. Should I do something similar for the Tanks and Temples dataset? I checked a few cam files and the depth_min values are quite similar (0.04 ~ 1.99), unlike BlendedMVS, but I just want to make sure.
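Regarding question 1, here is a minimal sketch of one common heuristic (an assumption on my part, not necessarily what this repo does): derive each scene's interval from its depth range, so that a fixed number of planes covers it.

depth_min, depth_max = 0.5, 10.0   # hypothetical per-scene range from the cam files
n_planes = 192                     # the range a single full-resolution volume should span
depth_interval = (depth_max - depth_min) / n_planes
print(depth_interval)              # per-scene interval, analogous to the hand-tuned values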

about training problem

Thanks for your great work. I want to train the network; how many GPUs did you use for training?

Testing with colmap poses

Hi kwea123 :)
I tried to use COLMAP poses to test scan9 of the DTU dataset. In particular, I dropped the scan9 images directly into COLMAP and used its outputs as the intrinsic and extrinsic parameters. The depth range is about 5 to 9, whereas in DTU the range is around 400 to 900. Unfortunately, I got very bad results, and I do not know where the error is.
Do you have any suggestions?

By the way, I also wonder whether the performance is still acceptable for low-texture objects.
Thank you very much.

Does this method work on real-captured images?

Hi,
I am just wondering whether I can use the pretrained DTU or BlendedMVS model on images captured with a mobile phone.
I also wonder how I can obtain the poses to feed as input to the network. Please help me with this.

testing on tanks datasets

I find that the pair information of the 99th image in the Tanks and Temples advanced set is None. How can I solve this problem?

(screenshot)

About the CasMVSNet

Hi, recently I read the paper "Deep Stereo using Adaptive Thin Volume Representation with Uncertainty Awareness", published on 27 Nov 2019. I think CasMVSNet is very similar to that paper.

question about purpose of scale_factor in blendedmvs.py

Hi, thank you very much for your code at first!
I have some questions about scale_factor used in ./datasets/blendedmvs.py
the locations are:

if scan not in self.scale_factors:
    # use the first cam to determine scale factor
    self.scale_factors[scan] = 100 / depth_min
depth_min *= self.scale_factors[scan]
extrinsics[:3, 3] *= self.scale_factors[scan]

and
depth *= self.scale_factors[scan]

I think you are scaling up the scenes in BlendedMVS, and I also know the unit in BlendedMVS is uncertain.
However, I feel it is not necessary to scale up. I think the essence of CasMVSNet is to find correspondences among multiple views, and that only depends on the images. Scaling up does not help find correspondences. (I think the depth map is a form of correspondence.)

Therefore, I want to know why you think it is necessary, and whether you saw improvements after adding scale_factor.
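For what it's worth, a small numerical check of what the scaling above does geometrically: uniformly scaling the 3-D points together with the camera translations leaves every projection unchanged and only rescales the depth values, so it does not change which correspondences exist; it only brings each scene's depth range to a comparable scale for a fixed depth_interval. (This is my reading, not the author's statement.)

import numpy as np

K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
R = np.eye(3)
t = np.array([0.1, -0.2, 2.0])
X = np.array([0.3, 0.4, 5.0])

def project(K, R, t, X):
    x = K @ (R @ X + t)
    return x[:2] / x[2], x[2]               # pixel coordinates, depth

pix1, d1 = project(K, R, t, X)
s = 100.0 / d1                              # same idea as scale_factors[scan] = 100 / depth_min
pix2, d2 = project(K, R, s * t, s * X)
assert np.allclose(pix1, pix2)              # the pixel is unchanged
assert np.isclose(d2, s * d1)               # only the depth is scaled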

A question when testing on DTU

Hi, I have a question when testing on the DTU dataset.
I set self.img_wh of DTUDataset to (640, 512) (in other words, the same as the training image size). I expected the intrinsics read from self.root_dir/Cameras/{vid:08d}_cam.txt to be the same as the intrinsics read from self.root_dir/Cameras/train/{vid:08d}_cam.txt after line 64 in dtu.py.
However, they are not the same.

I printed the intrinsics after line 64:

[[289.233      0.        82.3204  ]
 [  0.       307.5392    66.034134]
 [  0.         0.         1.      ]]

while the intrinsics read from self.root_dir/Cameras/train/{vid:08d}_cam.txt are:

361.54125      0.0      82.900625 
0.0      360.3975      66.383875 
0.0      0.0      1.0  

I can't figure out what's wrong with my reasoning. Could you give me some hints? Please forgive me if it is a stupid question. Many thanks for your kindness.

The influence of batch size with Inplace-ABN

Excellent work using Inplace-ABN to reduce the memory cost.

Now I want to train the cascade model at a larger resolution of 1152x864 on the DTU dataset, to find out whether a larger resolution improves the 3D reconstruction performance. However, the batch size can only be set to one for the larger input. Does the batch size influence the performance of Inplace-ABN? And have you ever trained cascade_pl at a larger resolution on DTU; does the larger resolution improve its 3D reconstruction performance?

Another question is about the DTU evaluation metrics, accuracy and completeness. Why is lower better for both? In the DTU dataset paper (http://roboimagedata2.compute.dtu.dk/data/text/multiViewCVPR2014.pdf) I find the statement "there is a tradeoff between completeness and accuracy with [19] being the most accurate and [2] being the most complete". Therefore, I'm confused about whether lower accuracy and completeness values are always better, as most MVS papers assume.

The last question is about how images are processed at test time. I find that you simply resize the images to the test size, while R-MVSNet first resizes them to an approximate size and then crops them to the needed size. What is the difference between the two approaches, and which is better for arbitrary test image sizes?
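As a general note on the two preprocessing styles (a sketch of standard camera bookkeeping, not the exact code of either repo): resizing and cropping change the intrinsics differently, which is the main practical difference between them.

import numpy as np

def resize_K(K, sx, sy):
    K = K.copy()
    K[0] *= sx            # resizing scales fx and cx ...
    K[1] *= sy            # ... and fy and cy
    return K

def crop_K(K, x0, y0):
    K = K.copy()
    K[0, 2] -= x0         # cropping only shifts the principal point
    K[1, 2] -= y0
    return K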

Results on DTU

Hi, thank you for your contributions. How can I obtain results such as abs_err | acc_1mm | acc_2mm | acc_4mm with your project?

Size mismatch when running with release model trained on BlendedMVS data

While the .ply output I compute using your released model has a very similar point count to your released .ply models, the overall quality is quite different, as shown below. The former clearly captures more detail.

@kwea123, do these results for DTU's scan9 dataset look reasonable? I have not changed the default image width/height set in the code (1152x864).

I also attempted to run with the release model trained on BlendedMVS data https://github.com/kwea123/CasMVSNet_pl/releases/tag/1.5, but that led to the size mismatch noted below:

Trained on DTU: _ckpt_epoch_10.ckpt
25.37 M points
scan9_DTU

Release output: Reference model
25.42 M points
scan_9_ref

Trained on BlendedMVS: epoch.15.ckpt

RuntimeError: Error(s) in loading state_dict for CascadeMVSNet: size mismatch for cost_reg_1.conv0.conv.weight: copying a param with shape torch.Size([8, 8, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([8, 16, 3, 3, 3]). size mismatch for cost_reg_2.conv0.conv.weight: copying a param with shape torch.Size([8, 8, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([8, 32, 3, 3, 3]).
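The mismatched input-channel sizes (8 vs 16/32 in cost_reg_*.conv0) look like a group-wise correlation setting baked into the checkpoint. A hypothetical fix, assuming the constructor arguments shown in the train.py pasted further down this page, is to build the model with num_groups matching the checkpoint (reported as 8 for the BlendedMVS checkpoint elsewhere in these issues) before loading it:

from models.mvsnet import CascadeMVSNet
from inplace_abn import InPlaceABN
from utils import load_ckpt     # helper used in train.py

model = CascadeMVSNet(n_depths=[8, 32, 48],
                      interval_ratios=[1.0, 2.0, 4.0],
                      num_groups=8,               # assumption: must match the checkpoint
                      norm_act=InPlaceABN)
load_ckpt(model, 'epoch.15.ckpt')                 # BlendedMVS checkpoint from release 1.5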

About the Inplace-ABN

Hi, thanks for sharing the code tricks!
As you say, GPU memory usage is reduced by about 15% when BN+ReLU is replaced with Inplace-ABN.
I would like to know whether this helps only during training, or during both training and testing.

Looking forward to your reply.
Many Thanks!!!

About homography warping

I find that the homography warping code in cascade_pl differs from that of the original cascade. Is the homography warping code of the original cascade wrong?
cascade_pl: (screenshot)
original cascade: (screenshot)

About Inplace-abn

Excellent work!
I notice that during training you use 'from inplace_abn import InPlaceABN', but at test time it becomes 'from inplace_abn import ABN'. Is there any difference between the two?

small mistakes in /datasets/tanks.py I think

Hi, first of all, THANKS a lot for your excellent work.
but I think there are some mistakes when loading data from TanksDataset.
line 68 in tanks.py should be: with open(os.path.join(self.root_dir, self.split, scan, 'pair.txt')) as f:
line 82 in tanks.py should be: proj_mat_filename = os.path.join(self.root_dir, self.split, scan, f'cams/{vid:08d}_cam.txt')
line 132 in tanks.py should be: img_filename = os.path.join(self.root_dir, self.split, scan, f'images/{vid:08d}.jpg')

In short: I think self.split is missing when loading data in TanksDataset.

Reference image

How can I know which picture is used as the reference image for the reconstructed point cloud?

Training error on BlendedMVS dataset.

Hi,

I am now testing training on the BlendedMVS dataset and getting the following error. (I am using PyTorch 1.8.1, because earlier versions do not support the RTX 3080 Ti.)

  File "/home/vge/Documents/Han/CasMVSNet_pl/models/mvsnet.py", line 223, in forward
    depth_values = rearrange(init_depth_min, 'b -> b 1') + \
  File "/home/vge/miniconda3/envs/mvs/lib/python3.8/site-packages/einops/einops.py", line 424, in rearrange
    return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
  File "/home/vge/miniconda3/envs/mvs/lib/python3.8/site-packages/einops/einops.py", line 376, in reduce
    raise EinopsError(message + '\n {}'.format(e))
einops.EinopsError:  Error while processing rearrange-reduction pattern "b -> b 1".
 Input tensor shape: torch.Size([2, 1]). Additional info: {}.
 Expected 1 dimensions, got 2

The part of the code is

                    depth_values = rearrange(init_depth_min, 'b -> b 1') + \
                                   rearrange(depth_interval_l, 'b -> b 1') * \
                                   rearrange(torch.arange(0, D,

where init_depth_min is of size [2, 1].
I think this is correct, because my batch size is 2. But einops still complains about it.
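For reference, rearrange with the pattern 'b -> b 1' only accepts a 1-D tensor, so a [2, 1] input triggers exactly this error. A minimal illustration (whether flattening is the right fix for this code path is a separate question):

import torch
from einops import rearrange

init_depth_min = torch.rand(2, 1)                          # the shape reported in the error
# rearrange(init_depth_min, 'b -> b 1')                    # raises EinopsError: expected 1 dimension, got 2
fixed = rearrange(init_depth_min.flatten(), 'b -> b 1')    # the pattern wants a 1-D tensor
print(fixed.shape)                                         # torch.Size([2, 1])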

My command line is

--dataset_name blendedmvs --root_dir ***/BlendedMVS --num_epochs 16 --batch_size 2 --depth_interval 192.0 --n_depths 8 32 48 --interval_ratios 1.0 2.0 4.0 --optimizer adam --lr 1e-3 --lr_scheduler cosine --exp_name exp --num_gpus 1

Could this be a bug in the einops library? I have not used it before.

Thanks,
Han

Training loss

Hi, thank you for your code! I have some questions:

  1. Are there any changes between your code and the original cascade-stereo code?
  2. What is the final training loss of Cascade MVSNet on the DTU dataset?

Blended checkpoint (15) worse performance than DTU checkpoint (10) on smaller objects

Hi,

Thanks for the lightning code. It is very useful for understanding the cascade MVSNet. I've noticed worse performance on smaller objects with the epoch 15 checkpoint. I'm testing my own dataset, run through COLMAP and then through the colmap2mvsnet script. All settings are the same other than num_groups, which is set to 8 for the blended checkpoint.

Have you seen similar issues? Is there anything I need to change other than the num_groups setting? For the images, I'm running n_views = 5. The depth range is between 3.0 and 7.0. The original CasMVSNet and UCSNet process the images correctly; only the blended checkpoint has issues. The images are of an anime character. Here are examples of the good and bad depth maps.

Thanks

depth_epoch_10_scale_factor_200
depth_epoch_15_scale_factor_200

evaluation metric

Hi kwea123, thanks for sharing your implementation. I find that in this paper they use accuracy and completeness, and I understand why they use both. I checked the original paper "Large-Scale Data for Multiple-View Stereopsis" and the descriptions are:

– Accuracy is measured as the distance from the MVS reconstruction to the structured light reference, encapsulating the quality of the reconstructed MVS points.
– Completeness is measured as the distance from the reference to the MVS reconstruction, encapsulating how much of the surface is captured by the MVS reconstruction.
....
These distances are measured by comparing structured light and MVS-reconstructed 3D point clouds. More specifically, we measure the distance from every point in one point cloud to the closest point in the other point cloud and then we record statistics about the distribution of these. We chose to characterize these empirical probability distribution functions (PDFs) by their mean and median, after removing observations with distances above 20 mm. The latter was done so that a few large outliers would not dominate the result.
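For concreteness, a minimal sketch of the measurement described above, assuming the two point clouds are already in the same (aligned) coordinate frame and using scipy's cKDTree rather than the official MATLAB evaluation:

import numpy as np
from scipy.spatial import cKDTree

def mean_distance(from_pts, to_pts, max_dist=20.0):
    # mean nearest-neighbour distance from from_pts to to_pts ((N, 3) arrays),
    # dropping observations above max_dist so a few large outliers do not dominate
    d, _ = cKDTree(to_pts).query(from_pts, k=1)
    return d[d < max_dist].mean()

# accuracy:      MVS reconstruction -> structured-light reference
# completeness:  reference -> MVS reconstruction
# acc  = mean_distance(mvs_points, reference_points)
# comp = mean_distance(reference_points, mvs_points)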

so I don't understand why every point in one point cloud is matched to the closest point in the other point cloud. Do we need to align the two point clouds first? Even if they are aligned, how can we ensure that the matched points are correct corresponding points?

thanks again.
cheers!

Fusing depth maps estimated for Tanks and Temples dataset on 2080Ti

Hi,
Thank you for your open-source code. CasMVSNet contributes a lot to the efficient multi-view depth estimation with limited GPU memory.

I am using the provided fusibile method to fuse the full-resolution depth maps estimated for the Tanks and Temples dataset. However, the fusion procedure crashes and seems to require more memory.
Is there any way to fuse the depth maps for the Tanks and Temples dataset with 11 GB of GPU RAM?

Calculate real depth

Hello, I have been studying your code; it's very interesting!
How do you convert the predicted depth in CasMVSNet_pl to real depth and get an absolute error near 4 mm?
And could you give me advice on how to convert predicted depth (nerf_pl) to real depth? (I get an absolute error of 10-20 mm on the DTU dataset.)

About Homography Warping

Hi, I find the implemented homo_warp different from the formula referenced in the paper, and I find it hard to prove that the two are mathematically equal:
code: (screenshot)

formula: (screenshot)

Could you please give a reference for the algorithm used in your implementation?
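For reference, here is a sketch of the back-project-then-reproject form that grid_sample-based warps typically implement; for a plane sweep over fronto-parallel planes it is mathematically equivalent to the paper's plane-induced homography, which may be why the code looks different from the formula. This is my own illustration under stated assumptions (in particular that proj_mat already combines both cameras' parameters), not the repo's exact homo_warp:

import torch
import torch.nn.functional as F

def homo_warp_sketch(src_feat, proj_mat, depth_values):
    # src_feat:     (B, C, H, W) source-view feature map
    # proj_mat:     (B, 3, 4) matrix mapping depth-scaled homogeneous reference pixels
    #               to homogeneous source pixels, i.e. K_src [R|t]_rel composed with K_ref^-1
    # depth_values: (B, D) depth hypotheses of the reference view
    B, C, H, W = src_feat.shape
    D = depth_values.shape[1]
    R, T = proj_mat[:, :, :3], proj_mat[:, :, 3:]                   # (B,3,3), (B,3,1)

    # homogeneous pixel grid of the reference view ('ij' indexing needs PyTorch >= 1.10)
    y, x = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing='ij')
    ref_grid = torch.stack([x, y, torch.ones_like(x)]).reshape(1, 3, -1).expand(B, 3, H * W)

    # back-project each pixel onto every depth plane, then project into the source view
    src_pts = (R @ ref_grid).unsqueeze(2) * depth_values.view(B, 1, D, 1) \
              + T.unsqueeze(2)                                      # (B, 3, D, H*W)
    xy = src_pts[:, :2] / src_pts[:, 2:].clamp(min=1e-6)            # perspective divide

    # normalize to [-1, 1] and sample the source features
    xn = xy[:, 0] / ((W - 1) / 2) - 1
    yn = xy[:, 1] / ((H - 1) / 2) - 1
    grid = torch.stack([xn, yn], dim=-1).view(B, D * H, W, 2)
    warped = F.grid_sample(src_feat, grid, align_corners=True, padding_mode='zeros')
    return warped.view(B, C, D, H, W)                               # warped feature volume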

How to remove the InplaceABN part

I have problems compiling the InplaceABN module, and I find that with InplaceABN the PyTorch version is constrained. Thus, I want to remove the InplaceABN module and replace it with half precision. How should I do that? Thank you~
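For what it's worth, a minimal sketch of a stand-in that removes the compiled dependency, assuming the model's norm_act argument only needs a module taking num_features (InPlaceABN's default activation is a leaky ReLU with slope 0.01, as far as I know). It uses more memory than InPlaceABN because the normalization is no longer in-place:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BNAct(nn.Module):
    # Hypothetical drop-in for InPlaceABN: plain batch norm + LeakyReLU.
    # Works for both (B, C, H, W) and (B, C, D, H, W) inputs because F.batch_norm
    # normalizes over dim 1 regardless of the number of spatial dimensions.
    def __init__(self, num_features, activation_param=0.01, **kwargs):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer('running_mean', torch.zeros(num_features))
        self.register_buffer('running_var', torch.ones(num_features))
        self.slope = activation_param

    def forward(self, x):
        x = F.batch_norm(x, self.running_mean, self.running_var,
                         self.weight, self.bias, self.training)
        return F.leaky_relu(x, self.slope)

# usage sketch: CascadeMVSNet(..., norm_act=BNAct)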

Creating my own datasets

Hello, thank you for open-sourcing this. Since the model is trained on public datasets, the results when testing on my own dataset are not good. What can I do to build my own dataset? How do I obtain the Depth and Rectified images for it?

Evaluate on DTU

When I load the DTU checkpoint you provide (https://github.com/kwea123/CasMVSNet_pl/releases/download/v1.0/_ckpt_epoch_10.ckpt), the results I obtain on DTU's validation set differ from the results you report.

{'test/abs_err': tensor(4.3819, device='cuda:0'),
'test/acc_1mm': tensor(0.7146, device='cuda:0'),
'test/acc_2mm': tensor(0.8374, device='cuda:0'),
'test/acc_4mm': tensor(0.9029, device='cuda:0'),
'test/infer_time': 0.21366805917513113,
'test/loss': tensor(16.9188, device='cuda:0'),
'test_abs_err': tensor(4.3819, device='cuda:0'),
'test_loss': tensor(16.9188, device='cuda:0')}

This is the train.py slightly modified by me:

import os, sys
from opt import get_opts
import torch

from torch.utils.data import DataLoader
from datasets import dataset_dict

# models
from models.mvsnet import CascadeMVSNet
from inplace_abn import InPlaceABN

from torchvision import transforms as T

# optimizer, scheduler, visualization
from utils import *

# losses
from losses import loss_dict

# metrics
from metrics import *

# pytorch-lightning
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.loggers import TestTubeLogger
import time


class MVSSystem(LightningModule):
    def __init__(self, hparams):
        super(MVSSystem, self).__init__()
        self.hparams = hparams
        # to unnormalize image for visualization
        self.unpreprocess = T.Normalize(mean=[-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225],
                                        std=[1 / 0.229, 1 / 0.224, 1 / 0.225])

        self.loss = loss_dict[hparams.loss_type](hparams.levels)
        self.eval_time = []
        self.test_time = []

        self.model = CascadeMVSNet(n_depths=self.hparams.n_depths,
                                   interval_ratios=self.hparams.interval_ratios,
                                   num_groups=self.hparams.num_groups,
                                   norm_act=InPlaceABN,
                                   hparams=hparams)

        # if num gpu is 1, print model structure and number of params
        if self.hparams.num_gpus == 1:
            # print(self.model)
            print('number of parameters : %.2f M' %
                  (sum(p.numel() for p in self.model.parameters() if p.requires_grad) / 1e6))

        # load model if checkpoint path is provided
        if self.hparams.ckpt_path != '':
            print('Load model from', self.hparams.ckpt_path)
            load_ckpt(self.model, self.hparams.ckpt_path, self.hparams.prefixes_to_ignore)

    def decode_batch(self, batch):
        imgs = batch['imgs']
        proj_mats = batch['proj_mats']
        depths = batch['depths']
        masks = batch['masks']
        init_depth_min = batch['init_depth_min']
        depth_interval = batch['depth_interval']
        return imgs, proj_mats, depths, masks, init_depth_min, depth_interval

    def forward(self, imgs, proj_mats, init_depth_min, depth_interval, log=None, uniform_divide=False):
        return self.model(imgs, proj_mats, init_depth_min, depth_interval, log, uniform_divide)

    def setup(self, stage=None):
        if stage == 'fit' or stage is None:
            dataset = dataset_dict[self.hparams.dataset_name]
            self.train_dataset = dataset(root_dir=self.hparams.root_dir,
                                         split='train',
                                         n_views=self.hparams.n_views,
                                         levels=self.hparams.levels,
                                         depth_interval=self.hparams.depth_interval)
            self.val_dataset = dataset(root_dir=self.hparams.root_dir,
                                       split='val',
                                       n_views=self.hparams.n_views,
                                       levels=self.hparams.levels,
                                       depth_interval=self.hparams.depth_interval)
        elif stage == 'test':
            dataset = dataset_dict[self.hparams.dataset_name]
            self.test_dataset = dataset(root_dir=self.hparams.root_dir,
                                        split='val',
                                        n_views=self.hparams.n_views,
                                        levels=self.hparams.levels,
                                        depth_interval=self.hparams.depth_interval)

    def configure_optimizers(self):
        self.optimizer = get_optimizer(self.hparams, self.model)
        scheduler = get_scheduler(self.hparams, self.optimizer)

        return [self.optimizer], [scheduler]

    def train_dataloader(self):
        return DataLoader(self.train_dataset,
                          shuffle=True,
                          num_workers=4,
                          batch_size=self.hparams.batch_size,
                          pin_memory=True)

    def val_dataloader(self):
        return DataLoader(self.val_dataset,
                          shuffle=False,
                          num_workers=4,
                          batch_size=self.hparams.batch_size,
                          pin_memory=True)

    def test_dataloader(self):
        return DataLoader(self.test_dataset,
                          shuffle=False,
                          num_workers=2,
                          batch_size=2,
                          pin_memory=True)

    def training_step(self, batch, batch_nb):
        log = {'lr': get_learning_rate(self.optimizer)}
        imgs, proj_mats, depths, masks, init_depth_min, depth_interval = \
            self.decode_batch(batch)
        results, log = self(imgs, proj_mats, init_depth_min, depth_interval, log=log)
        log['train/loss'] = loss = self.loss(results, depths, masks)

        with torch.no_grad():
            if batch_nb == 0:
                img_ = self.unpreprocess(imgs[0, 0]).cpu()  # batch 0, ref image
                depth_gt_ = visualize_depth(depths['level_0'][0])
                depth_pred_ = visualize_depth(results['depth_0'][0] * masks['level_0'][0])
                prob = visualize_prob(results['confidence_0'][0] * masks['level_0'][0])
                stack = torch.stack([img_, depth_gt_, depth_pred_, prob])  # (4, 3, H, W)
                self.logger.experiment.add_images('train/image_GT_pred_prob',
                                                  stack, self.global_step)

            depth_pred = results['depth_0']
            depth_gt = depths['level_0']
            mask = masks['level_0']
            log['train/abs_err'] = abs_err = abs_error(depth_pred, depth_gt, mask).mean()
            log['train/acc_1mm'] = acc_threshold(depth_pred, depth_gt, mask, 1).mean()
            log['train/acc_2mm'] = acc_threshold(depth_pred, depth_gt, mask, 2).mean()
            log['train/acc_4mm'] = acc_threshold(depth_pred, depth_gt, mask, 4).mean()

        return {'loss': loss,
                'progress_bar': {'train_abs_err': abs_err},
                'log': log
                }

    def validation_step(self, batch, batch_nb):
        log = {}
        imgs, proj_mats, depths, masks, init_depth_min, depth_interval = \
            self.decode_batch(batch)
        eval_start_time = time.time()
        results, log = self(imgs, proj_mats, init_depth_min, depth_interval, log=log)
        self.eval_time.append(time.time() - eval_start_time)
        log['val_loss'] = self.loss(results, depths, masks)

        if batch_nb == 0:
            img_ = self.unpreprocess(imgs[0, 0]).cpu()  # batch 0, ref image
            depth_gt_ = visualize_depth(depths['level_0'][0])
            depth_pred_ = visualize_depth(results['depth_0'][0] * masks['level_0'][0])
            prob = visualize_prob(results['confidence_0'][0] * masks['level_0'][0])
            stack = torch.stack([img_, depth_gt_, depth_pred_, prob])  # (4, 3, H, W)
            self.logger.experiment.add_images('val/image_GT_pred_prob',
                                              stack, self.global_step)

        depth_pred = results['depth_0']
        depth_gt = depths['level_0']
        mask = masks['level_0']

        log['val_abs_err'] = abs_error(depth_pred, depth_gt, mask).sum()
        log['val_acc_1mm'] = acc_threshold(depth_pred, depth_gt, mask, 1).sum()
        log['val_acc_2mm'] = acc_threshold(depth_pred, depth_gt, mask, 2).sum()
        log['val_acc_4mm'] = acc_threshold(depth_pred, depth_gt, mask, 4).sum()
        log['mask_sum'] = mask.float().sum()

        return log

    def validation_epoch_end(self, outputs):
        mean_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        mask_sum = torch.stack([x['mask_sum'] for x in outputs]).sum()
        mean_abs_err = torch.stack([x['val_abs_err'] for x in outputs]).sum() / mask_sum
        mean_acc_1mm = torch.stack([x['val_acc_1mm'] for x in outputs]).sum() / mask_sum
        mean_acc_2mm = torch.stack([x['val_acc_2mm'] for x in outputs]).sum() / mask_sum
        mean_acc_4mm = torch.stack([x['val_acc_4mm'] for x in outputs]).sum() / mask_sum

        return {'progress_bar': {'val_loss': mean_loss,
                                 'val_abs_err': mean_abs_err},
                'log': {'val/loss': mean_loss,
                        'val/abs_err': mean_abs_err,
                        'val/acc_1mm': mean_acc_1mm,
                        'val/acc_2mm': mean_acc_2mm,
                        'val/acc_4mm': mean_acc_4mm,
                        'val/infer_time': sum(self.eval_time) / len(self.eval_time)
                        }
                }

    def test_step(self, batch, batch_nb):
        log = {}
        imgs, proj_mats, depths, masks, init_depth_min, depth_interval = \
            self.decode_batch(batch)
        test_start_time = time.time()
        results, log = self(imgs, proj_mats, init_depth_min, depth_interval, log=log)
        self.test_time.append(time.time() - test_start_time)
        log['test_loss'] = self.loss(results, depths, masks)

        if batch_nb == 0:
            img_ = self.unpreprocess(imgs[0, 0]).cpu()  # batch 0, ref image
            depth_gt_ = visualize_depth(depths['level_0'][0])
            depth_pred_ = visualize_depth(results['depth_0'][0] * masks['level_0'][0])
            prob = visualize_prob(results['confidence_0'][0] * masks['level_0'][0])
            stack = torch.stack([img_, depth_gt_, depth_pred_, prob])  # (4, 3, H, W)
            self.logger.experiment.add_images('test/image_GT_pred_prob',
                                              stack, self.global_step)

        depth_pred = results['depth_0']
        depth_gt = depths['level_0']
        mask = masks['level_0']

        log['test_abs_err'] = abs_error(depth_pred, depth_gt, mask).sum()
        log['test_acc_1mm'] = acc_threshold(depth_pred, depth_gt, mask, 1).sum()
        log['test_acc_2mm'] = acc_threshold(depth_pred, depth_gt, mask, 2).sum()
        log['test_acc_4mm'] = acc_threshold(depth_pred, depth_gt, mask, 4).sum()
        log['mask_sum'] = mask.float().sum()

        return log

    def test_epoch_end(self, outputs):
        mean_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
        mask_sum = torch.stack([x['mask_sum'] for x in outputs]).sum()
        mean_abs_err = torch.stack([x['test_abs_err'] for x in outputs]).sum() / mask_sum
        mean_acc_1mm = torch.stack([x['test_acc_1mm'] for x in outputs]).sum() / mask_sum
        mean_acc_2mm = torch.stack([x['test_acc_2mm'] for x in outputs]).sum() / mask_sum
        mean_acc_4mm = torch.stack([x['test_acc_4mm'] for x in outputs]).sum() / mask_sum

        return {'progress_bar': {'test_loss': mean_loss,
                                 'test_abs_err': mean_abs_err},
                'log': {'test/loss': mean_loss,
                        'test/abs_err': mean_abs_err,
                        'test/acc_1mm': mean_acc_1mm,
                        'test/acc_2mm': mean_acc_2mm,
                        'test/acc_4mm': mean_acc_4mm,
                        'test/infer_time': sum(self.test_time) / len(self.test_time)
                        }
                }


if __name__ == '__main__':
    hparams = get_opts()
    system = MVSSystem(hparams)
    checkpoint_callback = ModelCheckpoint(filepath=os.path.join(f'ckpts/{hparams.exp_name}',
                                                                '{epoch:02d}'),
                                          monitor='val/acc_2mm',
                                          mode='max',
                                          save_top_k=5, )

    logger = TestTubeLogger(
        save_dir="logs",
        name=hparams.exp_name,
        debug=False,
        create_git_tag=False
    )

    save_train_py(os.getcwd(), os.path.join(os.getcwd(), 'saved_scripts', hparams.exp_name))

    trainer = Trainer(max_epochs=hparams.num_epochs,
                      checkpoint_callback=checkpoint_callback,
                      logger=logger,
                      # early_stop_callback=None,
                      weights_summary=None,
                      progress_bar_refresh_rate=1,
                      gpus=hparams.num_gpus,
                      distributed_backend='ddp' if hparams.num_gpus > 1 else None,
                      num_sanity_val_steps=0 if hparams.num_gpus > 1 else 5,
                      benchmark=True,
                      precision=16 if hparams.use_amp else 32,
                      amp_level='O1')

    if hparams.eval_ckpts:
        system.freeze()
        trainer.test(system)
    else:
        trainer.fit(system)

This is my command:

python train.py --ckpt_path /export/data/lwangcg/CasMVSNet_pl/pretrained/dtu/_ckpt_epoch_10.ckpt --num_epochs 16 --batch_size 2 --depth_interval 2.65 --n_depths 8 32 48 --interval_ratios 1.0 2.0 4.0 --exp test_official_dtu_baseline_20210304_1204 --eval_ckpts True

Testing on DTU

I only found the DTU training and validation set linked from the readme (https://drive.google.com/file/d/1eDjh-_bxKKnEuz5h-HXS7EDJn59clx6V/view). Now I can train and validate the model, but when I try to run the testing code, there is an error:

FileNotFoundError: [Errno 2] No such file or directory: '/export/data/lwangcg/CasMVSNet_pl/dtu_training/mvs_training/dtu/Rectified/scan4/rect_032_3_r5000.png'

I found that in /dtu/Rectified/ there is no folder named scanx, but in /dtu/Eval, Rectified.zip contains such folders, so I unzipped it and moved those folders to /dtu/Rectified/. That error then disappears, but another problem appears:

depths = batch['depths'], KeyError: 'depths'

I notice that in dtu.py the returned batch has no 'depths' key at test time, although it does during training. I do not know how to make the testing code run. Could you kindly give me some help?
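A hypothetical workaround (my sketch, not an official fix): make decode_batch in train.py tolerant of test batches that carry no ground truth, and skip the loss/metric computation when depths is None.

def decode_batch(self, batch):
    imgs = batch['imgs']
    proj_mats = batch['proj_mats']
    depths = batch.get('depths')           # None when the test split provides no ground truth
    masks = batch.get('masks')
    init_depth_min = batch['init_depth_min']
    depth_interval = batch['depth_interval']
    return imgs, proj_mats, depths, masks, init_depth_min, depth_interval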

Increasing point generation for views with varying depths

Given your approach to depth fusion and my trials with 'CasMVSNet_pl', model completeness seems best when each view's range of depths falls within a global consensus range.

For example, in the case of DTU data, the depths contributed from each view are similar and the model completeness is quite good. However, when I use custom images with widely varying per-view depths, much of the total depth is truncated during point generation.

Do you have any advice for modifying depth fusion for greater completeness for scenes with intrinsically high depth, composed of photos for which depth varies greatly between views?

How to run the code on Pytorch version>=1.6?

When I try to run this code with PyTorch >= 1.6, I get the following problem. (It works well on PyTorch 1.4; however, I have to run the code in an environment where the PyTorch version is >= 1.6.)

(base) -bash-4.2$ cd inplace_abn
(base) -bash-4.2$ ls
CODE_OF_CONDUCT.md  CONTRIBUTING.md  equation.svg  include  inplace_abn  inplace_abn.png  LICENSE  licenses.csv  MANIFEST.in  README.md  requirements.txt  scripts  setup.cfg  setup.py  src
(base) -bash-4.2$ python setup.py install
running install
running bdist_egg
running egg_info
creating inplace_abn.egg-info
writing inplace_abn.egg-info/PKG-INFO
writing dependency_links to inplace_abn.egg-info/dependency_links.txt
writing top-level names to inplace_abn.egg-info/top_level.txt
writing manifest file 'inplace_abn.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'inplace_abn.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib.linux-x86_64-3.7
creating build/lib.linux-x86_64-3.7/inplace_abn
copying inplace_abn/__init__.py -> build/lib.linux-x86_64-3.7/inplace_abn
copying inplace_abn/abn.py -> build/lib.linux-x86_64-3.7/inplace_abn
copying inplace_abn/functions.py -> build/lib.linux-x86_64-3.7/inplace_abn
copying inplace_abn/group.py -> build/lib.linux-x86_64-3.7/inplace_abn
copying inplace_abn/_version.py -> build/lib.linux-x86_64-3.7/inplace_abn
running build_ext
building 'inplace_abn._backend' extension
creating /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7
creating /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7/src
Emitting ninja build file /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/4] c++ -MMD -MF /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7/src/utils.o.d -pthread -B /data/lwangcg/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/usr/local/cuda-10.1/include -I/usr/local/cuda-10.1/include -fPIC -DWITH_CUDA=1 -I/export/data/lwangcg/inplace_abn/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/export/data/lwangcg/anaconda3/include/python3.7m -c -c /export/data/lwangcg/inplace_abn/src/utils.cpp -o /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7/src/utils.o -O3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[2/4] c++ -MMD -MF /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7/src/inplace_abn_cpu.o.d -pthread -B /data/lwangcg/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/usr/local/cuda-10.1/include -I/usr/local/cuda-10.1/include -fPIC -DWITH_CUDA=1 -I/export/data/lwangcg/inplace_abn/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/export/data/lwangcg/anaconda3/include/python3.7m -c -c /export/data/lwangcg/inplace_abn/src/inplace_abn_cpu.cpp -o /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7/src/inplace_abn_cpu.o -O3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[3/4] /usr/local/cuda-10.1/bin/nvcc -DWITH_CUDA=1 -I/export/data/lwangcg/inplace_abn/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/export/data/lwangcg/anaconda3/include/python3.7m -c -c /export/data/lwangcg/inplace_abn/src/inplace_abn_cuda.cu -o /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7/src/inplace_abn_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
FAILED: /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7/src/inplace_abn_cuda.o
/usr/local/cuda-10.1/bin/nvcc -DWITH_CUDA=1 -I/export/data/lwangcg/inplace_abn/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/export/data/lwangcg/anaconda3/include/python3.7m -c -c /export/data/lwangcg/inplace_abn/src/inplace_abn_cuda.cu -o /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7/src/inplace_abn_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.tcc:578:28:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.h:5052:20:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.h:5073:24:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.tcc:656:134:   required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.h:6725:95:   required from here
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.tcc:1067:1: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’ without object
       __p->_M_set_sharable();
 ^     ~~~~~~~~~
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.tcc:578:28:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.h:5052:20:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.h:5073:24:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.tcc:656:134:   required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.h:6730:95:   required from here
/usr/local/GNU/gcc-8.3.0/include/c++/8.3.0/bits/basic_string.tcc:1067:1: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’ without object
[4/4] c++ -MMD -MF /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7/src/inplace_abn.o.d -pthread -B /data/lwangcg/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/usr/local/cuda-10.1/include -I/usr/local/cuda-10.1/include -fPIC -DWITH_CUDA=1 -I/export/data/lwangcg/inplace_abn/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/export/data/lwangcg/anaconda3/include/python3.7m -c -c /export/data/lwangcg/inplace_abn/src/inplace_abn.cpp -o /export/data/lwangcg/inplace_abn/build/temp.linux-x86_64-3.7/src/inplace_abn.o -O3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149,
                 from /export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                 from /export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                 from /export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                 from /export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7,
                 from /export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                 from /export/data/lwangcg/inplace_abn/src/inplace_abn.cpp:3:
/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
 #pragma omp parallel for if ((end - begin) >= grain_size)

ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1515, in _run_ninja_build
    env=env)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/subprocess.py", line 487, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "setup.py", line 75, in <module>
    cmdclass={"build_ext": BuildExtension}
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/setuptools/__init__.py", line 145, in setup
    return distutils.core.setup(**attrs)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/setuptools/command/install.py", line 67, in run
    self.do_egg_install()
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/setuptools/command/install.py", line 109, in do_egg_install
    self.run_command('bdist_egg')
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 172, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 158, in call_command
    self.run_command(cmdname)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run
    self.build()
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/command/install_lib.py", line 107, in build
    self.run_command('build_ext')
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 84, in run
    _build_ext.run(self)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
    _build_ext.build_ext.run(self)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 649, in build_extensions
    build_ext.build_extensions(self)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 194, in build_extensions
    self.build_extension(ext)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 205, in build_extension
    _build_ext.build_extension(self, ext)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
    depends=ext.depends)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 478, in unix_wrap_ninja_compile
    with_cuda=with_cuda)
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1233, in _write_ninja_file_and_compile_objects
    error_prefix='Error compiling objects for extension')
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1529, in _run_ninja_build
    raise RuntimeError(message)
RuntimeError: Error compiling objects for extension

When I run on PyTorch 1.5 and 1.5.1, although I can compile inplace-abn, I get the following problem:

(base) bash-4.2$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5  python train.py    --dataset_name dtu    --root_dir /export/data/lwangcg/CasMVSNet_pl/dtu_training/mvs_training/dtu/    --num_epochs 16 --batch_size 2    --depth_interval 2.65 --n_depths 8 32 48 --interval_ratios 1.0 2.0 4.0    --optimizer adam --lr 1e-3 --lr_scheduler cosine    --exp_name exp --num_gpus 6
Traceback (most recent call last):
  File "train.py", line 9, in <module>
    from models.mvsnet import CascadeMVSNet
  File "/export/data/lwangcg/CasMVSNet_pl/models/mvsnet.py", line 4, in <module>
    from .modules import *
  File "/export/data/lwangcg/CasMVSNet_pl/models/modules.py", line 4, in <module>
    from inplace_abn import InPlaceABN
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/inplace_abn-1.1.1.dev1+g845bf23-py3.7-linux-x86_64.egg/inplace_abn/__init__.py", line 1, in <module>
    from .abn import ABN, InPlaceABN, InPlaceABNSync
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/inplace_abn-1.1.1.dev1+g845bf23-py3.7-linux-x86_64.egg/inplace_abn/abn.py", line 8, in <module>
    from .functions import inplace_abn, inplace_abn_sync
  File "/export/data/lwangcg/anaconda3/lib/python3.7/site-packages/inplace_abn-1.1.1.dev1+g845bf23-py3.7-linux-x86_64.egg/inplace_abn/functions.py", line 8, in <module>
    from . import _backend
ImportError: /export/data/lwangcg/anaconda3/lib/python3.7/site-packages/inplace_abn-1.1.1.dev1+g845bf23-py3.7-linux-x86_64.egg/inplace_abn/_backend.cpython-37m-x86_64-linux-gnu.so: undefined symbol: THPVariableClass

Zero points in output .ply

When running CasMVSNet_pl on my own data, I'm able to run inference and point fusion without errors, but the resulting .ply contains zero points:

format binary_little_endian 1.0
element vertex 0
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header

Using the same input, here's a reference snapshot of a .ply result computed with the original Cascade Stereo method:

(screenshot)

Depth computation looks fine, but perhaps during fusion the camera intrinsics u0 and v0 in matrix K are not preserved?
(screenshot)

Any thoughts are welcome!

Inference from DTU data

I see that 'dtu.py' expects the DTU training image format, e.g.:
img_filename = os.path.join(self.root_dir, f'Rectified/{scan}/rect_{vid+1:03d}_{light_idx}_r5000.png')

Also, I see for each image 'item' the code expects to load 'depth_visual' (PNG) and 'depth_map' (PFM).

Happily, I have no trouble invoking 'eval.py' to run inference on the DTU training images (640x512 PNG, e.g. 'rect_001_0_r5000.png'). I can obtain .ply files from 'eval.py' for all of the DTU scenes.

However, how can we substitute the full-resolution DTU images (1600x1200) for the training images?

I'm likely missing something easy. I've tried substituting a new scene in '.../datasets/lists/dtu/text.txt' and changing the above loading format to correspond to the provided full resolution DTU images (e.g., '00000001.PNG'):

img_filename = os.path.join(self.root_dir, f'Rectified/{scan}/{vid+1:08d}.PNG')

However, as we aren't provided with full resolution DTU 'depth_visual' (PNG) and 'depth_map' (PFM), loading halts, of course.

What do you suggest? I'm trying to reproduce your DTU results provided in the 'Release' links. Thanks!

Poor depth estimation on self-captured data

@kwea123 Hi, thank you very much for open-sourcing this. I captured my own data with a phone, ran COLMAP sparse reconstruction and undistortion to obtain the camera parameters, and then used Yao Yao's conversion script https://github.com/YoYo000/MVSNet/blob/master/mvsnet/colmap2mvsnet.py to produce inputs for the network.
However, the final reconstruction quality is very poor.
Below are the original image, the depth map, and the fused point cloud:
(screenshots: original image, depth map, point cloud)
My questions:
1. Should any parameters be changed when running COLMAP sparse reconstruction (I used the defaults)?
2. The camera parameters produced by colmap2mvsnet look clearly wrong (the depth range?).
3. For depth estimation with CasMVSNet I adapted the data entry point in tanks.py (path changes and the like), choosing depth_interval = 1.5e-2. The reconstructed point cloud has only about 2-3 M points, while reconstructions on the DTU and BlendedMVS datasets all have more than 20 M points.
