jiaw-z / fcstereo
[CVPR'22] Revisiting Domain Generalized Stereo Matching Networks from a Feature Consistency Perspective
License: MIT License
Hi,
Should the label (Line 75) for the stereo contrastive loss be a one-hot vector? It seems the label in the code is an all-zero vector.
Also, when we generate random shifts for the negative samples (Line 151~Line 164), why do we split "n_neg" into "shift1" and "shift2"?
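Regarding the first question, a minimal InfoNCE-style sketch (my own illustration, not the repository's code) shows why an all-zero label vector is correct here: if the logits are arranged with the positive similarity at index 0, then `F.cross_entropy` expects class indices, and index 0 selects the positive for every anchor.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only -- shapes and names are assumptions.
# Logits per anchor are laid out as [positive, negative_1, ..., negative_K],
# so the target class index is 0 for every sample. cross_entropy takes class
# indices (not one-hot vectors), hence the all-zero label vector.
B, K = 4, 8                                    # anchors, negatives per anchor
pos = torch.randn(B, 1)                        # similarity to the positive
neg = torch.randn(B, K)                        # similarities to K negatives
logits = torch.cat([pos, neg], dim=1)          # (B, 1 + K)
labels = torch.zeros(B, dtype=torch.long)      # all zeros, not one-hot
loss = F.cross_entropy(logits, labels)
```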
Very nice work!
It is really expensive to reproduce a model with the same performance as the one in the paper, so do you have a plan to release some of the trained model checkpoints?
Thank you.
Every time I run the training code, I get this error moments after the 10th epoch completes. I am using a single node with a single GPU.
Hi, thanks a lot for your great work. I migrated SSW and SCF to my own project with a similar backbone, but the SSW loss is NaN. I debugged it and found that 'num_sensitive_sum' becomes zero after 'mask_matrix' is multiplied by self.reversal_eye.
Even if I set 'num_sensitive_sum' to 0.0001, the SSW loss becomes zero too.
Is there a bug here, or am I understanding it wrong?
mask_matrix = mask_matrix.view(B, -1)
for midx in range(B):
    mask_matrix[midx][indices] = 1
mask_matrix = mask_matrix.view(B, self.dim, self.dim)
mask_matrix = mask_matrix * self.reversal_eye  # zeros the diagonal entries
num_sensitive_sum = torch.sum(mask_matrix)
if num_sensitive_sum == 0:
    num_sensitive_sum = 0.0001
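For what it's worth, the behaviour described looks expected rather than buggy: when the masked matrix is all zero, the numerator of the loss is zero too, so any epsilon in the denominator legitimately yields a zero loss instead of NaN. A minimal sketch (shapes and the similarity term are my assumptions, not the repository's code):

```python
import torch

# Degenerate case from the thread: 'mask_matrix' and 'reversal_eye' follow
# the discussion; batch size, dim, and the similarity matrix are assumptions.
B, dim = 2, 4
reversal_eye = 1.0 - torch.eye(dim)            # zeros on the diagonal
mask_matrix = torch.zeros(B, dim, dim)         # worst case: nothing selected
sim = torch.rand(B, dim, dim)                  # stand-in similarity matrix

masked = mask_matrix * reversal_eye
num_sensitive_sum = masked.sum().clamp(min=1e-4)   # guard against 0/0 -> NaN
ssw_loss = (sim * masked).sum() / num_sensitive_sum
# With an all-zero mask the numerator is also zero, so the loss is 0.0,
# not NaN -- matching what the thread observed after adding the epsilon.
```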
Hi, thanks a lot for your great work. Could you tell me how you obtained the PSMNet results in your Table 1?
They differ from the results reported in the following papers:
Table 3 in Domain-invariant Stereo Matching Networks
Table 5 in GraftNet: Towards Domain Generalized Stereo Matching with a Broad-Spectrum and Task-Oriented Feature
As both left and right disparities are required in this code, how do you get the right disparities for the KITTI 2012 dataset?
I tested the generalization ability of GwcNet on KITTI and Middlebury using the official pretrained model; the result is 12.5% on KITTI 2012 and 12.4% on KITTI 2015. But in your paper it is 20.2% and 22.7%. Can I ask how you tested?
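For reference, percentages like these usually come from a bad-pixel rate over valid ground-truth pixels. A sketch of the common >3px variant (the paper's exact protocol, e.g. extra relative-error thresholds or occlusion masks, may differ):

```python
import torch

def bad_pixel_rate(disp_est, disp_gt, thresh=3.0):
    """Percentage of valid pixels with absolute disparity error > thresh.
    A common KITTI-style metric; this is an assumption about the protocol,
    not the paper's exact evaluation code."""
    valid = disp_gt > 0                          # KITTI marks invalid pixels as 0
    err = (disp_est - disp_gt).abs()
    return 100.0 * (err[valid] > thresh).float().mean().item()

gt = torch.full((4, 4), 10.0)
est = gt.clone()
est[:2] += 5.0                                   # half the pixels off by 5 px
rate = bad_pixel_rate(est, gt)                   # -> 50.0
```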
I have a stereo camera that produces only grayscale stereo pairs.
I'm wondering if we can use your amazing algorithm to do inference and evaluate the results.
The images are taken with a head-mounted camera and show a person's face.
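One common workaround (my assumption, not something the repository documents) is to replicate the single grayscale channel three times so that a network trained on RGB inputs accepts the pair:

```python
import torch

# Hypothetical grayscale stereo pair: (left/right, 1, H, W).
gray_pair = torch.rand(2, 1, 256, 512)
# Replicate the channel dimension -> (2, 3, H, W), matching RGB input shape.
rgb_like = gray_pair.repeat(1, 3, 1, 1)
```

Whether the learned features transfer well to grayscale face imagery is a separate question that would need evaluation.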
Hi, thank you for sharing!
Following GETTING_STARTED.md, I implemented the cosine similarity myself and checked it on the demo data with the checkpoint from https://github.com/DeepMotionAIResearch/DenseMatchingBenchmark/blob/master/configs/AcfNet/ResultOfAcfNet.md#sceneflow, and most of the feature similarities are still only around 0.6-0.9.
Is the provided checkpoint already trained for feature consistency?
It would also be appreciated if a script to check the similarity could be provided.
Thanks again!
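In case it helps others, here is a sketch of such a check (the function name, warping scheme, and shapes are my assumptions; the paper may warp or normalise differently): warp the right feature map to the left view by the disparity, then take the per-pixel cosine similarity across channels.

```python
import torch
import torch.nn.functional as F

def feature_consistency(feat_l, feat_r, disp):
    """Cosine similarity between left features and right features warped to
    the left view. feat_l, feat_r: (B, C, H, W); disp: (B, H, W) in pixels
    at the feature resolution. A sketch only, not the repo's exact code."""
    B, C, H, W = feat_l.shape
    xs = torch.arange(W, dtype=feat_l.dtype).view(1, 1, W).expand(B, H, W)
    ys = torch.arange(H, dtype=feat_l.dtype).view(1, H, 1).expand(B, H, W)
    x_warp = xs - disp                               # right-view x coordinate
    grid = torch.stack([2 * x_warp / (W - 1) - 1,    # normalise to [-1, 1]
                        2 * ys / (H - 1) - 1], dim=-1)
    feat_r_warp = F.grid_sample(feat_r, grid, align_corners=True)
    return F.cosine_similarity(feat_l, feat_r_warp, dim=1)  # (B, H, W)
```

With a perfectly consistent feature extractor and accurate disparity, the similarity at non-occluded pixels should be close to 1.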
I followed the instructions you provided in GETTING_STARTED.md and INSTALL.md, but when I run demo.sh I get this error at line 9 of demo.py:
from dmb.apis.inference import init_model, inference_stereo, is_image_file
from dmb.visualization.stereo.vis import group_color
Error stack:
.../spatial_correlation_sampler-0.3.0-py3.8-linux-x86_64.egg/spatial_correlation_sampler_backend.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN3c1015SmallVectorBaseIjE8grow_podEPvmm
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/modeling/stereo/cost_processors/utils/correlation1d_cost.py", line 5, in <module>
    from spatial_correlation_sampler import SpatialCorrelationSampler
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/modeling/stereo/cost_processors/builder.py", line 5, in <module>
    from .utils.correlation1d_cost import COR_FUNCS
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/modeling/stereo/cost_processors/__init__.py", line 1, in <module>
    from .builder import build_cost_processor
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/modeling/stereo/models/general_stereo_model.py", line 7, in <module>
    from dmb.modeling.stereo.cost_processors import build_cost_processor
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/modeling/stereo/models/__init__.py", line 1, in <module>
    from .general_stereo_model import GeneralizedStereoModel
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/modeling/stereo/__init__.py", line 1, in <module>
    from .models import build_stereo_model
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/modeling/__init__.py", line 2, in <module>
    from .stereo.models import _META_ARCHITECTURES as _STEREO_META_ARCHITECTURES
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/data/datasets/evaluation/stereo/eval.py", line 8, in <module>
    from dmb.modeling.stereo.layers.inverse_warp import inverse_warp
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/data/datasets/evaluation/stereo/__init__.py", line 2, in <module>
    from .eval import do_evaluation, do_occlusion_evaluation, remove_padding
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/visualization/stereo/vis_hooks.py", line 20, in <module>
    from dmb.data.datasets.evaluation.stereo.eval import remove_padding
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/visualization/stereo/__init__.py", line 4, in <module>
    from .vis_hooks import DistStereoVisHook
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/visualization/__init__.py", line 2, in <module>
    from .stereo import SaveResultTool as DispSaveResultTool
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/data/datasets/evaluation/flow/eval_hooks.py", line 17, in <module>
    from dmb.visualization.stereo import ShowConf
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/data/datasets/evaluation/flow/__init__.py", line 3, in <module>
    from .eval_hooks import DistFlowEvalHook, flow_output_evaluation_in_pandas
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/data/datasets/evaluation/__init__.py", line 1, in <module>
    from .flow import flow_output_evaluation_in_pandas
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/apis/train.py", line 14, in <module>
    from dmb.data.datasets.evaluation.stereo import DistStereoEvalHook
File "/home/andreaa/dev/stereo_depth/FCStereo/DenseMatchingBenchmark/dmb/apis/__init__.py", line 1, in <module>
    from .train import train_matcher
File "/home/andreaa/dev/stereo_depth/FCStereo/tools/demo.py", line 9, in <module>
    from dmb.apis.inference import init_model, inference_stereo, is_image_file
/home/andreaa/miniconda3/envs/dense_matching_benchmark/lib/python3.8/site-pack