
smvs's People

Contributors

flanggut, pierotofy, russkel, xubury


smvs's Issues

util/string.h missing

Hi, I am getting the following error:

smvsrecon.cc:23:25: fatal error: util/string.h: No such file or directory

I am trying to build on Ubuntu 16.04.

An issue on a Windows 10 system

Hi! Thanks for your hard work. I have successfully compiled SMVS on Windows 10 + Visual Studio 2013. However, the calculated point cloud on Windows 10 isn't as accurate as the one on macOS. The pipelines are identical on both systems. Results for the dataset:
http://vision.middlebury.edu/mview/data/

win10 + vs2013: (screenshot)

macOS: (screenshot)

Could you give some advice?

Thanks very much!

two questions

Hi, your paper and project are the best work I have come across in the field of 3D reconstruction.

I have two questions about the code:
1. What does the option "-S (Shading-based optimization)" mean? Does it drop the geometric energy term and use only the shading energy term?
2. I only want to use the geometric energy term to optimize my depth map, but I can't find where to change the code in the functions of gauss_newton_step.cc. Can you give me some advice?

Killed while running

Hi, I am trying to get a reconstruction via smvs for the Strecha fountain-p11 dataset.
makescene and sfmrecon are both from the MVE pipeline, followed by smvsrecon.
It loads the scene and starts running, but it always gets stuck right after starting, as shown below, with no additional info written to standard output.

Initializing scene with 11 views...
Initialized 11 views (max ID is 10), took 1ms.
Reading Photosynther file (11 cameras, 17376 features)...
Automatic input scale: 0
Input embedding: undistorted
Output embedding: smvs-S0
Running view selection for 11 views...  done, took 0.89s.
Starting 1/11 ID: 8 Neighbors: 7 9 6 10 5 4
Starting 2/11 ID: 6 Neighbors: 5 7 4 3 8 2
Starting 3/11 ID: 2 Neighbors: 3 1 4 0 5 6
Starting 4/11 ID: 5 Neighbors: 6 4 3 7 2 1
Starting 5/11 ID: 10 Neighbors: 9 8 7 6 5 4
Starting 6/11 ID: 4 Neighbors: 5 3 2 6 1 7
Starting 7/11 ID: 1 Neighbors: 2 3 0 4 5 6
Starting 8/11 ID: 9 Neighbors: 8 7 10 6 5 4
Starting 9/11 ID: 0 Neighbors: 1 2 3 4 5 6
Starting 10/11 ID: 7 Neighbors: 6 5 8 4 9 3
Starting 11/11 ID: 3 Neighbors: 2 4 1 5 6 0
./runsmvs.sh: line 28: 11416 Killed ${smvs} -S ${scene_dir}

Any idea about the error? Many thanks in advance.

"Illegal instruction (core dumped)" when running smvsrecon <scene-dir>

Shading-aware Multi-view Stereo (built on Dec 22 2016, 15:17:43)

Initializing scene with 6 views...
Initialized 6 views (max ID is 5), took 27ms.
Reading Photosynther file (6 cameras, 221 features)...
Automatic input scale: 2
Input embedding: undist-L2
Output embedding: smvs-B2
Starting 1/6 ID: 0 Neighbors: 3 2 5
Starting 2/6 ID: 1 Neighbors: 5 4 2

Illegal instruction (core dumped)

Question about parameters for Middlebury benchmark

Hello,
I was wondering if you could reveal the set of parameters that was used for the results reported on the Middlebury benchmark.

I am trying to reproduce the results for the Dino and Temple datasets to better understand what I can set to obtain better results for my data, which has similar characteristics to the Middlebury images.

Thank you.

Strecha dataset

Hello,
it seems that the Strecha dataset (fountain-P11) has been removed from the original website.
I wonder if you know why, and whether you can upload it or tell me where I can find it.

Explanation of the depth map cutting

Thanks for a seriously AWESOME library! It worked for me right out of the box and has given me really great results on most scenes I've run it on.

I'm interested in learning more about the depth map cutting (which is WIP I guess): https://github.com/flanggut/smvs/blob/master/lib/mesh_generator.cc#L24

It seems the idea here is to compute a consistency score by projecting points out from the depth map, computing the normals, and then comparing those normals to the values in your normal map. That all makes sense to me. However, I am getting a bit lost in lines 127-141: https://github.com/flanggut/smvs/blob/master/lib/mesh_generator.cc#L127

These lines seem to be filtering the data somehow based on... the depth consistency? Could you clarify what is going on in those if statements and why, for example, you want to compare surface_power_j_j > 0.5 * surface_power (where does the 0.5 come from?!), etc.

Thanks and again, awesome work!

Camera Response Function

Hey Fabian, I hope you are doing well.
About your shading term: you assumed that the image intensity must equal the estimated irradiance (actually you used the gradient). But what if you used the camera response function (CRF)? This function naturally relates the scene irradiance R to the image intensity I. Estimating this function first and then using it in the shading term would (probably) enhance the result. You still would not need the albedo (it is absorbed into the CRF), and you could also avoid using the gradient, since your intensity would be more accurate.
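The suggestion can be stated compactly; with f the CRF, the shading residual would compare intensities through f rather than comparing irradiances directly (the notation below is ours, not taken from the paper):

```latex
I(p) = f\bigl(R(p)\bigr), \qquad
E_{\text{shade}} = \sum_{p} \Bigl( I(p) - f\bigl(\hat{R}(p)\bigr) \Bigr)^{2}
```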

Dense reconstruction with Blender cameras

Hi,
I've posted this issue on the MVE issue tracker as well, but I'm posting it here too, as SMVS plays a key part in what I'm doing.
I've set up a scene in Blender with an object at the origin and a camera rotating around it. I've rendered out several frames while rotating the camera in 10-degree steps, and exported the camera positions from Blender using this script. I've set up an MVE scene based on the rendered frames and the exported cameras, and I'm trying to reconstruct it using SMVS (in this case without any SfM points, obviously).
The problem is, MVE puts out either an empty point cloud every time, or one with weirdly placed points. I suspect the issue might be in the camera conversion: I'm not sure what coordinate system MVE uses, or what rotation and translation matrices it expects.
Maybe you can help me figure out what I'm doing wrong.

NEON optimization

Is there any chance NEON acceleration could be implemented? Any pointers on how difficult it would be to provide NEON alternatives to the SSE-optimized blocks in smvs?
Also, why do mve and smvs use self-implemented data structures and math functions? Eigen is pretty mature in this area and provides in-place SIMD acceleration; why not use it?
thanks!

General Protocol for Face Scanning for Surgery Planning (Orthognathic and Rhinoplasty) - Comparison between Tools

Dear friends,

My name is Cicero Moraes, I am a specialist in forensic facial reconstruction, orthognathic surgery planning, rhinoplasty planning, confection of veterinary and facial prostheses.

Over the years I have been using photogrammetry in my work and lately I have been developing two Python addons for Blender 3D. The OrtogOnBlender and the RhinOnBlender.

OrtogOnBlender short, pre-running video: https://www.youtube.com/watch?v=h-bFvhLp-8g

OrtogOnBlender has several interesting tools, worth mentioning the direct import of DICOM to 3D (Dicom2Mesh) and conversion of photos into 3D models (OpenMVG + OpenMVS and MVE / SMVS + Meshlab + MVS Texturing).

Short video, alignment and resizing of photogrammetry in OrtogOnBlender: https://www.youtube.com/watch?v=MTfQLnKjK0o

I am sending this message to show a broader study I did in recent days on face scanning for use in orthognathic surgery and rhinoplasty planning.

In total, 13 people were photographed in one place, and the same person was photographed in five different places. The result: 32 photo sessions, 832 photos and 106 models!

These models were scanned with the following set of tools:

  1. MVE / SMVS + Meshlab + Mvs-Texturing
  2. MVE / SMVS + Mvs-Texturing
  3. OpenMVG + OpenMVS
  4. Photoscan

We wrote the original document in Portuguese and are translating it into English and German, but for now you can access it through Google Translate and get a good idea of what we did, since we have many, many images.

In it we describe the whole methodology, the results and finally the way of solving the problems that appeared during the survey:

https://goo.gl/aYh6rm

I hope you enjoy it and that it helps you in some way in developing the tools.

I cannot fail to thank you all for sharing such magnificent programs. Many, many thanks!

Question about image hessian

Hi there,
Thank you for the wonderful work. In your code, the subview image Hessian is used to calculate the Jacobian entries, which is not mentioned in your paper. Could you please explain it a little bit? I'd appreciate it.

Possible Bug: vector out of range in fill_values_at_pixels

So I tried to run this project on Windows, compiled with MinGW-GCC, but I keep getting a "vector out of range" error when running DepthOptimizer's optimize() method. (The problem does not occur on Linux.)
I dug into the function and traced the crash to this call: patch->fill_values_at_pixels(&pixels, &depths, &depth_derivatives, &depth_2nd_derivatives, &pids, sampling);
It is called by jacobian_entries_for_patch in the construct step of gauss_newton_step.
At first I thought it was caused by the rounding in pixels->resize(MATH_POW2(this->size) / MATH_POW2(subsample));
But it is not. So I just force-break the loop:

if (id == pixels->size()) 
{
   break;
}

Although the outcome doesn't seem to differ much from the one I got on Linux, I am still wondering what could cause this crash?
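For reference, the rounding the reporter suspected is easy to reproduce in isolation: a flooring integer division in resize() reserves fewer slots than a ceiling-stepped sampling loop visits whenever size is not a multiple of subsample. This is only a sketch under that assumption (the real loop in smvs may differ, and the reporter ruled this out as the cause here):

```cpp
#include <cassert>
#include <cstddef>

// Slots reserved by a flooring resize, MATH_POW2-style:
// (size^2) / (subsample^2) rounds down.
std::size_t reserved(std::size_t size, std::size_t subsample)
{
    return (size * size) / (subsample * subsample);
}

// Pixels visited by a loop stepping by `subsample`:
// effectively ceil(size / subsample)^2, which rounds up.
std::size_t visited(std::size_t size, std::size_t subsample)
{
    std::size_t count = 0;
    for (std::size_t y = 0; y < size; y += subsample)
        for (std::size_t x = 0; x < size; x += subsample)
            ++count;
    return count;
}
```

With size = 5 and subsample = 2, reserved() yields 6 slots while visited() touches 9 pixels, so indexing the resized vector by visit order would run out of range.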


Images and camera poses as input

Hi, first of all, congrats on this amazing library.

I've seen that you've made this lib as an extension to MVE, so you take as input the results of sfmrecon (point cloud, matches, camera poses...).

What if I already have images and known camera poses? Would it be easy to adapt this lib for that?

Many thanks

illumination model

Hello,
your work is amazing; one of the best papers I have read.
I have some ideas I want to test. As the title says, I want to change the illumination model, tweak it here and there, and see what I get. The thing is, I can't find where you put the illumination model and the shading optimization term. I know it's somewhere inside DepthOptimizer, but I'm being lazy and would like you to point me in exactly the right direction.

Thanks,
best regards

A question about Semi-global Matching argument

Hi! In the case that the MVE scene does not contain an initial sparse point cloud, how can we estimate an initial depth sweep? It seems that this argument influences the results a lot in that case. Could you give some suggestions?
Thanks for reading.

Is there a way to make the heights more accurate, like dmrecon?

Hi.
I produce 3D data with openMVG + (SMVS or dmrecon) + PoissonRecon.
Using SMVS, I'm satisfied with how beautiful the result looks. On the other hand, the actual positions come out overly smoothed.

Please see the figure below, a vertical cross-section of the 3D data:
(screenshot ws000120)

Black is the actual value acquired by laser; red is dmrecon; blue is SMVS.
The SMVS line is overly smooth.
Is there a way to get closer to reality by changing settings or code?

Is the result of evaluate_3_band reasonable compared with the result of evaluate_3_band_exact?

Hi @flanggut, first of all, thank you for your wonderful software.
I have briefly explored your code recently, since I want to do some research combining MVS and shading cues.
In lighting_optimizer.cc, line 41, the function fit_lighting_to_image calls
sh::evaluate_4_band(*normal, *sh)
My question is: why can you use evaluate_4_band rather than evaluate_4_band_exact?
I was wondering whether that is reasonable.
Is it just to minimize operations? Would you please explain it in more detail?
Thanks very much.

Models are flipped over X axis

First of all, I would like to thank you for this amazing tool. I'm using drone videos for scene reconstruction and it works amazingly well. Extracted video frames are fed to the MVE pipeline, and standard parameters are used to reconstruct the scene. Here is an example of the rotation parameter generated in meta.ini:
rotation = -0.9634011388 -0.2680799067 -0.001736412523 0.1153697371 -0.4203992188 0.900010407 -0.2420091629 0.8668632507 0.4359418452
Then smvsrecon is run on the prepared scene folder.
All goes fine and the resulting point cloud looks correct, however it is always rotated (for every different set of images, even for the sample temple model), as in the screenshots. I'm not sure whether this goes wrong in the SfM step or whether smvsrecon is not picking up the camera orientation properly.
This causes problems in the next steps, mesh generation and texturing: the resulting models are black and rotated.
One possible workaround is to take the ply file and rotate it programmatically using a meshlabserver filter.

(screenshot)

Manually rotated 180 degrees over the X axis around the barycenter - looks fine:
(screenshot)
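Along the lines of the meshlabserver workaround, the 180-degree flip about the X axis can also be applied directly to the point cloud. A minimal sketch (flip_x / flip_cloud_x are hypothetical helpers, not part of smvs, and assume the barycenter is at the origin; translate first otherwise):

```cpp
#include <array>
#include <vector>

using Point = std::array<double, 3>;

// Rotate a point 180 degrees about the X axis: (x, y, z) -> (x, -y, -z),
// i.e. multiply by diag(1, -1, -1). Assumes the rotation center
// (barycenter) is at the origin.
Point flip_x(Point const& p)
{
    return { p[0], -p[1], -p[2] };
}

// Apply the flip to every vertex of a point cloud in place.
void flip_cloud_x(std::vector<Point>& cloud)
{
    for (Point& p : cloud)
        p = flip_x(p);
}
```

This only hides the symptom, of course; if the orientation error comes from the SfM step, fixing the camera conversion is the real solution.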

Would you have a windows .exe available?

I am very interested in trying out this software as an alternative to dmrecon and scene2pset. Unfortunately, my Windows setup makes it rather hard to compile the mve software. I see that you are working in Xcode, so I imagine you are doing this all on a Mac, but in case you have a Windows executable, could you include it in your distribution?

Thanks!!!

-ben

Middlebury dataset

Hello
I was wondering how you handled the Middlebury ground-truth dataset for evaluation. Since smvs is based on sfmrecon, which gives ambiguous parameters (correct, but not identical to the ground truth), the reconstruction ends up outside the bounding box given by the website, i.e. it is not in the true scene coordinate system. I spoke to Mr. Simon in issue 388; the solution he proposes is rather hard (and not worth it, since I only need to do this once, for evaluation). How did you tackle this problem?
Thanks

Illegal Instruction

I followed the instructions at the link below to run smvsrecon, and the program stopped with
"Illegal instruction".
What should I do to debug this?
Joe

https://pfalkingham.wordpress.com/2016/10/29/photogrammetry-testing-7-smvs-mve/

jc@jc-T5400:~/git$ ./smvs/app/smvsrecon styrac_scene/
Shading-aware Multi-view Stereo (built on Aug 12 2017, 17:33:24)

Initializing scene with 53 views...
Initialized 53 views (max ID is 52), took 4ms.
Reading Photosynther file (53 cameras, 14913 features)...
Automatic input scale: 2
Input embedding: undist-L2
Output embedding: smvs-B2
Running view selection for 53 views... done, took 8.108s.
Starting 1/53 ID: 0 Neighbors: 29 1 30 27 47 28
Starting 2/53 ID: 6 Neighbors: 7 10 11 14 13 12
Starting 3/53 ID: 1 Neighbors: 29 30 27 0 26 28
Starting 4/53 ID: 4 Neighbors: 3 5 2 6 32 34
Starting 5/53 ID: 5 Neighbors: 3 4 6 2 34 8
Starting 6/53 ID: 2 Neighbors: 3 30 32 1 31 29
Starting 7/53 ID: 7 Neighbors: 11 10 13 14 12 15
Starting 8/53 ID: 3 Neighbors: 2 1 32 30 5 4
Illegal instruction

Run smvs on several machines in parallel

Is there a simple way to distribute smvs over several machines? If, for example, one is limited by RAM on a single machine but has several machines around, this could significantly speed up processing.

Question: free for commercial use?

It looks like SMVS is free from patents / code contamination (and this table seems to confirm it).

Is it safe for commercial use? Do you happen to know whether SMVS uses any GPL parts from MVE?

Thank you

Crash after 8 view selection

I'm trying to run this on Bash on Ubuntu embedded into Windows 10.
Since I do not have any images of my own for now, I'm using those from this repo:
https://github.com/openMVG/SfM_quality_evaluation.git

I tried different sets of images from that repo, and it always hangs after 8 images.
I noticed that another open issue crashes after the 8th image; maybe it's related... #14
Here's the output:

Shading-aware Multi-view Stereo (built on Dec 28 2017, 14:16:04)

Initializing scene with 25 views...
Initialized 25 views (max ID is 24), took 11ms.
Reading Photosynther file (25 cameras, 8967 features)...
Automatic input scale: 1
Input embedding: undist-L1
Output embedding: smvs-B1
Running view selection for 25 views...  done, took 0.626s.
Starting 1/25 ID: 7 Neighbors: 6 19 18 8 5 17
Starting 2/25 ID: 3 Neighbors: 4 15 5 17 16 6
Starting 3/25 ID: 6 Neighbors: 18 7 19 5 17 8
Starting 4/25 ID: 1 Neighbors: 14 2 0 3 15 4
Starting 5/25 ID: 0 Neighbors: 14 1 15 3 2 16
Starting 6/25 ID: 2 Neighbors: 3 15 14 4 1 16
Starting 7/25 ID: 5 Neighbors: 6 17 4 18 7 19
Starting 8/25 ID: 4 Neighbors: 3 5 17 15 16 6
Received signal SIGSEGV (segmentation fault)
[Several worker threads crashed at once, so their backtraces ("Obtained 15 stack frames: ..." / "Obtained 0xe stack frames: ...") are interleaved and garbled. The recoverable frames point into smvsrecon (0x4aeaa4, 0x4322f8, 0x438313, 0x43a28e, 0x40ff1f, 0x410788, 0x412459, 0x40b9b1, 0x41364b), libc.so.6, libpthread.so.0 and libstdc++.so.6, ending in clone.]

How to load pre-calibrated camera parameters

I have some data that cannot be calibrated well by SfM, because the images were captured with a green curtain as the background and few feature points can be found. Therefore, I want to use my camera parameters, calibrated with a chessboard.

Could you tell me how to modify the code or set the parameters?

Compile error - ambiguous reference to linear_at()

The current git version cannot be built on my Fedora 35 machine. The MVE part builds just fine, but smvs (this project) does not compile.
The call to the method linear_at() is ambiguous, matching two possible overloads. Can you please advise which is the correct one?

The error:

[myhomedir@marvin ~/tarapps]$ make -C smvs
make: Entering directory '/home/myhomedir/tarapps/smvs'
make -C lib
make[1]: Entering directory '/home/myhomedir/tarapps/smvs/lib'
g++ -Wall -Wextra -Wundef -pedantic -march=native -funsafe-math-optimizations -fno-math-errno -std=c++11 -g -O3 -pthread -I../../mve/libs -I../lib -march=native -pthread  -c -MM bicubic_patch.cc correspondence.cc delaunay_2d.cc depth_optimizer.cc depth_triangulator.cc gauss_newton_step.cc global_lighting.cc light_optimizer.cc mesh_generator.cc mesh_simplifier.cc sgm_stereo.cc sse_vector.cc stereo_view.cc surface.cc surface_derivative.cc surface_patch.cc view_selection.cc >Makefile.dep
g++ -Wall -Wextra -Wundef -pedantic -march=native -funsafe-math-optimizations -fno-math-errno -std=c++11 -g -O3 -pthread -I../../mve/libs -I../lib -march=native -pthread  -c -o bicubic_patch.o bicubic_patch.cc
g++ -Wall -Wextra -Wundef -pedantic -march=native -funsafe-math-optimizations -fno-math-errno -std=c++11 -g -O3 -pthread -I../../mve/libs -I../lib -march=native -pthread  -c -o correspondence.o correspondence.cc
g++ -Wall -Wextra -Wundef -pedantic -march=native -funsafe-math-optimizations -fno-math-errno -std=c++11 -g -O3 -pthread -I../../mve/libs -I../lib -march=native -pthread  -c -o delaunay_2d.o delaunay_2d.cc
g++ -Wall -Wextra -Wundef -pedantic -march=native -funsafe-math-optimizations -fno-math-errno -std=c++11 -g -O3 -pthread -I../../mve/libs -I../lib -march=native -pthread  -c -o depth_optimizer.o depth_optimizer.cc
depth_optimizer.cc: In member function ‘void smvs::DepthOptimizer::reproject_neighbor(std::size_t)’:
depth_optimizer.cc:736:40: error: call of overloaded ‘linear_at(double&, double&, int)’ is ambiguous
  736 |                     subimage->linear_at(proj[0], proj[1], 0);
      |                     ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
In file included from ../../mve/libs/mve/image_io.h:16,
                 from depth_optimizer.cc:13:
../../mve/libs/mve/image.h:94:7: note: candidate: ‘T mve::Image::linear_at(float, float, int64_t) const [with T = float; int64_t = long int]’
   94 |     T linear_at (float x, float y, int64_t channel) const;
      |       ^~~~~~~~~
../../mve/libs/mve/image.h:101:10: note: candidate: ‘void mve::Image::linear_at(float, float, T*) const [with T = float]’
  101 |     void linear_at (float x, float y, T* px) const;
      |          ^~~~~~~~~
depth_optimizer.cc:738:40: error: call of overloaded ‘linear_at(double&, double&, int)’ is ambiguous
  738 |                     subimage->linear_at(proj[0], proj[1], 0);
      |                     ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
In file included from ../../mve/libs/mve/image_io.h:16,
                 from depth_optimizer.cc:13:
../../mve/libs/mve/image.h:94:7: note: candidate: ‘T mve::Image::linear_at(float, float, int64_t) const [with T = float; int64_t = long int]’
   94 |     T linear_at (float x, float y, int64_t channel) const;
      |       ^~~~~~~~~
../../mve/libs/mve/image.h:101:10: note: candidate: ‘void mve::Image::linear_at(float, float, T*) const [with T = float]’
  101 |     void linear_at (float x, float y, T* px) const;
      |          ^~~~~~~~~
depth_optimizer.cc:740:40: error: call of overloaded ‘linear_at(double&, double&, int)’ is ambiguous
  740 |                     subimage->linear_at(proj[0], proj[1], 0);
      |                     ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
In file included from ../../mve/libs/mve/image_io.h:16,
                 from depth_optimizer.cc:13:
../../mve/libs/mve/image.h:94:7: note: candidate: ‘T mve::Image::linear_at(float, float, int64_t) const [with T = float; int64_t = long int]’
   94 |     T linear_at (float x, float y, int64_t channel) const;
      |       ^~~~~~~~~
../../mve/libs/mve/image.h:101:10: note: candidate: ‘void mve::Image::linear_at(float, float, T*) const [with T = float]’
  101 |     void linear_at (float x, float y, T* px) const;
      |          ^~~~~~~~~
depth_optimizer.cc: In member function ‘double smvs::DepthOptimizer::mse_for_patch(std::size_t)’:
depth_optimizer.cc:779:51: error: call of overloaded ‘linear_at(double&, double&, int)’ is ambiguous
  779 |             grad_sub[0] = sub_gradients->linear_at(proj[0], proj[1], 0);
      |                           ~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
In file included from ../../mve/libs/mve/image_io.h:16,
                 from depth_optimizer.cc:13:
../../mve/libs/mve/image.h:94:7: note: candidate: ‘T mve::Image::linear_at(float, float, int64_t) const [with T = float; int64_t = long int]’
   94 |     T linear_at (float x, float y, int64_t channel) const;
      |       ^~~~~~~~~
../../mve/libs/mve/image.h:101:10: note: candidate: ‘void mve::Image::linear_at(float, float, T*) const [with T = float]’
  101 |     void linear_at (float x, float y, T* px) const;
      |          ^~~~~~~~~
depth_optimizer.cc: In member function ‘double smvs::DepthOptimizer::ncc_for_patch(std::size_t, std::size_t)’:
depth_optimizer.cc:886:44: error: call of overloaded ‘linear_at(double&, double&, int)’ is ambiguous
  886 |         color_sub[0] = sub_image->linear_at(proj[0], proj[1], 0);
      |                        ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
In file included from ../../mve/libs/mve/image_io.h:16,
                 from depth_optimizer.cc:13:
../../mve/libs/mve/image.h:409:1: note: candidate: ‘T mve::Image::linear_at(float, float, int64_t) const [with T = float; int64_t = long int]’
  409 | Image::linear_at (float x, float y, int64_t channel) const
      | ^~~~~~~~
../../mve/libs/mve/image.h:101:10: note: candidate: ‘void mve::Image::linear_at(float, float, T*) const [with T = float]’
  101 |     void linear_at (float x, float y, T* px) const;
      |          ^~~~~~~~~
make[1]: *** [../../mve/Makefile.inc:29: depth_optimizer.o] Error 1
make[1]: Leaving directory '/home/myhomedir/tarapps/smvs/lib'
make: *** [Makefile:3: all] Error 2
make: Leaving directory '/home/myhomedir/tarapps/smvs'
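The ambiguity comes from calling linear_at with double coordinates and a literal 0: the int64_t overload needs an integral conversion, while the literal 0 also converts to a null float* for the pointer overload, so neither candidate is a better match. A minimal sketch of the pattern and one possible fix (the Image struct below is a simplified stand-in, not the real mve::Image):

```cpp
#include <cstdint>

// Simplified stand-in for mve::Image<float>, with the same two
// overloads that collide in the error message above.
struct Image
{
    float linear_at(float x, float y, std::int64_t channel) const
    { return x + y + static_cast<float>(channel); }

    void linear_at(float x, float y, float* px) const
    { *px = x + y; }
};

float sample(Image const& img, double px, double py)
{
    // img.linear_at(px, py, 0) is ambiguous: the literal 0 matches
    // int64_t via integral conversion AND float* as a null pointer
    // constant. Spelling the channel as int64_t selects the first
    // overload, since int64_t does not convert to float*.
    return img.linear_at(static_cast<float>(px), static_cast<float>(py),
        std::int64_t(0));
}
```

Whether the channel overload is the intended one at those call sites is for the maintainers to confirm; the pointer overload fills a whole pixel instead.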

Question: Is this library suitable for a close range turntable setup.

If I've read correctly, this library uses a global approach rather than a sequential one. My understanding is that global approaches are not that great for my use case with a turntable setup, presumably due to the lack of overlap in the resulting images.

I have a single fixed camera and an object that rotates on a turntable while the camera stays in a fixed position. Usually I flip the object and end up with between 80 and 120 high-resolution images (18 megapixels or larger) for the reconstruction. All images are in sharp focus and share the same exposure and focal length. I can also auto-mask the background of the source images so only the object appears, if that is supported.

If I'm wrong and this library is a good fit for my use case, I'd be pleasantly surprised and will definitely try it out and see what kind of point cloud I get. Any thoughts on its suitability would be welcome.

Thanks for reading and keep up the great work!

[lib] diff size between pixels & depths in depth_optimizer.cc

hi,

depth_optimizer.cc

double
DepthOptimizer::ncc_for_patch (std::size_t patch_id, std::size_t sub_id)
{
    ~~
    for (std::size_t i = 0; i < pixels.size(); ++i)
    {
        /* top */
        if (min[1] > 2 && pixels[i][1] == min[1])
        {
            pixels.emplace_back(pixels[i][0], pixels[i][1] - 2);
            pixels.emplace_back(pixels[i][0], pixels[i][1] - 1);
            depths.emplace_back(depths[i]);
        }

pixels.size() == depths.size() * 2

for (std::size_t i = 0; i < pixels.size(); ++i)
{
    Correspondence C(this->Mi[sub_id], this->ti[sub_id],
        pixels[i][0] + 0.5, pixels[i][1] + 0.5, depths[i]);

Is this correct?

thanks.
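If the border expansion is meant to keep the two containers parallel, each pixel pushed needs a matching depth. A hedged sketch of that pattern (the names mirror the snippet above, but this is an illustration, not the actual smvs code):

```cpp
#include <utility>
#include <vector>

using Pixel = std::pair<int, int>;  // (x, y)

// Expand a patch upward by two rows while keeping `pixels` and
// `depths` index-aligned: push one depth for every pixel pushed.
void expand_top(std::vector<Pixel>& pixels, std::vector<double>& depths,
    int min_y)
{
    std::size_t const n = pixels.size();  // iterate original entries only
    for (std::size_t i = 0; i < n; ++i)
        if (min_y > 2 && pixels[i].second == min_y)
        {
            pixels.emplace_back(pixels[i].first, pixels[i].second - 2);
            depths.emplace_back(depths[i]);
            pixels.emplace_back(pixels[i].first, pixels[i].second - 1);
            depths.emplace_back(depths[i]);
        }
}
```

With one depth per pixel, the later loop indexing depths[i] alongside pixels[i] stays in bounds; whether the size mismatch in the original is intentional is a question for the maintainers.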

cxxflags not adequate when building with gcc 7

When building on Arch Linux with gcc 7 (7.1.1), the compiler complains about a target-specific option mismatch. Boosting -msse from 4.1 to 4.2 fixes the issue.
gcc output:

g++ -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong -pthread -I../../mve-git/libs -I../lib -msse4.1 -pthread -D_FORTIFY_SOURCE=2 -c -o bicubic_patch.o bicubic_patch.cc
In file included from /usr/lib/gcc/x86_64-pc-linux-gnu/7.1.1/include/smmintrin.h:811:0,
from sgm_stereo.cc:11:
/usr/lib/gcc/x86_64-pc-linux-gnu/7.1.1/include/popcntintrin.h: In member function ‘void smvs::SGMStereo::create_cost_volume(float, float, int)’:
/usr/lib/gcc/x86_64-pc-linux-gnu/7.1.1/include/popcntintrin.h:42:1: error: inlining failed in call to always_inline ‘long long int _mm_popcnt_u64(long long unsigned int)’: target specific option
mismatch

[Question]

Can we use other SfM pipelines, such as openMVG?

CMake

Please share a CMake build file.
