This is an example of 3D reconstruction by space carving. The results below were obtained from 36 images.
spacecarving's Issues
Camera properties
Hello.
Is it possible to do this task without the camera properties (the projection matrices in the dino_ps.mat file)? Or can you suggest a reference or code that does 3D reconstruction from multiple images without camera properties? I have been given the same task, but no camera properties are provided.
Some questions about the algorithm
I am a beginner in computer graphics and confused about some details of the algorithm.
Here is my overall understanding:
- Create a cube that encloses the object we want to reconstruct.
- Use the projection matrix of each view to project the 3D points of the cube onto the image plane, producing a long array 'fill'. This 'fill' indicates whether each projected point falls inside the silhouette.
- Stack all the 'fill' arrays and vote for each voxel in the original cube to decide whether it belongs to the 3D structure we want to reconstruct.
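The voting scheme described above could be sketched roughly like this (a minimal sketch, not the repository's code; `Ps` and `silhouettes` are assumed inputs: one 3x4 projection matrix and one binary mask per view):

```python
import numpy as np

def carve(pts, Ps, silhouettes):
    """pts: 4xN homogeneous voxel centers; returns per-voxel vote counts."""
    h, w = silhouettes[0].shape
    occupancy = np.zeros(pts.shape[1], dtype=int)
    for P, sil in zip(Ps, silhouettes):
        uvs = P @ pts                       # project voxels into this view
        uvs = uvs[:2] / uvs[2]              # perspective divide -> pixels
        u, v = np.round(uvs).astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        fill = np.zeros(pts.shape[1], dtype=bool)
        fill[inside] = sil[v[inside], u[inside]] > 0
        occupancy += fill                   # one vote per view
    return occupancy
```

Thresholding `occupancy` near the number of views then keeps only voxels consistent with every silhouette.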
Still, I have some specific questions.
- Why do you subtract 0.62 from the z-axis of pts?
- As far as I know, a projection matrix maps world points to image points. In your code, the projection matrix is multiplied with the 3D points in cube coordinates. So is the projection matrix not a standard one, or are the world coordinates the same as the cube coordinates?
- Is 'uvs' the set of projected image points in each view? Why do some of the pixels fall outside the image boundary after applying the projection matrix?
Also, could you suggest a reference for turning this point cloud into a mesh? Thank you!
only the silhouettes are used
Very intuitive example, many thanks.
However, from the code, you use only the silhouettes of the images. I tried to generate shape.txt keeping voxels with more than 24 votes, like this:
for occ, p in zip(occupancy, pts[:, :3]):
    if occ > 24:
        fout.write(",".join(p.astype(str)) + "," + str(occ) + "\n")
Then I imported the generated shape.txt file into CloudCompare. The result is a bit rough, which is expected, as only silhouettes are used.
Even with occ > 24, the result is still worse than the one in your README. Could I be missing something?
Many thanks !
some more clarifications about lines 54-60
Hi
Thanks for sharing the code. I have a few questions about it.
- Why did you choose s=120? Did you play around with other numbers? I ask because, if I want to initialize a voxel grid for my own data, how do I do that so that it encloses the object? Any tips or an educated procedure I should follow?
- Why are x, y, z normalized by their max values, and why is the center point then subtracted? Can you explain the rationale?
- Why are the points divided by 5 in line 59?
- Again, about line 60: how did you decide on the number 0.62? I am asking so that I can do the same with my images.
- Another question about the camera parameters: can you point to their source? Are they the projection matrices K*[R|t]?
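One generic way to build such a grid (a sketch under assumptions, not the author's procedure: `bbox_min`/`bbox_max` are a rough bounding box of the object in world units that you would estimate yourself):

```python
import numpy as np

def make_voxel_grid(bbox_min, bbox_max, s=120, pad=0.1):
    """Sample s points per axis over a padded bounding box.

    Returns a 4 x s^3 array of homogeneous voxel centers.
    """
    lo = np.asarray(bbox_min, dtype=float) - pad   # pad so the object
    hi = np.asarray(bbox_max, dtype=float) + pad   # surely fits inside
    axes = [np.linspace(lo[i], hi[i], s) for i in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    return np.stack([X.ravel(), Y.ravel(), Z.ravel(), np.ones(s ** 3)])
```

The choice of s trades resolution against memory and runtime (the grid has s^3 voxels), which may be why a value like 120 was picked.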
Thanks a lot in advance
Dataset
Hello @zinsmatt, how/where can we find datasets of other objects like the one used in your project,
along with the camera properties and .ppm images?
uvs
Thanks for sharing the code. I'm currently trying to apply it to an aquaponics system to determine the volume of plants,
but I'm stuck on the actual space carving part.
- How do you decide whether your object fits inside the voxel matrix?
- My uvs values are far larger than my image size; they go all the way up to 40000. Is this what the division is for, to adjust for these values?
- The signs in my projection matrix are also nearly the opposite of yours, causing the condition uvs[0,:]>=0 to be false in almost all cases. Should I then filter for <=0 instead? uvs[1,:]>=0, however, keeps all points, and neither row has any values between 0 and the respective image size.
As for how I calibrated and obtained these projection matrices, I used the chessboard technique.
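Symptoms like pixel values in the tens of thousands often come from skipping the perspective divide; a quick check could look like this (a sketch with placeholder values, not code from this repository):

```python
import numpy as np

def project(P, xyz):
    """Project one world point with a 3x4 matrix; return pixel and depth."""
    uvw = P @ np.append(xyz, 1.0)       # homogeneous projection
    return uvw[:2] / uvw[2], uvw[2]     # perspective divide, plus depth
```

If the divided coordinates are still far outside the image for points you know are visible, the calibration itself is suspect; a negative returned depth means the point is behind the camera under your sign convention, which can also make uvs[0,:]>=0 fail everywhere.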
How to open the generated shape.vtr file in a 3D format
Hello @zinsmatt. I wanted to know how we can visualize the generated .vtr file in 3D software such as Blender or MeshLab,
or how we can convert the .vtr into OBJ or STL for easy visualization.
Quick Question about the matrix used
Is the matrix that is being loaded an [R|t] matrix (a 3x3 rotation matrix with a 3x1 translation vector appended)?
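For reference, a full 3x4 projection matrix in the standard pinhole model is usually the composition K*[R|t]; the values below are illustrative, not the ones shipped with this repo, and the repo's matrices may or may not already include K:

```python
import numpy as np

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])    # intrinsics (focal, principal point)
R = np.eye(3)                            # rotation, world -> camera
t = np.array([[0.0], [0.0], [2.0]])      # translation, world -> camera
P = K @ np.hstack([R, t])                # 3x4 projection matrix
```

A bare [R|t] maps world points into camera coordinates only; without K, projecting with it would not yield pixel coordinates.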