Comments (15)
Regarding the random arbitrary reference: I set up the optimization to update all the camera parameters, rather than fixing any one camera, for ease of implementation. This leads to an arbitrary reference frame once the optimization is done.
As for a reference, the implementation of the bundle adjustment optimization in Anipose is roughly based on this code, with some changes to make it more robust:
https://scipy-cookbook.readthedocs.io/items/bundle_adjustment.html
We're working on writing up an Anipose paper with more specific details. A preprint should be out within the next month, maybe sooner.
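As a rough sketch of that setup (following the linked SciPy cookbook, not Anipose's actual code), the residual function projects each 3D point into each observing camera, with every camera's pose left as a free parameter; this is why no camera ends up pinned as the reference. The simplified one-focal-length camera model below is an assumption for illustration:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, points_2d, cam_idx, pt_idx):
    # Hypothetical simplified model: per camera, a rotation vector (3),
    # a translation (3) and a single focal length (1), followed by the
    # 3D points (3 values each).
    cam_params = params[:n_cams * 7].reshape(n_cams, 7)
    points_3d = params[n_cams * 7:].reshape(-1, 3)
    res = []
    for (x, y), ci, pi in zip(points_2d, cam_idx, pt_idx):
        rvec, tvec, f = cam_params[ci, :3], cam_params[ci, 3:6], cam_params[ci, 6]
        # World point -> camera frame; since every camera pose is free,
        # the optimum is only defined up to a global rigid transform.
        p = Rotation.from_rotvec(rvec).apply(points_3d[pi]) + tvec
        res.extend([f * p[0] / p[2] - x, f * p[1] / p[2] - y])
    return np.array(res)

# Typical use, following the cookbook:
# from scipy.optimize import least_squares
# least_squares(reprojection_residuals, x0, args=(n_cams, pts2d, ci, pi))
```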
from anipose.
Hello @niccle27 ,
Anipose supports multiple triangulation options, including:
- plain triangulation using linear least squares
- RANSAC
- our own constrained optimization framing with spatiotemporal constraints
We're working on a preprint which details these methods and compares their performance on multiple datasets.
If you have specific questions about the code, I can try to answer them too.
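For reference, the first option is the standard multi-view direct linear transform (DLT); a minimal self-contained sketch (not Anipose's exact implementation) looks like this:

```python
import numpy as np

def triangulate_linear(points_2d, camera_matrices):
    """Triangulate one 3D point seen in N views via the DLT.

    points_2d:       list of (x, y) pixel coordinates, one per camera
    camera_matrices: list of 3x4 projection matrices P = K [R | t]
    """
    A = []
    for (x, y), P in zip(points_2d, camera_matrices):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: (x * P[2] - P[0]) . X = 0, etc.
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    A = np.array(A)
    # Least-squares solution: right singular vector with the smallest
    # singular value of A.
    _, _, vh = np.linalg.svd(A)
    X = vh[-1]
    return X[:3] / X[3]  # de-homogenize
```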
Hello @lambdaloop
Thank you a lot for your reply!
I'm currently trying to understand your code so I can set those triangulation parameters properly.
However, while testing the different functions of the anipose API, the calibration gave me the following result:
[cam_0]
name = "1"
size = [ 1920, 1080,]
matrix = [ [ 1341.294113016525, 0.0, 959.5,], [ 0.0, 1341.294113016525, 539.5,], [ 0.0, 0.0, 1.0,],]
distortions = [ 0.07205439308266164, 0.0, 0.0, 0.0, 0.0,]
rotation = [ 0.009136216504817538, 0.03511747068734043, 0.0405268345500322,]
translation = [ -35.09658399674201, 6.625260161173124, -298.1691170878793,]
[cam_1]
name = "2"
size = [ 1920, 1080,]
matrix = [ [ 1317.194602329241, 0.0, 959.5,], [ 0.0, 1317.194602329241, 539.5,], [ 0.0, 0.0, 1.0,],]
distortions = [ 0.009832504731787133, 0.0, 0.0, 0.0, 0.0,]
rotation = [ -0.1633837957842965, -0.8982009987073111, 0.001370804330048589,]
translation = [ 589.1383573212819, -33.67420192984623, -30.829766699026,]
The intrinsic parameters seem alright, but I'm quite surprised by the rotation/translation. What exactly is the reference coordinate system? I was also wondering whether the calibration happens pair by pair, or whether your script calibrates the extrinsic parameters using the redundancies between views. As I'm currently working on a 12-camera system, this might be of great help to me.
My configuration was as follows:
[calibration]
# checkerboard / charuco / aruco
board_type = "checkerboard"
# width and height of grid
board_size = [9, 6]
# length of marker side
# board_marker_length = 18.75 # mm
# board_marker_length = 24.5 # mm
# If charuco or checkerboard, square side length
board_square_side_length = 24.5 # mm
animal_calibration = false
fisheye = false
# [manual_verification]
# true / false
# manually_verify = true
[triangulation]
cam_regex = 'cam([1-2])'
triangulate = true
cam_align = ""
ransac = false
optim = false
constraints = [
# ["Nose","Left ear"],
# ["Nose","Right ear"],
# ["Start_tail","End_tail"]
]
scale_smooth = 1
scale_length = 1
scale_length_weak = 1
reproj_error_threshold = 1
score_threshold = 0.01
n_deriv_smooth = 1
Hi @niccle27 ,
Your calibration parameters seem reasonable.
The calibration is done first by initializing parameters pair by pair, and then optimizing the calibration across all views simultaneously. So far I've calibrated up to 6 cameras. It should work with 12 cameras as well. Let me know!
Yes, but my question was: what is the reference? I coded some calibration chaining using OpenCV a while ago. I was using the stereoCalibrate function to get the rotation and translation from one camera to another, so one of the cameras was the reference and all 3D points could then be reprojected onto the other cameras. In the example I gave in my previous post, both cameras have some rotation and translation, so neither of them is the reference. I'm wondering what reference you are using?
Ah, I see. Right now the first camera is set as the reference in the initialization. During the optimization, however, there is no constraint on the reference, so the overall rotation and translation are somewhat arbitrary.
However, it's possible to just pick a camera, and then make that the reference, by appropriately rotating and translating all the other cameras.
The cam_align parameter is meant to specify the reference camera, but it is not implemented yet (sorry!).
If it's important for you, I can push it higher on the priority list. Just let me know.
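Until cam_align exists, the re-referencing described above can be done by hand. A minimal sketch (hypothetical helper, not Anipose's API), assuming the OpenCV-style convention x_cam = R @ x_world + t:

```python
import numpy as np

def set_reference_camera(extrinsics, ref=0):
    """Re-express all camera extrinsics so that camera `ref` becomes the
    world frame (identity rotation, zero translation).

    extrinsics: list of (R, t) pairs with x_cam = R @ x_world + t.

    Derivation: x_ref = R_ref @ x_world + t_ref
             => x_world = R_ref.T @ (x_ref - t_ref)
             => x_i = (R_i @ R_ref.T) @ x_ref + (t_i - R_i @ R_ref.T @ t_ref)
    """
    R_ref, t_ref = extrinsics[ref]
    out = []
    for R, t in extrinsics:
        R_new = R @ R_ref.T
        t_new = t - R_new @ t_ref
        out.append((R_new, t_new))
    return out
```

After this, `out[ref]` is the identity pose and every other camera's pose is expressed relative to the chosen camera.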
Okay, so I'm still quite interested in how you end up with a random arbitrary reference somewhere. Maybe it's part of the optimization process? As a matter of fact, do you have a paper reference or something you used to create this process?
My setup is 12 cameras mounted in various locations on a cube. For various reasons I couldn't calibrate every camera in a single shot, so I'm going to calibrate each group separately, and I have to be able to merge all those files. This is why I need to be able to move everything to a single camera.
Just to be sure I understand what should be done:
Considering "X" the coordinate system arbitrarily fixed by your method, and Cam1 and Cam2 two cameras, your calibration should provide the R and T from X to the Cam1 system and from X to the Cam2 system.
Taking Cam1 as the reference coordinate system and writing matrix_RT for the homogeneous [R|T] matrix, the matrix_RT for Cam2 can then be computed as matrix_RT(XtoCam2) * matrix_RT(cam1toX)?
I believe my reasoning is correct; if that's the case, I can manage as it is. Though that might be a great feature to add!
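For what it's worth, that composition can be checked numerically with 4x4 homogeneous matrices (a 3x4 [R|t] behaves the same once padded with a [0 0 0 1] row); the extrinsics below are made-up examples:

```python
import numpy as np

def to_homogeneous(R, t):
    """Stack R (3x3) and t (3,) into a 4x4 transform [R t; 0 1]."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def rotz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

# Hypothetical extrinsics X -> Cam1 and X -> Cam2.
M_x_to_c1 = to_homogeneous(rotz(0.2), np.array([1.0, 0.0, -2.0]))
M_x_to_c2 = to_homogeneous(rotz(-0.7), np.array([0.0, 3.0, 1.0]))

# Cam1 -> Cam2 = (X -> Cam2) composed with (Cam1 -> X),
# where Cam1 -> X = inverse of (X -> Cam1).
M_c1_to_c2 = M_x_to_c2 @ np.linalg.inv(M_x_to_c1)

# Round-trip check on a point expressed in the X frame:
p_x = np.array([0.5, -1.0, 2.0, 1.0])
assert np.allclose(M_c1_to_c2 @ (M_x_to_c1 @ p_x), M_x_to_c2 @ p_x)
```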
Thank you a lot
Okay, thank you a lot for your time!
No problem!
Sorry, I forgot to answer your other question. Your scheme is correct. That's pretty much how I was planning to implement the "cam_align" variable.
I have two questions related to this thread:
- What is the type of the rotation vector in the calibration.toml file? Since you seem to be using OpenCV under the hood, I assume it's a Rodrigues vector. Is that right?
- Following 3D position estimation, can I simply apply the inverted R and T transform for a given camera to the estimated 3D points in order to make that camera the origin of my 3D space?
Thanks!
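For context on the first question: OpenCV-style rotation vectors are axis-angle (Rodrigues) vectors. Below is a self-contained sketch of the conversion and of moving world points into a chosen camera's frame, assuming the convention x_cam = R @ x_world + t (the helper names are hypothetical):

```python
import numpy as np

def rodrigues(rvec):
    """Rotation matrix from an axis-angle (Rodrigues) vector, the same
    convention cv2.Rodrigues uses."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    # Cross-product (skew-symmetric) matrix of the unit axis k.
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def points_in_camera_frame(points_3d, rvec, tvec):
    """Express world points in a camera's frame: x_cam = R @ x_world + t.
    (To go the other way: x_world = R.T @ (x_cam - t).)"""
    R = rodrigues(rvec)
    return np.asarray(points_3d) @ R.T + np.asarray(tvec)
```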
@niccle27 did you have any success transforming the coordinate system like you mentioned? I'd be interested to see what you did and how well it worked.
Hi! Well, I was actually doing my internship at a research center that didn't authorize me to publish my work, so I can't provide you the code, since I don't own it. If I remember correctly, I took one camera as the reference and multiplied all the other extrinsic matrices by the inverse of that chosen camera's extrinsic matrix. It worked properly, and I was able to plot the camera locations in a matplotlib 3D plot.
Ah, I understand, thanks! One more question: did you ever notice any issues with the scaling of the translation vectors during your calibration? I calibrate 4 cameras, but the arbitrary origin's placement doesn't seem to make sense relative to the 4 cameras.
Hi, as far as I remember it's a position vector, not a scaling vector. As for the arbitrary point, it's normal: it's the reference obtained arbitrarily by the bundle adjustment algorithm he is using (I couldn't understand that part, unfortunately); maybe @lambdaloop can point you in the right direction. The whole point of the procedure I described is to move all your coordinates into a new system where you fix the reference to one of the cameras.
I notice this "scaling" issue (or whatever it is) when I try to apply another transformation to a defined ground plane. I took an image of my ChArUco board lying on the ground and obtained the rvec and tvec to transform a camera to it. So what I am doing is transforming all the cameras to camera 0, for example, and then getting camera 0's transform to the ground plane. There is a big difference between the units of my camera-0-to-ground transformation (obtained using OpenCV) and the translations obtained from Anipose.
For example:
Note1: All cameras are 1 to 2m apart from one another
OpenCV transformation to the ground (calibrated in meters... so the tvec units should be meters, and I've checked with the physical setup that these numbers make sense)
rvec: [ 2.08682682 -0.3117344 0.3171962 ]
tvec: [-0.55785219 0.91809845 3.71028874]
Note2: For the Anipose transformations, all cameras share the same relative magnitude of tvec... so this can't make physical sense, given that the cameras are known to be 1 to 2 m apart and assuming the units are meters.
Anipose transformation to arbitrary point X (calibrated in meters, specified in my config.toml)
rvec: [0.116348 -0.19617 0.136346]
tvec: [-0.081677 -0.045249 0.0389551]
output toml from anipose calib
calibration_nov13_v1.zip
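One way to sanity-check the translation scale in a calibration like this is to compute the camera centers in the arbitrary world frame (C = -R.T @ t under the x_cam = R @ x_world + t convention) and compare their pairwise distances against the known 1 to 2 m physical spacing. A sketch with hypothetical helpers:

```python
import numpy as np
from itertools import combinations

def camera_centers(extrinsics):
    """Camera centers in the (arbitrary) world frame.
    With x_cam = R @ x_world + t, the center satisfies
    0 = R @ C + t, hence C = -R.T @ t."""
    return [-R.T @ t for R, t in extrinsics]

def pairwise_distances(centers):
    """Euclidean distance between every pair of camera centers; with a
    board measured in meters these should match the physical spacing."""
    return {(i, j): float(np.linalg.norm(centers[i] - centers[j]))
            for i, j in combinations(range(len(centers)), 2)}
```

If these distances come out far from the measured spacing, the problem is likely the board's square-size units rather than the choice of reference frame.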