hkust-aerial-robotics / VINS-Mobile
Monocular Visual-Inertial State Estimator on Mobile Phones
License: GNU General Public License v3.0
Hi,
what coordinate system does the SLAM output use?
I found in the code that the IMU uses this one:

    Z ^    / Y
      |   /
      |  /
      | /
      |/---------> X

Is the SLAM frame the same?
By the way, the Machine Hall dataset uses this one, right?

         / Z
        /
       /
      |/---------> X
      |
      |
      |
      Y
Thanks.
Hello everyone, I'm trying to modify the code to get the camera images and IMU readings. There is code in the project for this, but it seems to be commented out and a bit messy. What should I do to get the images and IMU data produced by the device?
Ld /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Products/Release-iphonesimulator/VINS_ios.app/VINS_ios normal x86_64
cd /Users/macbook-liang/Downloads/VINS-Mobile-master
export IPHONEOS_DEPLOYMENT_TARGET=10.0
export PATH="/Users/macbook-liang/Downloads/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Users/macbook-liang/Downloads/Xcode.app/Contents/Developer/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
/Users/macbook-liang/Downloads/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -arch x86_64 -isysroot /Users/macbook-liang/Downloads/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator10.2.sdk -L/Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Products/Release-iphonesimulator -L/Users/macbook-liang/Downloads/VINS-Mobile-master/VINS_ThirdPartyLib/ceres-solver/ceres-bin/lib -L/Users/macbook-liang/Downloads/VINS-Mobile-master/ThirdParty -L/Users/macbook-liang/Downloads/VINS-Mobile-master/Resources -F/Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Products/Release-iphonesimulator -F/Users/macbook-liang/Downloads/VINS-Mobile-master/VINS_ThirdPartyLib -F/Users/macbook-liang/Downloads/VINS-Mobile-master/ThirdParty -filelist /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Intermediates/VINS_ios.build/Release-iphonesimulator/VINS_ios.build/Objects-normal/x86_64/VINS_ios.LinkFileList -Xlinker -rpath -Xlinker @executable_path/Frameworks -mios-simulator-version-min=10.0 -dead_strip -Xlinker -object_path_lto -Xlinker /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Intermediates/VINS_ios.build/Release-iphonesimulator/VINS_ios.build/Objects-normal/x86_64/VINS_ios_lto.o -Xlinker -objc_abi_version -Xlinker 2 -stdlib=libc++ -fobjc-arc -fobjc-link-runtime -Xlinker -sectcreate -Xlinker __TEXT -Xlinker __entitlements -Xlinker /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Intermediates/VINS_ios.build/Release-iphonesimulator/VINS_ios.build/VINS_ios.app.xcent /Users/macbook-liang/Downloads/VINS-Mobile-master/Resources/boost.a -ljpeg -framework opencv2 -framework CoreMotion -framework GLKit -framework CoreGraphics -framework OpenGLES -lceres -framework 
Foundation -framework UIKit -framework AVFoundation -framework AssetsLibrary -framework Accelerate -framework CoreVideo -framework CoreMedia -framework CoreImage -framework QuartzCore -Xlinker -dependency_info -Xlinker /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Intermediates/VINS_ios.build/Release-iphonesimulator/VINS_ios.build/Objects-normal/x86_64/VINS_ios_dependency_info.dat -o /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Products/Release-iphonesimulator/VINS_ios.app/VINS_ios
ld: library not found for -lceres
clang: error: linker command failed with exit code 1 (use -v to see invocation)
I tried an iPhone 7 Plus and an iPhone 6 (with the device restriction relaxed); neither can initialize! Moving the phone as the on-screen prompt suggests gets no response either!
Hi Li, thank you for sharing this; I have a small problem while building the project.
The build failed with semantic issues such as: no member named 'findEssentialMat' and 'recoverPose' in namespace 'cv'.
I know the first three items are the position (x, y, z); what are the next items?
I started the app on an iPhone 7 Plus. The keypoint detection seems very stable (red points, sometimes blue), but the status keeps showing fail_imu, fail_sfm, fail_align...
How is this app supposed to be used?
Why does the code become very laggy when it is put into a Unity3D project?
I see that you select one frame out of every three, and compute accurate poses only for those frames. So if the application displays only those frames, it looks like frames are being skipped. Can this project provide poses at a higher frequency from the camera or IMU? I saw this part in your paper: 'Besides pose, we also have accurate velocity and IMU bias estimation, which enables propagating the camera pose with IMU data up to 100 Hz. This ensures a low-latency AR experience.' I don't understand what this part means. Does it mean the project can provide IMU-propagated poses up to 100 Hz for use by an AR application? Thanks a lot.
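For intuition, here is a toy sketch (my own illustration, not the project's code) of what "propagating the camera pose with IMU data" means: between two optimized poses, each incoming IMU sample advances the latest state, so the AR view can render at IMU rate (up to 100 Hz) instead of at the keyframe rate.

```cpp
// 1-D toy version of IMU-rate pose propagation. The real system does this
// in SE(3) with bias-corrected, gravity-compensated measurements.
struct State {
    double p;  // position (m)
    double v;  // velocity (m/s), estimated by the sliding-window optimizer
};

// Advance the state by one IMU sample using a simple Euler step.
// 'accel' is the (assumed already corrected) acceleration, 'dt' the sample period.
State propagate(State s, double accel, double dt) {
    s.p += s.v * dt + 0.5 * accel * dt * dt;
    s.v += accel * dt;
    return s;
}
```

Starting from the last optimized state and calling `propagate` for every 100 Hz IMU sample yields low-latency pose estimates until the next optimized pose arrives and resets the state.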
Hi,
In this function, an error appeared when I hit Build and Run:
Quaterniond q_array[max_length];
error: variable length array of non-POD element type 'Quaterniond' (aka 'Quaternion')
What should I do to solve this? I followed the GitHub instructions exactly in Xcode. Thanks!
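For reference, clang rejects this because variable-length arrays are a C extension and are not allowed for non-POD element types such as Eigen::Quaterniond. A common fix is to use std::vector instead (a sketch with a stand-in Quaterniond type so it compiles anywhere; in the project you would keep Eigen's type):

```cpp
#include <vector>

// Stand-in for Eigen::Quaterniond, just to keep this sketch self-contained.
struct Quaterniond {
    double w = 1, x = 0, y = 0, z = 0;
};

// Original (rejected): Quaterniond q_array[max_length];
// std::vector allocates the same storage at runtime and is legal C++.
std::vector<Quaterniond> make_q_array(int max_length) {
    return std::vector<Quaterniond>(max_length);
}
```

If `max_length` is actually a compile-time constant in the project, a fixed-size array or `std::array` avoids the heap allocation entirely.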
After initialization (with s equal to 0.07 — is this normal?), the program hits an assert failure in imu_factor.h because both jacobian_pose_i.maxCoeff and jacobian_pose_i.minCoeff are out of range (around e^22). Can you tell me what would cause this?
Hi, I am a college student at Information Engineering University.
I have read your paper "Robust Initialization of Monocular Visual-Inertial Estimation on Aerial Robots", but I am a little confused about your code for solveGyroscopeBias: I don't know why the matrices A and b are constructed like that. Can you show me the matrix formula for the following code? Thanks!
tmp_A = frame_j->second.pre_integration->jacobian.template block<3, 3>(O_R, O_BG);
tmp_b = 2 * (frame_j->second.pre_integration->delta_q.inverse() * q_ij).vec();
A += tmp_A.transpose() * tmp_A;
b += tmp_A.transpose() * tmp_b;
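For what it's worth, here is one consistent reading of that construction (my own derivation, assuming VINS-Mono-style preintegration; please double-check against the paper). A first-order expansion of the preintegrated rotation γ̂ in the gyro bias is matched against the relative rotation q_ij from the vision-only SfM:

```latex
\gamma(b_g+\delta b_g) \;\approx\; \hat{\gamma}\otimes
\begin{bmatrix}1\\ \tfrac12\, J^{\gamma}_{b_g}\,\delta b_g\end{bmatrix}
\;\stackrel{!}{=}\; q_{ij}
\quad\Longrightarrow\quad
J^{\gamma}_{b_g}\,\delta b_g \;\approx\; 2\,\mathrm{vec}\!\left(\hat{\gamma}^{-1}\otimes q_{ij}\right).
```

Stacking this linear constraint over all consecutive frame pairs and forming normal equations gives (Σ JᵀJ) δb_g = Σ Jᵀ · 2 vec(γ̂⁻¹ ⊗ q_ij), which is exactly `A += tmp_A.transpose() * tmp_A; b += tmp_A.transpose() * tmp_b;` followed by solving A δb_g = b.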
I made some modifications to support the iPhone 6 Plus, and it runs OK on an iPhone 6 Plus (iOS 10.3.1).
When I tested it, I found that when I turn around and return to the starting point, it does not show the correct result. Is this because I am using the wrong (iPhone 6s Plus) parameters on the iPhone 6 Plus? If yes, how can I tune and obtain accurate parameters for the iPhone 6 Plus? Thanks.
Below are my modifications:
file: global_param.hpp
enum DeviceType
{
    iPhone7P,
    iPhone7,
    iPhone6sP,
    iPhone6s,
    // +++++
    iPhone6,
    iPhone6P,
    // +++++++ END
    iPadPro97,
    iPadPro129,
    unDefine
};
file: ViewController.mm
DeviceType deviceName()
{
    ...
    // ++++
    else if ([device compare:@"iPhone7,1"] == NSOrderedSame)
    {
        printf("Device iPhone6p\n");
        device_type = iPhone6P;
    }
    // +++ END
}
file: global_param.cpp
bool setGlobalParam(DeviceType device)
{
    switch (device)
    {
        ...
        // +++++ params copied from iPhone6sP
        case iPhone6P:
            printf("Device iPhone6P param\n");
            FOCUS_LENGTH_X = 547.565;
            FOCUS_LENGTH_Y = 547.998;
            PX = 239.033;
            PY = 309.452;
            SOLVER_TIME = 0.08;
            FREQ = 3;
            // extrinsic param
            TIC_X = 0.0;
            TIC_Y = 0.065;
            TIC_Z = 0.0;
            return true;
        // ++++ END
        ...
    }
}
Hello,
Firstly, congratulations on this amazing work. The accuracy seems amazing from the videos.
I am particularly looking for a VINS for a (camera and/or lidar) + IMU setup on Ubuntu.
Can I use your work for this purpose? From a quick look, I understand it targets only iOS devices. Could you provide any relevant references in that direction?
Thanks
I am reading the file 'ViewController.mm', but cannot locate where the thread for '- (void)processImage:(cv::Mat&)image' is started. Could anybody please help me with this simple problem? Thank you very much!
Hello, at the end of addFeatureCheckParallax(),
if (parallx_num == 0 || track_num < 20)
{
    return true;
}
should this return false instead?
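A hedged reading of that logic (based on how the companion VINS code is usually structured, so treat the names and semantics below as assumptions): the function returns true when the second-newest frame should be kept as a keyframe. With no co-visible features to measure parallax (parallax_num == 0) or very few tracked features (track_num < 20), the view has changed a lot, so forcing a keyframe by returning true looks intentional rather than a bug:

```cpp
// Sketch of the keyframe decision; names mirror the issue, not the exact
// project code. Returning true = declare the frame a keyframe.
bool add_feature_check_parallax(int parallax_num, int track_num,
                                double parallax_sum, double min_parallax) {
    // No co-visible features to measure parallax with, or too few tracks:
    // the scene changed substantially, so a new keyframe is needed.
    if (parallax_num == 0 || track_num < 20)
        return true;
    // Otherwise, keyframe only when the average parallax is large enough.
    return parallax_sum / parallax_num >= min_parallax;
}
```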
I followed the documentation steps you gave for running HoloKit, but hit the following problems that I cannot resolve for the moment; please help:
1): "vtable for ceres::HuberLoss", referenced from:
2): "ceres::Problem::AddResidualBlock(ceres::CostFunction*, ceres::LossFunction*, double*, double*, double*)", referenced from:
3): "ceres::Solver::Summary::BriefReport() const", referenced from:
4): "ceres::LocalParameterization::~LocalParameterization()", referenced from:
5): "ceres::Problem::SetParameterBlockConstant(double*)", referenced from:
6): "google::log_sinks_global", referenced from:
7): "vtable for ceres::CauchyLoss", referenced from:
8): "ceres::LocalParameterization::MultiplyByJacobian(double const*, int, double const*, double*) const", referenced from:
9): "ceres::Problem::AddResidualBlock(ceres::CostFunction*, ceres::LossFunction*, std::__1::vector<double*, std::__1::allocator<double*> > const&)", referenced from:
10): "ceres::Problem::AddParameterBlock(double*, int)", referenced from:
11): "ceres::Problem::AddParameterBlock(double*, int, ceres::LocalParameterization*)", referenced from:
12): "ceres::Problem::Problem()", referenced from:
13): "ceres::Problem::GetResidualBlocks(std::__1::vector<ceres::internal::ResidualBlock*, std::__1::allocator<ceres::internal::ResidualBlock*> >) const", referenced from:
14): "ceres::Problem::AddResidualBlock(ceres::CostFunction*, ceres::LossFunction*, double*, double*, double*, double*)", referenced from:
15): "ceres::Solver::Summary::Summary()", referenced from:
16): "ceres::Solve(ceres::Solver::Options const&, ceres::Problem*, ceres::Solver::Summary*)", referenced from:
17): "vtable for ceres::QuaternionParameterization", referenced from:
18): "typeinfo for ceres::LocalParameterization", referenced from:
19): "ceres::Problem::RemoveResidualBlock(ceres::internal::ResidualBlock*)", referenced from:
20): "ceres::Problem::~Problem()", referenced from:
21): Linker command failed with exit code 1 (use -v to see invocation)
Thank you!
Could you add more comments to the code...
The matrix V in your paper is a 15x12 matrix,
but why is it a 15x18 matrix in the code?
MatrixXd V = MatrixXd::Zero(15, 18);
V.block<3, 3>(0, 0)  = 0.25 * delta_q.toRotationMatrix() * _dt * _dt;
V.block<3, 3>(0, 3)  = 0.25 * -result_delta_q.toRotationMatrix() * R_a_1_x * _dt * _dt * 0.5 * _dt;
V.block<3, 3>(0, 6)  = 0.25 * result_delta_q.toRotationMatrix() * _dt * _dt;
V.block<3, 3>(0, 9)  = V.block<3, 3>(0, 3);
V.block<3, 3>(3, 3)  = 0.5 * MatrixXd::Identity(3, 3) * _dt;
V.block<3, 3>(3, 9)  = 0.5 * MatrixXd::Identity(3, 3) * _dt;
V.block<3, 3>(6, 0)  = 0.5 * delta_q.toRotationMatrix() * _dt;
V.block<3, 3>(6, 3)  = 0.5 * -result_delta_q.toRotationMatrix() * R_a_1_x * _dt * 0.5 * _dt;
V.block<3, 3>(6, 6)  = 0.5 * result_delta_q.toRotationMatrix() * _dt;
V.block<3, 3>(6, 9)  = V.block<3, 3>(6, 3);
V.block<3, 3>(9, 12) = MatrixXd::Identity(3, 3) * _dt;
V.block<3, 3>(12, 15) = MatrixXd::Identity(3, 3) * _dt;
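One possible explanation for the dimension mismatch (my own reading of the code, not confirmed by the authors): the implementation uses midpoint integration, so each propagation step is driven by the IMU noise sampled at both endpoints of the interval, plus the two bias random walks. That stacks six 3-vectors into an 18-dimensional noise vector, whereas the paper's 15x12 V corresponds to the simpler single-sample (Euler) formulation with only four noise terms:

```latex
\mathbf{n} =
\begin{bmatrix}
n_{a_k}^{\top} & n_{\omega_k}^{\top} & n_{a_{k+1}}^{\top} & n_{\omega_{k+1}}^{\top} &
n_{b_a}^{\top} & n_{b_g}^{\top}
\end{bmatrix}^{\top} \in \mathbb{R}^{18},
\qquad
\delta z_{k+1} = F\,\delta z_k + V\,\mathbf{n}.
```

Under this reading, the column offsets in the code match the ordering above: columns 0-2 and 6-8 take the accelerometer noise at the two endpoints, 3-5 and 9-11 the gyroscope noise (which also enters the position/velocity rows through the rotated-acceleration term R_a_1_x), and 12-17 the bias random walks.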
In ViewController.mm --> process(), when it fails to obtain the newest processed image from image_pool and vins_pool, the function uses the original image instead. Doesn't this cause a time jump in the displayed image, since the original image's timestamp is later than that of the processed image?
Why does the code become very laggy when it is put into a Unity project?
Really, thanks for open-sourcing this code.
1. I want to know which calibration tool you used to calibrate the IMU-camera extrinsics for the phone. Did you use kalibr (https://github.com/ethz-asl/kalibr/wiki/camera-imu-calibration)?
If you used that tool, are these IMU noise parameters correct?
---- imu.yaml ----
update_rate: 100.0 # Hz
accelerometer_noise_density: 0.5 # continuous
accelerometer_random_walk: 0.002
gyroscope_noise_density: 0.2 # continuous
gyroscope_random_walk: 4.0e-5
2. Another question about the coordinate systems. Reading your code, I think the camera frame and the IMU/body frame are like this:
Am I right about that?
Hi,
I was trying to test the project on an iPhone 6 (not 6s), but it always fails (FAIL_IMU, FAIL_SFM...).
I changed the focal lengths and PX, PY to my own values, but it still fails.
Do I need to change any other parameter?
Thanks, and congrats on your awesome work!
STA: FAIL_IMU is always shown on an iPhone 7
When building, an "Expected identifier" error appears here: enum {NO, GAIN, GAIN_BLOCKS };
Hello. I set LOOP_CLOSURE to false in the program and ran only the VIO. After initialization succeeds, the state does not converge: the velocity is fairly large (0.5 m/s), which makes the position keep growing (even when the device is stationary). The accelerometer and gyroscope bias estimates look normal (on the order of 1e-2). Which parameters should I tune?
Hi
I noticed the minimum supported device has become the iPhone 6s. Earlier versions could run on the iPhone 6 and iPad Air 2, and I want to know why this limit was added.
iPhone 7
While running the program, it keeps printing "wait for imu, only should happen at the beginning". Is some configuration needed?
if (!(imu_msg_buf.back()->header > img_msg_buf.front()->header))
{
NSLog(@"wait for imu, only should happen at the beginning");
return measurements;
}
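A hedged explanation of that check (my reading of the pairing logic, simplified to plain timestamps): an image is handed to the estimator only once IMU data newer than the image exists, so the full IMU span covering the image can be bundled with it. If the message keeps printing forever, it usually means IMU messages are not arriving at all (e.g. CoreMotion was not started, or the app is running in the simulator, which has no IMU). A toy sketch of the pairing:

```cpp
#include <deque>
#include <utility>
#include <vector>

// Toy sketch of getMeasurements-style pairing; timestamps stand in for the
// real message headers. An image at time t is emitted only once an IMU
// sample newer than t exists, so the IMU span covering t is complete.
using Pair = std::pair<std::vector<double>, double>;  // (IMU stamps, image stamp)

std::vector<Pair> get_measurements(std::deque<double>& imu_buf,
                                   std::deque<double>& img_buf) {
    std::vector<Pair> measurements;
    while (!imu_buf.empty() && !img_buf.empty()) {
        if (!(imu_buf.back() > img_buf.front()))
            break;  // "wait for imu, only should happen at the beginning"
        // Collect every IMU sample up to (and including) the image time.
        std::vector<double> imus;
        while (imu_buf.front() <= img_buf.front()) {
            imus.push_back(imu_buf.front());
            imu_buf.pop_front();
        }
        measurements.emplace_back(imus, img_buf.front());
        img_buf.pop_front();
    }
    return measurements;
}
```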
enum {NO, GAIN, GAIN_BLOCKS}
expected identifier
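A hedged guess at the cause: when this header is pulled into an Objective-C++ (.mm) translation unit, the Objective-C runtime headers define NO (the boolean literal), so the first enumerator no longer parses as an identifier and clang reports "expected identifier". Renaming the enumerators is the cleanest fix; alternatively the macro can be shielded:

```cpp
// If NO is defined as a macro (as in Objective-C++ builds), temporarily
// remove it so the enumerator parses; restore it afterwards.
#pragma push_macro("NO")
#undef NO
enum { NO, GAIN, GAIN_BLOCKS };  // NO == 0, GAIN == 1, GAIN_BLOCKS == 2
#pragma pop_macro("NO")
```

Note that after `pop_macro`, Objective-C code in the same file can keep using the NO literal, but the enumerator named NO is then only reachable where the macro is not in scope (e.g. plain C++ sources), which is why renaming is usually preferable.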
Hi, thank you for releasing the code. I tried running it on my iPhone 7 Plus; I can see the tracked features, but it seems to fail during initialization. The status constantly switches between FAIL_SFM, FAIL_IMU, and FAIL_RELA. I tried switching between VINS and AR, toggling START/STOP, and hitting REINIT; nothing helps. Is there some specific way the system needs to be initialized?
Why do I get this error:
#include "ceres/split.h" file not found
Both the directory and the file exist.
The opencv2.framework I am using is the one provided on your Baidu netdisk, but it still reports errors.
Hi,
thanks a lot for sharing your implementation. I found it very educational.
I am trying to replicate your results from the paper, namely Figures 7-9 of
"Monocular Visual-Inertial State Estimation for Mobile Augmented Reality".
Would it be possible to get the code for those figures as well?
I am currently trying to reproduce them with your implementation, but keep failing. :(
Thanks in advance,
Kostia.
macOS Sierra 10.12
Xcode 8.2.1
iPhone 6 Plus
iOS 10.0.1
issue #1
Variable length array of non-POD element type 'Quaterniond' (aka 'Quaternion')
when building the project.
Hi, I am having some problems compiling your project in Xcode 8. The error messages tell me that the project is unable to link against ceres-solver, and I also get "Undefined symbols for architecture x86_64". I am using macOS Sierra on a MacBook Pro.
Here is a screenshot attached.
BTW, awesome work, guys.
ceres/sparse_matrix.h file not found
From the Readme, you use a modified opencv framework, and I see in your code that you use it to get each video frame's timestamp. Can you share that modification?
In my tests, this timestamp is about 48 ms earlier than the systemUptime taken inside the processImage function.
If I want to get the real video-frame timestamp on Android, what can I do?