
hkust-aerial-robotics / vins-mobile


Monocular Visual-Inertial State Estimator on Mobile Phones

License: GNU General Public License v3.0

C++ 96.35% CMake 1.94% C 0.57% HTML 0.01% Python 0.28% Makefile 0.12% Shell 0.06% Objective-C 0.15% GLSL 0.01% Objective-C++ 0.54%
state-estimation vio vins

vins-mobile's Introduction

VINS-Mobile

Monocular Visual-Inertial State Estimator on Mobile Phones

27 Jun 2017: We upgraded the pose output and AR rendering to 30 Hz by motion-only 3D tracking in the front-end, and improved the loop-closure procedure (see our technical report for details).

22 May 2017: VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator is released. It is the Linux version and is fully integrated with ROS. Available at: link

VINS-Mobile is a real-time monocular visual-inertial state estimator developed by members of the HKUST Aerial Robotics Group. It runs on compatible iOS devices and provides localization services for augmented reality (AR) applications. It has also been tested for state estimation and feedback control of autonomous drones. VINS-Mobile uses a sliding-window optimization-based formulation to provide high-accuracy visual-inertial odometry with automatic initialization and failure recovery. Accumulated odometry errors are corrected in real time using global pose graph SLAM. An AR demonstration is provided to showcase its capability.

Authors: Peiliang LI, Tong QIN, Zhenfei YANG, Kejie QIU, and Shaojie SHEN from the HKUST Aerial Robotics Group

Videos: https://youtu.be/0mTXnIfFisI https://youtu.be/CI01qbPWlYY (Video1 and Video2 for viewers in mainland China)

Related Papers:

If you use VINS-Mobile for your academic research, please cite at least one of our related papers.

1. Build

The code has been compiled on macOS Sierra with Xcode 8.3.1 and tested with iOS 10.2.1 on iPhone7 Plus.

1.1 Install boost for macOS

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
$ brew install boost

1.2 Download the specific opencv2.framework from here, then unzip it to VINS_ThirdPartyLib/opencv2.framework (please make sure you have not installed OpenCV on your macOS)

1.3 In Xcode, select Product -> Scheme -> Edit Scheme -> Run -> Info, and set Build Configuration to Release (not Debug)

1.4 Select your device at the upper left corner, then choose your device size in Main.storyboard, build and run

1.5 Compatible devices and iOS version requirements

iPhone7 Plus, iPhone7, iPhone6s Plus, iPhone6s, iPad Pro
iOS 10.2.1 and above

2. Acknowledgements

We use the Ceres Solver for non-linear optimization and DBoW2 for loop detection.

Thanks to Botao Hu (from Amber Garage) and Yang Liu for their contributions.

3. License

The source code is released under the GPLv3 license.

We are still working on improving the code readability. You are welcome to contribute to VINS-Mobile or raise issues via GitHub, or to contact Peiliang LI <pliapATconnect.ust.hk> or Tong QIN <tong.qinATconnect.ust.hk>.

For commercial inquiries, please contact Shaojie SHEN <eeshaojieATust.hk>.

vins-mobile's People

Contributors

peiliangli, pipikk, qintonguav, shaojie


vins-mobile's Issues

iPhone 6 failing ...

Hi,

I was trying to test the project on an iPhone 6 (not the 6s), but it always fails (FAIL_IMU, FAIL_SFM, ...).

I changed the focal lengths and PX, PY to my own values, but it still fails.

Do I need to change any other parameter?

Thanks, and congrats on your awesome work!

How to use it?

Hi, thank you for releasing the code. I tried running it on my iPhone 7 Plus. I can see the tracked features, but it seems to be failing in initialization: the status is constantly switching between FAIL SFM, FAIL IMU and FAIL RELA. I tried switching between VINS and AR, toggling START/STOP, and hitting REINIT, but nothing seems to help. Is there some specific way the system needs to be initialized?

How to export Camera+IMU data and VINS results?

Hello everyone, I'm trying to modify the code to get the camera images and IMU readings. There is code handling this in the project, but it seems to be commented out and a bit messy. What should I do to get the images and IMU data generated by the device?

Coordinate systems

Hi,

what is the SLAM coordinate system?

I found in the code that the IMU uses this one:
Z^
| /Y
| /
| /
|/--------->X

Is the SLAM system the same?

By the way, the Machine Hall dataset uses this one, right?
/Z
/
/
|/--------->X
|
|
|
Y

Thanks.

KeyFrameDatabase::optimize4DoFLoopPoseGraph error

Hi,
In this function, an error shows up when I Build and Run:
Quaterniond q_array[max_length];
error: variable length array of non-POD element type 'Quaterniond' (aka 'Quaternion<double>')
What should I do to solve this? I followed the same instructions on github in Xcode. Thanks!

about run holokit

I followed the documentation steps you gave to run HoloKit, but I ran into the following problems that I cannot resolve; please help:
1): "vtable for ceres::HuberLoss", referenced from:
2): "ceres::Problem::AddResidualBlock(ceres::CostFunction*, ceres::LossFunction*, double*, double*, double*)", referenced from:
3): "ceres::Solver::Summary::BriefReport() const", referenced from:
4): "ceres::LocalParameterization::~LocalParameterization()", referenced from:
5): "ceres::Problem::SetParameterBlockConstant(double*)", referenced from:
6): "google::log_sinks_global", referenced from:
7): "vtable for ceres::CauchyLoss", referenced from:
8): "ceres::LocalParameterization::MultiplyByJacobian(double const*, int, double const*, double*) const", referenced from:
9): "ceres::Problem::AddResidualBlock(ceres::CostFunction*, ceres::LossFunction*, std::__1::vector<double*, std::__1::allocator<double*> > const&)", referenced from:
10): "ceres::Problem::AddParameterBlock(double*, int)", referenced from:
11): "ceres::Problem::AddParameterBlock(double*, int, ceres::LocalParameterization*)", referenced from:
12): "ceres::Problem::Problem()", referenced from:
13): "ceres::Problem::GetResidualBlocks(std::__1::vector<ceres::internal::ResidualBlock*, std::__1::allocator<ceres::internal::ResidualBlock*> >*) const", referenced from:
14): "ceres::Problem::AddResidualBlock(ceres::CostFunction*, ceres::LossFunction*, double*, double*, double*, double*)", referenced from:
15): "ceres::Solver::Summary::Summary()", referenced from:
16): "ceres::Solve(ceres::Solver::Options const&, ceres::Problem*, ceres::Solver::Summary*)", referenced from:
17): "vtable for ceres::QuaternionParameterization", referenced from:
18): "typeinfo for ceres::LocalParameterization", referenced from:
19): "ceres::Problem::RemoveResidualBlock(ceres::internal::ResidualBlock*)", referenced from:
20): "ceres::Problem::~Problem()", referenced from:
21): Linker command failed with exit code 1 (use -v to see invocation)

Thank you!

confused about covariance calculation in the code

The matrix V in your paper is a 15x12 matrix,

but why is it a 15x18 matrix in the code?

    MatrixXd V = MatrixXd::Zero(15, 18);
    V.block<3, 3>(0, 0)   = 0.25 * delta_q.toRotationMatrix() * _dt * _dt;
    V.block<3, 3>(0, 3)   = 0.25 * -result_delta_q.toRotationMatrix() * R_a_1_x * _dt * _dt * 0.5 * _dt;
    V.block<3, 3>(0, 6)   = 0.25 * result_delta_q.toRotationMatrix() * _dt * _dt;
    V.block<3, 3>(0, 9)   = V.block<3, 3>(0, 3);
    V.block<3, 3>(3, 3)   = 0.5 * MatrixXd::Identity(3, 3) * _dt;
    V.block<3, 3>(3, 9)   = 0.5 * MatrixXd::Identity(3, 3) * _dt;
    V.block<3, 3>(6, 0)   = 0.5 * delta_q.toRotationMatrix() * _dt;
    V.block<3, 3>(6, 3)   = 0.5 * -result_delta_q.toRotationMatrix() * R_a_1_x * _dt * 0.5 * _dt;
    V.block<3, 3>(6, 6)   = 0.5 * result_delta_q.toRotationMatrix() * _dt;
    V.block<3, 3>(6, 9)   = V.block<3, 3>(6, 3);
    V.block<3, 3>(9, 12)  = MatrixXd::Identity(3, 3) * _dt;
    V.block<3, 3>(12, 15) = MatrixXd::Identity(3, 3) * _dt;
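(My reading, not confirmed by the authors: the paper's V multiplies a 12-dimensional noise vector, while the midpoint integration in the code keeps the measurement noise of both interval endpoints separate, so the stacked noise vector has 18 components:)

```latex
\mathbf{n}_{\text{paper}} =
\begin{bmatrix} \mathbf{n}_a \\ \mathbf{n}_w \\ \mathbf{n}_{b_a} \\ \mathbf{n}_{b_w} \end{bmatrix}
\in \mathbb{R}^{12},
\qquad
\mathbf{n}_{\text{code}} =
\begin{bmatrix} \mathbf{n}_{a_k} \\ \mathbf{n}_{w_k} \\ \mathbf{n}_{a_{k+1}} \\ \mathbf{n}_{w_{k+1}} \\ \mathbf{n}_{b_a} \\ \mathbf{n}_{b_w} \end{bmatrix}
\in \mathbb{R}^{18}.
```

The column indices used in the code (0, 3, 6, 9, 12, 15) line up with these six 3-vectors.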

Status is fail...

I started the app on an iPhone 7 Plus. The keypoint detection seems very stable (red points, sometimes blue), but the status is fail_imu, fail_sfm, fail_align...
How do I use this app?

Build Framework failed

I want to build the VINS-ios-framework target into a framework for my project, but the build failed.
// steps
Selected the VINS-ios-framework target
Added VINS_ios - info.plist to the VINS-ios-framework target's info
Build failed (screenshot attached)

Maybe I did something wrong. Could you tell me about this, please?

Can you share your opencv-framework source?

From the README, you have a modified OpenCV framework. I found in your code that you use this OpenCV framework to get each video frame's timestamp. Can you share the modified code?
In my test, that timestamp is about 48 ms earlier than the systemUptime read in the "processImage" function.
If I want to get the real video frame timestamp on Android, what can I do?

====
Hello, thank you very much for open-sourcing such a great resource. I would like to ask about your method of obtaining the video frame timestamp.
Looking at your code, you modified the OpenCV code to obtain the real timestamp of each video frame. In my own tests, this timestamp is about 48 ms earlier than the systemUptime in the processImage method. Could you share the code for obtaining the video frame timestamp?
Also, if I want to implement the same functionality on Android, can I borrow the idea used here?
Thanks, looking forward to your reply.

clang: error: linker command failed with exit code 1 (use -v to see invocation)

Ld /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Products/Release-iphonesimulator/VINS_ios.app/VINS_ios normal x86_64
cd /Users/macbook-liang/Downloads/VINS-Mobile-master
export IPHONEOS_DEPLOYMENT_TARGET=10.0
export PATH="/Users/macbook-liang/Downloads/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Users/macbook-liang/Downloads/Xcode.app/Contents/Developer/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
/Users/macbook-liang/Downloads/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -arch x86_64 -isysroot /Users/macbook-liang/Downloads/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator10.2.sdk -L/Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Products/Release-iphonesimulator -L/Users/macbook-liang/Downloads/VINS-Mobile-master/VINS_ThirdPartyLib/ceres-solver/ceres-bin/lib -L/Users/macbook-liang/Downloads/VINS-Mobile-master/ThirdParty -L/Users/macbook-liang/Downloads/VINS-Mobile-master/Resources -F/Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Products/Release-iphonesimulator -F/Users/macbook-liang/Downloads/VINS-Mobile-master/VINS_ThirdPartyLib -F/Users/macbook-liang/Downloads/VINS-Mobile-master/ThirdParty -filelist /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Intermediates/VINS_ios.build/Release-iphonesimulator/VINS_ios.build/Objects-normal/x86_64/VINS_ios.LinkFileList -Xlinker -rpath -Xlinker @executable_path/Frameworks -mios-simulator-version-min=10.0 -dead_strip -Xlinker -object_path_lto -Xlinker /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Intermediates/VINS_ios.build/Release-iphonesimulator/VINS_ios.build/Objects-normal/x86_64/VINS_ios_lto.o -Xlinker -objc_abi_version -Xlinker 2 -stdlib=libc++ -fobjc-arc -fobjc-link-runtime -Xlinker -sectcreate -Xlinker __TEXT -Xlinker __entitlements -Xlinker /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Intermediates/VINS_ios.build/Release-iphonesimulator/VINS_ios.build/VINS_ios.app.xcent /Users/macbook-liang/Downloads/VINS-Mobile-master/Resources/boost.a -ljpeg -framework opencv2 -framework CoreMotion -framework GLKit -framework CoreGraphics -framework OpenGLES -lceres -framework 
Foundation -framework UIKit -framework AVFoundation -framework AssetsLibrary -framework Accelerate -framework CoreVideo -framework CoreMedia -framework CoreImage -framework QuartzCore -Xlinker -dependency_info -Xlinker /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Intermediates/VINS_ios.build/Release-iphonesimulator/VINS_ios.build/Objects-normal/x86_64/VINS_ios_dependency_info.dat -o /Users/macbook-liang/Library/Developer/Xcode/DerivedData/VINS_ios-aoxynvrebnnkzzbqjqbjamczosua/Build/Products/Release-iphonesimulator/VINS_ios.app/VINS_ios

ld: library not found for -lceres
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Why limit devices?

Hi
I find that the lowest supported device is now the iPhone 6s. Earlier versions could run on the iPhone 6 and iPad Air 2, and I want to know why this limit was added.

a problem about scale initialization

I read your paper and used your method for initialization. I can get the right gyroscope bias, but I can't get the correct scale, and it is very unstable. Here is the result for scale (the correct scale is about 2).

I don't know what is wrong with it. The equation I used is as follows:

Confused about code for initial alignment

Hi, I am a college student from Information Engineering University.
I have read your paper "Robust Initialization of Monocular Visual-Inertial Estimation on Aerial Robots", but I am a little confused about your code for solveGyroscopeBias: I don't know why the matrices A and b are constructed like that. Can you show me the matrix formula for the following code? Thanks!

tmp_A = frame_j->second.pre_integration->jacobian.template block<3, 3>(O_R, O_BG);  
tmp_b = 2 * (frame_j->second.pre_integration->delta_q.inverse() * q_ij).vec();  
A += tmp_A.transpose() * tmp_A;  
b += tmp_A.transpose() * tmp_b;  
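(A hedged sketch, in my own notation rather than the authors', of what those four lines accumulate: linearizing the pre-integrated rotation with respect to a gyroscope-bias perturbation and stacking all consecutive frame pairs gives linear least-squares normal equations.)

```latex
\underbrace{J^{\gamma}_{b_w}}_{\texttt{tmp\_A}}\,\delta b_w
\;\approx\;
\underbrace{2\,\operatorname{vec}\!\left(\hat{\gamma}_{ij}^{-1} \otimes \mathbf{q}_{ij}\right)}_{\texttt{tmp\_b}},
\qquad
A=\sum_{(i,j)} \texttt{tmp\_A}^{\top}\texttt{tmp\_A},\quad
b=\sum_{(i,j)} \texttt{tmp\_A}^{\top}\texttt{tmp\_b},\quad
\delta b_w = A^{-1}b,
```

where \(\mathbf{q}_{ij} = \mathbf{q}_i^{-1}\mathbf{q}_j\) is the relative rotation from vision, \(\hat{\gamma}_{ij}\) is the pre-integrated rotation, and \(\operatorname{vec}(\cdot)\) takes the quaternion's vector part; the factor 2 comes from the small-angle approximation \(\mathbf{q} \approx [1, \tfrac{1}{2}\boldsymbol{\theta}]\).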

Unable to link to ceres-solver library

Hi, I am having a problem compiling your project in Xcode 8. The error messages tell me that the project is unable to link to ceres-solver, and there are also "Undefined symbols for architecture x86_64" errors. I am using macOS Sierra on a MacBook Pro.

Here is a screenshot attached.

BTW, awesome work guys.

Time jump for image?

In ViewController.mm -> process(), when it fails to obtain the newest processed image from image_pool and vins_pool, the function uses the original image. Does this cause a time jump in the displayed image, because the original image's timestamp comes after that of the processed image?

How to calculate/set/adjust the params for a new device, e.g. the iPhone 6 Plus

I made some modifications to support the iPhone 6 Plus, and it runs OK on an iPhone 6 Plus (iOS 10.3.1).
When I tested it, I found that when I turn around and return to the starting point, it does not show the correct result. Is it because I am using the wrong params (the iPhone 6s Plus ones) on the iPhone 6 Plus? If yes, how can I tune and obtain accurate params for the iPhone 6 Plus device? Thanks

Below are my modification

  1. file: global_param.hpp

    enum DeviceType
    {
        iPhone7P,
        iPhone7,
        iPhone6sP,
        iPhone6s,
        // +++++
        iPhone6,
        iPhone6P,
        // +++++ END
        iPadPro97,
        iPadPro129,
        unDefine
    };

  2. file: ViewController.mm

    DeviceType deviceName()
    {
        ...
        // ++++
        else if ([device compare:@"iPhone7,1"] == NSOrderedSame)
        {
            printf("Device iPhone6p\n");
            device_type = iPhone6P;
        }
        // ++++ END
    }

  3. file: global_param.cpp

    bool setGlobalParam(DeviceType device)
    {
        // +++++ params copied from iPhone6sP
        case iPhone6P:
            printf("Device iPhone6P param\n");
            FOCUS_LENGTH_X = 547.565;
            FOCUS_LENGTH_Y = 547.998;
            PX = 239.033;
            PY = 309.452;
            SOLVER_TIME = 0.08;
            FREQ = 3;
            // extrinsic param
            TIC_X = 0.0;
            TIC_Y = 0.065;
            TIC_Z = 0.0;
            return true;
            break;
        // +++++ END
    }

imu-camera extrinsic calibration tools?

Really, thanks for this open code.
1. I want to know which calibration tool you used to calibrate the IMU-camera extrinsics for the phone.
Did you use the kalibr tool (https://github.com/ethz-asl/kalibr/wiki/camera-imu-calibration)?
If you used this tool, are the following IMU noise params right?
----imu.yaml----
update_rate: 100.0 # Hz
accelerometer_noise_density: 0.5 # continuous
accelerometer_random_walk: 0.002
gyroscope_noise_density: 0.2 # continuous
gyroscope_random_walk: 4.0e-5
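(Not an answer from the authors, but for reference: in the kalibr convention these densities are continuous-time quantities, related to the per-sample discrete standard deviations through the IMU sample interval:)

```latex
\sigma_{d} = \sigma_{c}\,\frac{1}{\sqrt{\Delta t}} \quad \text{(noise density)},
\qquad
\sigma_{b,d} = \sigma_{b,c}\,\sqrt{\Delta t} \quad \text{(bias random walk)},
\qquad
\Delta t = \frac{1}{\texttt{update\_rate}} = 0.01\ \mathrm{s}.
```

So accelerometer_noise_density: 0.5 corresponds to a per-sample standard deviation of 0.5/√0.01 = 5 m/s² at 100 Hz, which is a very conservative (inflated) value for a phone IMU.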

2. Another question about coordinate systems. From reading your code, I think the camera frame and the IMU/body frame are like this:

(image attached)

Am I right about that?

OpenCV problem

When building, an "Expected identifier" error appears at: enum { NO, GAIN, GAIN_BLOCKS };

build error

When I build, I get this error (see screenshot).
Please help me, thank you.

replicate results on Machine Hall datasets

Hi,

thanks a lot for sharing your implementation. I found it very educative.

I am trying to replicate your results, namely Figures 7-9 from the paper
"Monocular Visual-Inertial State Estimation for Mobile Augmented Reality".
Would it be possible to get the code for these figures as well?
I am currently trying to produce these results using your implementation, but I am constantly failing. :(

Thanks in advance,
Kostia.

Why can't I initialize?

I tried an iPhone 7 Plus and an iPhone 6 (with the device restriction relaxed); neither could initialize! Moving the device as prompted gave no response either!

Newbie in iOS

I'm a newbie in iOS. I used Xcode to open your project cloned from GitHub. It shows some errors like in the image below.

(screenshot attached)

nothing changed when I run this app

(screenshot attached)
When I run this app on my iPhone 7, nothing happens: no STA is shown even though I have moved around the room. Nothing seems to happen except some tracked features.

VINS for LINUX ?

Hello,

Firstly, congratulations on this amazing work. The accuracy seems amazing from the videos.

I am particularly looking for VINS with a (camera and/or lidar) + IMU setup on Ubuntu.

Can I use your work for my purpose? From a quick look, I understand it is only for iOS devices. Could you provide any relevant references in that direction?

Thanks

Status does not converge after the program runs

Hello. I set LOOP_CLOSURE to false and ran only the VIO. After initialization succeeds, the state does not converge: the velocity is relatively large (0.5 m/s), which then causes the position to keep increasing (even when stationary). The accelerometer and gyroscope bias estimates look normal (on the order of 1e-2). Which parameters should I tune?

Frame skip of AR display?

I find that you choose one frame out of every three, and only compute accurate poses for those frames. So, if the application only displays those frames, it looks like frame skipping. Can this project provide poses at a higher frequency, from the camera or the IMU? I saw this part in your paper: 'Besides pose, we also have accurate velocity and IMU bias estimation, which enables propagating the camera pose with IMU data up to 100 Hz. This ensures a low-latency AR experience.' I don't understand what this part means. Does it mean that this project can provide poses at up to 100 Hz via the IMU, for use by an AR application? Thanks a lot.
