
Comments (8)

stephanweiss commented:

Hi,

I am not familiar with Gazebo, but generally, if you initialize q_ci, p_ci, q_wv, and the scale factor correctly, the filter should work well. I assume you can get these values from your simulation. You may want to verify that the initialized scale (and the other states) is indeed near the true value.
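
For example, in simulation you can sanity-check the visual scale by comparing the path length reported by PTAM with the ground-truth path length. A minimal sketch (the ground-truth topic name is a placeholder; adjust it to whatever your Gazebo setup publishes):

    import rospy
    import numpy as np
    from geometry_msgs.msg import PoseWithCovarianceStamped
    from nav_msgs.msg import Odometry

    vision_path, truth_path = [], []

    def vision_cb(msg):
        p = msg.pose.pose.position
        vision_path.append(np.array([p.x, p.y, p.z]))

    def truth_cb(msg):
        p = msg.pose.pose.position
        truth_path.append(np.array([p.x, p.y, p.z]))

    rospy.init_node('scale_check')
    rospy.Subscriber('vslam/pose', PoseWithCovarianceStamped, vision_cb)
    rospy.Subscriber('ground_truth/state', Odometry, truth_cb)  # placeholder topic
    rospy.sleep(30.0)  # drive the robot around while this collects data

    def path_length(path):
        return sum(np.linalg.norm(b - a) for a, b in zip(path, path[1:]))

    # Ratio of vision distance to metric distance; compare it against the
    # scale you initialized (check which convention your config expects).
    print('vision/metric scale:', path_length(vision_path) / path_length(truth_path))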

Best,
Stephan


From: erwin14 [[email protected]]
Sent: Friday, December 12, 2014 5:33 AM
To: ethz-asl/ethzasl_sensor_fusion
Subject: [ethzasl_sensor_fusion] Problem with the initialization of ethzasl_sensor_fusion (#32)

Hello,
I have problems with the initialization of ethzasl_sensor_fusion. I am trying to run ethzasl_sensor_fusion with ethzasl_ptam in a Gazebo simulation (robot = PR2).
My world frame is a 2D map (the map frame). I'm initializing the filter with the init checkbox in dynamic_reconfigure.

My question is: how do I get the right values for q_ci, p_ci, and q_wv in the pose_sensor_fix.yaml file?
Are there other steps necessary for the initialization, or is it enough to find the correct values for q_ci, p_ci, and q_wv?

I tried the following:

  1. For q_ci and p_ci:
    rosrun tf tf_echo
  2. For q_wv:
    rosrun tf tf_echo => q1
    rosrun tf tf_echo => q2
    rostopic echo vslam/pose => q3
    q_wv = q1 * q2 * q3

I'm not sure if this is correct.
Thanks in advance.



simonlynen commented:

@erwin14 let's assume your measurements are generated by projecting your 2D map into the sensor frame of reference; then your p_ic/q_ic should be the transformation that expresses the pose of the sensor frame of reference w.r.t. the IMU frame of reference.

q_wv should be set to identity (your derivation is not correct). This quantity can be used to estimate a roll/pitch offset between your map frame and the gravity-aligned world frame. You should set up your simulation such that the z-direction of your map is gravity aligned; then this quantity is irrelevant.
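
If your robot model already has the camera and IMU in tf, one way to obtain these values is to query the transform directly. A minimal sketch, assuming hypothetical frame names /imu_link and /camera_link (substitute the ones from your URDF):

    import rospy
    import tf

    rospy.init_node('print_imu_camera_transform')
    listener = tf.TransformListener()
    listener.waitForTransform('/imu_link', '/camera_link',
                              rospy.Time(0), rospy.Duration(5.0))
    # p_ic: position of the camera frame expressed in the IMU frame;
    # q_ic: orientation of the camera frame w.r.t. the IMU frame, as (x, y, z, w).
    (p_ic, q_ic) = listener.lookupTransform('/imu_link', '/camera_link',
                                            rospy.Time(0))
    print('p_ic:', p_ic)
    print('q_ic (x, y, z, w):', q_ic)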


erwin14 commented:

Hi Stephan and Simon,

thanks for your fast response.

What I've learned from other issues is that q_wv is the identity quaternion if the world frame (in my case the 2D map) and the vision frame (the PTAM frame) are both gravity aligned.
But this is not the case in my configuration.

My aim is to use ethzasl_sensor_fusion to get pose estimates of the robot in a 2D map, similar to amcl.

So I'm still not sure how to get the correct values for q_wv and the scale factor.

Thanks in advance.


stephanweiss commented:

Can you elaborate in more detail on why your vision frame is not aligned with gravity in the 2D case?
Is the vision-frame alignment arbitrary and unknown? If so, you may need to observe the acceleration measured by your robot while it is standing still (i.e. measure the gravity vector) and compute the roll and pitch of your vision frame w.r.t. that vector.
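
A minimal sketch of that computation (plain numpy, not part of the package), assuming a single accelerometer sample taken while the robot is perfectly still:

    import numpy as np

    def gravity_alignment_quaternion(accel):
        """Shortest-arc quaternion (x, y, z, w) rotating the measured
        gravity ("up") direction onto the world z-axis; it encodes the
        roll/pitch offset of the frame w.r.t. gravity."""
        a = np.asarray(accel, dtype=float)
        a /= np.linalg.norm(a)          # at rest the accelerometer measures the
                                        # reaction to gravity, i.e. the up direction
        z = np.array([0.0, 0.0, 1.0])   # world up
        v = np.cross(a, z)              # rotation axis (unnormalized)
        w = 1.0 + np.dot(a, z)          # 1 + cos(angle); degenerate if a == -z
        q = np.array([v[0], v[1], v[2], w])
        return q / np.linalg.norm(q)

    # Example: a frame tilted ~10 degrees about x; recover the correcting rotation.
    print(gravity_alignment_quaternion([0.0, 1.70, 9.66]))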

Best,
Stephan



erwin14 commented:

The vision frame (the PTAM frame) is not aligned with gravity because the camera is front-looking.
Could you describe in some more detail how I could compute a pose estimate of the robot in a 2D map with the help of ethzasl_sensor_fusion?

I still have no idea how this could work.

Thanks in advance.


stephanweiss commented:

The orientation of the camera has no influence on whether the vision frame is aligned with gravity or not. I assumed your vision frame is aligned with gravity since you are working on a 2D (gravity-aligned) map. The vision frame is aligned with the dominant plane of the first triangulated features (i.e. no matter how the camera is looking at a gravity-aligned flat floor, the vision frame will be gravity aligned).
So if my assumption about your 2D map is correct, your q_wv will be the identity quaternion.

Best,
Stephan



tmagcaya commented:

So even if the camera is pitched 90 degrees from a downward-looking orientation (thus making it forward-looking) and we use a wall for grid initialization, do we still have to use the identity quaternion?


Booooooosh commented:

Thanks for sharing your work; it really helps me fuse the output of SVO with the IMU!

But I've still run into some problems with the initialization. Sorry, my questions might be lengthy, and thanks in advance for your time:

  1. I've read through this topic but am still confused about q_wv and the scale of the VO; I am not quite sure how to set these two values. The following is the layout of my IMU and camera (all coordinate frames follow the right-hand rule): the IMU frame's z-direction points upward, and the downward-looking camera is rotated 180 degrees about the x-axis, so I initialize q_ci to (0, -1, 0, 0) and p_ci to the translation between the camera and the IMU (a quick sanity check for this quaternion follows after this list). But what about q_wv? Actually, I am confused about what q_wv is.
    [image: camera/IMU frame layout]

  2. About the scale: I read the code and found that the scale has zero propagation dynamics, so it does not change during state propagation. Does that mean I have to set the scale to the exact value of the camera's map scale? If so, how can I get this scale? (I am currently using a monocular camera with the SVO package provided by the Robotics and Perception Group.)

  3. Also, the IMU biases are not updated in the state propagation; is it supposed to be this way? Then why are there bias noise parameters in the yaml file?
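
To double-check the q_ci in point 1, a quick sanity check with tf.transformations (note that whether (0, -1, 0, 0) is right depends on the ordering convention, (w, x, y, z) vs. (x, y, z, w), and that q and -q describe the same rotation):

    import numpy as np
    from tf import transformations

    # Quaternion for a 180-degree rotation about the x-axis.
    q = transformations.quaternion_about_axis(np.pi, (1, 0, 0))
    print(q)  # -> approximately (x, y, z, w) = (1, 0, 0, 0)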

Thanks very much!

Bosch Tang
