
ethzasl_sensor_fusion's Introduction

ethzasl_sensor_fusion

Time-delay compensated single- and multi-sensor fusion framework based on an EKF

ethzasl_sensor_fusion's People

Contributors

flaviofontana, simonlynen, stephanweiss


ethzasl_sensor_fusion's Issues

ethzasl_sensor_fusion ... no output data

Hello,
we have a disturbing problem here. We've set up ethzasl_ptam and asctec_mav_framework to use them on our Pelican with ethzasl_sensor_fusion.

- ptam delivers the expected data (/vslam/pose).
- mav_framework doesn't give any data when I rostopic echo /fcu/ekf_state_out.
When I launch rosrun asctec_hl_interface plot_position_input I get zero on all the graphs, and it doesn't change if I move the Pelican (/vslam/pose is correctly remapped and the data exists!).

The sensor_fusion initialization is good and there are no errors, but the sensor_fusion doesn't deliver any data on the output, neither on /fcu/ekf_state_in nor on /ssf_core/pose. The echo call stays empty.

We also configured the parameter files as described in the tutorial.

The remaps we did in the sensor_fusion launch file:
<remap from="/pose_measurement" to="/vslam/pose"/>
<remap from="/hl_state_input" to="/fcu/ekf_state_out"/>
The remaps we did in the fcu.launch file:
<remap from="/fcu/pose" to="/vslam/pose"/>
<remap from="/fcu/ekf_state_in" to="/correction"/>

We made all the changes indicated in the tutorials except 5.2.2 (Changing the RX Buffer Behaviour).

Can anybody tell us what the problem is?
We attached our rqt_graph.

Fusing Kinect based 3D pose estimation (visual odometry) and IMU

Here I am trying to use the ethzasl_sensor_fusion package to fuse an IMU with a visual odometry method on the Kinect. Specifically, I am using the ccny_rgbd ROS package to estimate the 3D pose of the Kinect, and an Xsens IMU with ethzasl_xsens_driver.

Q1. One observation is that visual odometry always (from my observations) under-estimates the actual motion; for example, if the Kinect is moved 1 metre, the visual odometry (VO) method reports only 85-90 cm. I think the IMU always over-estimates the motion, reporting more than the traversed distance. So I assume that fusing VO and IMU can give better results, and I am heading in this direction. Do you think this makes sense?

My Kinect + IMU configuration is shown below
(photo of the Kinect + IMU rig attached)

In this case, the coordinate frames of IMU and Kinect (the pose frame from VO) align with each other and thus the rotation is the same as the default one. I just added a rough translation between the devices.

I tried moving the Kinect + IMU rig starting from the forward-looking direction:
Q2.1 Initial values of the filter estimates are not good, i.e. if the VO says 0.01 m in x, the filtered output says 0.06 m in x.
Q2.2 After some random motion, the EKF is able to give approximate estimates, but they are not as good as VO. For example, an actual translation of 1 m is reported as 80-82 cm by the filter and 85-90 cm by VO. The filtered value still underestimates the VO value, which is itself an underestimate in the first place. Could you point me to some directions to explore?

Q3. What kind of excitation or movement is required for the above setup for filter convergence?

Q4. What is the importance of the covariance values from the Kinect's pose and the IMU? Right now I am setting the pose covariance values from reconfigure_gui (0.01 for both meas1 and meas2).

Q5. Most importantly, the pose update from VO is around 8 Hz. Does this affect the accuracy of the filter? What can I do to cope with this? Should I use a bag file and run the algorithm slowly, or do you have any other suggestions?

Thank you for your time and effort,

Good day!
Sai Manoj

Some warning && matrix R computation && residuals

1. When I run the program, the following warnings appear many times:
[ WARN] [1381953346.986521136]: fuzzy tracking triggered: 0.783476 limit: 0.1
[ WARN] [1381953347.654354944]: Negative scale detected: -0.0697402. Correcting to 0.1

2. Also, I didn't find the related paper which describes how to compute the R matrices.

3. Comparing viconpose_sensor.cpp from the second tutorial with pose_sensor.cpp, I found a difference in how the residuals are constructed:
pose_sensor.cpp: r_old(6, 0) = -2 * (state_old.q_wv_.w() * state_old.q_wv_.z() + state_old.q_wv_.x() * state_old.q_wv_.y()) / (1 - 2 * (state_old.q_wv_.y() * state_old.q_wv_.y() + state_old.q_wv_.z() * state_old.q_wv_.z()));
viconpose_sensor.cpp: q_err = (state_old.q_wv_ * state_old.q_ * state_old.q_ci_).conjugate() * z_q_;
r_old(6, 0) = -2 * (q_err.w() * state_old.q_wv_.z() + state_old.q_wv_.x() * state_old.q_wv_.y()) / (1 - 2 * (state_old.q_wv_.y() * state_old.q_wv_.y() + state_old.q_wv_.z() * state_old.q_wv_.z()));

I am not sure which one is right because I don't know the meaning of the last element in the residual vector (the first three are position and the second three are attitude).
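For reference, both expressions look like small-angle versions of the yaw angle extracted from a quaternion. A minimal Eigen sketch of the exact yaw computation (my own illustration, not code from the package):

#include <Eigen/Geometry>
#include <cmath>

// Yaw (rotation about z) of a quaternion, from the standard ZYX Euler
// decomposition: yaw = atan2(2(w*z + x*y), 1 - 2(y^2 + z^2)).
double yawOf(const Eigen::Quaterniond &q)
{
  return std::atan2(2.0 * (q.w() * q.z() + q.x() * q.y()),
                    1.0 - 2.0 * (q.y() * q.y() + q.z() * q.z()));
}

// Hypothetical helper: yaw residual between a measured attitude z_q and a
// predicted attitude q_pred, computed via the error quaternion.
double yawResidual(const Eigen::Quaterniond &q_pred, const Eigen::Quaterniond &z_q)
{
  const Eigen::Quaterniond q_err = q_pred.conjugate() * z_q;
  return yawOf(q_err);
}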

Thanks in advance.

Coordinate system confusion


Hi Stephan,

I'm a graduate student from China, working on autonomous quadrotors. Your published masterpieces & software packages are really helpful and they are guiding our research.

Recently we got confused about the coordinate system used in the ethzasl_sensor_fusion package. According to the user manual provided by AscTec (http://www.asctec.de/downloads/manuals/AscTec_SDK2.0_Manual_V1.1.pdf), an NED coordinate system is used. However, in the dataset.bag posted on the ROS website, the acc-z reading is about +9.8 when the replay starts, which looks like ENU to us. We've tried them all, making our homemade flight control board send raw IMU readings in each coordinate system: if we send ENU (acc-z ~ +9.8), after initializing there is always a warning message indicating "fuzzy tracking", and sometimes a "negative scale detected" warning. On the other hand, the NED one (acc-z ~ -9.8) seems to be OK without any warning messages (using ethzasl_ptam as the update sensor).
Has AscTec's coordinate system changed?

In your thesis, the body-fixed coordinate frame of the IMU is not a right-handed Cartesian frame; is that the old-style frame used by AscTec? Moreover, except for the world frame, the other frames are all plotted as left-handed. Can you give some hints about this?
Really looking forward to your reply, Thanks a lot!

Zhang Xu
from Tianjin University, CHINA

ethzasl_sensor_fusion tutorial is not working properly

Forwarded communication from Gmail

From: Ahmed D. Hunzai
Sent: Monday, January 21, 2013 12:14 AM
To: Weiss Stephan; Achtelik Markus

Hi,

Thanks for sharing the framework and such nice tutorials. I intend to use the ethzasl_sensor_fusion package to integrate my 6DoF visual odometry and an IMU. I have tried the tutorial and got it working, but have a problem with the results (ssf_core/state_out etc.): mostly it shows nothing on rxplot even after following all the instructions in the tutorial; sometimes it shows good results for a few seconds and then gives the error below. A screenshot of the rxgraph is attached.

My second question is about using the package with my own visual odometry and IMU; initially I will use a fixed covariance. Please confirm whether the following steps are correct (using the viconpose_ example given in the second tutorial):

  • Remap the visual odometry and IMU readings to ssf_core/viconpose_measurement and ssf_core/imu_state_input respectively.
  • Change the measurementCallback in the viconpos_sensor.cpp file according to the visual odometry message.
  • Adjust the values in the .yaml file.

My third question is: how can pose_sensor be used with 6DoF visual odometry?

Can you please share code that runs correctly with the dataset given in the tutorials?


subscribed to topics:
/auk/fcu/imu
/ssf_core/hl_state_input
/vicon/auk/auk
advertised topics:
/rosout
/ssf_core/state_out
/ssf_core/correction
/ssf_core/pose
/ssf_core/ext_state
/viconpos_sensor/parameter_descriptions
/viconpos_sensor/parameter_updates

[ INFO] [1358713846.549348267]: filter initialized to:
position: [0, 1.42598e-311, -6.19637e-42]
scale:1
attitude (w,x,y,z): [-3.58916e-45, -1.6005e-41, -4.27156e-42, -1.70691e-41
p_ci: [0, 0, 0
q_ci: (w,x,y,z): [1, 0, 0, 0]
[ WARN] [1358713866.497728980]: large time-gap re-initializing to last state

=== ERROR === update: NAN at index 0
=== ERROR === update: NAN at index 0
=== ERROR === prediction p: NAN at index 0
=== ERROR === update: NAN at index 0
......


Thanks in advance.

cheers,
Hunzai

------------------------------- ==== Answer ==== ---------------------------------
from: Weiss Stephan

Dear Hunzai,

Thank you for the feedback and for using our framework. If the rxplots do not show anything but the time-line still advances in the plot this means that the filter throws NANs somewhere. Code and tutorial should work - unless an error slipped in when I was testing it.

  1. NAN errors: your attitude is initialized to [0 0 0 0], which should be a rotation quaternion (i.e. norm = 1). Please check whether the initialization file viconpos_measurements.h correctly initializes the attitude quaternion q (a small sketch of such a check is given after this list). If you think it does, then please add the following output to the ROS_INFO_STREAM in viconpos_measurements.h:
        "q_cv: (w,x,y,z): [" << q_cv_.w() << ", " << q_cv_.x() << ", " << q_cv_.y() << ", " << q_cv_.z() << "]" << std::endl <<
        "q_wv: (w,x,y,z): [" << q_wv_.w() << ", " << q_wv_.x() << ", " << q_wv_.y() << ", " << q_wv_.z()  << "]"<< std::endl <<

and send me the output.

  2. Using your VO/IMU: the general steps you describe are correct. However, I think your VO computes a 6DoF pose, whereas the tutorial is made for a 3DoF position. Therefore, you may actually want to do similar steps as described in the tutorial but using the pose_sensor files as a basis (instead of the position_sensor files as in the tutorial). Keep in mind that the values you should change in the *_fix.yaml file are the camera-IMU calibration parameters (q_ci, p_ci), the delay and the noise of your VO. If the filter does not do what you expect, usually q_ci is not set correctly. The noise parameters should be computed once and then kept fixed.

  3. Using VO as 6DoF: see 2); the pose (=6DoF) sensor module essentially provides everything you need, if I am not mistaken.
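Regarding 1), a minimal sketch of such a sanity check (illustrative only; these names are not from ssf_core) could look like:

#include <Eigen/Geometry>
#include <ros/console.h>

// Guard against a zero or unnormalized attitude quaternion before it is
// used to initialize the filter, which would otherwise propagate NANs.
Eigen::Quaterniond safeAttitude(const Eigen::Quaterniond &q_meas)
{
  if (q_meas.norm() < 1e-6)
  {
    ROS_WARN("measured attitude quaternion has (near) zero norm, using identity");
    return Eigen::Quaterniond::Identity();
  }
  return q_meas.normalized();
}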

Please let me know if the above helps and how things are going.

Best
Stephan

PS: Do you mind posting this conversation as an issue on github? This way, other users have access to the information as well.

Performance Expectation Issues

Hi,
First of all I want to say thank you for creating such an amazing algorithm, making it open source and then continuously maintaining it and addressing the issues being faced by different people.

I want to ask a few questions related to the performance of this algorithm. I have tested it with the dataset that you have provided.

a. Using the dataset that you have provided and testing the performance of the filter, I found that the gyro bias and acc bias values do not show a converging trend as I expected; rather, the values hover at values as high as 0.015 and 0.3 respectively. Is this right? Shouldn't they show some converging trend?
(plots of the gyro bias and accelerometer bias estimates attached)

b. In your dissertation you mention that in the absence of vision data, the IMU can control the robot. But upon skipping the vicon readings in the code, I found that the position in /msf_core/odometry keeps increasing in whatever direction it is moving. Is this expected, or am I making some mistake? (A small numeric sketch of IMU-only drift is given after this list.)

c. Upon checking the output orientation, the roll, pitch and yaw values are too big, as high as 3 rad/sec. Are these correct values, or am I making some mistake again?
(orientation plot attached)

d. I am planning to use this algorithm with an unmanned ground vehicle, so basically I will have accelerations in one plane only and angular velocity along yaw. Do I need some modifications? What advice can you give me regarding my use case?

e. If I need to use this algorithm without ROS, is there anything in particular that I need to take care of? I know through another issue that your algorithm isn't dependent on ROS.
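For intuition on (b): pure IMU dead reckoning integrates any residual acceleration error twice, so position drift grows quadratically with time. A tiny sketch, assuming a constant residual accelerometer bias of 0.3 m/s^2 (the value mentioned in (a)); the numbers are illustrative only:

#include <cstdio>

int main()
{
  const double bias = 0.3;  // assumed constant residual accelerometer bias, m/s^2
  // Double integration: position error ~ 0.5 * bias * t^2.
  for (int t = 1; t <= 5; ++t)
    std::printf("t = %d s -> position drift ~ %.2f m\n", t, 0.5 * bias * t * t);
  return 0;
}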

Thank you in advance for your time

Noise configuration

Hi,

I'm running the filter together with a ptam-like 6DoF vision pose update. Initial tests suggest the filter is working in principle, but there are some issues. For example, the scale very quickly converges to an unchanging value close to the initialization, leading me to believe the filter is getting overconfident about the variance of the scale and other states.

As suggested in the tutorials, my next goal is to set the noise parameters as accurately as possible. However, I'm not sure I can interpret the manufacturer's information correctly. I don't have the calibration certificate handy, so I'm relying on the manual for now until I get my hands on the certificate. The attached image shows the sensor noise information from the manual of my Xsens MTi IMU, and the following is what I made of it:

 noise_acc: 0.002
 noise_accbias: 0.00004 # 0.002 * 0.02 ???
 noise_gyr: 0.00087     # 0.05 degrees in radians
 noise_gyrbias: 0.00087 # 0.00087 * 1 ???
 noise_scale: 0.0       # ??
 noise_qwv: 0.0         # ??
 noise_qci: 0.0         # ??
 noise_pic: 0.0         # ??
 delay: 0.00
 meas_noise1: 0.01      # ??
 meas_noise2: 0.02      # ??
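(For reference, the gyro value above is just the datasheet number converted from degrees to radians; a tiny sketch of that conversion, using the value from the comment:)

#include <cmath>
#include <cstdio>

int main()
{
  // Gyro noise figure as read from the manual, in degrees.
  const double gyr_noise_deg = 0.05;
  const double gyr_noise_rad = gyr_noise_deg * M_PI / 180.0;  // ~0.00087
  std::printf("noise_gyr: %.5f\n", gyr_noise_rad);
  return 0;
}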

In particular I'm not sure how to deduce the bias noises from the info in the attached image.

What about the noises for scale, world-vision drift, camera-IMU rotation and IMU-camera translation? I guess these set the process noise during prediction. In all the examples I have seen, they were set to 0. When would it be appropriate to change them to non-zero values? (On a side note, why is the inter-sensor rotation given in the "ci" direction and the translation in "ic" for the noises?)

My 6DoF pose sensor does not give me useful variance information at the moment, so I pulled the numbers out of thin air. Can you suggest a reasonable method to determine the accuracy of a black-box 6DoF sensor? Are there maybe typical signs of over- or under-confident measurement updates (I guess overconfident is more of a problem)?

Any hints and partial answers are greatly appreciated as always.

Best regards,
Nikolaus

(Xsens MTi noise specification image attached)

Bag file with sample video and IMU data

Hi,
Thank you for sharing this framework and for writing tutorials on how to use it.

In the tutorial http://www.ros.org/wiki/ethzasl_sensor_fusion/Tutorials/sfly_framework it is assumed that you have a flying platform. Wouldn't it be nice to have some sample bag files from your platform with video, gyroscope and accelerometer recordings? Maybe I missed them, but I couldn't find any. The dataset http://www.ros.org/wiki/ethzasl_sensor_fusion/Tutorials/getting_started?action=AttachFile&do=view&target=dataset.bag provides only IMU data.

With a bag file from you, people could test whether their build works on data that is known to produce good results.

Regards,
Boian

sensor fusion + ptam + IMU setup issue

Hi, All,

Apologies if my questions seem silly or have been answered in other threads. I am trying to use sensor fusion + ptam on my quadrotor (not AscTec) with a front-facing camera. There are related questions here, #20 and #8, asked by other users. However, I still can't get it to work, as there seems to be no consolidated answer. Here are my questions:

  1. What is the coordinate frame used by the camera? I assume that the Z-axis points out from the camera lens. Then I use the right-hand rule to deduce the X and Y axes. Is this correct?

  2. For the IMU, I am using a MicroStrain unit and it uses the NED frame. I edited the ROS code such that it outputs /imu/data in the ENU frame. Assume the IMU's X-axis points to the front (same direction as the front-facing camera) and its Z-axis points up out of the quadrotor's body. What is the correct transformation between the camera and IMU in the yaml file? (See the sketch after this list.)

  3. Once PTAM is initialized, PTAM's camera frame seems arbitrarily oriented with respect to the world frame. How can I make my camera frame gravity-aligned, assuming my quadrotor rarely rolls/pitches a lot? Or should the world frame be gravity-aligned?

  4. Regarding the scale lambda: what I understand is that P_metric * lambda = P_sensor, where P_metric is the position from /ssf_core/pose and P_sensor is the position from /vslam/pose. Is this the correct interpretation?

  5. My goal is to use this setup to navigate a quadrotor in a small indoor environment. The obvious input I can use is /ssf_core/pose, as it gives the quadrotor's position in metric form. I notice that there is no frame id for /ssf_core/pose; is it specified in the world frame used in PTAM? If so, how can I use it for control, given that the world frame is not gravity-aligned?
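Regarding question 2, here is a sketch of how a camera-to-IMU rotation could be assembled from assumed axis directions and read off as a quaternion. The axis choices below are made-up placeholders rather than a verified calibration, and whether the yaml file expects this rotation or its conjugate is something to double-check against the pose_sensor documentation:

#include <Eigen/Geometry>
#include <iostream>

int main()
{
  // Assumed mounting (for illustration only): camera z points forward (IMU x),
  // camera x points right (IMU -y), camera y points down (IMU -z).
  // Replace these columns with your actual axis directions.
  Eigen::Matrix3d R;                       // columns = camera axes in IMU coordinates
  R.col(0) = Eigen::Vector3d(0, -1,  0);   // camera x
  R.col(1) = Eigen::Vector3d(0,  0, -1);   // camera y
  R.col(2) = Eigen::Vector3d(1,  0,  0);   // camera z

  const Eigen::Quaterniond q(R);
  std::cout << "q (w,x,y,z): " << q.w() << " " << q.x() << " "
            << q.y() << " " << q.z() << std::endl;
  return 0;
}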

Thanks a lot for the help!

What kind of data is position of vicon?

Hi there,
thanks for this great framework. I am new to this topic, but after a while of working and trying I got the tutorial running. Now I want to modify the framework to use a GPS sensor instead of vicon to measure position. I was just wondering what kind of position data is given in the test file dataset.bag? Is it just the difference to the last position for x, y and z in metres?
I really would appreciate your help!

Kind regards
Philipp

Getting started with ethzasl_sensor_fusion

Forwarded communication from Gmail. Could be useful for everyone.

Aswin Thomas [email protected]
10:39 AM (22 hours ago)

to stephan.weiss
Hi Stephan,
Your sensor fusion ROS package looks awesome and I would like to try it out. I have tried ethzasl_ptam and got it working. I want to try the effect of adding an IMU to the system. I have no robot and just want to try a handheld approach in an indoor environment.

  1. Should I be using ssf_updates?
  2. Should pose_measurement be mapped to vslam/pose? (ptam topic)
  3. For imu_state_input I am using the xsens_driver at 100 Hz. Is this OK? I understand 1 kHz is ideal, but I just want to try the system. Is there any other IMU you recommend?
  4. How do I publish to hl_state_input? Not sure how to run the sensor_fusion_comm.

Is it necessary to have an asctec quadcopter?

Regards

Aswin

Weiss Stephan
1:04 AM (8 hours ago)

Hi Thomas,

Thank you for using my framework and I am glad that ethzasl_ptam is already working for you.

If you are aiming at a self-calibrating state-estimation framework using IMU and ethzasl_ptam then you should definitely use ethzasl_sensor_fusion (i.e. a sensor module in ssf_updates). Note that this is a loosely coupled system - if you want to improve PTAM performance by including IMU information (i.e. tightly coupled approach) ethzasl_sensor_fusion is not necessarily the way to go.

Assuming that you have a self-calibrating state-estimation framework in mind:

  1. ethzasl_sensor_fusion consists of a core part, which always uses the IMU as an EKF propagation sensor, and an "update" part. In the "update" part you define your own update sensor module (such as a 6DoF pose sensor when using ethzasl_ptam). If you use ethzasl_ptam then you can directly use the provided module "pose_sensor".

  2. yes, that is the input for the update-callback for the EKF

  3. The mapping is correct. Be aware that 100Hz is slow but ok as long as the vision update is slower. We usually use the IMU on the asctec helicopters, but any IMU is fine - the filter should be able to deal with significant noise levels of poor IMUs.

  4. You do not need to publish hl_state_input. This is an alternative topic to imu_state_input, published by asctec_mav_framework when using AscTec helicopters and doing the EKF prediction step on the helicopter's microcontroller. If you are not using AscTec helicopters then the general IMU topic imu_state_input should be used.

You are perfectly fine not using asctec helicopters. You just feed the IMU prediction topic to imu_state_input and the camera update topic (vslam/pose) to pose_measurement. If you go to flying platforms the asctec ones just provide a good interface (1kHz IMU) and things are tested and verified with our framework.

If you don't mind, could you post this to GitHub such that we have a ticket other users can also consult?
Also, please check the tutorials for ethzasl_sensor_fusion on www.ros.org/wiki/ethzasl_sensor_fusion/Tutorials and, if you have time, please let me know how we can improve them to answer your questions there.

Lots of success with our framework, and let me know if I can help you with anything else.
Stephan

Estimation Jitter Causing Instability

Hi Again,

As my plots in #113 showed I was seeing spikes in the state estimate for no apparent reason.

(state-estimate plots attached)

I've done some more digging and I still can't find the reason for them. I tried a second IMU (first consisted of ADXRS642 and ADIS16006, now an MPU6000) to no avail. The rqt_plot for the IMU shows smooth readings with no jumps. Similarly, the rqt_plot for the ptam vslam/pose topic also shows no problems, everything is smooth. The IMU is updating at ~83Hz (former IMU was 90Hz), and the visual slam at 18Hz according to rostopic hz.

Somewhere along the line it is being corrupted and directly influencing the state estimates. Sometimes the spike is temporary and looks to be for one pose update. Other times it seems as though the spike influences the estimate and actually causes the filter to go unstable (velocities spike to 50 000+, position also spikes up). Occasionally it seems to converge on a value that is not correct (like 500 m, 300 m, 100 m for states 0, 1, 2). This happens without any external sensor input (i.e. the UAV is sitting there untouched).

Interestingly enough this behaviour occurs both in SSF (ethzasl_sensor_fusion) and MSF, so I'm leaning toward it being a problem with my setup rather than one with the filter itself (that and it used to not have all these spikes). But all the debugging I've done shows the values getting to the sensor as anticipated.

The covariance values are the defaults.

I'll update with a more recent picture shortly.

Adding two position sensors within the ssf_update

Hi,
for localization on the AscTec Pelican, we would like to use two different position sensors (one at 10 Hz, the other at 50 Hz).
Both provide absolute position of the UAV with respect to the world reference system.

We already wrote the two callbacks and the data are received correctly within the ssf_update.

We estimated the standard deviation for both sensors and used fixed covariance within the ssf_update.

So we modified the code as follows:

...
#define N_MEAS 12 // measurement size (x,y,z from the first sensor; x,y,z from the second sensor; the others concern the artificial measurements that we keep fixed)

...
...

callback_faster(...)
{
    // measurements
    z_p_ = Eigen::Matrix<double,3,1>(msg->position.x, msg->position.y,msg->position.z);
    z_p_2 = Eigen::Matrix<double,3,1>(msg2->position.x, msg2->position.y,msg2->position.z);

    ...

    // H matrix (we use the same expression since both sensors provide the absolute position of the UAV)
    H_old.block<3,3>(0,0) = C_wv.transpose()*state_old.L_; // p first sensor
    H_old.block<3,3>(0,0) = C_wv.transpose()*state_old.L_; // p
    H_old.block<3,3>(0,6) = -C_wv.transpose()*C_q.transpose()*pci_sk*state_old.L_; // q
    H_old.block<3,1>(0,15) = C_wv.transpose()*C_q.transpose()*state_old.p_ci_ + C_wv.transpose()*state_old.p_; // L
    H_old.block<3,3>(0,16) = -C_wv.transpose()*skewold; // q_wv

    H_old.block<3,3>(3,0) = C_wv.transpose()*state_old.L_; // p second sensor

    ...

    // residual
    r_old.block<3,1>(0,0) = z_p_ - C_wv.transpose()*(state_old.p_ + C_q.transpose()*state_old.p_ci_)*state_old.L_;
    r_old.block<3,1>(3,0) = z_p_2 - C_wv.transpose()*(state_old.p_ + C_q.transpose()*state_old.p_ci_)*state_old.L_;

    ...
}

With this, the output of the filter diverges even though the two sets of position measurements are very close to each other.
Are we missing something?
Maybe we also have to add:

    H_old.block<3,3>(3,0) = C_wv.transpose()*state_old.L_; // p
    H_old.block<3,3>(3,6) = -C_wv.transpose()*C_q.transpose()*pci_sk*state_old.L_; // q
    H_old.block<3,1>(3,15) = C_wv.transpose()*C_q.transpose()*state_old.p_ci_ + C_wv.transpose()*state_old.p_; // L
    H_old.block<3,3>(3,16) = -C_wv.transpose()*skewold; // q_wv

Do you have any suggestions?

Thanks

Problem with serial communication

Hi,
we experienced some problems with the serial communication between our Mastermind and the sensor fusion module. The problems are the following: RX timeouts on the asctec_hl_interface at the maximum baudrate, and NaN errors in the sensor fusion. We already patched the kernel, and we use the following config file for the hl_interface:

serial_port: /dev/ttyS2
baudrate: 921600
frame_id: fcu
packet_rate_imu: 0.0
packet_rate_rc: 20.0
packet_rate_gps: 5.0
packet_rate_ssdk_debug: 10.0
packet_rate_ekf_state: 300.0

and the default one for the sensor fusion.

With this configuration no problems occurred for around 20 minutes, during which packet_rate_ekf_state was dynamically reconfigured to 700 and 1k without any problem.

Then we reran the asctec_hl_interface and switched packet_rate_ekf_state to 1 kHz, then ran the sensor fusion, and after around 2-3 minutes the filter diverged, giving:

=== ERROR === before prediction p,v,q: NAN at index 0
=== ERROR === before prediction p: NAN at index 0
=== ERROR === prediction p: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === update: NAN at index 0

So we restarted both the hl_interface and the filter with the same config as in the first experiment (921600 and 300 Hz), and after a short time the hl_interface gave this error:

rx timeout
[ WARN] [1372327067.024112282]: No new valid packets within the last 1.000000 s
[ WARN] [1372327068.024801337]: No new valid packets within the last 1.000000 s
[ WARN] [1372327069.024673660]: No new valid packets within the last 1.000000 s
[ WARN] [1372327070.024373841]: No new valid packets within the last 1.000000 s
[ WARN] [1372327071.023930559]: No new valid packets within the last 1.000000 s
[FATAL] [1372327071.023997467]: No valid packets within the last 5.000000 s, aborting !

Now if we rerun the hl_interface we get:

waiting for acknowledged packet timed out
rx timeout
waiting for acknowledged packet timed out
waiting for acknowledged packet timed out
rx timeout
waiting for acknowledged packet timed out
waiting for acknowledged packet timed out
[ERROR] [1372327365.126108421]: failed
[ERROR] [1372327365.126243285]: unable to connect
rx timeout
rx timeout
[fcu-1] process has died [pid 7152, exit code 255, cmd /home/asctec/ros_workspace/asctec_mav_framework/asctec_hl_interface/bin/hl_node __name:=fcu __log:=/home/asctec/.ros/log/b983e8ec-df0b-11e2-9ba7-000e8e313de3/fcu-1.log].
log file: /home/asctec/.ros/log/b983e8ec-df0b-11e2-9ba7-000e8e313de3/fcu-1*.log
all processes on machine have died, roslaunch will exit

The only way to solve this problem is to power-cycle the autopilot, but if we rerun the hl_interface (921600 and 300 Hz) and the sensor fusion, we still get, after a short time, the following on the hl_interface:

rx timeout
[ WARN] [1372327067.024112282]: No new valid packets within the last 1.000000 s

Doing the same again (power-cycling the autopilot and rerunning the hl_interface (921600 and 300 Hz) and the sensor fusion) showed no problems for around 5 minutes, after which packet_rate_ekf_state was dynamically reconfigured to 1 kHz and no problem occurred for around 7 minutes.

So we stopped the sensor fusion and hl_interface again, reran the asctec_hl_interface and switched packet_rate_ekf_state to 1 kHz. Immediately after initializing the filter we got NAN errors.

We did the same again and after five minutes we got:

[ WARN] [1372329184.260457270]: fuzzy tracking triggered: 0.699022 limit: 0.1

[ WARN] [1372329184.290052371]: Negative scale detected: -0.254625. Correcting to 0.1
=== ERROR === prediction done P: INF at index 78
=== ERROR === prediction done P: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === prediction done P: INF at index 130
=== ERROR === prediction done P: NAN at index 0
=== ERROR === update: NAN at index 0
=== ERROR === before prediction p: NAN at index 0
=== ERROR === prediction p: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === update: NAN at index 0
=== ERROR === before prediction p,v,q: NAN at index 0
=== ERROR === before prediction p: NAN at index 0
=== ERROR === prediction p: NAN at index 0
=== ERROR === prediction done P: NAN at index 0
=== ERROR === update: NAN at index 0

After this, if we rerun the hl_interface and the sensor fusion using a baudrate of 460800 and a data rate of 300 Hz, the system looks pretty rock solid, but packet_rate_ekf_state can't be put to 350, for example, or NAN errors occur shortly after the filter is initialized.

The drone was always stationary, the Mastermind was never restarted during these experiments, and rosrun ssf_core plot_relevant was always started after the sensor fusion to check for strange behavior.

What do you think about these problems?

Have you ever tested the asctec mastermind?

Thanks in advance.

Best regards.

ERROR: Cannot load message class for [sensor_fusion_comm/ExtEkf]. Are your messages built?

Hi,

I'm trying to implement an Ekf sensor fusion on the asctec_pelican.
My mapping and roslaunch seems OK. (More description of my setup can be found here : http://asctec-users.986163.n3.nabble.com/Issue-on-using-ethzasl-sensor-fusion-with-HighLevel-EKF-mode-td4024666.html).

When trying to rostopic echo /state_out or /ext_state, I get the following error:

ERROR: Cannot load message class for [sensor_fusion_comm/ExtEkf]. Are your messages built?

However, when I rosmake the ethzasl_sensor_fusion package I get no error.

Do you have any idea?

Thanks,

Sensor fusion crash on ARM based machine

Hi, as mentioned in an earlier issue, I was attempting to run ssf_update on a PandaBoard ES running Ubuntu 12.04 and get a SIGBUS error before I even initialise the fusion. The backtrace is below

[ INFO] [1370922064.866466859]: interpreting measurement as sensor w.r.t. world
[ INFO] [1370922064.869182924]: using covariance from sensor
[ INFO] [1370922065.067577699]: Filter type: pose_sensor
[ INFO] [1370922065.068645814]: /pose_sensor:
subscribed to topics:
/imu/data
/ssf_core/hl_state_input
/vslam/pose
advertised topics:
/rosout
/ssf_core/state_out
/ssf_core/correction
/ssf_core/pose
/ssf_core/ext_state
/pose_sensor/parameter_descriptions
/pose_sensor/parameter_updates

Program received signal SIGBUS, Bus error.
allInOne<ros::serialization::IStream, geometry_msgs::Quaternion_<std::allocator >&> (m=..., stream=)
at /opt/ros/fuerte/include/geometry_msgs/Quaternion.h:215
215 stream.next(m.z);
(gdb) bt
#0 allInOne<ros::serialization::IStream, geometry_msgs::Quaternion_<std::allocator >&> (m=..., stream=)

at /opt/ros/fuerte/include/geometry_msgs/Quaternion.h:215

#1 read<ros::serialization::IStream, geometry_msgs::Quaternion_<std::allocator > > (t=..., stream=)

at /opt/ros/fuerte/include/geometry_msgs/Quaternion.h:219

#2 deserialize<geometry_msgs::Quaternion_<std::allocator >, ros::serialization::IStream> (t=..., stream=)

at /opt/ros/fuerte/include/ros/serialization.h:161

#3 next<geometry_msgs::Quaternion_<std::allocator > > (t=...,

this=<synthetic pointer>) at /opt/ros/fuerte/include/ros/serialization.h:716

#4 allInOne<ros::serialization::IStream, sensor_msgs::Imu_<std::allocator >&> (

m=..., stream=<synthetic pointer>) at /opt/ros/fuerte/include/sensor_msgs/Imu.h:291

#5 read<ros::serialization::IStream, sensor_msgs::Imu_<std::allocator > > (

t=..., stream=<synthetic pointer>) at /opt/ros/fuerte/include/sensor_msgs/Imu.h:299

#6 deserialize<sensor_msgs::Imu_<std::allocator >, ros::serialization::IStream> (

t=..., stream=<synthetic pointer>)
at /opt/ros/fuerte/include/ros/serialization.h:161

#7 ros::SubscriptionCallbackHelperTboost::shared_ptr<sensor_msgs::Imu_<std::allocator const> const&, void>::deserialize (this=, params=...)

at /opt/ros/fuerte/include/ros/subscription_callback_helper.h:172

#8 0xb6f84d30 in ros::MessageDeserializer::deserialize() ()

from /opt/ros/fuerte/lib/libroscpp.so
#9 0xb6f802bc in ros::SubscriptionQueue::call() ()

from /opt/ros/fuerte/lib/libroscpp.so
#10 0xb6f2d4ae in ros::CallbackQueue::callOneCB(ros::CallbackQueue::TLS*) ()

from /opt/ros/fuerte/lib/libroscpp.so
#11 0xb6f2d074 in ros::CallbackQueue::callAvailable(ros::WallDuration) ()

from /opt/ros/fuerte/lib/libroscpp.so
#12 0xb6f82324 in ros::SingleThreadedSpinner::spin(ros::CallbackQueue*) ()

from /opt/ros/fuerte/lib/libroscpp.so
#13 0xb6f669ee in ros::spin(ros::Spinner&) () from /opt/ros/fuerte/lib/libroscpp.so
#14 0xb6f669b2 in ros::spin() () from /opt/ros/fuerte/lib/libroscpp.so
#15 0x00010b8e in main (argc=1715953599, argv=)

at /home/bird/bird_repo/cpp/external_ros_pkgs/ethzasl_sensor_fusion/ssf_updates/src/main.cpp:75

Update:

This issue was resolved by applying the ROS serialization patch required (https://code.ros.org/trac/ros/attachment/ticket/2883/serialization_memcpy_instead_of_reinterpret.diff) and rebuilding the IMU driver node, tf and geometry packages and ssf_core using rosmake --pre-clean.

state_out vs. ext_state for Position Control on HLP

Hello,

I'm in the process of getting ethzasl_sensor_fusion running together with asctec_mav_framework and ethzasl_ptam on our Pelican. I'd like to use the position control function on the HLP, and I believe that the position control uses the /ssf_core/ext_state topic (which I remapped to /fcu/state) to communicate with asctec_mav_framework. However, I noticed that the /ssf_core/state_out from the filter is a much cleaner estimate of the Pelican's pose, rather than the /ssf_core/ext_state, which seems to me to be the same signal as /fcu/ekf_state_out. Is there a reason why /ssf_core/ext_state is different from /ssf_core/state_out (aside from being different message types)? I see that state_out is filled under applyCorrection in ssf_core.cpp, and ext_state is filled under stateCallback.

Can you confirm that I am running the software correctly? Should the position control be obtaining pose feedback from the cleaner estimate as seen in /ssf_core/state_out?

Many thanks for the help. I am enjoying using the software!

no output from ethzasl_sensor_fusion

Hello,
we have a disturbing problem here. We've set up ethzasl_ptam and asctec_mav_framework to use them on our Pelican with ethzasl_sensor_fusion.
ptam and mav_framework deliver the expected data on their outputs (/vslam/pose, /fcu/ekf_state_out), and we also mapped the sensor_fusion inputs to these outputs.
Everything looks fine and there are no errors, but the sensor_fusion doesn't deliver any data on the output, neither on /fcu/ekf_state_in nor on /ssf_core/pose. The echo call stays empty. We also configured the parameter files as described in the tutorial.
Can anybody tell us what the problem is?
We attached our rqt_graph.

Example code or API

Hello,

I am not sure if I missed it somewhere while reading, but can someone point me to example code for using this module?

I am trying to combine odometry from other sensor sources with an IMU, but I can't figure out all the messages and topics I need.

Thank you

H matrix of body_vel sensor

Hi,

I checked the H matrix of the body_vel sensor. Using the results of Wheeler, M. and Ikeuchi, K., "Iterative estimation of rotation and translation using the quaternion", I get
H_old.block(0,6,3,3) = - 2 * C_ci*vec1_sk; instead of H_old.block(0,6,3,3) = C_ci*vec1_sk;
and
H_old.block(0,19,3,3) = - 2 * pic_sk - 2 * vec2_sk; instead of H_old.block(0,19,3,3) = pic_sk + vec2_sk;.
Perhaps this is a typo? However, when changing these entries, I can't see any difference in the filter outputs.

Additionally, I think the derivatives with respect to b_w_ are missing in H. Adding them makes the filter converge much better:

Eigen::Matrix<double,3,1> x_cross_p_ci_, y_cross_p_ci_, z_cross_p_ci_;
x_cross_p_ci_ << 0.0, -p_ci_(2), p_ci_(1);
y_cross_p_ci_ << p_ci_(2), 0.0, -p_ci_(0);
z_cross_p_ci_ << -p_ci_(1), p_ci_(0), 0.0;

H_old.block<3, 1>(0, 9) = - C_e_i * x_cross_p_ci_ * scale;  //b_w_(0)
H_old.block<3, 1>(0, 10) = - C_e_i * y_cross_p_ci_ * scale; //b_w_(1)
H_old.block<3, 1>(0, 11) = - C_e_i * z_cross_p_ci_ * scale; //b_w_(2)
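As a side note, the three column vectors above are just e_k x p_ci, so the same block can be written more compactly with a skew-symmetric helper. The sketch below assumes the variable names H_old, C_e_i, p_ci_ and scale from the snippet above:

#include <Eigen/Core>

// skew(v) is the usual cross-product matrix, i.e. skew(v) * x == v.cross(x).
inline Eigen::Matrix3d skew(const Eigen::Vector3d &v)
{
  Eigen::Matrix3d S;
  S <<     0, -v(2),  v(1),
        v(2),     0, -v(0),
       -v(1),  v(0),     0;
  return S;
}

// Since [e_x x p | e_y x p | e_z x p] == -skew(p), the three single columns
// above collapse into one 3x3 block with the same sign convention:
// H_old.block<3, 3>(0, 9) = C_e_i * skew(p_ci_) * scale;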

Can you please check whether this works for you too?

Cheers
Julian

issues on IMU-camera calibration

Hi,

First I would like to thank you all for providing this open source framework which helps me a lot on my project.

I extracted the EKF algorithm part out of the framework since my application is not ROS related. I came across an issue with the IMU-camera calibration. In my configuration, the camera and IMU coordinate frames don't match; there is a 90-degree rotation around the X axis. I set the initial q_ci to (x,y,z,w) = (sqrt(2)/2, 0, 0, sqrt(2)/2). But after the EKF ran, I checked the state of q_ci: it had actually changed back to (0,0,0,1), and the EKF filtered output seems wrong.

So my questions are: how should q_ci be initialized? Are there any other parameters I need to pay attention to in order to make the self-calibration work? I tried setting the noise of q_ci very low, so as to increase the confidence in the setting, but it doesn't seem to work.
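One thing that may be worth ruling out is a mix-up between (x,y,z,w) and (w,x,y,z) ordering when the initial value is written into the filter state. A small sketch of the intended 90-degree rotation about X, printed in both conventions (illustration only):

#include <Eigen/Geometry>
#include <cmath>
#include <iostream>

int main()
{
  // 90 degrees about the x axis.
  const Eigen::Quaterniond q(
      Eigen::AngleAxisd(M_PI / 2.0, Eigen::Vector3d::UnitX()));
  std::cout << "w,x,y,z: " << q.w() << " " << q.x() << " "
            << q.y() << " " << q.z() << std::endl;   // 0.707 0.707 0 0
  std::cout << "x,y,z,w: " << q.x() << " " << q.y() << " "
            << q.z() << " " << q.w() << std::endl;   // 0.707 0 0 0.707
  return 0;
}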

Thanks,

Daniel

ethzasl_sensor_fusion compilation problem (on groovy)

Hello,
I am trying to install the ethzasl_sensor_fusion package for vision-based navigation. I succeeded in installing this package on my personal computer (on Hydro), but I can't do it on the Mastermind (on the Pelican) where I have Groovy installed.

I get this error; can you help me please:

asctec@mastermind:~$ rosmake ethzasl_sensor_fusion
[ rosmake ] rosmake starting...
[ rosmake ] Packages requested are: ['ethzasl_sensor_fusion']
[ rosmake ] Logging to directory /home/asctec/.ros/rosmake/rosmake_output-20140417-040829
[ rosmake ] Expanded args ['ethzasl_sensor_fusion'] to:
['ssf_core', 'ssf_updates', 'sensor_fusion_comm']
[rosmake-0] Starting >>> catkin [ make ]
[rosmake-0] Finished <<< catkin ROS_NOBUILD in package catkin
No Makefile in package catkin
[rosmake-0] Starting >>> genmsg [ make ]
[rosmake-1] Starting >>> cpp_common [ make ]
[rosmake-3] Starting >>> rospack [ make ]
[rosmake-2] Starting >>> rosgraph [ make ]
[rosmake-0] Finished <<< genmsg ROS_NOBUILD in package genmsg
No Makefile in package genmsg
[rosmake-0] Starting >>> genlisp [ make ]
[rosmake-1] Finished <<< cpp_common ROS_NOBUILD in package cpp_common
No Makefile in package cpp_common
[rosmake-2] Finished <<< rosgraph ROS_NOBUILD in package rosgraph
No Makefile in package rosgraph
[rosmake-3] Finished <<< rospack ROS_NOBUILD in package rospack
No Makefile in package rospack
[rosmake-2] Starting >>> genpy [ make ]
[rosmake-1] Starting >>> gencpp [ make ]
[rosmake-3] Starting >>> rostime [ make ]
[rosmake-0] Finished <<< genlisp ROS_NOBUILD in package genlisp
No Makefile in package genlisp
[rosmake-0] Starting >>> roslib [ make ]
[rosmake-2] Finished <<< genpy ROS_NOBUILD in package genpy
No Makefile in package genpy
[rosmake-3] Finished <<< rostime ROS_NOBUILD in package rostime
No Makefile in package rostime
[rosmake-3] Starting >>> roscpp_traits [ make ]
[rosmake-2] Starting >>> roslang [ make ]
[rosmake-0] Finished <<< roslib ROS_NOBUILD in package roslib
No Makefile in package roslib
[rosmake-1] Finished <<< gencpp ROS_NOBUILD in package gencpp
No Makefile in package gencpp
[rosmake-1] Starting >>> message_generation [ make ]
[rosmake-0] Starting >>> rosunit [ make ]
[rosmake-3] Finished <<< roscpp_traits ROS_NOBUILD in package roscpp_traits
No Makefile in package roscpp_traits
[rosmake-3] Starting >>> roscpp_serialization [ make ]
[rosmake-2] Finished <<< roslang ROS_NOBUILD in package roslang
No Makefile in package roslang
[rosmake-0] Finished <<< rosunit ROS_NOBUILD in package rosunit
No Makefile in package rosunit
[rosmake-1] Finished <<< message_generation ROS_NOBUILD in package message_generation
No Makefile in package message_generation
[rosmake-2] Starting >>> rosconsole [ make ]
[rosmake-0] Starting >>> xmlrpcpp [ make ]
[rosmake-1] Starting >>> rosparam [ make ]
[rosmake-3] Finished <<< roscpp_serialization ROS_NOBUILD in package roscpp_serialization
No Makefile in package roscpp_serialization
[rosmake-3] Starting >>> message_runtime [ make ]
[rosmake-1] Finished <<< rosparam ROS_NOBUILD in package rosparam
No Makefile in package rosparam
[rosmake-0] Finished <<< xmlrpcpp ROS_NOBUILD in package xmlrpcpp
No Makefile in package xmlrpcpp
[rosmake-1] Starting >>> rosmaster [ make ]
[rosmake-0] Starting >>> rosclean [ make ]
[rosmake-2] Finished <<< rosconsole ROS_NOBUILD in package rosconsole
No Makefile in package rosconsole
[rosmake-3] Finished <<< message_runtime ROS_NOBUILD in package message_runtime
No Makefile in package message_runtime
[rosmake-3] Starting >>> std_msgs [ make ]
[rosmake-1] Finished <<< rosmaster ROS_NOBUILD in package rosmaster
No Makefile in package rosmaster
[rosmake-3] Finished <<< std_msgs ROS_NOBUILD in package std_msgs
No Makefile in package std_msgs
[rosmake-3] Starting >>> rosgraph_msgs [ make ]
[rosmake-0] Finished <<< rosclean ROS_NOBUILD in package rosclean
No Makefile in package rosclean
[rosmake-1] Starting >>> geometry_msgs [ make ]
[rosmake-3] Finished <<< rosgraph_msgs ROS_NOBUILD in package rosgraph_msgs [ 2 Active 22/38 Complete ]
No Makefile in package rosgraph_msgs
[rosmake-1] Finished <<< geometry_msgs ROS_NOBUILD in package geometry_msgs
No Makefile in package geometry_msgs
[rosmake-3] Starting >>> roscpp [ make ]
[rosmake-1] Starting >>> sensor_msgs [ make ]
[rosmake-0] Starting >>> rospy [ make ]
[rosmake-3] Finished <<< roscpp ROS_NOBUILD in package roscpp
No Makefile in package roscpp
[rosmake-2] Starting >>> sensor_fusion_comm [ make ]
[rosmake-2] Finished <<< sensor_fusion_comm ROS_NOBUILD in package sensor_fusion_comm
No Makefile in package sensor_fusion_comm
[rosmake-1] Finished <<< sensor_msgs ROS_NOBUILD in package sensor_msgs
No Makefile in package sensor_msgs
[rosmake-3] Starting >>> rosout [ make ]
[rosmake-0] Finished <<< rospy ROS_NOBUILD in package rospy
No Makefile in package rospy
[rosmake-3] Finished <<< rosout ROS_NOBUILD in package rosout
No Makefile in package rosout
[rosmake-3] Starting >>> roslaunch [ make ]
[rosmake-3] Finished <<< roslaunch ROS_NOBUILD in package roslaunch
No Makefile in package roslaunch
[rosmake-3] Starting >>> rostest [ make ]
[rosmake-3] Finished <<< rostest ROS_NOBUILD in package rostest
No Makefile in package rostest
[rosmake-3] Starting >>> topic_tools [ make ]
[rosmake-3] Finished <<< topic_tools ROS_NOBUILD in package topic_tools
No Makefile in package topic_tools
[rosmake-3] Starting >>> rosbag [ make ]
[rosmake-3] Finished <<< rosbag ROS_NOBUILD in package rosbag
No Makefile in package rosbag
[rosmake-3] Starting >>> rosmsg [ make ]
[rosmake-3] Finished <<< rosmsg ROS_NOBUILD in package rosmsg
No Makefile in package rosmsg
[rosmake-3] Starting >>> rosservice [ make ]
[rosmake-3] Finished <<< rosservice ROS_NOBUILD in package rosservice
No Makefile in package rosservice
[rosmake-3] Starting >>> dynamic_reconfigure [ make ]
[rosmake-3] Finished <<< dynamic_reconfigure ROS_NOBUILD in package dynamic_reconfigure
No Makefile in package dynamic_reconfigure
[rosmake-3] Starting >>> ssf_core [ make ]
[ rosmake ] All 26 linessf_core: 1.6 sec ] [ 1 Active 36/38 Complete ]
{-------------------------------------------------------------------------------
mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=/opt/ros/groovy/share/ros/core/rosbuild/rostoolchain.cmake ..
[rosbuild] Building package ssf_core
[rosbuild] Cached build flags older than manifests; calling rospack to get flags
Failed to invoke /opt/ros/groovy/bin/rospack cflags-only-I;--deps-only ssf_core
Package sensor_fusion_comm was not found in the pkg-config search path.
Perhaps you should add the directory containing `sensor_fusion_comm.pc'
to the PKG_CONFIG_PATH environment variable
No package 'sensor_fusion_comm' found
[rospack] Error: python function 'rosdep2.rospack.call_pkg_config' could not call 'pkg-config --cflags-only-I sensor_fusion_comm' without errors

CMake Error at /opt/ros/groovy/share/ros/core/rosbuild/public.cmake:129 (message):

Failed to invoke rospack to get compile flags for package 'ssf_core'.  Look
above for errors from rospack itself.  Aborting.  Please fix the broken
dependency!

Call Stack (most recent call first):
/opt/ros/groovy/share/ros/core/rosbuild/public.cmake:227 (rosbuild_invoke_rospack)
CMakeLists.txt:12 (rosbuild_init)

-- Configuring incomplete, errors occurred!
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package ssf_core written to:
[ rosmake ] /home/asctec/.ros/rosmake/rosmake_output-20140417-040829/ssf_core/build_output.log
[rosmake-3] Finished <<< ssf_core [FAIL] [ 1.60 seconds ]
[ rosmake ] Halting due to failure in package ssf_core.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 37 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/asctec/.ros/rosmake/rosmake_output-20140417-040829
asctec@mastermind:~$

package roadmap

Hi. We are going to use the ethzasl_sensor_fusion package for our AUV. However, we have 4 sensors, so we need a multi-sensor package. So I want to ask: when will the msf package be released?

regards,

fuerte "eigen" build

The current master branch does not build on ROS Fuerte. I had to update the code according to http://www.ros.org/wiki/eigen:

  1. All the CMakeLists.txt files need:

find_package(Eigen REQUIRED)
include_directories(${Eigen_INCLUDE_DIRS})

2. The manifest.xml and stack.xml need:
<rosdep name="eigen"/>
instead of:
<depend package="eigen"/>

Tutorial worked first time only

Strangest thing happened at the end of completing the getting started tutorial:
http://wiki.ros.org/ethzasl_sensor_fusion/Tutorials/getting_started

I followed the instructions and then ran the launch file. I saw the graph working as expected (I updated to rqt_plot). I then let the bag file run until it got to the end. I was thinking "so far so good", but then I tried to run it again, and now it won't work for me.

I have tried reinitializing everything again. I try over and over. The graph is now flat-lining. I can't understand why it would only work once when nothing has changed.

The only thing I can think of that may be different each time is the moment at which I initialize the filter. This is always different, as I stop the bag file after running it for approximately 1 second and then initialize as per the instructions.

I either get an error warning:
=== ERROR === update: NAN at index 0
=== ERROR === prediction p: NAN at index 0

Or I get this message:
[ WARN] [1441884003.214357239]: large time-gap re-initializing to last state

And sometimes this:
[ WARN] [1441884003.679369485]: Negative scale detected: -0.39217. Correcting to 0.1

It is different depending on where the 'play head' is when I initialize.

I am running Indigo on Ubuntu 14.04.

ethzasl_sensor_fusion second tutorial: NAN Error

Hi,

Thanks for sharing your framework. I am now trying to follow the second tutorial (viconpose_sensor), but I have a problem similar to the old issue published here (#6), and I still can't solve it. The following is the output info:
process[viconpose_sensor-1]: started with pid [11982]
[ INFO] [1381255401.020772867]: interpreting measurement as sensor w.r.t. world
[ INFO] [1381255401.020851439]: using fixed covariance
[ INFO] [1381255401.033578756]: Filter type: viconpose_sensor
[ INFO] [1381255401.033650623]: /viconpose_sensor:
subscribed to topics:
/auk/fcu/imu
/ssf_core/hl_state_input
/vicon/auk/auk
advertised topics:
/rosout
/ssf_core/state_out
/ssf_core/correction
/ssf_core/pose
/ssf_core/ext_state
/viconpose_sensor/parameter_descriptions
/viconpose_sensor/parameter_updates

[ INFO] [1381255413.846260449]: filter initialized to:
position: [0.371229, 0.109753, 1.19432]
scale:1
attitude (w,x,y,z): [-0.432855, -0.435135, 0.364935, 0.700084]
p_ci: [0, 0, 0]
q_ci: (w,x,y,z): [1, 0, 0, 0]
q_cv: (w,x,y,z): [-0.432855, -0.435135, 0.364935, 0.700084]
q_wv: (w,x,y,z): [1, 0, 0, 0]
[ WARN] [1381255413.863853053]: large time-gap re-initializing to last state

=== ERROR === update: NAN at index 0
=== ERROR === prediction p: NAN at index 0
=== ERROR === update: NAN at index 0
=== ERROR === update: NAN at index 0
=== ERROR === prediction p: NAN at index 0
=== ERROR === update: NAN at index 0
=== ERROR === update: NAN at index 0
=== ERROR === prediction p: NAN at index 0

The related advice mentioned in the old issues is that "the measurement messages are not correctly received". But I think I have received the measurement messages, such as:
position: [0.371229, 0.109753, 1.19432]
q_cv: (w,x,y,z): [-0.432855, -0.435135, 0.364935, 0.700084]

Later amajan said another problem could be the IMU timestamp, but I use the same dataset as in the first tutorial, and I can get the first tutorial working. So I don't know what the reason is.

Thanks in advance.

mobile tests

Hi all,
I am currently trying to get the package to run on an ODROID platform with Ubuntu. I am not sure if I am doing things right, so please advise.

  1. I am using a setup rig with a Watec camera at 30 fps (for ptam) and an Xsens IMU at 100 Hz: https://dl.dropbox.com/u/8948006/setup.jpg. Is it necessary to specify the distance and orientation difference between the IMU and camera anywhere?

  2. I run ptam and sensor_fusion on odroid(which is on the rig) and checked if all topics are being subscribed and published as required.

  3. I start ptam. It runs at 22 fps, but with sensor_fusion this drops to 15fps. I guess this is a hardware limitation. Running IMU at higher rates could be worse.

  4. I set the accelerometer and gyro noise as per calibration cert of the IMU. I then start the filter remotely (no other settings have been changed).

  5. I excite the system so that all velocities reach close to zero and give smooth output.

  6. Then I switch on ptam visualizer and fusion visualizer.

  7. I tried moving the rig to simulate take off, move around and reduce altitude. Results:
    https://dl.dropbox.com/u/8948006/fusion1.png
    https://dl.dropbox.com/u/8948006/fusion2.png
    https://dl.dropbox.com/u/8948006/fusion3.png
    https://dl.dropbox.com/u/8948006/fusion4.png
    Yellow color is PTAM and green color is sensor_fusion.

  8. In general, the fused data does not move as much as the update sensor. Is this expected?

  9. Once ptam loses track of the map, the filter does not output anymore data. Is it true that there is no prediction if there is no update?

  10. In light of the higher computational cost, is it better to use ptam alone for flight? In what situation would the filter be useful?

Regards
Aswin

Indexes on H matrix

Hi Stephan,
I was modifying the default version of position_sensor.cpp, but I am stuck on some indices that you've used within the H matrix.

On line 128 of position_sensor.cpp, columns 22-24 of the H matrix are used for p_ci:

H_old.block<3,3>(0,22) = C_wv.transpose()*C_q.transpose()*state_old.L_; // use "camera"-IMU distance p_ci state here as position_sensor-IMU distance

while in state.cpp at line 114, indices 22-24 refer to the vectorial part of q_ci:

state.data[22] = q_ci_.x();
state.data[23] = q_ci_.y();
state.data[24] = q_ci_.z();
state.data[25] = p_ci_[0];
state.data[26] = p_ci_[1];
state.data[27] = p_ci_[2];

The same for

    H_old.block<3,1>(0,15) = C_wv.transpose()*C_q.transpose()*state_old.p_ci_ + C_wv.transpose()*state_old.p_; // L
    H_old.block<3,3>(0,16) = -C_wv.transpose()*skewold; // q_wv
    H_old.block<3,3>(3,16) = Eigen::Matrix<double,3,3>::Identity(); // fix vision world drift q_wv since it does not exist here
    H_old.block<3,3>(6,19) = Eigen::Matrix<double,3,3>::Identity(); // fix "camera"-IMU drift q_ci since it does not exist here

Can you help me?

Thanks,

Alessandro

rosmake can't build

Hello

I installed ROS Indigo on a Raspberry Pi successfully and followed the instructions to install ethzasl_sensor_fusion (first image), but when I type "rosmake ethzas_sensor_fusion" I see that no packages have been built! (second image)

(screenshots attached)

I searched for the problem. Someone mentioned the same for ethzasl_ptam (http://answers.ros.org/question/185920/problems-with-building-ptam/) and solved it by copying the downloaded package into the source folder of the catkin workspace. I did this but the same thing happened. I think ROS cannot identify the package or something like that.

Could you please give me some hints how to solve this?

Problem with the initialization of ethzasl_sensor_fusion

Hello,
I have problems concerning the initialization of ethzasl_sensor_fusion. I am trying to run ethzasl_sensor_fusion with ethzasl_ptam in a Gazebo simulation (robot = PR2).
My world frame is a 2D map (map-frame). I'm initializing the filter with the init checkbox in dynamic reconfigure.

My question is: how do I get the right values for q_ci, p_ci and q_wv in the pose_sensor_fix.yaml file?
Are there other steps necessary for the initialization or is it enough to find the correct values for q_ci, p_ci and q_wv?

I tried the following:

  1. For q_ci and p_ci:
    rosrun tf tf_echo 'camera-frame' 'imu-frame'
  2. For q_wv:
    rosrun tf tf_echo 'map-frame' 'imu-frame' => q1
    rosrun tf tf_echo 'imu-frame' 'camera-frame' => q2
    rostopic echo vslam/pose => q3
    q_wv = q1 * q2 * q3

I'm not sure if this is correct.
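As a purely mechanical note on the composition above (not a claim that this particular product is the right q_wv): rosrun tf tf_echo prints quaternions in [x, y, z, w] order, while Eigen's constructor takes (w, x, y, z), so in code the composition would look roughly like this sketch with placeholder values:

#include <Eigen/Geometry>
#include <iostream>

int main()
{
  // The identity values below are placeholders, not real calibration data;
  // remember to reorder tf_echo's [x, y, z, w] output into (w, x, y, z).
  const Eigen::Quaterniond q1(1, 0, 0, 0);   // map-frame -> imu-frame
  const Eigen::Quaterniond q2(1, 0, 0, 0);   // imu-frame -> camera-frame
  const Eigen::Quaterniond q3(1, 0, 0, 0);   // orientation from vslam/pose
  const Eigen::Quaterniond q_wv = q1 * q2 * q3;
  std::cout << "q_wv (w,x,y,z): " << q_wv.w() << " " << q_wv.x() << " "
            << q_wv.y() << " " << q_wv.z() << std::endl;
  return 0;
}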
Thanks in advance.

IMU and Sensor Coordinates confusion

Hello
I'm working on the AscTec Pelican for my thesis.
I'm trying to implement ethzasl_sensor_fusion on the Pelican. I have gone through the tutorial
http://wiki.ros.org/ethzasl_sensor_fusion/Tutorials/getting_started
From the AscTec website, the coordinate frames of the Pelican IMU are:

(AscTec IMU coordinate-frame diagram attached)
which gives the configuration as follows:
(photo of the setup attached)

I'm running ethzasl_ptam; rostopic echo /vslam/pose_world gives
(screenshot attached)
whereas rostopic echo /fcu/imu gives
(screenshot attached)

My IMU Z axis seems to be parallel to PTAM's Z axis.
PTAM's Y axis is parallel to the above camera's Z axis (pointing out of the lens).
PTAM's X axis is perpendicular to the above camera's Z axis (pointing out of the lens).

I have read all the previous discussions but am still unable to deduce the following:

How can I compute q_ci from this information?
How do I compute p_ci and q_wv?

I am sorry if my questions are too basic.

Thanks
Farhan

Issue running ethzasl_sensor_fusion

Dear Stephan,
we are trying to run ethzasl_sensor_fusion together with the asctec_mav_framework, using the AscTec Pelican as the UAV, but something goes wrong :)

We have created a publisher that publishes the 3D position (PoseWithCovarianceStamped) of the UAV on a topic called MULTI_measurement (right now this publisher provides position data from only one sensor).
Concerning the UAV we applied the patch to the mastermind kernel in order to remove the limitation of the serial port.

Regarding the asctec_mav_framework, this is our fcu_parameters.yaml:

serial_port: /dev/ttyS2
baudrate: 460800
frame_id: fcu
packet_rate_imu: 50.0
packet_rate_rc: 20.0
packet_rate_gps: 5.0
packet_rate_ssdk_debug: 10.0
packet_rate_ekf_state: 50.0
max_velocity_xy: 0.5
max_velocity_z: 0.5

state_estimation: HighLevel_EKF
position_control: off

and we defined the following remappings in the ssf_updates launch file (MULTI_sensor.launch):

<launch>
  <node name="position_sensor" pkg="ssf_updates" type="position_sensor" clear_params="true" output="screen">
    <remap from="/ssf_core/imu_state_input" to="/fcu/imu" />
    <remap from="ssf_core/position_measurement" to="MULTI_measurement" />
    <rosparam file="$(find ssf_updates)/MULTI_sensor_fix.yaml" />
  </node>
</launch>

After turning on the UAV and running the MAV framework, we launched ssf_updates with the command provided in the guide:

roslaunch ssf_updates MULTI_sensor.launch

If we try to init the filter with the dynamic reconfigure GUI, we get a warning saying that no position has been acquired, so the position for the filter initialization is set to zero.

Then, if we try to plot the output of the filter with the plot_relevant script, we don't get anything.

Just to be sure: we understood that we don't need to launch ssf_core directly, because it is brought up within ssf_updates. Is this right?

What's wrong with our setup/remaps? Do you have any suggestions to fix the problem?
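A generic first check (not specific to this setup) is to verify that the remaps actually took effect and that data is flowing, e.g.:

    rostopic hz /MULTI_measurement      # is the publisher actually publishing?
    rostopic info /MULTI_measurement    # is the position_sensor node listed as a subscriber?
    rosnode info /position_sensor       # do its subscriptions match the remaps above?

If the node still shows /ssf_core/position_measurement as its subscription, the remap did not take effect (for instance because of a leading-slash or namespace mismatch).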

Thanks in advance.

Alessandro

I cannot visualize the data

I use Ubuntu 16.04 and ROS Kinetic. When I follow the ethzasl_sensor_fusion/Tutorials/getting_started document, I cannot visualize the data with $ rosrun ssf_core plot_relevant; it says:
TopicCompleter.update_topics(): could not get message class for topic type "sensor_fusion_comm/ExtEkf" on topic "/ssf_core/correction"
TopicCompleter.update_topics(): could not get message class for topic type "sensor_fusion_comm/DoubleArrayStamped" on topic "/ssf_core/state_out"
TopicCompleter.update_topics(): could not get message class for topic type "sensor_fusion_comm/ExtState" on topic "/ssf_core/ext_state"
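Those TopicCompleter warnings usually just mean that the sensor_fusion_comm message definitions are not visible in the environment the plotting tool runs in. A generic check (the workspace path is an example; adjust it to wherever the stack was built):

    source ~/catkin_ws/devel/setup.bash
    rosmsg show sensor_fusion_comm/DoubleArrayStamped   # should print the message definition

If rosmsg cannot find the message, rebuild the sensor_fusion_comm package in that workspace and re-source the shell before running plot_relevant.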

Do you have any ideas??

Expected filter performance

Hi,

I am planning to use this sensor fusion package together with our own monocular visual SLAM framework. The existing pose sensor seems perfectly suited for that.

I started out by following the getting-started tutorial on the ROS wiki. Observing the filter performance for provided dataset I noticed the following:

  1. The scale factor is all over the place, rapidly changing between roughly 1 and 2. Is this expected? The tutorial mentions that the scale factor is brittle and prone to initialization errors. Does this also mean that the initialization described for the dataset (start the rosbag about 25 seconds in, then initialize with the dynamic reconfigure GUI) is not appropriate?

  2. The accelerometer biases are high and changing quickly. I get values not much below 1.0. I would expect much lower values that are changing very slowly. Is this expected?

It seems to me that the filter is not behaving properly. One of our goals is to estimate scale accurately, for which these example results are obviously discouraging.

Maybe I have screwed something up while following the tutorial, so I first wanted to ask what the expected behaviour for the tutorial is. Plotting the output position and the ground-truth position from Vicon, they are not too far off; however, I expected a much closer match. Moreover, with an estimated scale of around 2, where the correct scale would be 1, I'm surprised the position estimates work at all.

Independently from the exemplar data, is it reasonable for me to expect the filter to give me an accurate scale estimate with prediction on a 200Hz Xsens MTi IMU and updates from 5 - 20 Hz vSLAM?

On a different note, you mention that the filter needs enough excitation in order to converge. I suppose purely planar motion does not cover this. What do you think about motion that is mostly planar but has a little excitation in the other dimensions as well? Might there even be a way to fix the unobservable states (e.g. part of the inter-sensor calibration, I suppose) with artificial measurements as described in the tutorial, if one assumes that the motion is perfectly planar?

Many questions... Maybe someone can share his insight on parts of it. Thank you already for this interesting framework!

Nikolaus

P.S.: From the other issues I assume that this platform is OK for this kind of question?

Prediction step providing NAN as output

Hi Everyone,
I am trying my hand at the EKF for the first time, after getting the position controller to work on the AscTec Pelican with the asctec_mav_framework stack and the SSDK. I set up all the connections as suggested in the sFly tutorial. I am attaching an image of the connections.

[diagram of the node connections]

My main problem is that, even without initializing the EKF via dynamic reconfigure, when I look at the output of /fcu/ekf_state_out (the prediction output from the high-level processor), I get nan in all the fields except a few that are zero. I do not understand this. When I initialize the EKF at this stage, it just gives me a nan error. Should I initialize the prediction step after initializing ssf_updates, or something like that? How do I get this to work?

Thanks for your help,
Gowtham.

/fcu/ekf_state_out sample output:
header:
  seq: 946
  stamp:
    secs: 1385503728
    nsecs: 300362825
  frame_id: ''
angular_velocity:
  x: 0.00209439510239
  y: 0.00209439510239
  z: 0.0225147473507
linear_acceleration:
  x: 0.15696
  y: -0.06867
  z: 9.65304
state: [nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
flag: 1

Error from initializing filter:
=== ERROR === prediction p: NAN at index 0
=== ERROR === update NAN at index 0

example implementation

I was hoping to run your algorithm on the rosbag data files posted at http://projects.asl.ethz.ch/datasets/doku.php?id=laserregistration:apartment:home#raw_data. I have figured out how to route some of the messages, such as the IMU (https://dl.dropbox.com/u/36065/Screenshot%20from%202012-08-21%2015%3A02%3A33.png), but I was uncertain where the actual Kalman filtering occurs and was wondering if you could place an example RX graph in the repository or the wiki article. Thanks.

Error Compiling - Serial port

Hi, I am very new to this and have tried to look for the error. Can someone please point me in the right direction with the following error?
Arduino: 1.6.1 (Windows 7), Board: "Arduino Mega or Mega 2560, ATmega2560 (Mega 2560)"
In file included from Marlin.h:22:0,
from cardreader.cpp:1:
Configuration.h:41:2: error: 'SERIAL_PORT' does not name a type
SERIAL_PORT selects which serial port should be used for communication with the host.

Error compiling.

Localization with Respect to an Object

Hello @stephanweiss

I am trying to localize a moving depth camera with respect to a static object. The depth camera (Intel D435i) provides a point cloud which is used to detect and localize the static object. Sometimes there are false detections, which lead to false localization. So I'd like to use the IMU that is also embedded in the depth camera to predict where the static object should be w.r.t. the moving depth camera, in order to reduce the false detections.

So, I was wondering if it is possible to use this sensor fusion framework to do this task.

Any advice is appreciated.

Thanks.

Can not find "PositionWithCovarianceStamped.h"

Hi Stephan,
I am really fascinated by your work and want to see if it can be ported to a mobile phone.
But when I finished downloading it and ran catkin_make to compile, this error appeared:
fatal error: ssf_updates/PositionWithCovarianceStamped.h: No such file or directory
#include <ssf_updates/PositionWithCovarianceStamped.h>
Is there anything missing? How can I continue the compilation?
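One common cause of this kind of error (assuming the missing header is generated from a .msg file in ssf_updates/msg) is that the C++ target is built before the messages are generated. The usual catkin fix is an explicit dependency in ssf_updates/CMakeLists.txt; the target and dependency names below are illustrative, use the ones actually defined in that file:

    add_message_files(FILES PositionWithCovarianceStamped.msg)
    generate_messages(DEPENDENCIES std_msgs geometry_msgs)   # adjust to the msg's actual dependencies
    add_dependencies(position_sensor ${PROJECT_NAME}_generate_messages_cpp)

After that, run catkin_make again from the workspace root with the workspace sourced.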

Thanks in advance.
Tingru

When does the init() get called in pose_measurement.h?

Thanks for sharing your framework, and there is one place I am quite confused about:

It seems that when I start ssf_updates/pose_sensor.launch, the init() function which initializes ssf_core is not called, so there is no output. Currently I call init() manually when initializing the pose measurements; is that a good place to initialize ssf_core, or is there another way to perform the initialization? (I actually saw in issue #28 that init() is triggered by a dynamic reconfigure or a service call, but I am quite confused about how you do this.)


OK, I found out that rqt_reconfigure is basically where you initiate the filter. But one question remains: if I want to use this framework on-board, which means I cannot use rqt_reconfigure, is there any way to trigger the reconfiguration automatically?
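I guess something like the dynamic_reconfigure Python client might work; a sketch of what I have in mind (the node and parameter names are assumptions on my part, they can be listed with "rosrun dynamic_reconfigure dynparam get /pose_sensor"):

    import rospy
    import dynamic_reconfigure.client

    rospy.init_node('ssf_auto_init')
    client = dynamic_reconfigure.client.Client('/pose_sensor', timeout=10)  # adjust the node name
    client.update_configuration({'core_init_filter': True})  # assumed name of the boolean behind the Init checkbox

or the equivalent one-liner: rosrun dynamic_reconfigure dynparam set /pose_sensor core_init_filter true. Is that the intended way to do it?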

Many thanks!

Bosch Tang

strange accelerometer measurements of Microstrain MEMS IMU on an octocopter

Hello,

I tested the sensor_fusion framework with only IMU measurements, which were collected by a Microstrain MEMS IMU mounted on an octocopter. The orientation results looked fine. Absurdly, though, the position in the vertical direction became negative; that is, the UAV appeared to be flying underground.

When the accelerometer readings were plotted in the attached graphs, I was confused by the z-axis accelerometer readings.
[plots: raw Microstrain IMU data, and z-axis accelerometer readings]

The IMU is mounted on the octocopter with the z axis pointing down, so the z-axis accelerometer should sense about -9.8 m/s^2 when the UAV is in smooth flight. Even if it inclines somewhat, assuming a maximum inclination of 15 degrees, the z-axis accelerometer should still sense about -9.8*cos(15 degrees) = -9.47 m/s^2. However, as shown in the second graph, it only senses around -8 m/s^2 on average. As a result, the IMU seems to be accelerating into the ground. To make it fly up, a bias of 1.6 m/s^2 has to be applied.

However, based on our experience and its specs (http://files.microstrain.com/3DM-GX3-35-Attitude-Heading-Reference-System-GPS-Data-Sheet.pdf) , the expected accelerometer bias should be less than 4 mG.

Do you have any idea what causes this strange behavior of the IMU on the octocopter? Maybe vibration?
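One quick way to quantify both the offset and the vibration level from a recorded bag (a sketch; the bag file and IMU topic names are placeholders):

    import rosbag
    import numpy as np

    acc_z = []
    with rosbag.Bag('octocopter_flight.bag') as bag:              # placeholder file name
        for _, msg, _ in bag.read_messages(topics=['/imu/data']):  # placeholder topic name
            acc_z.append(msg.linear_acceleration.z)

    acc_z = np.array(acc_z)
    print('mean a_z: %.3f m/s^2' % acc_z.mean())   # offset from -9.81 during smooth hover
    print('std  a_z: %.3f m/s^2' % acc_z.std())    # a large spread hints at vibration

A large standard deviation together with a shifted mean would point towards vibration (possibly aliasing or accelerometer saturation) rather than a true sensor bias.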

Thanks for your consideration,
Janju Huai

From rxplot to rqt_plot?

Hi
Is it possible to convert the ssf_core/scripts files to an rqt_plot version so that they work on Ubuntu 14.04 LTS and Indigo?
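For example, would the rxplot calls simply become rqt_plot calls with the same topic/field syntax, something like the following (the state indices here are my guess at the state_out layout, so they would need to be adjusted to whatever each script actually plots):

    rqt_plot /ssf_core/state_out/data[0]:data[1]:data[2]
    rqt_plot /ssf_core/state_out/data[16]

or do the scripts need more changes than that?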
Thanks
Farhan

Configuring PTAM + IMU using your package for forward looking mode

Hi Stephan,

Thank you so much Stephan and other maintainers for developing this framework, making it publicly available and actively maintaining it.

I have opened this issue after reading issue #21, where you offered to help with configuring PTAM + IMU in forward-looking mode, but the discussion stopped once the downward-looking configuration of the PTAM + IMU setup was working.

Before I move ahead, I understand that it is important to configure the downward-looking (PTAM + IMU) mode correctly first. Below is a picture of the sensor + IMU setup.
[photo of the Kinect + Xsens IMU setup]

I am using the RGB camera of the Kinect and an Xsens IMU. The discussion in #21 has helped me figure out the right way of configuring the transformation between the camera and the IMU. The coordinate frames of the camera and the IMU are aligned as shown below.

[diagram of the camera and IMU coordinate frames]

The transformation used in the ptam+imu_fix.yaml is given below:


init/q_ci/w: 0.0
init/q_ci/x: -1.0
init/q_ci/y: 0.0
init/q_ci/z: 0.0

init/p_ci/x: 0.0
init/p_ci/y: 0.0
init/p_ci/z: 0.0
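As a quick sanity check, that quaternion can be decoded like this (a small sketch; it only confirms which rotation is encoded, not whether the filter expects camera-to-IMU or IMU-to-camera):

    import numpy as np
    from tf.transformations import euler_from_quaternion

    # tf.transformations uses [x, y, z, w] ordering, so w=0, x=-1, y=0, z=0 becomes:
    roll, pitch, yaw = euler_from_quaternion([-1.0, 0.0, 0.0, 0.0])
    print(np.degrees([roll, pitch, yaw]))   # ~[+/-180, 0, 0]: a 180 degree rotation about x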


To test whether this setup is working properly in downward-looking mode, I have recorded a dataset; its video can be seen here:
http://www.youtube.com/watch?v=QAO-I6nDI-g
PTAM's GUI output video, with the 3D trajectory and map, can be seen here:
http://www.youtube.com/watch?v=5HkOV55WjyA

The 3D trajectory from PTAM is shown below.
[plot: 3D trajectory from PTAM]

The 3D trajectory of PTAM + IMU (using your sensor fusion package) is given below.
[plot: 3D trajectory from PTAM + IMU]

So far, I have gone through the tutorials and got them to work. For the questions below, if I need to go through the code in depth or read your thesis, please point me to the relevant parts.

Q1. From these results, do you think the current PTAM + IMU setup in downward-looking mode is configured and working well?

Q1.1 Looking at the graphs, it seems to me that PTAM's output looks better than the PTAM + IMU output. Is that true?

Q1.2 Do you think I have to apply some additional translations and rotations to get the maximum performance from the system?

Q2. If the downward-looking configuration is good, could you please help me configure the forward-looking mode, or any arbitrary orientation?

Q3. Is the current sensor_fusion package specifically designed for the downward-looking configuration only? If so, why?

Q4. When I try to run the current setup in forward-looking mode, it says "FUZZY TRACKING TRIGGERED". What should I do next?

Thank you so much for your help!

Have a good day,
Sai Manoj
