
Comments (33)

efernandez avatar efernandez commented on June 27, 2024

@mhkabir 👍 for your list of 5 items. I think you have done more changes than the ones I applied in #4; it would be great if you send a separate PR.

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@efernandez I have random problems with starting live_slam regarding GTK. Ubuntu 12.04 Desktop, ROS Hydro.
The problems are seemingly random and appear intermittently.
Also, I won't PR my modifications yet as they are very specific to my hardware and software setup, but they would be useful for anyone else trying out LSD SLAM.

Kabir

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

Calling XInitThreads in live_slam and linking to libX11 fixes most GTK errors on all platforms.
Also, I still prefer my own image debug topic via ROS, but need to make it less hacky.
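
For reference, a minimal sketch of that fix (hedged: it assumes the call goes at the very top of live_slam's main() in main_live_odometry.cpp, and that the target is linked against libX11):

    // Minimal sketch of the workaround: make Xlib thread-safe before any other
    // X11/GTK work happens, and link the live_slam target against libX11.
    #include <X11/Xlib.h>

    int main(int argc, char** argv)
    {
        // Must be the very first Xlib-related call in the process; this is what
        // prevents the intermittent GTK/X errors described above.
        XInitThreads();

        // ... the rest of live_slam's main() (ROS init, threads, GUI) ...
        return 0;
    }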

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@JakobEngel Do you plan to support standard pose and pointcloud output topics in future? I'm not yet ready to start rolling my own support...

from lsd_slam.

JakobEngel avatar JakobEngel commented on June 27, 2024

@mhkabir
yeah, we haven't had the time to try it out on quadrocopters ourselves, but I'm quite excited to see the results! To answer some of your questions:

  • GTK stuff: yes, we've had problems with that too, in particular when trying to compile qglviewer and openCV image display into the same binary. I don't really know how to fix it to be honest.
  • lsd_slam_viewer / messages: That's actually why we split the viewer into a separate package / executable - if you have a wifi connection to the quadrotor, you can just run the viewer on a ground-station to get the 3D visualization.
    The cleanest solution is probably to put the messages into a separate - otherwise empty - package, so you can compile the viewer without the core and vice versa.
  • standard pose / pointclouds: No, we do not plan to support them. In my experience the default ROS visualization tools are too slow to handle millions of points gracefully, and additionally (as far as I know) they don't support Sim(3) pointcloud poses. We're pretty happy to stay as independent of the ROS ecosystem as possible ;)
  • coordinate frame: It's standard image coordinate frame, i.e., z is outwards; x is right and y is downwards.

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

Thanks for the reply :)

I need to get everything into standard message types to integrate between
stacks, so what do you propose would be best?

The navigation system (octomap) needs PointCloud2 messages and I need a pose
message for the EKF. I'm happy to adapt both as required, but your
recommendations would be greatly appreciated. The pose message should be
easy, I think...? Pointcloud could be a problem.

Kabir

from lsd_slam.

JakobEngel avatar JakobEngel commented on June 27, 2024

Converting should be easy, but it wouldn't behave as you'd expect:
First off, the current camera pose will jump with large loop closures, which will mess up your EKF if not handled properly. Second, each keyframe's position, orientation and scale changes with each new loop closure, so the points will move around - which I guess your navigation system is not built for either. You could try to disable global mapping (eliminating those issues), but then you'd lose the globally consistent map.
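
For reference, the conversion itself is straightforward; below is a hedged sketch (the intrinsics, inverse-depth buffer and frame name are placeholders, not LSD-SLAM's actual data structures) that back-projects a keyframe's inverse-depth map into a sensor_msgs/PointCloud2. The points come out in the keyframe's camera frame and would still need the keyframe's Sim(3) camToWorld applied - and, as explained above, they move whenever a loop closure re-optimizes that keyframe.

    // Hedged sketch: back-project an inverse-depth map into a PointCloud2.
    // fx, fy, cx, cy and the idepth buffer are hypothetical inputs; LSD-SLAM's
    // keyframe classes store this data differently.
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl_conversions/pcl_conversions.h>
    #include <sensor_msgs/PointCloud2.h>

    sensor_msgs::PointCloud2 keyframeToCloud(const float* idepth, int width, int height,
                                             float fx, float fy, float cx, float cy)
    {
        pcl::PointCloud<pcl::PointXYZ> cloud;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
            {
                float id = idepth[y * width + x];
                if (id <= 0.0f) continue;            // pixel has no depth hypothesis
                float z = 1.0f / id;                 // depth from inverse depth
                cloud.push_back(pcl::PointXYZ((x - cx) * z / fx,   // camera frame:
                                              (y - cy) * z / fy,   // x right, y down,
                                              z));                 // z forward
            }
        sensor_msgs::PointCloud2 msg;
        pcl::toROSMsg(cloud, msg);
        msg.header.frame_id = "keyframe_camera";     // hypothetical frame name
        return msg;
    }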

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

Thanks Jakob. Handling the navigation will be a bit difficult, and after
some experimentation, I find that to navigate in real time on a MAV,
some modifications will be needed to my mapping stack. So that's a
future goal.

I think position holding will be fairly easy with the poses, and can work
out as a starting point. The EKF should be able to handle jumps, not that
I'm building large maps initially :) LSD slam tracks at 15Hz or so on the
quadcore ARM, which should be okay for positioning.

So, how would we get the pose into standard messages? With your
recommendations, I'll send in a PR if you'd like when I'm done.

I went on a short mapping run with the MAV's onboard camera yesterday, and
the results were fairly good, with a small-FOV lens and a non-global-shutter
camera. There was motion blur and thus some outliers in the pointcloud
though, since it was indoors in poor lighting. The Firefly MV combined with
a fisheye lens should work just great, although the onboard system cannot
track at higher rates as of now.

Kabir

from lsd_slam.

JakobEngel avatar JakobEngel commented on June 27, 2024

For getting ROS standard message types, it is probably best to create your own outputWrapper, analogously to src/IOWrapper/ROS/ROSOutput3DWrapper. That would allow very easy switching between different "output modes", and hence allow integrating it into the main repository without breaking anything else.
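
For the pose side, a minimal sketch of what such a wrapper could publish (this is not the repository's API; it assumes a standalone Sophus whose Sim3 exposes translation() and rxso3(), the same accessors used in Dvad's comment further down):

    // Hedged sketch: convert a Sim(3) camToWorld into a geometry_msgs/PoseStamped.
    // The frame id is made up; a custom output wrapper would publish this on a
    // ros::Publisher advertising geometry_msgs::PoseStamped.
    #include <geometry_msgs/PoseStamped.h>
    #include <ros/ros.h>
    #include <sophus/sim3.hpp>

    geometry_msgs::PoseStamped sim3ToPoseMsg(const Sophus::Sim3d& camToWorld,
                                             const ros::Time& stamp)
    {
        geometry_msgs::PoseStamped msg;
        msg.header.stamp = stamp;
        msg.header.frame_id = "world";                     // hypothetical frame id

        // The camera position in world coordinates is the translation part.
        msg.pose.position.x = camToWorld.translation()[0];
        msg.pose.position.y = camToWorld.translation()[1];
        msg.pose.position.z = camToWorld.translation()[2];

        // rxso3() bundles rotation and scale; normalizing the quaternion drops
        // the scale and leaves the pure rotation for the Pose message.
        Eigen::Quaterniond q = camToWorld.rxso3().quaternion();
        q.normalize();
        msg.pose.orientation.x = q.x();
        msg.pose.orientation.y = q.y();
        msg.pose.orientation.z = q.z();
        msg.pose.orientation.w = q.w();
        return msg;
    }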

For running on ARM, my guess would be that you'll get better performance if you down-sample the image to get at least 30fps. If your lens has distortion then the image is re-sampled anyway - you'd just have to change the calibration file.

You could also think about disabling global mapping (set doSLAM to false), which should greatly decrease RAM and CPU load.

Jakob

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@JakobEngel I've been playing around, and I think my first goal would be to achieve pure tracking with this, i.e. just visual odometry.

I looked into the IOWrapper code, but am unsure as to how I would get the camToWorld into standard xyz coordinates and xyz rotations. A brief description would be very helpful, if you could point out the bits which are important. :)

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@JakobEngel I have the main framework sorted. I integrated it into the present IOWrapper, just extending it for the PoseStamped message. I've got the initial stuff in my fork. Please check: mhkabir@278ce45

I'm not very sure about the coordinate frames for the rotation and translation of camToWorld, so it would help if you could add comments on the file and let me know.

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@JakobEngel All coordinate transforms look alright. :)

Need to integrate with EKF now. I will probably add some parameters to rotate the camera frame into IMU frame as required.
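
For reference, a minimal sketch of that camera-to-IMU rotation, assuming the body/IMU frame follows the usual ROS convention (x forward, y left, z up) and the camera looks straight ahead; a tilted camera mount would need an additional rotation:

    // Hedged sketch: fixed rotation from the optical frame Jakob describes above
    // (x right, y down, z forward) into a ROS-style body frame (x forward,
    // y left, z up).
    #include <Eigen/Geometry>

    Eigen::Matrix3d cameraToBodyRotation()
    {
        Eigen::Matrix3d R;
        //  body x (forward) =  camera z
        //  body y (left)    = -camera x
        //  body z (up)      = -camera y
        R <<  0,  0,  1,
             -1,  0,  0,
              0, -1,  0;
        return R;
    }

    // Usage: p_body = cameraToBodyRotation() * p_camera;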

Opened PR : #13

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@JakobEngel , if I'm not mistaken, the CamToWorld is the camera coordinates in the world frame, right?

from lsd_slam.

JakobEngel avatar JakobEngel commented on June 27, 2024

CamToWorld is the transformation such that for a point X
X_InWorldCoordinates = CamToWorld * X_InCamCoordinates.
I use that naming convention everywhere throughout the code.

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@JakobEngel I'm still a bit confused. Can you check my PR and tell me the corrections to get the X,Y,Z values in the Pose message, in world coordinates?
Some line comments would help :) I think the present implementation is incorrect as it doesn't behave as it should.

from lsd_slam.

Dvad avatar Dvad commented on June 27, 2024

Hi,

With this convention, in order to access the pose coordinates (X, Y, Z) in the world frame you need to take the quantity CamToWorld.translation().
You can derive that by taking the coordinates of the origin in the camera frame and transforming them to the world frame (with Sophus/Eigen notation):

    position_cam_center_in_world = CamToWorld * zero 
    position_cam_center_in_world =  CamToWorld.rxso3() * zero + CamToWorld.translation()
    position_cam_center_in_world  =  CamToWorld.translation()

I looked at your implementation and it seems OK to me. What is wrong? Are you sure there isn't a problem elsewhere?

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@JakobEngel Upgraded whole system to Indigo 14.04 today. Working fine.

I will replace the OpenCV image display system with a standard image message which can be visualised on ground computers. Any suggestions on how I should change the key input functionality?
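
One common way to do that (sketched here under the assumption that the debug images are plain mono8/bgr8 cv::Mats; the topic name is made up) is to publish them through cv_bridge, so any ground computer can view them with image_view or rqt_image_view:

    // Hedged sketch: publish a debug cv::Mat as a sensor_msgs/Image via cv_bridge.
    #include <cv_bridge/cv_bridge.h>
    #include <opencv2/core/core.hpp>
    #include <ros/ros.h>
    #include <sensor_msgs/Image.h>
    #include <std_msgs/Header.h>

    void publishDebugImage(ros::Publisher& pub, const cv::Mat& image)
    {
        std_msgs::Header header;
        header.stamp = ros::Time::now();
        header.frame_id = "camera";                        // hypothetical frame id

        // Pick the encoding that matches the cv::Mat type.
        const std::string encoding = (image.channels() == 1) ? "mono8" : "bgr8";
        pub.publish(cv_bridge::CvImage(header, encoding, image).toImageMsg());
    }

    // Setup, e.g. in the node:
    // ros::Publisher pub = nh.advertise<sensor_msgs::Image>("lsd_slam/debug_image", 1);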

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@JakobEngel What are the image encodings on the cv::Mat which are passed using util::displayImage?

from lsd_slam.

RDmitrich avatar RDmitrich commented on June 27, 2024

@mhkabir Could you please explain how I can move the external keyframeGraph messages from the viewer into the core?

I'm trying to compile lsd_slam on an XU3 with Ubuntu 14.04, but right now I only get an internal compiler error - it seems that there is not enough RAM. I had the same problem with svo_ros, but after several restarts the compilation passed without any errors. Enabling swap in the kernel could help, but that is bad for eMMC life, so I'm still searching.

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@RDmitrich , you need to do 3 things.

  1. Copy lsd_slam_viewer's "msg" folder to lsd_slam_core
  2. In lsd_slam_core/src/IOWrapper/ROSOutput3DWrapper.cpp, replace all instances of "lsd_slam_viewer" with "lsd_slam_core"
  3. Add this line to the CMakeLists.txt, after gencfg(): rosbuild_genmsg()

Regarding your compile problem, you don't need swap. Simply limit the number of parallel compilation jobs. With catkin, this can be done using:

catkin_make -j2

For rosmake, as in lsd_slam, set this environment variable:

export ROS_PARALLEL_JOBS=-j2

The -jN flag limits the number of jobs to N. Usually 2 is okay.

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

And for others referring to this for help, you can set the NEON flags properly in the CMakeLists.txt like this:

# NEON flags
add_definitions("-DUSE_ROS")
add_definitions("-DENABLE_NEON")

# Also add some useful compiler flag
set(CMAKE_CXX_FLAGS
   "${CMAKE_CXX_FLAGS} -march=armv7-a -mfpu=neon -std=c++0x"
) 

from lsd_slam.

RDmitrich avatar RDmitrich commented on June 27, 2024

@mhkabir I have successfully built with "rosmake --pjobs=2", thank you again for the help.

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@RDmitrich Good to hear that :) Although, in my experience, --pjobs often doesn't work for unknown reasons, so it's better to use the environment variable.

from lsd_slam.

RDmitrich avatar RDmitrich commented on June 27, 2024

@mhkabir Still trying to run lsd_slam on the XU3 with live_slam, but after several seconds the program just freezes. I only receive this message:
"~/ros_packages/lsd_slam$ rosrun lsd_slam_core live_slam image:=/image_mono _calib:=pinhole_example_calib.cfg
Reading Calibration from file pinhole_example_calib.cfg ... not found!
Trying /home/odroid/ros_packages/lsd_slam/lsd_slam_core/calib/pinhole_example_calib.cfg ... found!
found ATAN camera model, building rectifier.
Input resolution: 640 480
In: 0.527334 0.827306 0.473568 0.499436 0.000000
NO RECTIFICATION
Output resolution: 640 480
Prepped Warp matrices
Started constraint search thread!
Started mapping thread!
Started optimization thread "

No errors appear, and if I run "rostopic list" in another terminal it shows me the /lsd-slam/.. topics, but I couldn't connect them to rviz (on a desktop computer).

Where could the problem be?

from lsd_slam.

QichaoXu avatar QichaoXu commented on June 27, 2024

My system configuration is ROS Fuerte + Ubuntu 12.04. When compiling lsd_slam (by typing 'rosmake lsd_slam'), there is always a failure.
I then tried compiling the packages one by one: first lsd_slam_core, where the same failure occurs again; then lsd_slam_viewer, which compiles with no failure.

The same failure is like" /tmp/ccN1VfD5.s:1612: Error: no such instruction: 'vfmadd312sd'".

Where could the problem be?

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@RDmitrich It's probably running fine. Try echoing the /lsd_slam/pose topic using rostopic.

debugWindow is disabled, so you don't see the GUI - but you don't need to keep it disabled anymore. With the latest fixes on master, you can run with debugWindow enabled :) The user input system parses key input via debugWindow.

Enable the window to get the GUI :) It's probably running just fine in the background.

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@QichaoXu Compile with NEON enabled instead of SSE; those are errors about unavailable SSE instructions. See my comment above on how to set this up properly in CMakeLists.txt.

from lsd_slam.

JakobEngel avatar JakobEngel commented on June 27, 2024

@QichaoXu I have a similar error on one system setup (Ubuntu 12.04 + Haswell i7 CPU), which is fixed by removing -march=native from the compiler commands. The reason is that gcc from Ubuntu 12.04 is simply too old.

from lsd_slam.

mhkabir avatar mhkabir commented on June 27, 2024

@JakobEngel Is there any way to get the framerates a bit better on the Odroid? Presently, it is rather slow and totally unsuitable for actual flying. The frames get blurred, etc., which I do not face with the same camera on my desktop, under similar conditions.

from lsd_slam.

JakobEngel avatar JakobEngel commented on June 27, 2024

Hmm, the frames getting blurred sounds like a camera driver issue. If lsd_slam just runs slowly, try reducing the resolution (e.g. to 320x240).

from lsd_slam.

QichaoXu avatar QichaoXu commented on June 27, 2024

When compiling lsd_slam (ROS Indigo + Ubuntu 14.04), an error shows there is an undefined symbol:

Linking CXX executable ../bin/live_slam
/usr/bin/ld: CMakeFiles/live_slam.dir/src/main_live_odometry.cpp.o: undefined reference to symbol 'XInitThreads'
//usr/lib/x86_64-linux-gnu/libX11.so.6: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[3]: *** [../bin/live_slam] Error 1
make[3]: Leaving directory `/home/user/ros_workspace/lsd_slam/lsd_slam_core/build'
make[2]: *** [CMakeFiles/live_slam.dir/all] Error 2
make[2]: Leaving directory `/home/user/ros_workspace/lsd_slam/lsd_slam_core/build'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/user/ros_workspace/lsd_slam/lsd_slam_core/build'

How can I correct it?

from lsd_slam.

JakobEngel avatar JakobEngel commented on June 27, 2024

please see Issue #29

from lsd_slam.

Chao1155 avatar Chao1155 commented on June 27, 2024

Thank you @mhkabir for explaining how to separate the core from the viewer. I have managed to compile the core alone on a Raspberry Pi 3, with ROS Kinetic and Ubuntu MATE 16. But I cannot run lsd_slam_core smoothly. It seems to need a connection to a local viewer to display the result (which is indeed the intention of the original version).
So what command did you use to run the lsd_slam_core without calling the viewer?
Something other than this?
rosrun lsd_slam_core live_slam image:=/image_raw camera_info:=/camera_info

Thanks.

from lsd_slam.
