Comments (22)

shiqizhang6 commented on August 13, 2024

Thanks, @jack-oquin

I think we need both /amcl_pose and /odom. /odom is mostly used for computing the travelled distance, and /amcl_pose is used for regenerating the travelled trajectory. When the robot mislocalizes itself, or when we initially start it, the jumps in estimated location can make the mileage count inaccurate.

Republishing odometry data is a good idea. Let me know how you would like to proceed.

jack-oquin commented on August 13, 2024

Good point about the jumps in amcl_pose.

I can easily write a little script that subscribes to /odom and republishes at a much lower frequency.
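Something like this minimal rospy sketch would do it (the 1 Hz rate and the /odom_1hz output topic name are just placeholders):

#!/usr/bin/env python
# Minimal sketch: republish /odom at a much lower rate (1 Hz here).
import rospy
from nav_msgs.msg import Odometry

class OdomThrottle(object):
    def __init__(self):
        self.pub = rospy.Publisher('odom_1hz', Odometry, queue_size=1)
        self.last = rospy.Time(0)
        self.period = rospy.Duration(1.0)  # republish at most once per second
        rospy.Subscriber('odom', Odometry, self.callback)

    def callback(self, msg):
        now = rospy.get_rostime()
        if now - self.last >= self.period:
            self.last = now
            self.pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('odom_throttle')
    OdomThrottle()
    rospy.spin()

As it turns out, topic_tools can do this without any custom code; see the drop command below.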

jack-oquin commented on August 13, 2024

I discussed this with @piyushk this afternoon. He feels strongly that we somehow need to upload the logs automatically.

I don't disagree, but made the point that we require tools to eliminate short or trivial runs before doing that automatically.

shiqizhang6 commented on August 13, 2024

@jack-oquin @piyushk
The only concern I had about uploading log files automatically is that uploading large files may take considerable time, and I was not sure we would always have the patience to wait for the upload to finish before completely shutting down the robot.

I assume the log files will be much smaller after reducing the republished /odom frequency, so now I am perfectly fine with that.

jack-oquin commented on August 13, 2024

Security is another concern about uploading files to nixons-head automatically. The current upload script prompts users to give the nixons-head password for their accounts.

None of the ways I can think of to get around that seem attractive:

  • rsh
    • quite insecure
  • ssh with keys
    • difficult to administer, every user needs it
    • keeping key files on NFS is not very secure

Maybe there is some hacky way to do this with acceptable security risk. Suggestions are welcome.

jack-oquin commented on August 13, 2024

Here's a simple command to sub-sample the odometry:

rosrun topic_tools drop /odom 99 100 /odom_1hz

That command drops 99 of every 100 messages, cutting the odometry frequency to 1 Hz and the bandwidth from about 72 KB/s to 720 bytes/s. Compressing one example bag file reduced it from 59 MB to 2.4 MB, almost 25 times smaller. If we get similar reductions on long runs, the approximately 260 MB per hour would shrink to only about 11 MB, which is not bad. Doing the compression is not particularly fast, though, and it might not work as well for long runs that spend less of the time standing still.

I am also considering computing the time and distance on the robot using a modification of Shiqi's get_distance.py script, then writing the results to a small YAML file that we always upload automatically (somehow). Copying the bags could then be optional.

That could run as a ROS node in parallel with the rest of the system. When terminated, it could write out the YAML file and try to kick off the upload script; since that script uses rsync, it would not have to work flawlessly every time. But it would still need a way around the password problem.
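A rough sketch of what that node might look like (the YAML field names and file location are illustrative guesses, not what get_distance.py actually does):

#!/usr/bin/env python
# Sketch: accumulate travelled time and distance from odometry, then
# write a small YAML summary on shutdown. Field names are illustrative.
import math
import os
import rospy
import yaml
from nav_msgs.msg import Odometry

class TripLogger(object):
    def __init__(self):
        self.distance = 0.0
        self.prev = None
        self.start = rospy.get_rostime()
        rospy.Subscriber('odom_1hz', Odometry, self.callback)
        rospy.on_shutdown(self.save)

    def callback(self, msg):
        pos = msg.pose.pose.position
        if self.prev is not None:
            self.distance += math.hypot(pos.x - self.prev.x,
                                        pos.y - self.prev.y)
        self.prev = pos

    def save(self):
        summary = {'distance_m': round(self.distance, 2),
                   'duration_s': (rospy.get_rostime() - self.start).to_sec()}
        path = os.path.expanduser('~/.ros/bwi/bwi_logging/summary.yaml')
        with open(path, 'w') as f:
            yaml.safe_dump(summary, f)
        # The rsync upload script could be kicked off here.

if __name__ == '__main__':
    rospy.init_node('trip_logger')
    TripLogger()
    rospy.spin()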

jack-oquin commented on August 13, 2024

We need to experiment with some really long bags before doing much more.

jack-oquin commented on August 13, 2024

The previous commit created a log_prune branch where we can test sub-sampling odometry without affecting all the robots.

jack-oquin commented on August 13, 2024

I ran clamps today for two hours with the new log_prune branch. The bag file was considerably smaller:

$ rosbag info bwi_2016-01-06-11-13-34.bag 
path:        bwi_2016-01-06-11-13-34.bag
version:     2.0
duration:    2hr 4:08s (7448s)
start:       Jan 06 2016 11:13:34.76 (1452100414.76)
end:         Jan 06 2016 13:17:43.12 (1452107863.12)
size:        25.9 MB
messages:    49604
compression: none [34/34 chunks]
types:       diagnostic_msgs/DiagnosticArray         [60810da900de1dd6ddd437c3503511da]
             geometry_msgs/PoseWithCovarianceStamped [953b798c0f514ff060a53a3498ce6246]
             nav_msgs/Odometry                       [cd5e73d190d741a2f92e81eda573aca7]
topics:      /diagnostics   29702 msgs    : diagnostic_msgs/DiagnosticArray         (4 connections)
             amcl_pose      12453 msgs    : geometry_msgs/PoseWithCovarianceStamped
             odom_1hz        7449 msgs    : nav_msgs/Odometry

Running rosbag compress makes it 10 times smaller:

$ ll bwi_2016-*
-rw-r--r-- 1 joq bwi  2743596 Jan  6 13:44 bwi_2016-01-06-11-13-34.bag
-rw-r--r-- 1 joq bwi 27141544 Jan  6 13:25 bwi_2016-01-06-11-13-34.orig.bag

At less than 1.4 MB/hour for the sub-sampled, compressed data, we can afford to upload it all.

Unless there are objections, I plan to merge the log_prune branch into master, so we can start running it on all the robots after people update their workspaces.

shiqizhang6 commented on August 13, 2024

Thank you, @jack-oquin. That sounds really good.

jack-oquin commented on August 13, 2024

Commit 8386000 merged into master.

Please update bwi_common in your workspaces on the robots so we start saving the reduced-size logs while testing.

jack-oquin commented on August 13, 2024

Current thinking about automatic uploads:

  • Provide a setuid script to do an upload under the bwilab account.
    • set up ssh keys for that account so it does not require a login password
    • make sure everyone's ~/.ros/bwi/bwi_logging directory is writable by the bwi group
  • Compress and upload the previous bag file at the beginning of a segbot launch sequence.

Comments? @pato

pato commented on August 13, 2024

My suggestion for the automatic uploads is to use a pull model rather than a push model. This means that instead of each individual robot uploading its logs, the server would pull the logs from the robots.

The two main advantages are

  1. We would only have to distribute the server's ssh pubkeys to each of the robots, rather than the privkeys
  2. The bulk of the logic is on the server, which makes it easier to change and update

The way that I propose this could be done is as follows:

On the server (nixons-head), there is a node (ROS or otherwise) that waits to pull logs. Either at known intervals or when instructed by the client (more on this later), the server logs in remotely to the robots and pulls all the logs via scp/rsync or the like (potentially deleting them on the source machine after extraction).

On the client, we would add a node/script to the robot launch file that sends a ping to the server to signal that the robot is online and ready to have the previous logs pulled. This can be a simple service call if we go with ROS, or otherwise a simple HTTP GET request that just identifies the machine and user it comes from.
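A bare-bones sketch of that server-side loop (the hostnames, paths, and hourly interval here are all made-up placeholders):

#!/usr/bin/env python
# Sketch of the pull model: the server periodically rsyncs logs from
# each robot. Hosts, user, and paths are illustrative placeholders.
import subprocess
import time

ROBOTS = ['robot1.example.edu', 'robot2.example.edu']  # hypothetical hosts
REMOTE_DIR = '~/.ros/bwi/bwi_logging/'
LOCAL_DIR = '/var/bwi/logs/'

while True:
    for host in ROBOTS:
        # --remove-source-files deletes each file on the robot once copied
        subprocess.call(['rsync', '-az', '--remove-source-files',
                         'bwilab@' + host + ':' + REMOTE_DIR,
                         LOCAL_DIR + host + '/'])
    time.sleep(3600)  # or wait for a ping from a robot instead

Triggering on a robot's ping rather than a fixed interval would just replace the sleep with whatever notification mechanism we choose.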

jack-oquin commented on August 13, 2024

That's definitely worth considering, Pato. I like the fact that the robots would only need the public keys for bwilab.

Note, however, that we need to compress the logs before uploading. I know rsync has an obscure way to run relatively arbitrary commands, but that does make things a bit more complicated. I suppose we can instead compress the data immediately before requesting the upload.

I don't want to expose HTTP GET on any of these machines.

I prefer to use standard Linux commands, rather than ROS. I want to avoid confronting multimaster issues at this level.

jack-oquin commented on August 13, 2024

Looks like HTTP GET is already exposed on nixons-head to support the robot DNS addresses.

That's going to become a problem. The University already ordered us to shut down the Apache server on that machine. (We have not yet complied with their order.)

pato commented on August 13, 2024

Yeah, we can run a command to compress the data before we rsync it. That would seem to work.

Fair call on the GET requests. As for nixons-head and the DNS server, we do not need apache to run it, so we can safely uninstall apache without affecting the DNS server.

jack-oquin commented on August 13, 2024

We decided to stop the apache2 server, but not uninstall it. That way we can log in and turn it on when needed for specific purposes, like LDAP administration. I noticed yesterday that it is still running, however.

jack-oquin commented on August 13, 2024

I went ahead and stopped the web server. This is the command:

$ sudo service apache2 stop

piyushk commented on August 13, 2024

@jack-oquin That'll only work till the next reboot. I suspect that's how the server got restarted since you last shut it down. We probably do reboot the server once or twice a year.

jack-oquin commented on August 13, 2024

Sure. That's probably not much of a problem, though.

There must be a standard way to avoid automatically starting it, while retaining manual control. Anybody know what it is?
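(If nixons-head uses SysV-style init scripts, something like sudo update-rc.d apache2 disable should keep apache2 from starting at boot while still allowing a manual service apache2 start; on a systemd machine the equivalent would be sudo systemctl disable apache2. I have not verified which applies here.)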

jack-oquin commented on August 13, 2024

Although I like Pato's server pull idea, I have instead decided to create a general facility for running selected scripts on each robot under the bwilab account. The upload script can be one of them.

We will create unique public and private ssh keys for the local bwilab account on each robot, storing their public keys on nixons-head. I will write a setuid("bwilab") C program that will only run a set of scripts we maintain. I believe the security risk of that approach will be acceptably small.

That will make it easy to upload logs without requiring a login, and also to notify the server when a robot becomes active or inactive.

jack-oquin commented on August 13, 2024

We are now collecting usable log data for most of the robots (except for marvin, the one with the arm). The problem with marvin is that it is not currently up to date with the base bwi repositories.

Uploading the data is still done by hand, using tools that have now been moved to the new utexas-bwi/bwi_scripts repository.

The rest of the work discussed here will be tracked in utexas-bwi/bwi_lab#1, so I am closing this issue.
