roahmlab / sel_map

Semantic ELevation (SEL) map is a semantic Bayesian inferencing framework for real-time elevation mapping and terrain property estimation from RGB-D images.

Home Page: https://roahmlab.github.io/sel_map/

License: MIT License

CMake 8.96% C++ 61.87% C 0.82% Python 27.47% Shell 0.89%

sel_map's People

Contributors: buildingatom, stevenhong

sel_map's Issues

Issue with segmentation in the sel_map

Hello,

First and foremost, thank you for sharing your work. After completing the relevant installations, I followed the steps below in the terminal and obtained the output:
seg_ros

As you can see in the image, there is no segmentation or mapping. I don't get any errors in the terminal either. How can we solve this problem?

Thanks,

ModuleNotFoundError: No module named 'pytorch_encoding_wrapper'

Hello, thanks for your work. I was trying it out but ran into a problem. I also tried running the other launch file, but had an issue there too. The launch without any config or terrain property flag starts with only the wireframe mesh visualizable and no semantic classes, and when I explicitly specify the config and property flags, I get the following error:
N.B.: I tried pip3 install pytorch-encoding and it installed successfully, but that didn't change the problem at all.

arghya@arghya-Pulse-GL66-12UEK:~/sel_map_ws$ roslaunch sel_map spot_bag_sel.launch semseg_config:=Encoding_ResNet50_PContext_full.yaml terrain_properties:=pascal_context.yaml
... logging to /home/arghya/.ros/log/dd6e5c2a-e2ba-11ed-b832-d34792068a9c/roslaunch-arghya-Pulse-GL66-12UEK-1075380.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://127.0.0.1:38337/

SUMMARY
========

PARAMETERS
 * /robot_description: <?xml version="1....
 * /rosdistro: noetic
 * /rosversion: 1.16.0
 * /sel_map/cameras_registered/realsense/camera_info: /zed2/zed_node/de...
 * /sel_map/cameras_registered/realsense/depth_registered: /zed2/zed_node/de...
 * /sel_map/cameras_registered/realsense/image_rectified: /zed2/zed_node/le...
 * /sel_map/cameras_registered/realsense/pose: 
 * /sel_map/cameras_registered/realsense/pose_with_covariance: 
 * /sel_map/colorscale/ends/one: 0,0,0
 * /sel_map/colorscale/ends/zero: 255,255,255
 * /sel_map/colorscale/stops: [0.2, 0.8]
 * /sel_map/colorscale/type: linear_ends
 * /sel_map/colorscale/unknown: 120,120,120
 * /sel_map/colorscale/values: ['252,197,192', '...
 * /sel_map/enable_mat_display: True
 * /sel_map/num_cameras: 1
 * /sel_map/point_limit: 20
 * /sel_map/publish_rate: 20.0
 * /sel_map/queue_size: 2
 * /sel_map/robot_base: base_link
 * /sel_map/save_classes: True
 * /sel_map/save_confidence: False
 * /sel_map/save_interval: 0.0
 * /sel_map/save_mesh_location: 
 * /sel_map/semseg/extra_args: []
 * /sel_map/semseg/model: Encnet_ResNet50s_...
 * /sel_map/semseg/num_labels: 59
 * /sel_map/semseg/onehot_projection: False
 * /sel_map/semseg/ongpu_projection: True
 * /sel_map/semseg/package: pytorch_encoding_...
 * /sel_map/sync_slop: 0.3
 * /sel_map/terrain_properties: /home/arghya/sel_...
 * /sel_map/update_policy: fifo
 * /sel_map/world_base: odom
 * /use_sim_time: True

NODES
  /
    rviz (rviz/rviz)
  /sel_map/
    sel_map (sel_map/main.py)
    static_tf_linker (sel_map_utils/StaticTFLinker.py)

ROS_MASTER_URI=http://127.0.0.1:11311

process[rviz-1]: started with pid [1075437]
process[sel_map/static_tf_linker-2]: started with pid [1075438]
process[sel_map/sel_map-3]: started with pid [1075439]
[ INFO] [1682360055.061867615]: rviz version 1.14.20
[ INFO] [1682360055.061904828]: compiled against Qt version 5.12.8
[ INFO] [1682360055.061924301]: compiled against OGRE version 1.9.0 (Ghadamon)
[ INFO] [1682360055.069993134]: Forcing OpenGl version 0.
[ INFO] [1682360055.178432992]: Stereo is NOT SUPPORTED
[ INFO] [1682360055.178490269]: OpenGL device: Mesa Intel(R) Graphics (ADL GT2)
[ INFO] [1682360055.178499632]: OpenGl version: 4.6 (GLSL 4.6) limited to GLSL 1.4 on Mesa system.
[INFO] [1682360055.222487, 0.000000]: Spinning all linked static TF frames until killed
[ INFO] [1682360055.431311840]: Mesh Display: Update
[ERROR] [1682360055.431363477]: Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)
[ INFO] [1682360055.431451204]: Mesh Display: Update
[ERROR] [1682360055.431459554]: Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)
[ INFO] [1682360055.433126891]: No initial data available, waiting for callback to trigger ...
[ INFO] [1682360055.433149737]: Mesh Display: Update
[ERROR] [1682360055.433164985]: Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)
[ INFO] [1682360055.433181396]: Mesh Display: Update
[ERROR] [1682360055.433187710]: Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)
[ INFO] [1682360055.433216312]: Mesh Display: Update
[ERROR] [1682360055.433222597]: Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)
[ INFO] [1682360055.434230655]: Mesh Display: Update
[ERROR] [1682360055.434238770]: Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)
Traceback (most recent call last):
  File "/home/arghya/sel_map_ws/devel/lib/sel_map/main.py", line 15, in <module>
    exec(compile(fh.read(), python_script, 'exec'), context)
  File "/home/arghya/sel_map_ws/src/sel_map/sel_map/scripts/main.py", line 171, in <module>
    sel_map_node(mesh_bounds, elem_size, threshold)
  File "/home/arghya/sel_map_ws/src/sel_map/sel_map/scripts/main.py", line 132, in sel_map_node
    segmentation_network = CameraSensor()
  File "/home/arghya/sel_map_ws/src/sel_map/sel_map_segmentation/sel_map_segmentation/src/sel_map_segmentation/cameraSensor.py", line 45, in __init__
    SemsegNetworkWrapper = importlib.import_module(wrapper_package).SemsegNetworkWrapper
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'pytorch_encoding_wrapper'
[sel_map/sel_map-3] process has died [pid 1075439, exit code 1, cmd /home/arghya/sel_map_ws/devel/lib/sel_map/main.py 10 10 4 0.05 10 mesh:=/mesh mesh/costs:=/mesh/costs get_materials:=/get_materials __name:=sel_map __log:=/home/arghya/.ros/log/dd6e5c2a-e2ba-11ed-b832-d34792068a9c/sel_map-sel_map-3.log].
log file: /home/arghya/.ros/log/dd6e5c2a-e2ba-11ed-b832-d34792068a9c/sel_map-sel_map-3*.log

What should I do now? Thanks in advance.
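A hedged sketch that may help narrow this down, assuming pytorch_encoding_wrapper is the wrapper package built inside the catkin workspace rather than something installed from pip: after sourcing devel/setup.bash, check which of the relevant modules the launch environment can actually import.

# Hedged check: run in the same Python environment that roslaunch uses,
# after `source devel/setup.bash`.
import importlib.util

# "encoding" as the upstream PyTorch-Encoding import name is an assumption here.
for name in ("pytorch_encoding_wrapper", "encoding"):
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin if spec else "NOT importable")

If pytorch_encoding_wrapper is not importable there, the wrapper package was likely not built or sourced in that workspace; installing the upstream encoding library by itself would not provide it.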

[v0.0.2] Tracker

This issue is for tracking minor updates and any issues / fixes raised since the initial release of the code.

Error when using Azure Kinect?

Hi, thank you for sharing your work, it's really interesting.
However, when I try to use my own dataset with an Azure Kinect V3 camera, the following error happens:

[INFO] [1710521970.499223]: [sel_map] Message received!
/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map_segmentation/sel_map_segmentation/src/sel_map_segmentation/cameraSensor.py:57: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)
  self.depth = torch.from_numpy(depth).float().to(self.network.device, non_blocking=True)
[ERROR] [1710521972.433301]: bad callback: <bound method Subscriber.callback of <message_filters.Subscriber object at 0x7f955172c7f0>>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/opt/ros/noetic/lib/python3/dist-packages/message_filters/__init__.py", line 76, in callback
    self.signalMessage(msg)
  File "/opt/ros/noetic/lib/python3/dist-packages/message_filters/__init__.py", line 58, in signalMessage
    cb(*(msg + args))
  File "/opt/ros/noetic/lib/python3/dist-packages/message_filters/__init__.py", line 330, in add
    self.signalMessage(*msgs)
  File "/opt/ros/noetic/lib/python3/dist-packages/message_filters/__init__.py", line 58, in signalMessage
    cb(*(msg + args))
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map/scripts/main_kinect.py", line 114, in syncedCallback
    map.update(pose, rgbd, intrinsic=intrinsic, R=R, min_depth=0.5, max_depth=8.0)
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map_mapper/src/sel_map_mapper/elmap.py", line 267, in update
    points = self.camera.getProjectedPointCloudWithLabels(intrinsic=intrinsic, R=R, min_depth=min_depth, max_depth=max_depth)
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map_segmentation/sel_map_segmentation/src/sel_map_segmentation/cameraSensor.py", line 249, in getProjectedPointCloudWithLabels
    pc, scores = self.projectMeasurementsIntoSensorFrame_torch(intrinsic=intrinsic, R=R, min_depth=min_depth, max_depth=max_depth)
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map_segmentation/sel_map_segmentation/src/sel_map_segmentation/cameraSensor.py", line 142, in projectMeasurementsIntoSensorFrame_torch
    scores = torch.transpose(torch.reshape(self.scores, (pixel_length, self.scores.shape[2])), 0, 1)
RuntimeError: shape '[368640, 59]' is invalid for input of size 54374400

I tried both /depth/image_raw, /depth/camera_info, /rgb/image_raw and /depth/image_raw, /depth_to_rgb/camera_info, /rgb/image_raw, but the error persists in both cases.
Can you tell me how to fix it? Thanks for your reply.
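For reference, the numbers in that RuntimeError suggest a resolution mismatch rather than a wrong topic per se; here is a hedged back-of-the-envelope check, assuming the reshape expects the depth image and the segmentation scores to cover the same number of pixels.

# Hedged sanity check on the sizes reported in the RuntimeError above.
num_labels = 59
score_values = 54374400
depth_pixels = 368640                 # 640 x 576, a common Azure Kinect depth resolution

print(score_values / num_labels)      # 921600.0 -> 1280 x 720, a common RGB resolution
print(depth_pixels == 921600)         # False: pixel counts differ, so the reshape fails

If that reading is correct, using a depth stream registered into the RGB frame (or resizing one of the two images so their pixel counts match) should make the reshape consistent; the exact topic names depend on the Azure Kinect driver configuration.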

Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)

Hello, it's my honor to learn about your work. When I run this project on my device, something goes wrong with "roslaunch sel_map spot_sel.launch":

[ INFO] [1667551188.586010063]: Mesh Display: Update
[ERROR] [1667551188.586152061]: Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)
[MeshDisplay] onEnable-------------------------------->
[ INFO] [1667551188.586368137]: Mesh Display: Update
[ERROR] [1667551188.586400533]: Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)
[ INFO] [1667551188.589839548]: No initial data available, waiting for callback to trigger ...
[ INFO] [1667551188.589980674]: Mesh Display: Update
[ERROR] [1667551188.590021492]: Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)
[ INFO] [1667551188.590136463]: Mesh Display: Update
[ERROR] [1667551188.590174442]: Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)
[ INFO] [1667551188.592721792]: Mesh Display: Update
[ERROR] [1667551188.592792062]: Mesh display: no visual available, can't draw mesh! (maybe no data has been received yet?)

After looking into it, I think the problem occurs in "void MeshDisplay::initialServiceCall()", because:

ros::NodeHandle n;
m_uuidClient = n.serviceClient<mesh_msgs::GetUUIDs>("get_uuid");

mesh_msgs::GetUUIDs srv_uuids;
if (m_uuidClient.call(srv_uuids))
{
  std::vector<std::string> uuids = srv_uuids.response.uuids;
  if (uuids.size() > 0)
  // ... (snippet truncated)

The call m_uuidClient.call(srv_uuids) returns false, which is why "[ INFO] [1667551188.589839548]: No initial data available, waiting for callback to trigger ..." is printed on screen, but I can't find the reason. Maybe it's related to the ROS package mesh_tools?

I really don't know how to continue from here; could you give me some advice? Thanks a lot.
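A hedged check that may help, assuming the Mesh Display is waiting on a get_uuid service advertised by the mapping side (the exact, possibly namespaced, service name is an assumption): if that service never appears, the likely cause is that the sel_map node is not running or not publishing yet, rather than mesh_tools itself.

# Hedged sketch: check whether the mesh UUID service is actually advertised.
import rospy

rospy.init_node("check_mesh_service", anonymous=True)
try:
    rospy.wait_for_service("get_uuid", timeout=5.0)   # service name is an assumption
    print("get_uuid service is available")
except rospy.ROSException:
    print("get_uuid not available -- check whether the sel_map node is alive and publishing")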

Incorrect transform between camera and robot

Hi, thanks for your excellent work! When I run your code with spot_comp3.bag, the transform between the camera and the robot seems to be incorrect: the map is inclined relative to the robot shown in rviz, while the real world appears to be horizontal according to the RGB image. I tried every ordering of the parameters offered in spot_bag.yaml / spot.yaml (the transform between camera_link and front_rail), but the visualization results are all the same.

image

I was wondering whether the transform between the camera and the robot is hardcoded in the code, as is done on line 82 of main.py in the main branch:

R = np.array([[0,0,1],[1,0,0],[0,1,0]]) #TODO
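Not the repository's code, but a hedged sketch of how that rotation could be obtained from TF instead of being hardcoded; the frame names below ("base_link", "camera_link") are assumptions and should be replaced with the ones in your URDF or bag.

# Hedged sketch: look up the camera-to-robot rotation from TF rather than hardcoding R.
import numpy as np
import rospy
import tf2_ros
from scipy.spatial.transform import Rotation

rospy.init_node("lookup_camera_rotation", anonymous=True)
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

# Wait for the transform, then convert its quaternion into a 3x3 rotation matrix.
t = tf_buffer.lookup_transform("base_link", "camera_link",
                               rospy.Time(0), rospy.Duration(5.0))
q = t.transform.rotation
R = Rotation.from_quat([q.x, q.y, q.z, q.w]).as_matrix()
print(np.round(R, 3))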

How can I get the coefficient of friction in rviz?

Sorry to bother you after a long time. I get part of the mapping result when running "spot-comp8.bag" under the default configuration, as shown in the figure, but the colors shown in the figure only seem to represent the terrain classification categories.
image
Besides, I want to get an image similar to the second row of Figure 5 in your paper, representing the coefficient of friction, like this.
image

What should I do? Maybe I can change something in elmap.py? Waiting for your reply, thanks.
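Not an authoritative answer, but the launch summary earlier on this page lists /sel_map/colorscale parameters (a "linear_ends" type with "zero" and "one" endpoint colors), which suggests the friction view comes from mapping each cell's property value through such a colorscale. A hedged illustration of that idea only, not the repository's implementation:

# Hedged illustration: map a friction coefficient in [0, 1] to an RGB color using a
# linear colorscale defined by its two endpoint colors.
import numpy as np

def linear_ends(value, zero=(255, 255, 255), one=(0, 0, 0)):
    v = float(np.clip(value, 0.0, 1.0))
    return tuple(int(round(z + (o - z) * v)) for z, o in zip(zero, one))

print(linear_ends(0.4))   # a mid-grey for a mid-range friction value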

Error in CSAIL_ResNet50ppm_ADE.yaml?

Sorry to bother you again, and thanks again for your work here.
When I try to use the MIT ResNet model, the error shows up as follows:

[sel_map/static_tf_linker-5] process has finished cleanly
log file: /home/yc/.ros/log/26ff1b50-e363-11ee-9c22-3b683d376962/sel_map-static_tf_linker-5*.log
Loading weights for net_encoder
Loading weights for net_decoder
Traceback (most recent call last):
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map/scripts/main_kinect.py", line 243, in <module>
    sel_map_node(mesh_bounds, elem_size, threshold)
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map/scripts/main_kinect.py", line 201, in sel_map_node
    segmentation_network = CameraSensor()
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map_segmentation/sel_map_segmentation/src/sel_map_segmentation/cameraSensor.py", line 46, in __init__
    self.network = SemsegNetworkWrapper(model=model_name, args=args)
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map_segmentation/mit_semseg_wrapper/src/mit_semseg_wrapper/semsegNetwork.py", line 77, in __init__
    net_decoder = ModelBuilder.build_decoder(
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map_segmentation/mit_semseg_wrapper/src/mit_semseg/models/models.py", line 155, in build_decoder
    net_decoder.load_state_dict(
  File "/home/yc/.conda/envs/selmap/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2152, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PPMDeepsup:
	size mismatch for conv_last.1.weight: copying a param with shape torch.Size([150, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for conv_last.1.bias: copying a param with shape torch.Size([150]) from checkpoint, the shape in current model is torch.Size([512]).
[sel_map/sel_map-6] process has died [pid 744608, exit code 1, cmd /home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map/scripts/main_kinect.py 4 4 2.5 0.05 20 mesh:=/mesh mesh/costs:=/mesh/costs get_materials:=/get_materials __name:=sel_map __log:=/home/yc/.ros/log/26ff1b50-e363-11ee-9c22-3b683d376962/sel_map-sel_map-6.log].
log file: /home/yc/.ros/log/26ff1b50-e363-11ee-9c22-3b683d376962/sel_map-sel_map-6*.log

My params are as follows:

  <arg name="semseg_config"         default="CSAIL_ResNet50ppm_ADE_onehot.yaml"/>
  <arg name="terrain_properties"  default="csail_semseg_properties.yaml"/>

I tried both my own bag and the spot bag you provide; both show this error.
What can I do to fix it?
Hope to receive your help, thank you.
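A hedged debugging step, assuming the decoder checkpoint on disk is a plain PyTorch state_dict: printing the shapes of its conv_last.* tensors makes it easy to see whether the downloaded weights match the decoder architecture and class count that the chosen yaml config builds (the 150-vs-512 mismatch above looks like that kind of disagreement). The file name below is only an example.

# Hedged sketch: inspect the decoder checkpoint's final-layer shapes.
import torch

ckpt = torch.load("decoder_epoch_20.pth", map_location="cpu")   # example path
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in state_dict.items():
    if "conv_last" in name:
        print(name, tuple(tensor.shape))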

Can you provide spot_comp8.data

Hi,

Thank you for your great work. Can you provide the rosbag to help me test it?

rosbag play spot_comp8.data --clock --pause
