
nvidia-isaac-ros / isaac_ros_dnn_inference

Hardware-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU

Home Page: https://developer.nvidia.com/isaac-ros-gems

License: Apache License 2.0

CMake 5.68% C++ 90.43% C 3.89%
ros dnn tensorrt triton triton-inference-server tensorrt-inference tao deeplearning deep-learning nvidia

isaac_ros_dnn_inference's People

Contributors

hemalshahnv, hguillen, jaiveersinghnv

isaac_ros_dnn_inference's Issues

Compile warning, launch execution failed

admin@ubuntu:/workspaces/isaac_ros-dev$ ros2 launch isaac_ros_triton isaac_ros_triton.launch.py model_name:=peoplesemsegnet_shuffleseg model_repository_paths:=['/tmp/models'] input_binding_names:=['input_2:0'] output_binding_names:=['argmax_1']
[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2023-06-15-19-27-52-702315-ubuntu-278149
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [278160]
[component_container-1] [INFO] [1686828473.007430077] [triton.triton_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_triton/lib/libtriton_node.so
[component_container-1] [INFO] [1686828473.014030018] [triton.triton_container]: Found class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::TritonNode>
[component_container-1] [INFO] [1686828473.014056098] [triton.triton_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::TritonNode>
[component_container-1] [INFO] [1686828473.015374486] [NitrosContext]: [NitrosContext] Creating a new shared context
[component_container-1] [INFO] [1686828473.015421434] [triton]: [NitrosNode] Initializing NitrosNode
[component_container-1] [INFO] [1686828473.015659433] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/std/libgxf_std.so
[component_container-1] [INFO] [1686828473.017262832] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_gxf_helpers.so
[component_container-1] [INFO] [1686828473.018503638] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_sight.so
[component_container-1] [INFO] [1686828473.019666434] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_atlas.so
[component_container-1] [INFO] [1686828473.021583371] [NitrosContext]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_nitros/share/isaac_ros_nitros/config/type_adapter_nitros_context_graph.yaml'
[component_container-1] [INFO] [1686828473.021855671] [NitrosContext]: [NitrosContext] Initializing application...
[component_container-1] [INFO] [1686828473.022698073] [NitrosContext]: [NitrosContext] Running application...
[component_container-1] 2023-06-15 19:27:53.022 WARN gxf/std/program.cpp@456: No system specified. Nothing to do
[component_container-1] [INFO] [1686828473.022911259] [triton]: [NitrosNode] Starting NitrosNode
[component_container-1] [INFO] [1686828473.022917792] [triton]: [NitrosNode] Loading built-in preset extension specs
[component_container-1] [INFO] [1686828473.023918241] [triton]: [NitrosNode] Loading built-in extension specs
[component_container-1] [INFO] [1686828473.023926284] [triton]: [NitrosNode] Loading preset extension specs
[component_container-1] [INFO] [1686828473.024739440] [triton]: [NitrosNode] Loading extension specs
[component_container-1] [INFO] [1686828473.024745947] [triton]: [NitrosNode] Loading generator rules
[component_container-1] [INFO] [1686828473.024841632] [triton]: [NitrosNode] Loading extensions
[component_container-1] [INFO] [1686828473.024903519] [triton]: [NitrosContext] Loading extension: gxf/lib/libgxf_message_compositor.so
[component_container-1] [INFO] [1686828473.025135964] [triton]: [NitrosContext] Loading extension: gxf/lib/cuda/libgxf_cuda.so
[component_container-1] [INFO] [1686828473.026031461] [triton]: [NitrosContext] Loading extension: gxf/lib/serialization/libgxf_serialization.so
[component_container-1] [INFO] [1686828473.027335358] [triton]: [NitrosContext] Loading extension: gxf/triton/libgxf_triton_ext.so
[component_container-1] [INFO] [1686828473.058721706] [triton]: [NitrosNode] Loading graph to the optimizer
[component_container-1] [INFO] [1686828473.059675821] [triton]: [NitrosNode] Running optimization
[component_container-1] [INFO] [1686828473.113501563] [triton]: [NitrosNode] Obtaining graph IO group info from the optimizer
[component_container-1] [INFO] [1686828473.117961615] [triton]: [NitrosNode] Creating negotiated publishers/subscribers
[component_container-1] [INFO] [1686828473.118109880] [triton]: [NitrosPublisherSubscriberGroup] Pinning the component "triton_request/input" (type="nvidia::gxf::DoubleBufferReceiver") to use its compatible format only: "nitros_tensor_list_nchw_rgb_f32"
[component_container-1] [INFO] [1686828473.120015378] [triton]: [NitrosPublisherSubscriberGroup] Pinning the component "sink/sink" (type="nvidia::isaac_ros::MessageRelay") to use its compatible format only: "nitros_tensor_list_nhwc_rgb_f32"
[component_container-1] [INFO] [1686828473.120219941] [triton]: [NitrosNode] Starting negotiation...
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/triton' in container '/triton/triton_container'
[component_container-1] [INFO] [1686828474.120561423] [triton]: [NitrosNode] Starting post negotiation setup
[component_container-1] [INFO] [1686828474.120693233] [triton]: [NitrosNode] Getting data format negotiation results
[component_container-1] [INFO] [1686828474.120721110] [triton]: [NitrosSubscriber] Negotiation ended with no results
[component_container-1] [INFO] [1686828474.120738940] [triton]: [NitrosSubscriber] Use the compatible subscriber: topic_name="/tensor_pub", data_format="nitros_tensor_list_nchw_rgb_f32"
[component_container-1] [INFO] [1686828474.120816673] [triton]: [NitrosPublisher] Negotiation ended with no results
[component_container-1] [INFO] [1686828474.120833262] [triton]: [NitrosPublisher] Use only the compatible publisher: topic_name="/tensor_sub", data_format="nitros_tensor_list_nhwc_rgb_f32"
[component_container-1] [INFO] [1686828474.120858565] [triton]: [NitrosNode] Exporting the final graph based on the negotiation results
[component_container-1] [INFO] [1686828474.131647631] [triton]: [NitrosNode] Wrote the final top level YAML graph to "/workspaces/isaac_ros-dev/install/isaac_ros_triton/share/isaac_ros_triton/ORKVHIVHRA.yaml"
[component_container-1] [INFO] [1686828474.131661076] [triton]: [NitrosNode] Calling user's pre-load-graph callback
[component_container-1] [INFO] [1686828474.131679835] [triton]: [NitrosNode] Loading application
[component_container-1] [INFO] [1686828474.131684232] [triton]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_triton/share/isaac_ros_triton/ORKVHIVHRA.yaml'
[component_container-1] 2023-06-15 19:27:54.132 WARN gxf/std/yaml_file_loader.cpp@952: Using unregistered parameter 'dummy_rx' in component 'requester'.
[component_container-1] [INFO] [1686828474.132451121] [triton]: [NitrosNode] Linking Nitros pub/sub to the loaded application
[component_container-1] [INFO] [1686828474.132491800] [triton]: [NitrosNode] Calling user's post-load-graph callback
[component_container-1] [INFO] [1686828474.132496913] [triton]: In TritonNode postLoadGraphCallback().
[component_container-1] [INFO] [1686828474.132522546] [triton]: [NitrosContext] Initializing application...
[component_container-1] WARNING: infer_trtis_server.cpp:1219 NvDsTritonServerInit suggest to set model_control_mode:none. otherwise may cause unknow issues.
[component_container-1] I0615 11:27:54.229629 278160 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f568c000000' with size 268435456
[component_container-1] I0615 11:27:54.229740 278160 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
[component_container-1] I0615 11:27:54.230304 278160 server.cc:563]
[component_container-1] +------------------+------+
[component_container-1] | Repository Agent | Path |
[component_container-1] +------------------+------+
[component_container-1] +------------------+------+
[component_container-1]
[component_container-1] I0615 11:27:54.230313 278160 server.cc:590]
[component_container-1] +---------+------+--------+
[component_container-1] | Backend | Path | Config |
[component_container-1] +---------+------+--------+
[component_container-1] +---------+------+--------+
[component_container-1]
[component_container-1] I0615 11:27:54.230317 278160 server.cc:633]
[component_container-1] +-------+---------+--------+
[component_container-1] | Model | Version | Status |
[component_container-1] +-------+---------+--------+
[component_container-1] +-------+---------+--------+
[component_container-1]
[component_container-1] I0615 11:27:54.250527 278160 metrics.cc:864] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3080
[component_container-1] I0615 11:27:54.250740 278160 metrics.cc:757] Collecting CPU metrics
[component_container-1] I0615 11:27:54.250822 278160 tritonserver.cc:2264]
[component_container-1] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container-1] | Option | Value |
[component_container-1] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container-1] | server_id | triton |
[component_container-1] | server_version | 2.26.0 |
[component_container-1] | server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace logging |
[component_container-1] | model_repository_path[0] | /tmp/models |
[component_container-1] | model_control_mode | MODE_EXPLICIT |
[component_container-1] | strict_model_config | 1 |
[component_container-1] | rate_limit | OFF |
[component_container-1] | pinned_memory_pool_byte_size | 268435456 |
[component_container-1] | cuda_memory_pool_byte_size{0} | 67108864 |
[component_container-1] | response_cache_byte_size | 0 |
[component_container-1] | min_supported_compute_capability | 6.0 |
[component_container-1] | strict_readiness | 1 |
[component_container-1] | exit_timeout | 30 |
[component_container-1] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container-1]
[component_container-1] [INFO] [1686828474.251106529] [triton]: [NitrosContext] Running application...
[component_container-1] [INFO] [1686828474.251163596] [triton]: [NitrosNode] Starting a heartbeat timer (eid=16)
[component_container-1] I0615 11:27:54.252079 278160 model_lifecycle.cc:459] loading: peoplesemsegnet_shuffleseg:1
[component_container-1] I0615 11:27:54.273771 278160 tensorrt.cc:5442] TRITONBACKEND_Initialize: tensorrt
[component_container-1] I0615 11:27:54.273783 278160 tensorrt.cc:5452] Triton TRITONBACKEND API version: 1.10
[component_container-1] I0615 11:27:54.273787 278160 tensorrt.cc:5458] 'tensorrt' TRITONBACKEND API version: 1.10
[component_container-1] I0615 11:27:54.273789 278160 tensorrt.cc:5486] backend configuration:
[component_container-1] {"cmdline":{"auto-complete-config":"false","min-compute-capability":"6.000000","backend-directory":"/opt/tritonserver/backends","default-max-batch-size":"4"}}
[component_container-1] I0615 11:27:54.273969 278160 tensorrt.cc:5591] TRITONBACKEND_ModelInitialize: peoplesemsegnet_shuffleseg (version 1)
[component_container-1] I0615 11:27:54.274626 278160 tensorrt.cc:5640] TRITONBACKEND_ModelInstanceInitialize: peoplesemsegnet_shuffleseg (GPU device 0)
[component_container-1] I0615 11:27:54.288726 278160 tensorrt.cc:5678] TRITONBACKEND_ModelInstanceFinalize: delete instance state
[component_container-1] I0615 11:27:54.288740 278160 tensorrt.cc:5617] TRITONBACKEND_ModelFinalize: delete model state
[component_container-1] E0615 11:27:54.288747 278160 model_lifecycle.cc:596] failed to load 'peoplesemsegnet_shuffleseg' version 1: Internal: unable to create stream: the provided PTX was compiled with an unsupported toolchain.
[component_container-1] ERROR: infer_trtis_server.cpp:1057 Triton: failed to load model peoplesemsegnet_shuffleseg, triton_err_str:Invalid argument, err_msg:load failed for model 'peoplesemsegnet_shuffleseg': version 1 is at UNAVAILABLE state: Internal: unable to create stream: the provided PTX was compiled with an unsupported toolchain.;
[component_container-1]
[component_container-1] ERROR: infer_trtis_backend.cpp:54 failed to load model: peoplesemsegnet_shuffleseg, nvinfer error:NVDSINFER_TRITON_ERROR
[component_container-1] ERROR: infer_simple_runtime.cpp:33 failed to initialize backend while ensuring model:peoplesemsegnet_shuffleseg ready, nvinfer error:NVDSINFER_TRITON_ERROR
[component_container-1] ERROR: Error in createNNBackend() <infer_simple_context.cpp:76> [UID = 16]: failed to initialize triton simple runtime for model:peoplesemsegnet_shuffleseg, nvinfer error:NVDSINFER_TRITON_ERROR
[component_container-1] ERROR: Error in initialize() <infer_base_context.cpp:79> [UID = 16]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRITON_ERROR
[component_container-1] 2023-06-15 19:27:54.288 ERROR /workspaces/isaac_ros-dev/src/isaac_ros_dnn_inference/isaac_ros_triton/gxf/triton/inferencers/triton_inferencer_impl.cpp@326: Failure to initialize Inference Context for 'peoplesemsegnet_shuffleseg'
[component_container-1] 2023-06-15 19:27:54.288 ERROR gxf/std/entity_executor.cpp@540: Entity [ORKVHIVHRA_triton_request] must be in Lifecycle::kStarted or Lifecycle::kIdle stage before stopping. Current state is StartPending
[component_container-1] 2023-06-15 19:27:54.288 WARN gxf/std/greedy_scheduler.cpp@241: Error while executing entity 16 named 'ORKVHIVHRA_triton_request': GXF_FAILURE
[component_container-1] 2023-06-15 19:27:55.251 ERROR gxf/std/entity_executor.cpp@203: Entity with eid 16 not found!
[component_container-1] [WARN] [1686828475.251504660] [triton]: [NitrosNode] The heartbeat entity (eid=16) was stopped. The graph may have been terminated.
[component_container-1] [INFO] [1686828475.255870227] [triton]: [NitrosNode] Terminating the running application
[component_container-1] [INFO] [1686828475.255909899] [triton]: [NitrosContext] Interrupting GXF...
[component_container-1] [INFO] [1686828475.255938944] [triton]: [NitrosContext] Waiting on GXF...
[component_container-1] 2023-06-15 19:27:55.255 ERROR gxf/std/program.cpp@497: wait failed. Deactivating...
[component_container-1] 2023-06-15 19:27:55.256 ERROR gxf/core/runtime.cpp@1251: Graph wait failed with error: GXF_FAILURE
[component_container-1] [ERROR] [1686828475.256345548] [triton]: [NitrosContext] GxfGraphWait Error: GXF_FAILURE
[component_container-1] [INFO] [1686828475.256352619] [triton]: [NitrosNode] Application termination done
[INFO] [component_container-1]: process has finished cleanly [pid 278160]

When I execute colcon build --symlink-install, I run into the problems shown in the attached screenshot.

The launch does not work. Is it related to this? My network connection here is not very good, and I encounter warnings when compiling. I look forward to your reply. Thank you.

Error running in ROS galactic libgxf_ros_bridge.so: undefined symbol: _Z16_demangle_symbolPKc

Hello. My system currently runs ROS 2 Galactic instead of Foxy. I am trying to run the detection script from https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation in the container, but I am getting an error with libgxf_ros_bridge.so, and I suspect it is because I am using Galactic instead of Foxy. Is there any way I could get access to a different .so file, or is there another way to fix this issue?

ros2 launch /workspaces/isaac_ros-dev/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_tensor_rt.launch.py 
[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2022-02-15-22-14-04-514228-operator-6066
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [6079]
[component_container-1] [INFO] [1644963245.380737123] [dope_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_dnn_encoders/lib/libdnn_image_encoder_node.so
[component_container-1] [INFO] [1644963245.679776047] [dope_container]: Found class: rclcpp_components::NodeFactoryTemplate<isaac_ros::dnn_inference::DnnImageEncoderNode>
[component_container-1] [INFO] [1644963245.679936212] [dope_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<isaac_ros::dnn_inference::DnnImageEncoderNode>
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/dope_encoder' in container '/dope_container'
[component_container-1] [INFO] [1644963245.700632235] [dope_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_tensor_rt/lib/libtensor_rt_node.so
[component_container-1] [INFO] [1644963245.706709553] [dope_container]: Found class: rclcpp_components::NodeFactoryTemplate<isaac_ros::dnn_inference::TensorRTNode>
[component_container-1] [INFO] [1644963245.706819289] [dope_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<isaac_ros::dnn_inference::TensorRTNode>
[component_container-1] [INFO] [1644963245.714073339] [dope_inference]: /workspaces/isaac_ros-dev/install/isaac_ros_tensor_rt/share/isaac_ros_tensor_rt
[component_container-1] [INFO] [1644963245.714154812] [dope_inference]: Creating context
[component_container-1] 2022-02-15 22:14:06.112 ERROR gxf/std/extension_loader.cpp@109: /workspaces/isaac_ros-dev/install/isaac_ros_nvengine/share/isaac_ros_nvengine/gxf/libgxf_ros_bridge.so: undefined symbol: _Z16_demangle_symbolPKc
[component_container-1] [ERROR] [1644963246.112833164] [dope_inference]: LoadExtensionManifest Error: GXF_EXTENSION_FILE_NOT_FOUND
[component_container-1] [ERROR] [1644963246.113712813] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113748958] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113773770] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113795166] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113818041] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113836551] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113864376] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113885312] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113904713] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113923506] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113941735] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113964475] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.113982612] [dope_inference]: GXF Entity find failed
[component_container-1] [ERROR] [1644963246.114001445] [dope_inference]: GXF Entity find failed
[component_container-1] [INFO] [1644963246.114021548] [dope_inference]: Initializing...
[component_container-1] [INFO] [1644963246.114330965] [dope_inference]: Running...
[component_container-1] 2022-02-15 22:14:06.114 WARN  gxf/std/program.cpp@305: No system specified. Nothing to do

ros2 topic list does not show /tensor_pub, /tensor_sub, etc.

Hi,
I tried ros2 topic list but cannot see topics like tensor_pub, tensor_sub, or image_publisher.
Even when running with export ROS_DOMAIN_ID=1, I cannot see the topics.
By the way, with the initial release I could see the topics, but with the latest commit I can't.
Any advice?

SUCCESSFUL on Compilation but FAIL when Running nvidia::isaac_ros::dnn_inference::TritonNode

I was trying to run the Triton node using the container; however, when it loads the extension gxf/triton/libgxf_triton_ext.so, it throws the error libtritonserver.so: cannot open shared object file: No such file or directory, as per the screenshot below.

Screenshot from 2024-03-26 10-11-49

I checked for libtritonserver.so and found that it is not being generated. I have therefore cleaned out the build/ and install/ directories and rerun colcon build several times, but none of this resolved the issue.

Any solution for this issue?

Best,
Samuel

isaac_ros_nitros_image_type/nitros_image.hpp: No such file or directory

Hi,
On a Jetson Xavier NX (L4T 35.4.1, JetPack 5.1.2), I try to run:
cd /workspaces/isaac_ros-dev &&
colcon build --symlink-install &&
source install/setup.bash

Got error messages:
This may be promoted to an error in a future release of colcon-override-check.
Starting >>> isaac_ros_common
Starting >>> isaac_ros_test
Starting >>> nvblox_msgs
Starting >>> nvblox_ros_common
Starting >>> nvblox
Starting >>> realsense2_camera_msgs
Finished <<< isaac_ros_common [10.3s]
Starting >>> isaac_ros_bi3d_interfaces
Finished <<< nvblox_ros_common [10.3s]
Starting >>> isaac_ros_pointcloud_interfaces
Finished <<< isaac_ros_test [20.4s]
Starting >>> nvblox_cpu_gpu_tools
Finished <<< realsense2_camera_msgs [25.7s]
Starting >>> nvblox_performance_measurement_msgs
Finished <<< nvblox_msgs [27.9s]
Starting >>> isaac_ros_dnn_image_encoder
Finished <<< isaac_ros_bi3d_interfaces [19.7s]
Finished <<< isaac_ros_pointcloud_interfaces [19.1s]
Starting >>> isaac_ros_tensor_rt
Starting >>> isaac_ros_triton
--- stderr: isaac_ros_dnn_image_encoder
/workspaces/isaac_ros-dev/src/isaac_ros_dnn_inference/isaac_ros_dnn_image_encoder/src/dnn_image_encoder_node.cpp:25:10: fatal error: isaac_ros_nitros_image_type/nitros_image.hpp: No such file or directory
25 | #include "isaac_ros_nitros_image_type/nitros_image.hpp"
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [CMakeFiles/dnn_image_encoder_node.dir/build.make:76: CMakeFiles/dnn_image_encoder_node.dir/src/dnn_image_encoder_node.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:137: CMakeFiles/dnn_image_encoder_node.dir/all] Error 2
make: *** [Makefile:146: all] Error 2

Failed <<< isaac_ros_dnn_image_encoder [18.7s, exited with code 2]
Aborted <<< nvblox_cpu_gpu_tools [29.5s]
Aborted <<< isaac_ros_triton [27.3s]
Aborted <<< isaac_ros_tensor_rt [29.7s]
Aborted <<< nvblox_performance_measurement_msgs [55.6s]

Documentation Typo

In the Isaac ROS documentation, input_image_width is used twice:

ROS Parameter       Type      Default  Description
input_image_width   uint16_t  0        The input image width.
input_image_width   uint16_t  0        The input image height.

I assume these are meant to be input_image_width and input_image_height.

error running examples

Hi, I installed Isaac ROS from source, without Docker. When I run the examples (at the moment, the TensorRT and U-Net launch.py files), I get GXF errors and some missing-file errors.

ros2 launch ./isaac_ros_tensor_rt.py
[INFO] [launch]: All log files can be found below /home/imother/.ros/log/2021-11-14-20-14-44-133478-imother-799864
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [800292]
[component_container-1] [INFO] [1636920885.332517312] [isaac_ros_tensor_rt.tensor_rt_container]: Load Library: /home/imother/ros2_isaac_ws/install/isaac_ros_tensor_rt/lib/libtensor_rt_node.so
[component_container-1] [INFO] [1636920885.337121942] [isaac_ros_tensor_rt.tensor_rt_container]: Found class: rclcpp_components::NodeFactoryTemplate<isaac_ros::dnn_inference::TensorRTNode>
[component_container-1] [INFO] [1636920885.337256022] [isaac_ros_tensor_rt.tensor_rt_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<isaac_ros::dnn_inference::TensorRTNode>
[component_container-1] [INFO] [1636920885.359633408] [tensor_rt]: /home/imother/ros2_isaac_ws/install/isaac_ros_tensor_rt/share/isaac_ros_tensor_rt
[component_container-1] [INFO] [1636920885.363188241] [tensor_rt]: Creating context
[component_container-1] 2021-11-14 20:14:45.476 ERROR gxf/std/extension_loader.cpp@109: librmw_cyclonedds_cpp.so: cannot open shared object file: No such file or directory
[component_container-1] [ERROR] [1636920885.476188550] [tensor_rt]: LoadExtensionManifest Error: GXF_EXTENSION_FILE_NOT_FOUND
[component_container-1] [ERROR] [1636920885.485840148] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486131765] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486270006] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486320694] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486366358] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486512823] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486570327] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486614040] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486653848] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486693592] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486787736] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486840569] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486925337] [tensor_rt]: GXF Entity find failed
[component_container-1] [ERROR] [1636920885.486972697] [tensor_rt]: GXF Entity find failed
[component_container-1] [INFO] [1636920885.487014041] [tensor_rt]: Initializing...
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/tensor_rt' in container '/isaac_ros_tensor_rt/tensor_rt_container'
[component_container-1] [INFO] [1636920885.487970462] [tensor_rt]: Running...

By the way, how do I choose the image topic used for inference?

PUBLIC key Issues

While running the DNN inference package on x86_64, Ubuntu 20.04, I get a "public key is not available" error, pasted below.

"W: GPG error: https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A4B469963BF863CC
E: The repository 'https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 InRelease' is not signed. "

How can I mitigate this?
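The NO_PUBKEY A4B469963BF863CC error is consistent with NVIDIA's 2022 rotation of the CUDA repository signing keys; NVIDIA's published remedy is to install the cuda-keyring package for your distribution. A sketch of that fix for Ubuntu 20.04 x86_64 follows — the exact URL and package version are assumptions taken from NVIDIA's key-rotation notice and should be checked against the current repository listing:

```shell
# Assumed fix for NO_PUBKEY A4B469963BF863CC, per NVIDIA's CUDA repo key rotation.
# Remove the old (rotated-out) key if it is present:
sudo apt-key del 7fa2af80
# Install the new signing keyring (version number may have moved on):
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
```

After apt-get update succeeds without the GPG warning, the repository should be usable again.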

dnn_encoder GXF_OUT_OF_MEMORY

Hi, I get a memory error if I just try running the dnn_encoder node by itself.
Here is the error:
[component_container_mt-1] [INFO] [1678771154.245686357] [dnn_image_encoder]: [NitrosNode] Linking Nitros pub/sub to the loaded application
[component_container_mt-1] [INFO] [1678771154.245800463] [dnn_image_encoder]: [NitrosNode] Calling user's post-load-graph callback
[component_container_mt-1] [INFO] [1678771154.245811388] [dnn_image_encoder]: In DNN Image Encoder Node postLoadGraphCallback().
[component_container_mt-1] [INFO] [1678771154.245908426] [dnn_image_encoder]: [NitrosContext] Initializing applicaiton...
[component_container_mt-1] 2023-03-14 05:19:14.360 ERROR gxf/std/entity_warden.cpp@379: Failed to initialize component 00050 (allocator)
[component_container_mt-1] 2023-03-14 05:19:14.360 ERROR gxf/core/runtime.cpp@551: Could not initialize entity 'NQAKHWCELM_reshaper' (E45): GXF_OUT_OF_MEMORY
[component_container_mt-1] 2023-03-14 05:19:14.360 ERROR gxf/std/program.cpp@191: Failed to activate entity 00045 named NQAKHWCELM_reshaper: GXF_OUT_OF_MEMORY
[component_container_mt-1] 2023-03-14 05:19:14.360 ERROR gxf/std/program.cpp@193: Deactivating...
[component_container_mt-1] 2023-03-14 05:19:14.378 ERROR gxf/core/runtime.cpp@1158: Graph activation failed with error: GXF_OUT_OF_MEMORY
[component_container_mt-1] [ERROR] [1678771154.378456398] [dnn_image_encoder]: [NitrosContext] GxfGraphActivate Error: GXF_OUT_OF_MEMORY
[component_container_mt-1] [ERROR] [1678771154.378511054] [dnn_image_encoder]: [NitrosNode] runGraphAsync Error: GXF_OUT_OF_MEMORY
[component_container_mt-1] terminate called after throwing an instance of 'std::runtime_error'
[component_container_mt-1]   what():  [NitrosNode] runGraphAsync Error: GXF_OUT_OF_MEMORY
[ERROR] [component_container_mt-1]: process has died [pid 11030, exit code -6, cmd '/opt/ros/humble/install/lib/rclcpp_components/component_container_mt --ros-args -r __node:=tensor_rt_container -r __ns:=/isaac_ros_tensor_rt'].

Here is the relevant code. I'm just launching the DNN encoder node (with a large image size, though I'm not sure why that causes a memory error):

# The Image to Tensor encoder
encoder_node = ComposableNode(
    name="dnn_image_encoder",
    package="isaac_ros_dnn_encoders",
    plugin="nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode",
    parameters=[{
        "network_image_width": 2432,
        "network_image_height": 2048,
    }],
    remappings=[("encoded_tensor", "tensor_pub")],
)

This is with the isaac_ros_dev-x86_64 image, using isaac_ros_common/scripts/run_dev.sh
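A back-of-the-envelope check (not from the original thread) suggests why a 2432x2048 input might exhaust a fixed-size GXF allocator pool, assuming the encoder allocates dense float32 RGB image tensors:

```python
def tensor_bytes(width: int, height: int, channels: int = 3, dtype_bytes: int = 4) -> int:
    """Size of one dense image tensor in bytes (assumes float32 RGB by default)."""
    return width * height * channels * dtype_bytes

size = tensor_bytes(2432, 2048)
print(f"{size} bytes = {size / 2**20:.1f} MiB per tensor")
# A single 2432x2048 float32 RGB tensor is about 57 MiB, so a pool sized for
# smaller defaults (e.g. the 64 MiB CUDA memory pool visible in the Triton log
# earlier on this page) leaves almost no headroom once a few buffers are in flight.
```

If this estimate is the cause, enlarging the relevant allocator/pool parameters or reducing the network input resolution would be the directions to investigate.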

Support for DetectNet in TensorRT

Thanks a lot for the great Isaac project!

I am interested in using DetectNet with TensorRT optimization, but native model support is only provided for DOPE and U-Net. The isaac_ros_object_detection repo explicitly says that DetectNet is not supported with TensorRT.

My question is whether this will be supported in the future, or whether I will have to implement it myself.

ros2 launch isaac_ros_triton demo fails to run

I'm building the NVIDIA Isaac ROS DNN inference packages for the first time and am having a few issues. I am running Ubuntu 22.04 with a GTX 745; the NVIDIA graphics driver version is 530.41.03.

I get this error once I run:

./scripts/run_dev.sh

Then I source the workspace and run:

ros2 launch isaac_ros_triton isaac_ros_triton.launch.py model_name:=peoplesemsegnet_shuffleseg model_repository_paths:=['/tmp/models'] input_binding_names:=['input_2:0'] output_binding_names:=['argmax_1']

I also had a stderr problem when building isaac_ros_triton, but I was still able to run the launch command above and receive this output:

admin@rosie:/workspaces/isaac_ros-dev$ ros2 launch isaac_ros_triton isaac_ros_triton.launch.py model_name:=peoplesemsegnet_shuffleseg model_repository_paths:=['/tmp/models'] input_binding_names:=['input_2:0'] output_binding_names:=['argmax_1']
[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2023-05-22-11-32-42-146748-rosie-19142
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [19153]
[component_container-1] [INFO] [1684769563.335663644] [triton.triton_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_triton/lib/libtriton_node.so
[component_container-1] [INFO] [1684769563.484380874] [triton.triton_container]: Found class: rclcpp_components::NodeFactoryTemplatenvidia::isaac_ros::dnn_inference::TritonNode
[component_container-1] [INFO] [1684769563.484490698] [triton.triton_container]: Instantiate class: rclcpp_components::NodeFactoryTemplatenvidia::isaac_ros::dnn_inference::TritonNode
[component_container-1] [INFO] [1684769563.502691448] [NitrosContext]: [NitrosContext] Creating a new shared context
[component_container-1] [INFO] [1684769563.503017916] [triton]: [NitrosNode] Initializing NitrosNode
[component_container-1] [INFO] [1684769563.506695254] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/std/libgxf_std.so
[component_container-1] [INFO] [1684769563.568062075] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_gxf_helpers.so
[component_container-1] [INFO] [1684769563.644097089] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_sight.so
[component_container-1] [INFO] [1684769563.702077173] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_atlas.so
[component_container-1] [INFO] [1684769563.898933652] [NitrosContext]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_nitros/share/isaac_ros_nitros/config/type_adapter_nitros_context_graph.yaml'
[component_container-1] [INFO] [1684769563.927854268] [NitrosContext]: [NitrosContext] Initializing application...
[component_container-1] [INFO] [1684769563.935790905] [NitrosContext]: [NitrosContext] Running application...
[component_container-1] 2023-05-22 11:32:43.935 WARN gxf/std/program.cpp@456: No system specified. Nothing to do
[component_container-1] [INFO] [1684769563.937480483] [triton]: [NitrosNode] Starting NitrosNode
[component_container-1] [INFO] [1684769563.937518583] [triton]: [NitrosNode] Loading built-in preset extension specs
[component_container-1] [INFO] [1684769563.967474604] [triton]: [NitrosNode] Loading built-in extension specs
[component_container-1] [INFO] [1684769563.967543490] [triton]: [NitrosNode] Loading preset extension specs
[component_container-1] [INFO] [1684769563.969069677] [triton]: [NitrosNode] Loading extension specs
[component_container-1] [INFO] [1684769563.969127887] [triton]: [NitrosNode] Loading generator rules
[component_container-1] [INFO] [1684769563.977597079] [triton]: [NitrosNode] Loading extensions
[component_container-1] [INFO] [1684769563.980307306] [triton]: [NitrosContext] Loading extension: gxf/lib/cuda/libgxf_cuda.so
[component_container-1] [INFO] [1684769564.011143782] [triton]: [NitrosContext] Loading extension: gxf/lib/serialization/libgxf_serialization.so
[component_container-1] [INFO] [1684769564.085232735] [triton]: [NitrosContext] Loading extension: gxf/triton/libgxf_triton_ext.so
[component_container-1] [INFO] [1684769564.619203417] [triton]: [NitrosNode] Loading graph to the optimizer
[component_container-1] [INFO] [1684769564.630495515] [triton]: [NitrosNode] Running optimization
[component_container-1] [INFO] [1684769564.737519976] [triton]: [NitrosNode] Obtaining graph IO group info from the optimizer
[component_container-1] [INFO] [1684769564.746418018] [triton]: [NitrosNode] Creating negotiated publishers/subscribers
[component_container-1] [INFO] [1684769564.746753729] [triton]: [NitrosPublisherSubscriberGroup] Pinning the component "triton_request/input" (type="nvidia::gxf::DoubleBufferReceiver") to use its compatible format only: "nitros_tensor_list_nchw_rgb_f32"
[component_container-1] [INFO] [1684769564.786416919] [triton]: [NitrosPublisherSubscriberGroup] Pinning the component "vault/vault" (type="nvidia::gxf::Vault") to use its compatible format only: "nitros_tensor_list_nhwc_rgb_f32"
[component_container-1] [INFO] [1684769564.787220356] [triton]: [NitrosNode] Starting negotiation...
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/triton' in container '/triton/triton_container'
[component_container-1] [INFO] [1684769565.787677681] [triton]: [NitrosNode] Starting post negotiation setup
[component_container-1] [INFO] [1684769565.787813236] [triton]: [NitrosNode] Getting data format negotiation results
[component_container-1] [INFO] [1684769565.787850076] [triton]: [NitrosSubscriber] Negotiation ended with no results
[component_container-1] [INFO] [1684769565.787874167] [triton]: [NitrosSubscriber] Use the compatible subscriber: topic_name="/tensor_pub", data_format="nitros_tensor_list_nchw_rgb_f32"
[component_container-1] [INFO] [1684769565.787950572] [triton]: [NitrosPublisher] Negotiation ended with no results
[component_container-1] [INFO] [1684769565.787978155] [triton]: [NitrosPublisher] Use only the compatible publisher: topic_name="/tensor_sub", data_format="nitros_tensor_list_nhwc_rgb_f32"
[component_container-1] [INFO] [1684769565.788007983] [triton]: [NitrosNode] Exporting the final graph based on the negotiation results
[component_container-1] [INFO] [1684769565.807820694] [triton]: [NitrosNode] Wrote the final top level YAML graph to "/workspaces/isaac_ros-dev/install/isaac_ros_triton/share/isaac_ros_triton/UVXSRMUCPB.yaml"
[component_container-1] [INFO] [1684769565.807893318] [triton]: [NitrosNode] Calling user's pre-load-graph callback
[component_container-1] [INFO] [1684769565.807901690] [triton]: [NitrosNode] Loading application
[component_container-1] [INFO] [1684769565.807909050] [triton]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_triton/share/isaac_ros_triton/UVXSRMUCPB.yaml'
[component_container-1] 2023-05-22 11:32:45.809 WARN gxf/std/yaml_file_loader.cpp@952: Using unregistered parameter 'dummy_rx' in component 'requester'.
[component_container-1] [INFO] [1684769565.809555864] [triton]: [NitrosNode] Linking Nitros pub/sub to the loaded application
[component_container-1] [INFO] [1684769565.809629808] [triton]: [NitrosNode] Calling user's post-load-graph callback
[component_container-1] [INFO] [1684769565.809642293] [triton]: In TritonNode postLoadGraphCallback().
[component_container-1] [INFO] [1684769565.809684659] [triton]: [NitrosContext] Initializing application...
[component_container-1] WARNING: infer_trtis_server.cpp:1219 NvDsTritonServerInit suggest to set model_control_mode:none. otherwise may cause unknow issues.
[component_container-1] I0522 15:32:46.407227 19153 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x202d80000' with size 268435456
[component_container-1] I0522 15:32:46.414698 19153 cuda_memory_manager.cc:115] CUDA memory pool disabled
[component_container-1] I0522 15:32:46.679047 19153 metrics.cc:864] Collecting metrics for GPU 0: NVIDIA GeForce GTX 745
[component_container-1] I0522 15:32:46.687160 19153 metrics.cc:757] Collecting CPU metrics
[component_container-1] I0522 15:32:46.687425 19153 tritonserver.cc:2264]
[component_container-1] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container-1] | Option | Value |
[component_container-1] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container-1] | server_id | triton |
[component_container-1] | server_version | 2.26.0 |
[component_container-1] | server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace logging |
[component_container-1] | model_repository_path[0] | /tmp/models |
[component_container-1] | model_control_mode | MODE_EXPLICIT |
[component_container-1] | strict_model_config | 1 |
[component_container-1] | rate_limit | OFF |
[component_container-1] | pinned_memory_pool_byte_size | 268435456 |
[component_container-1] | response_cache_byte_size | 0 |
[component_container-1] | min_supported_compute_capability | 6.0 |
[component_container-1] | strict_readiness | 1 |
[component_container-1] | exit_timeout | 30 |
[component_container-1] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container-1]
[component_container-1] I0522 15:32:46.696038 19153 server.cc:261] No server context available. Exiting immediately.
[component_container-1] ERROR: infer_trtis_server.cpp:994 Triton: failed to create repo server, triton_err_str:Internal, err_msg:failed to stat file /tmp/models
[component_container-1] ERROR: infer_trtis_server.cpp:840 failed to initialize trtserver on repo dir: root: "/tmp/models"
[component_container-1] log_level: 2
[component_container-1] strict_model_config: true
[component_container-1] tf_disable_soft_placement: true
[component_container-1] min_compute_capacity: 6
[component_container-1] backend_dir: "/opt/tritonserver/backends"
[component_container-1] model_control_mode: "explicit"
[component_container-1]
[component_container-1] ERROR: infer_trtis_server.cpp:1224 NvDsTritonServerInit failed to get global triton instance
[component_container-1] 2023-05-22 11:32:46.696 ERROR /workspaces/isaac_ros-dev/src/isaac_ros_dnn_inference/isaac_ros_triton/gxf/triton/extensions/triton/triton_server.cpp@91: Error in NvDsTritonServerInit
[component_container-1] 2023-05-22 11:32:46.696 ERROR gxf/std/entity_warden.cpp@380: Failed to initialize component 00015 (server)
[component_container-1] 2023-05-22 11:32:46.696 ERROR gxf/core/runtime.cpp@616: Could not initialize entity 'UVXSRMUCPB_triton_server' (E14): GXF_FAILURE
[component_container-1] 2023-05-22 11:32:46.696 ERROR gxf/std/program.cpp@205: Failed to activate entity 00014 named UVXSRMUCPB_triton_server: GXF_FAILURE
[component_container-1] 2023-05-22 11:32:46.696 ERROR gxf/std/program.cpp@207: Deactivating...
[component_container-1] 2023-05-22 11:32:46.696 ERROR gxf/core/runtime.cpp@1227: Graph activation failed with error: GXF_FAILURE
[component_container-1] [ERROR] [1684769566.696283013] [triton]: [NitrosContext] GxfGraphActivate Error: GXF_FAILURE
[component_container-1] [ERROR] [1684769566.696343066] [triton]: [NitrosNode] runGraphAsync Error: GXF_FAILURE
[component_container-1] terminate called after throwing an instance of 'std::runtime_error'
[component_container-1] what(): [NitrosNode] runGraphAsync Error: GXF_FAILURE
[ERROR] [component_container-1]: process has died [pid 19153, exit code -6, cmd '/opt/ros/humble/install/lib/rclcpp_components/component_container --ros-args -r __node:=triton_container -r __ns:=/triton'].
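The `failed to stat file /tmp/models` error in the log above means Triton cannot find the model repository directory inside the container. A minimal Python sketch that creates the directory skeleton Triton expects (model name taken from the launch command; the real config.pbtxt contents and model file still have to come from the quickstart assets):

```python
from pathlib import Path

# Expected Triton layout (model name taken from the launch command above):
#   /tmp/models/peoplesemsegnet_shuffleseg/config.pbtxt
#   /tmp/models/peoplesemsegnet_shuffleseg/1/<model file>
model_dir = Path("/tmp/models") / "peoplesemsegnet_shuffleseg"
(model_dir / "1").mkdir(parents=True, exist_ok=True)
(model_dir / "config.pbtxt").touch(exist_ok=True)  # placeholder only; the real config is required

for p in sorted(model_dir.rglob("*")):
    print(p)
```

Note that /tmp inside the container is typically not shared with the host, so the repository has to be created from inside the same container where the node runs.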

Running the colcon build tests outputs this:

stderr: isaac_ros_image_proc
Errors while running CTest
Output from these tests are in: /workspaces/isaac_ros-dev/build/isaac_ros_image_proc/Testing/Temporary/LastTest.log
Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.

Finished <<< isaac_ros_image_proc [53.7s] [ with test failures ]
Starting >>> isaac_ros_stereo_image_proc
stderr: isaac_ros_stereo_image_proc
Errors while running CTest
Output from these tests are in: /workspaces/isaac_ros-dev/build/isaac_ros_stereo_image_proc/Testing/Temporary/LastTest.log
Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.

Finished <<< isaac_ros_stereo_image_proc [53.9s] [ with test failures ]
Starting >>> isaac_ros_tensor_rt
Finished <<< isaac_ros_tensor_rt [2min 34s]
Starting >>> isaac_ros_triton
Finished <<< isaac_ros_triton [1min 23s]
Starting >>> isaac_ros_dnn_encoders
Finished <<< isaac_ros_dnn_encoders [15.3s]
Starting >>> isaac_ros_image_pipeline
Finished <<< isaac_ros_image_pipeline [0.82s]

Summary: 30 packages finished [8min 20s]
2 packages had stderr output: isaac_ros_image_proc isaac_ros_stereo_image_proc
2 packages had test failures: isaac_ros_image_proc isaac_ros_stereo_image_proc

Do you have any idea why this fails?
Thank you very much.

Triton inference very slow

I am trying to run inference with Triton on a Jetson Xavier AGX in MAX-N mode, but I only seem to get 1-2 fps on PeopleNet with half precision. I am using the settings and configs in isaac_ros_object_detection, and I am running everything in the Docker container from isaac_ros_common. I am using an Intel RealSense camera with 1280x800 images. Building seems to go fine, and when I launch the node with "ros2 launch isaac_ros_detectnet isaac_ros_detectnet.launch.py" it displays the following information:

triton_start.txt

When running it shows:

[component_container-1] 2022-07-01 13:13:38.271 DEBUG extensions/triton/inferencers/triton_inferencer_impl.cpp@394: Triton Async Event DONE for index = 1430
[component_container-1] 2022-07-01 13:13:38.271 DEBUG extensions/triton/inferencers/triton_inferencer_impl.cpp@423: Trying to load inference for index: 1430
[component_container-1] 2022-07-01 13:13:38.271 DEBUG extensions/triton/inferencers/triton_inferencer_impl.cpp@433: Successfully loaded inference for: 1430
[component_container-1] 2022-07-01 13:13:38.272 DEBUG extensions/triton/inferencers/triton_inferencer_impl.cpp@483: Raw Outputs size = 2
[component_container-1] 2022-07-01 13:13:38.272 DEBUG extensions/triton/inferencers/triton_inferencer_impl.cpp@495: Batch size output 'output_bbox/BiasAdd' = 1
[component_container-1] 2022-07-01 13:13:38.272 DEBUG extensions/triton/inferencers/triton_inferencer_impl.cpp@495: Batch size output 'output_cov/Sigmoid' = 1
[component_container-1] 2022-07-01 13:13:38.272 DEBUG extensions/triton/inferencers/triton_inferencer_impl.cpp@575: incomplete_inference_count_ = 0
[component_container-1] 2022-07-01 13:13:38.272 DEBUG extensions/triton/inferencers/triton_inferencer_impl.cpp@577: Last inference reached; setting Async state to WAIT
[component_container-1] 2022-07-01 13:13:39.384 DEBUG extensions/triton/inferencers/triton_inferencer_impl.cpp@305: input tensor name = input_1
[component_container-1] 2022-07-01 13:13:39.401 DEBUG external/com_nvidia_gxf/gxf/std/scheduling_terms.cpp@434: Sending event notification for entity 8

Everything seems to run, just very slowly. At first we thought it might be running on the CPU because --gpus was not attached to docker run, but adding it didn't change anything, so that doesn't seem to be the case either. Any idea what we are doing wrong and how we can get the reported speed?
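To rule out measurement issues before digging into Triton itself, it helps to log the actual callback rate. A minimal, ROS-free sketch of a rolling FPS counter that could be called from the detection callback (the class name and window size are my own choices):

```python
import time
from collections import deque

class FpsCounter:
    """Rolling FPS estimate over the last `window` callback invocations."""

    def __init__(self, window=30):
        self.stamps = deque(maxlen=window)

    def tick(self, now=None):
        """Record one callback; returns the current FPS estimate."""
        self.stamps.append(time.monotonic() if now is None else now)
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0

counter = FpsCounter(window=5)
for t in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    fps = counter.tick(now=t)
print(f"estimated fps: {fps:.1f}")
```

The same number can be read from the command line with `ros2 topic hz` on the detection output topic, which makes it easy to compare against the published benchmark figures.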

colcon build error: libcudnn.so.8, not found (try using -rpath or -rpath-link)

I am trying to run this on an AGX Xavier in a Docker container. After cloning three repos (isaac_ros_image_segmentation, isaac_ros_common, and isaac_ros_dnn_inference), running colcon build gives the following error.

/usr/bin/ld: warning: libcudnn.so.8, needed by /usr/lib/libopencv_dnn.so.4.5.0, not found (try using -rpath or -rpath-link)
/usr/lib/libopencv_dnn.so.4.5.0: undefined reference to `…@libcudnn.so.8'
(the same undefined-reference line repeats for several dozen cuDNN symbols)
collect2: error: ld returned 1 exit status
make[2]: *** [dnn_image_encoder] Error 1
make[1]: *** [CMakeFiles/dnn_image_encoder.dir/all] Error 2
make: *** [all] Error 2

I also tried git lfs pull, but that didn't work. I tried adding the path of libcudnn.so.8 to the library path, but that is not working either. What could be the problem here?
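As a first diagnostic, you can check from inside the container whether the dynamic linker can resolve cuDNN at all; a small sketch using only the standard library:

```python
import ctypes.util

# Returns a name like 'libcudnn.so.8' when the library is on the linker
# search path, or None when it is not (the situation the build error
# above describes).
found = ctypes.util.find_library("cudnn")
print("cudnn resolvable:", found)
```

If this prints None, the usual fixes on Jetson are reinstalling the cuDNN package, or pointing LD_LIBRARY_PATH at the directory that actually contains libcudnn.so.8 before building.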

Triton version

Hi,

Is there a way to build a different version of Triton and swap it in for the one bundled in GXF?

I can see it uses JetPack 5.0.2, but I'm not sure how that is kept in sync with Triton development.

Thank you!

How to deploy onnx models requiring multiple inputs?

Hi there. I was trying to deploy a PointPillars .onnx model using the dnn_inference package. The model takes three tensor lists as input. Normally, I would write an encoder that publishes all three required NITROS tensor lists on different topics. However, I don't know how to connect that encoder to the tensor_rt_node.

I read part of the source code of this package, and it seems that the tensor_rt_node does not support subscribing to multiple topics. However, in the tensor_rt_inference GXF extension, I found that the model loaded by the TensorRtInference component can fetch data from multiple rx components (inferred from the tick() function) and thus populate multiple inputs.
This confused me. Do I need to modify the source code of tensor_rt_node?
What should I do?

After further examination of the source code, I have an idea.
Maybe I need to modify the dnn_image_encoder_node.yaml file (adding DoubleBufferTransmitter components) and update the nitros::NitrosPublisherSubscriberConfigMap CONFIG_MAP variable accordingly (in the tensor_rt_node.cpp file) based on the names of the new components.
Is my thinking correct?

TAO vs isaac_ros_dnn_inference encoder param mismatch

For models exported by TAO Toolkit v4.0 (YOLOv4 in my case):
It seems like encoder_image_mean and encoder_image_stddev need to be set to 0 and 1/255 respectively to get any proper inference (thereby nullifying the normalization). When the image is normalized between 0 and 1, TAO-exported models never output correct predictions.

Is there any way to turn off normalization by a parameter for encoder node?
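The arithmetic behind that workaround, assuming the encoder normalizes each channel as (pixel / 255 - mean) / stddev (the exact formula is my assumption; check the encoder source for your release):

```python
def encode(pixel, mean, stddev):
    # Assumed encoder normalization: scale to [0, 1], then standardize
    return (pixel / 255.0 - mean) / stddev

# Typical normalization keeps values near [0, 1]
print(encode(128, 0.5, 0.5))
# mean = 0, stddev = 1/255 cancels the 1/255 scaling, so the network
# sees the raw 0-255 pixel value, which is what the TAO model expects
print(encode(128, 0.0, 1.0 / 255.0))  # ≈ 128.0
```

So mean=0, stddev=1/255 is mathematically exactly "normalization off"; there just isn't a dedicated boolean parameter for it.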

Run colcon build --symlink-install failed

--- stderr: isaac_ros_nitros                                                                                      
/usr/bin/ld:/workspaces/isaac_ros-dev/src/isaac_ros_nitros/isaac_ros_nitros/gxf/lib/gxf_x86_64_cuda_11_7/core/libgxf_core.so: file format not recognized; treating as linker script
/usr/bin/ld:/workspaces/isaac_ros-dev/src/isaac_ros_nitros/isaac_ros_nitros/gxf/lib/gxf_x86_64_cuda_11_7/core/libgxf_core.so:1: syntax error
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/isaac_ros_nitros.dir/build.make:311: libisaac_ros_nitros.so] Error 1
make[1]: *** [CMakeFiles/Makefile2:139: CMakeFiles/isaac_ros_nitros.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
---
Failed   <<< isaac_ros_nitros [0.52s, exited with code 2]
Aborted  <<< isaac_ros_dnn_inference_test [0.17s]  

cannot remap /image topic for dnn_image_encoder node

Hello everyone,
I'm trying to feed the dnn_image_encoder node images coming from other topics by remapping a different topic in place of /image, but the node does not subscribe to the new topic the way it did to the previous /image.
The zed_node is running inside another container.

This is the command I ran:

ros2 launch isaac_ros_yolov8 isaac_ros_yolov8_visualize.zed.launch.py model_file_path:=src/isaac_ros_object_detection/resources/yolov8s.onnx engine_file_path:=src/isaac_ros_object_detection/resources/yolov8s.plan input_binding_names:=['images'] output_binding_names:=['output0'] network_image_width:=640 network_image_height:=640 force_engine_update:=False image_mean:=[0.0,0.0,0.0] image_stddev:=[1.0,1.0,1.0] input_image_width:=640 input_image_height:=360 confidence_threshold:=0.25 nms_threshold:=0.45
This is the code and the rqt graph:
[screenshots: launch file code and rqt graph]

Thanks in advance for your time,
Franzhd, Team Roboto.

Possible to change encoder output format?

I am using the Triton inference node to run a tensorflow saved model.

It receives an input type of UINT8

input [
  {
    name: "input_tensor"
    data_type: TYPE_UINT8
    dims:[ 1, -1, -1, 3 ]
  }
]

Is it possible for the encoder node to pass on a tensor of data_type: 1 (uint8) instead of the float32 output shown below?
Or is this perhaps a node I need to write myself?

header:
  stamp:
    sec: 1683796795
    nanosec: 601918396
  frame_id: camera
tensors:
- name: input_tensor
  shape:
    rank: 4
    dims:
    - 1
    - 3
    - 512
    - 512
  data_type: 9
  strides:
  - 3145728
  - 1048576
  - 2048
  - 4
  data:
  - 225
  - 205
  - 206
  - 190
  - 225
 ...
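As far as I can tell there is no encoder parameter for this, so a small conversion step would be needed. The core operation, sketched in plain Python on a toy tensor (the float32 NCHW layout is taken from the message above; a real node would do this with NumPy on the TensorList data and republish it):

```python
def nchw_to_nhwc_uint8(t):
    """Convert nested lists shaped [N][C][H][W] with float values in [0, 255]
    to [N][H][W][C] with each value clamped and truncated to an 8-bit int."""
    def clamp(v):
        return min(255, max(0, int(v)))
    return [[[[clamp(t[n][c][h][w]) for c in range(len(t[n]))]
              for w in range(len(t[n][0][h]))]
             for h in range(len(t[n][0]))]
            for n in range(len(t))]

# Tiny example tensor: batch 1, 3 channels, height 1, width 2
x = [[[[200.7, -3.0]], [[10.2, 300.0]], [[0.0, 64.9]]]]
print(nchw_to_nhwc_uint8(x))  # shape becomes [1][1][2][3]
```

Whether the conversion is even needed depends on the model config: Triton's `dims: [1, -1, -1, 3]` with TYPE_UINT8 expects NHWC uint8, while the encoder emits NCHW float32, so both the dtype and the layout have to change.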

isaac_ros_nitros header file dependency

Hi,

Could you verify whether it is required to add the following explicitly in isaac_ros_dnn_encoders/CMakeLists.txt:

find_package(isaac_ros_nitros REQUIRED)

The header file is required by dnn_image_encoder_node.hpp

I was not able to compile without adding the find_package call. Thanks in advance for your help.

colcon build --symlink-install error

Hello. I am now facing the following problem:
admin@ubuntu:/workspaces/isaac_ros-dev$ colcon build --symlink-install
Starting >>> isaac_ros_common
Starting >>> isaac_ros_test
Finished <<< isaac_ros_common [0.34s]
Starting >>> isaac_ros_gxf
Starting >>> isaac_ros_nitros_interfaces
Starting >>> isaac_ros_tensor_list_interfaces
Starting >>> isaac_ros_apriltag_interfaces
Starting >>> isaac_ros_pointcloud_interfaces
Starting >>> isaac_ros_bi3d_interfaces
Finished <<< isaac_ros_gxf [0.25s]
Finished <<< isaac_ros_nitros_interfaces [0.57s]
Finished <<< isaac_ros_test [0.91s]
Starting >>> isaac_ros_nitros
Finished <<< isaac_ros_pointcloud_interfaces [0.57s]
Finished <<< isaac_ros_bi3d_interfaces [0.58s]
Finished <<< isaac_ros_apriltag_interfaces [0.62s]
Finished <<< isaac_ros_tensor_list_interfaces [0.66s]
Starting >>> isaac_ros_dnn_inference_test
Finished <<< isaac_ros_nitros [0.28s]
Starting >>> isaac_ros_nitros_camera_info_type
Starting >>> isaac_ros_nitros_image_type
Starting >>> isaac_ros_nitros_tensor_list_type
Starting >>> isaac_ros_nitros_disparity_image_type
Starting >>> isaac_ros_nitros_point_cloud_type
Starting >>> isaac_ros_nitros_april_tag_detection_array_type
Starting >>> isaac_ros_nitros_compressed_image_type
Starting >>> isaac_ros_nitros_detection2_d_array_type
Starting >>> isaac_ros_nitros_flat_scan_type
Starting >>> isaac_ros_nitros_imu_type
Starting >>> isaac_ros_nitros_occupancy_grid_type
Starting >>> isaac_ros_nitros_pose_array_type
Starting >>> isaac_ros_nitros_pose_cov_stamped_type
Starting >>> isaac_ros_nitros_std_msg_type
Finished <<< isaac_ros_dnn_inference_test [0.33s]
Finished <<< isaac_ros_nitros_image_type [0.52s]
Finished <<< isaac_ros_nitros_tensor_list_type [0.52s]
Starting >>> isaac_ros_tensor_rt
Starting >>> isaac_ros_triton
Finished <<< isaac_ros_nitros_point_cloud_type [0.54s]
Finished <<< isaac_ros_nitros_compressed_image_type [0.53s]
Finished <<< isaac_ros_nitros_camera_info_type [0.60s]
Finished <<< isaac_ros_nitros_disparity_image_type [0.58s]
Starting >>> isaac_ros_image_proc
Starting >>> isaac_ros_stereo_image_proc
Finished <<< isaac_ros_nitros_occupancy_grid_type [0.59s]
Finished <<< isaac_ros_nitros_flat_scan_type [0.63s]
Finished <<< isaac_ros_nitros_april_tag_detection_array_type [0.66s]
Finished <<< isaac_ros_nitros_imu_type [0.64s]
Finished <<< isaac_ros_nitros_detection2_d_array_type [0.67s]
Finished <<< isaac_ros_nitros_std_msg_type [0.65s]
Finished <<< isaac_ros_nitros_pose_cov_stamped_type [0.68s]
Finished <<< isaac_ros_nitros_pose_array_type [0.70s]
Finished <<< isaac_ros_tensor_rt [0.39s]
Finished <<< isaac_ros_stereo_image_proc [0.33s]
Finished <<< isaac_ros_image_proc [0.37s]
Starting >>> isaac_ros_dnn_encoders
Starting >>> isaac_ros_image_pipeline
Finished <<< isaac_ros_image_pipeline [0.16s]
Finished <<< isaac_ros_dnn_encoders [0.18s]
[Processing: isaac_ros_triton]
--- stderr: isaac_ros_triton
CMake Warning (dev) at /workspaces/isaac_ros-dev/build/isaac_ros_triton/_deps/protobuf-src/cmake/install.cmake:60 (message):
The file
"/workspaces/isaac_ros-dev/build/isaac_ros_triton/_deps/protobuf-src/src/google/protobuf/stubs/io_win32.h"
is listed in
"/workspaces/isaac_ros-dev/build/isaac_ros_triton/_deps/protobuf-src/cmake/cmake/extract_includes.bat.in"
but there not exists. The file will not be installed.
Call Stack (most recent call first):
/workspaces/isaac_ros-dev/build/isaac_ros_triton/_deps/protobuf-src/cmake/CMakeLists.txt:231 (include)
This warning is for project developers. Use -Wno-dev to suppress it.


Finished <<< isaac_ros_triton [12min 21s]

Summary: 30 packages finished [12min 23s]
1 package had stderr output: isaac_ros_triton

I compiled it again and it passed, but running it produced an error:
admin@ubuntu:/workspaces/isaac_ros-dev$ ros2 launch isaac_ros_triton isaac_ros_triton.launch.py model_name:=peoplesemsegnet_shuffleseg model_repository_paths:=['/tmp/models'] input_binding_names:=['input_2:0'] output_binding_names:=['argmax_1']
[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2023-05-08-19-27-24-197165-ubuntu-38276
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [38287]
[component_container-1] [INFO] [1683545244.506770490] [triton.triton_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_triton/lib/libtriton_node.so
[component_container-1] [INFO] [1683545244.515595606] [triton.triton_container]: Found class: rclcpp_components::NodeFactoryTemplatenvidia::isaac_ros::dnn_inference::TritonNode
[component_container-1] [INFO] [1683545244.515607822] [triton.triton_container]: Instantiate class: rclcpp_components::NodeFactoryTemplatenvidia::isaac_ros::dnn_inference::TritonNode
[component_container-1] [INFO] [1683545244.516896817] [NitrosContext]: [NitrosContext] Creating a new shared context
[component_container-1] [INFO] [1683545244.516939368] [triton]: [NitrosNode] Initializing NitrosNode
[component_container-1] [INFO] [1683545244.517151223] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/std/libgxf_std.so
[component_container-1] [INFO] [1683545244.518583265] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_gxf_helpers.so
[component_container-1] [INFO] [1683545244.519689770] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_sight.so
[component_container-1] [INFO] [1683545244.520837142] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_atlas.so
[component_container-1] [INFO] [1683545244.522522658] [NitrosContext]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_nitros/share/isaac_ros_nitros/config/type_adapter_nitros_context_graph.yaml'
[component_container-1] [INFO] [1683545244.522826900] [NitrosContext]: [NitrosContext] Initializing application...
[component_container-1] [INFO] [1683545244.523598733] [NitrosContext]: [NitrosContext] Running application...
[component_container-1] 2023-05-08 19:27:24.523 WARN gxf/std/program.cpp@456: No system specified. Nothing to do
[component_container-1] [INFO] [1683545244.523811451] [triton]: [NitrosNode] Starting NitrosNode
[component_container-1] [INFO] [1683545244.523818955] [triton]: [NitrosNode] Loading built-in preset extension specs
[component_container-1] [INFO] [1683545244.524726445] [triton]: [NitrosNode] Loading built-in extension specs
[component_container-1] [INFO] [1683545244.524732624] [triton]: [NitrosNode] Loading preset extension specs
[component_container-1] [INFO] [1683545244.525567989] [triton]: [NitrosNode] Loading extension specs
[component_container-1] [INFO] [1683545244.525573983] [triton]: [NitrosNode] Loading generator rules
[component_container-1] [INFO] [1683545244.525682813] [triton]: [NitrosNode] Loading extensions
[component_container-1] [INFO] [1683545244.525782215] [triton]: [NitrosContext] Loading extension: gxf/lib/cuda/libgxf_cuda.so
[component_container-1] [INFO] [1683545244.526770314] [triton]: [NitrosContext] Loading extension: gxf/lib/serialization/libgxf_serialization.so
[component_container-1] [INFO] [1683545244.528213645] [triton]: [NitrosContext] Loading extension: gxf/triton/libgxf_triton_ext.so
[component_container-1] [INFO] [1683545244.566561048] [triton]: [NitrosNode] Loading graph to the optimizer
[component_container-1] [INFO] [1683545244.568408689] [triton]: [NitrosNode] Running optimization
[component_container-1] [INFO] [1683545244.624025549] [triton]: [NitrosNode] Obtaining graph IO group info from the optimizer
[component_container-1] [INFO] [1683545244.628675965] [triton]: [NitrosNode] Creating negotiated publishers/subscribers
[component_container-1] [INFO] [1683545244.628837947] [triton]: [NitrosPublisherSubscriberGroup] Pinning the component "triton_request/input" (type="nvidia::gxf::DoubleBufferReceiver") to use its compatible format only: "nitros_tensor_list_nchw_rgb_f32"
[component_container-1] [INFO] [1683545244.630889230] [triton]: [NitrosPublisherSubscriberGroup] Pinning the component "vault/vault" (type="nvidia::gxf::Vault") to use its compatible format only: "nitros_tensor_list_nhwc_rgb_f32"
[component_container-1] [INFO] [1683545244.631115386] [triton]: [NitrosNode] Starting negotiation...
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/triton' in container '/triton/triton_container'
[component_container-1] [INFO] [1683545245.631405477] [triton]: [NitrosNode] Starting post negotiation setup
[component_container-1] [INFO] [1683545245.631538294] [triton]: [NitrosNode] Getting data format negotiation results
[component_container-1] [INFO] [1683545245.631565391] [triton]: [NitrosSubscriber] Negotiation ended with no results
[component_container-1] [INFO] [1683545245.631581754] [triton]: [NitrosSubscriber] Use the compatible subscriber: topic_name="/tensor_pub", data_format="nitros_tensor_list_nchw_rgb_f32"
[component_container-1] [INFO] [1683545245.631688876] [triton]: [NitrosPublisher] Negotiation ended with no results
[component_container-1] [INFO] [1683545245.631705792] [triton]: [NitrosPublisher] Use only the compatible publisher: topic_name="/tensor_sub", data_format="nitros_tensor_list_nhwc_rgb_f32"
[component_container-1] [INFO] [1683545245.631729856] [triton]: [NitrosNode] Exporting the final graph based on the negotiation results
[component_container-1] [INFO] [1683545245.655225949] [triton]: [NitrosNode] Wrote the final top level YAML graph to "/workspaces/isaac_ros-dev/install/isaac_ros_triton/share/isaac_ros_triton/SSPKHGPSGC.yaml"
[component_container-1] [INFO] [1683545245.655273208] [triton]: [NitrosNode] Calling user's pre-load-graph callback
[component_container-1] [INFO] [1683545245.655277548] [triton]: [NitrosNode] Loading application
[component_container-1] [INFO] [1683545245.655282462] [triton]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_triton/share/isaac_ros_triton/SSPKHGPSGC.yaml'
[component_container-1] 2023-05-08 19:27:25.656 WARN gxf/std/yaml_file_loader.cpp@952: Using unregistered parameter 'dummy_rx' in component 'requester'.
[component_container-1] [INFO] [1683545245.656199292] [triton]: [NitrosNode] Linking Nitros pub/sub to the loaded application
[component_container-1] [INFO] [1683545245.656242672] [triton]: [NitrosNode] Calling user's post-load-graph callback
[component_container-1] [INFO] [1683545245.656248151] [triton]: In TritonNode postLoadGraphCallback().
[component_container-1] [INFO] [1683545245.656274709] [triton]: [NitrosContext] Initializing application...
[component_container-1] WARNING: infer_trtis_server.cpp:1219 NvDsTritonServerInit suggest to set model_control_mode:none. otherwise may cause unknow issues.
[component_container-1] I0508 11:27:25.745748 38287 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7fad20000000' with size 268435456
[component_container-1] I0508 11:27:25.745860 38287 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
[component_container-1] I0508 11:27:25.746443 38287 server.cc:563]
[component_container-1] +------------------+------+
[component_container-1] | Repository Agent | Path |
[component_container-1] +------------------+------+
[component_container-1] +------------------+------+
[component_container-1]
[component_container-1] I0508 11:27:25.746452 38287 server.cc:590]
[component_container-1] +---------+------+--------+
[component_container-1] | Backend | Path | Config |
[component_container-1] +---------+------+--------+
[component_container-1] +---------+------+--------+
[component_container-1]
[component_container-1] I0508 11:27:25.746456 38287 server.cc:633]
[component_container-1] +-------+---------+--------+
[component_container-1] | Model | Version | Status |
[component_container-1] +-------+---------+--------+
[component_container-1] +-------+---------+--------+
[component_container-1]
[component_container-1] I0508 11:27:25.767131 38287 metrics.cc:864] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3080
[component_container-1] I0508 11:27:25.767250 38287 metrics.cc:757] Collecting CPU metrics
[component_container-1] I0508 11:27:25.767315 38287 tritonserver.cc:2264]
[component_container-1] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container-1] | Option | Value |
[component_container-1] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container-1] | server_id | triton |
[component_container-1] | server_version | 2.26.0 |
[component_container-1] | server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace logging |
[component_container-1] | model_repository_path[0] | /tmp/models |
[component_container-1] | model_control_mode | MODE_EXPLICIT |
[component_container-1] | strict_model_config | 1 |
[component_container-1] | rate_limit | OFF |
[component_container-1] | pinned_memory_pool_byte_size | 268435456 |
[component_container-1] | cuda_memory_pool_byte_size{0} | 67108864 |
[component_container-1] | response_cache_byte_size | 0 |
[component_container-1] | min_supported_compute_capability | 6.0 |
[component_container-1] | strict_readiness | 1 |
[component_container-1] | exit_timeout | 30 |
[component_container-1] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container-1]
[component_container-1] [INFO] [1683545245.767601971] [triton]: [NitrosContext] Running application...
[component_container-1] I0508 11:27:25.768601 38287 model_lifecycle.cc:459] loading: peoplesemsegnet_shuffleseg:1
[component_container-1] I0508 11:27:25.790066 38287 tensorrt.cc:5442] TRITONBACKEND_Initialize: tensorrt
[component_container-1] I0508 11:27:25.790080 38287 tensorrt.cc:5452] Triton TRITONBACKEND API version: 1.10
[component_container-1] I0508 11:27:25.790082 38287 tensorrt.cc:5458] 'tensorrt' TRITONBACKEND API version: 1.10
[component_container-1] I0508 11:27:25.790083 38287 tensorrt.cc:5486] backend configuration:
[component_container-1] {"cmdline":{"auto-complete-config":"false","min-compute-capability":"6.000000","backend-directory":"/opt/tritonserver/backends","default-max-batch-size":"4"}}
[component_container-1] I0508 11:27:25.790251 38287 tensorrt.cc:5591] TRITONBACKEND_ModelInitialize: peoplesemsegnet_shuffleseg (version 1)
[component_container-1] I0508 11:27:25.790912 38287 tensorrt.cc:5640] TRITONBACKEND_ModelInstanceInitialize: peoplesemsegnet_shuffleseg (GPU device 0)
[component_container-1] I0508 11:27:25.805078 38287 tensorrt.cc:5678] TRITONBACKEND_ModelInstanceFinalize: delete instance state
[component_container-1] I0508 11:27:25.805101 38287 tensorrt.cc:5617] TRITONBACKEND_ModelFinalize: delete model state
[component_container-1] E0508 11:27:25.805110 38287 model_lifecycle.cc:596] failed to load 'peoplesemsegnet_shuffleseg' version 1: Internal: unable to create stream: the provided PTX was compiled with an unsupported toolchain.
[component_container-1] ERROR: infer_trtis_server.cpp:1057 Triton: failed to load model peoplesemsegnet_shuffleseg, triton_err_str:Invalid argument, err_msg:load failed for model 'peoplesemsegnet_shuffleseg': version 1 is at UNAVAILABLE state: Internal: unable to create stream: the provided PTX was compiled with an unsupported toolchain.;
[component_container-1]
[component_container-1] ERROR: infer_trtis_backend.cpp:54 failed to load model: peoplesemsegnet_shuffleseg, nvinfer error:NVDSINFER_TRITON_ERROR
[component_container-1] ERROR: infer_simple_runtime.cpp:33 failed to initialize backend while ensuring model:peoplesemsegnet_shuffleseg ready, nvinfer error:NVDSINFER_TRITON_ERROR
[component_container-1] ERROR: Error in createNNBackend() <infer_simple_context.cpp:76> [UID = 16]: failed to initialize triton simple runtime for model:peoplesemsegnet_shuffleseg, nvinfer error:NVDSINFER_TRITON_ERROR
[component_container-1] ERROR: Error in initialize() <infer_base_context.cpp:79> [UID = 16]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRITON_ERROR
[component_container-1] 2023-05-08 19:27:25.805 ERROR /workspaces/isaac_ros-dev/src/isaac_ros_dnn_inference/isaac_ros_triton/gxf/triton/inferencers/triton_inferencer_impl.cpp@326: Failure to initialize Inference Context for 'peoplesemsegnet_shuffleseg'
[component_container-1] 2023-05-08 19:27:25.805 ERROR gxf/std/entity_executor.cpp@541: Entity [SSPKHGPSGC_triton_request] must be in kStarted or kIdle or kTickPending or kTicking stage before stopping. Current state is StartPending
[component_container-1] 2023-05-08 19:27:25.805 ERROR gxf/std/entity_executor.cpp@203: Entity with eid 34 not found!
[component_container-1] [ERROR] [1683545245.805212937] [triton]: [NitrosPublisher] Vault ("vault/vault", eid=34) was stopped. The graph may have been terminated due to an error.
[component_container-1] 2023-05-08 19:27:25.805 WARN gxf/std/greedy_scheduler.cpp@235: Error while executing entity 16 named 'SSPKHGPSGC_triton_request': GXF_FAILURE
[component_container-1] terminate called after throwing an instance of 'std::runtime_error'
[component_container-1] what(): [NitrosPublisher] Vault ("vault/vault", eid=34) was stopped. The graph may have been terminated due to an error.
[ERROR] [component_container-1]: process has died [pid 38287, exit code -6, cmd '/opt/ros/humble/install/lib/rclcpp_components/component_container --ros-args -r __node:=triton_container -r __ns:=/triton'].
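The `unable to create stream: the provided PTX was compiled with an unsupported toolchain` error above typically means the host's CUDA driver is older than the CUDA toolkit used inside the container to build the engine. A minimal sketch of the version check, assuming you read the driver's CUDA version from `nvidia-smi` and the toolkit version from `nvcc --version` yourself (the helper below is illustrative, not part of this repo):

```python
def driver_supports_toolkit(driver_cuda: str, toolkit_cuda: str) -> bool:
    """True if the host driver's reported CUDA version is at least the toolkit version."""
    def as_tuple(version: str):
        return tuple(int(part) for part in version.split("."))
    return as_tuple(driver_cuda) >= as_tuple(toolkit_cuda)

# A driver reporting CUDA 11.4 cannot run PTX built with a newer 11.6 toolkit.
print(driver_supports_toolkit("11.4", "11.6"))  # False
print(driver_supports_toolkit("11.8", "11.6"))  # True
```

If the check fails, updating the host NVIDIA driver (or rebuilding the engine with a toolkit matching the driver) is the usual fix.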

Subscribe to multiple topics?

Hello,

I hope you are doing well. I am currently working with a TensorRT node in a ROS2 environment and would like it to receive input from multiple camera topics. I wanted to check whether it is possible for the same TensorRT node to subscribe to multiple topics and, if so, how to identify which input topic each output corresponds to.

Current Setup:

  • I have one TensorRT engine plan.
  • I have a TensorRT node that publishes output data to a topic named outputs.
  • There are two camera topics available: /cam1/color/image_raw and /cam2/color/image_raw.
  • I have a DNN image encoder node that encodes each stream as tensors, say /cam1/tensor and /cam2/tensor.

Objective:

  • To have the TensorRT node subscribe to both /cam1/tensor and /cam2/tensor.
  • To determine which input topic (/cam1/color/image_raw or /cam2/color/image_raw) the published output on the outputs topic corresponds to.

I would be grateful if you could provide guidance on how to achieve this. If this is not feasible, I would appreciate any recommendations for alternative approaches.

Thank you for your time and assistance.
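One common pattern (an assumption on my part, not something this repo documents) is to launch one TensorRT node instance per camera and push each instance's topics into that camera's namespace, so the output topic name itself identifies the source. A hypothetical helper sketching the remapping scheme; `remap_for_camera` and the topic names are illustrative:

```python
def remap_for_camera(camera_ns: str, base_topics: list) -> dict:
    """Build per-camera topic remappings so each TensorRT node instance
    publishes and subscribes under its camera's namespace
    (e.g. /tensor_sub -> /cam1/tensor_sub)."""
    return {topic: f"{camera_ns}/{topic.lstrip('/')}" for topic in base_topics}

for ns in ("/cam1", "/cam2"):
    print(remap_for_camera(ns, ["/tensor_pub", "/tensor_sub"]))
```

With this scheme, outputs arriving on /cam1/tensor_sub are unambiguously tied to /cam1/color/image_raw, at the cost of running (and loading the engine for) one node per camera.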

DOPE -Running Rviz2 on Jetson

Hello,

I'm following the instructions for Inference on DOPE using TensorRT on a Jetson Xavier NX, and have been successful through step 8.

However, step 9 is to run rviz2. I am running this from the docker pulled from isaac_ros_common, but the ROS2 build there is the base variant and does not appear to include rviz2.

I have looked upstream at the jetson-containers repo where the base image is created, but when I try to create a new base image that includes ROS2 desktop rather than base, the build fails; reading through some of the issues there, even Dusty was not able to get the full desktop ROS2 variant to build successfully.

For step 9 in the DOPE instructions where you mention running rviz2, did you get that to work on a Jetson? If so, can you let me know how?
Thanks!

README git repo paths need updates

Hi,
Starting at line 83 in README.md, the git repo paths should be updated as follows:

   cd your_ws/src
   git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference
   git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common

Thanks.
B.R.,
Eric

Native compiling error with CUDA 11.4 or 11.6, VPI 1.1.11 or 1.2, in any scenario

Hi, I receive this error with any combination of CUDA, VPI, cuDNN, and OpenCV versions I try. It happened to me on a Jetson NX a while ago too, always on Ubuntu 20.04.

colcon build && source install/setup.bash
[0.283s] WARNING:colcon.colcon_core.verb:Some selected packages are already built in one or more underlay workspaces:
'image_geometry' is in: /opt/ros/foxy
'image_transport' is in: /opt/ros/foxy
'rcpputils' is in: /opt/ros/foxy
'camera_calibration_parsers' is in: /opt/ros/foxy
'cv_bridge' is in: /opt/ros/foxy
'camera_info_manager' is in: /opt/ros/foxy
If a package in a merged underlay workspace is overridden and it installs headers, then all packages in the overlay must sort their include directories by workspace order. Failure to do so may result in build failures or undefined behavior at run time.
If the overridden package is used by another package in any underlay, then the overriding package in the overlay must be API and ABI compatible or undefined behavior at run time may occur.

If you understand the risks and want to override a package anyways, add the following to the command line:
--allow-overriding camera_calibration_parsers camera_info_manager cv_bridge image_geometry image_transport rcpputils

This may be promoted to an error in a future release of colcon-core.
Starting >>> rcpputils
Starting >>> image_transport
Starting >>> isaac_ros_test
Starting >>> isaac_ros_nvengine_interfaces
Starting >>> isaac_ros_common
Starting >>> nvblox_msgs
Starting >>> image_geometry
Starting >>> isaac_ros_apriltag_interfaces
Starting >>> isaac_ros_visual_slam_interfaces
Starting >>> vision_msgs
Starting >>> nvblox_isaac_sim
Finished <<< isaac_ros_test [1.48s]
Finished <<< nvblox_isaac_sim [1.46s]
Finished <<< isaac_ros_common [8.47s]
Finished <<< image_geometry [13.3s]
Finished <<< isaac_ros_nvengine_interfaces [13.3s]
Starting >>> isaac_ros_nvengine
Finished <<< nvblox_msgs [13.5s]
Starting >>> nvblox_ros
Starting >>> nvblox_nav2
Starting >>> nvblox_rviz_plugin
Finished <<< rcpputils [13.7s]
Starting >>> cv_bridge
Starting >>> camera_calibration_parsers
Finished <<< isaac_ros_apriltag_interfaces [15.0s]
Finished <<< isaac_ros_visual_slam_interfaces [17.5s]
Finished <<< vision_msgs [22.6s]
Finished <<< camera_calibration_parsers [9.09s]
Starting >>> camera_info_manager
Finished <<< image_transport [25.8s]
Finished <<< camera_info_manager [4.91s]
Starting >>> image_common
Finished <<< image_common [0.89s]
Finished <<< cv_bridge [16.0s]
Starting >>> isaac_ros_dnn_encoders
Starting >>> isaac_ros_image_proc
Starting >>> isaac_ros_stereo_image_proc
Starting >>> opencv_tests
Starting >>> vision_opencv
Finished <<< isaac_ros_nvengine [17.1s]
Starting >>> isaac_ros_tensor_rt
Starting >>> isaac_ros_dnn_inference_test
Finished <<< vision_opencv [1.37s]
Finished <<< opencv_tests [1.40s]
Finished <<< nvblox_nav2 [17.9s]
Finished <<< isaac_ros_tensor_rt [5.45s]
Finished <<< nvblox_rviz_plugin [22.8s]
Finished <<< isaac_ros_dnn_inference_test [9.20s]
Starting >>> isaac_ros_triton
Finished <<< isaac_ros_triton [4.91s]
--- stderr: isaac_ros_dnn_encoders
/usr/bin/ld: libdnn_image_encoder_node.so: undefined reference to `cv::dnn::dnn4_v20211220::blobFromImage(cv::InputArray const&, double, cv::Size const&, cv::Scalar_ const&, bool, bool, int)'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/dnn_image_encoder.dir/build.make:163: dnn_image_encoder] Error 1
make[1]: *** [CMakeFiles/Makefile2:80: CMakeFiles/dnn_image_encoder.dir/all] Error 2
make: *** [Makefile:141: all] Error 2

Failed <<< isaac_ros_dnn_encoders [15.7s, exited with code 2]
Aborted <<< isaac_ros_image_proc [22.9s]
Aborted <<< isaac_ros_stereo_image_proc [48.6s]
Aborted <<< nvblox_ros [1min 41s]

Summary: 23 packages finished [1min 55s]
1 package failed: isaac_ros_dnn_encoders
3 packages aborted: isaac_ros_image_proc isaac_ros_stereo_image_proc nvblox_ros
3 packages had stderr output: isaac_ros_dnn_encoders isaac_ros_image_proc isaac_ros_stereo_image_proc
8 packages not processed
igcs@igcs:~/isaac_ws$
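The undefined reference mentions OpenCV's versioned inline namespace `dnn4_v20211220`, which suggests the headers used at compile time and the libopencv_dnn found at link time come from different OpenCV builds. A small hypothetical helper for extracting the ABI tag from a failing symbol so it can be compared against the tag in your installed headers (the function name is illustrative):

```python
import re

def opencv_dnn_abi(symbol: str):
    """Extract the versioned inline namespace (e.g. 'dnn4_v20211220') from a
    demangled OpenCV dnn symbol; differing tags between the missing symbol and
    the installed library indicate a header/library version mismatch."""
    match = re.search(r"dnn\d+_v\d{8}", symbol)
    return match.group(0) if match else None

sym = "cv::dnn::dnn4_v20211220::blobFromImage(cv::InputArray const&, double)"
print(opencv_dnn_abi(sym))  # dnn4_v20211220
```

If the tags differ, pointing CMake at a single OpenCV installation (or rebuilding against the one the rest of the stack uses) usually resolves this class of link error.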
