
ROS 2 package for NVIDIA DeepStream applications on Jetson Platforms

License: MIT License



DeepStream_ROS2

ROS2 nodes for DeepStream applications.

NVIDIA Developer Blog

This work is based on sample applications from the DeepStream Python Apps project. The packages have been tested on NVIDIA Jetson AGX Xavier with Ubuntu 18.04, ROS Eloquent, DeepStream SDK 5.0 (or later) and TensorRT. The project accesses some files in the DeepStream 5.0 root location (/opt/nvidia/deepstream/deepstream/samples/).

This project includes ROS2 publisher nodes which take single or multiple video streams as input from a webcam or from file:

  1. single_stream node: This performs 2 inference tasks on a single video input:
  • Object Detection: Detects 4 classes of objects: Vehicle, Person, Road Sign, Two Wheeler.

  • Output of this inference is published on topic 'infer_detection'.

  • Attribute Classification: For objects of class 'Vehicle', 3 categories of attributes are identified: color, make and type.

  • Output of this inference is published on topic 'infer_classification'.

  2. multi_stream node: This takes multiple video files as input, performs the same inference tasks and publishes to topics multi_detection and multi_classification.

Sample ROS2 subscriber nodes have also been provided in subscriber_pkg, subscribing to the following topics:

Node                      Topic
sub_detection             infer_detection
sub_classification        infer_classification
sub_multi_detection       multi_detection
sub_multi_classification  multi_classification

Prerequisites

Ubuntu 18.04

Python 3.6

DeepStream SDK 5.0 or later

NumPy

OpenCV

vision_msgs

Gst Python v1.14.5 (should already be installed on Jetson)

If missing, install using the following commands:

sudo apt update

sudo apt install python3-gi python3-dev python3-gst-1.0 -y

Running the ROS2 nodes

  1. Clone this repo into the src folder inside your ROS2 workspace (creating a ROS2 workspace) using the following command:

git clone https://github.com/NVIDIA-AI-IOT/ros2_deepstream.git

The directory structure should look like this:

.
+- dev_ws
   +- src
      +- ros2_deepstream
         +- common
         +- config_files
            +- dstest1_pgie_config.txt (several other config files)
         +- single_stream_pkg
         +- multi_stream_pkg
         +- subscriber_pkg
            +- resource
            +- subscriber_pkg
            +- test
            +- package.xml
            +- setup.cfg
            +- setup.py     
  2. To build the package, navigate back to your workspace and run the following:

colcon build

  3. Source your main ROS 2 installation:

source /opt/ros/eloquent/setup.bash

  4. Then source your workspace by running the following command from the workspace root:

. install/setup.bash

  5. To run the single_stream publisher node, run the following command, specifying the input_source parameter. This command will take some time to start and print log messages to the console.

ros2 run single_stream_pkg single_stream --ros-args -p input_source:="/dev/video0"

This project has been tested using a Logitech C270 USB webcam to capture the camera stream as input. H.264/H.265 video streams can also be given as input, as shown later in this README.

  6. To run the subscribers, open separate terminals, navigate to your ROS 2 workspace and repeat step 4 in each.

sub_detection subscribes to output from detection inference.

ros2 run subscriber_pkg sub_detection

sub_classification subscribes to output from classification inference.

ros2 run subscriber_pkg sub_classification

To understand the application workflow better:

[application workflow diagram]

The pipeline uses a GStreamer tee element to branch out and perform different tasks after taking video input. In this example, we perform only two tasks but more tasks can be added to the pipeline easily.
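The branching topology can be sketched in gst-launch syntax. The helper below is a hypothetical illustration (not part of the package, which builds its pipeline programmatically with the GStreamer Python bindings): it assembles a description string where a tee fans one source out to several branches, each decoupled by a queue.

```python
def tee_pipeline(source, branches):
    """Build a gst-launch-style description: one source feeding
    several branches through a tee element (illustrative only)."""
    head = source + " ! tee name=t"
    # Each branch is pulled from the tee through a queue so the
    # branches can run independently without stalling each other.
    tails = ["t. ! queue ! " + b for b in branches]
    return " ".join([head] + tails)

# Hypothetical two-branch layout mirroring the detection/classification split:
desc = tee_pipeline(
    "v4l2src device=/dev/video0",
    ["nvinfer config-file-path=detect.txt ! fakesink",
     "nvinfer config-file-path=classify.txt ! fakesink"],
)
print(desc)
```

Adding a third task is then just a matter of appending another branch description to the list.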

An example output:

[example output image]

Message received by the node subscribing to topic infer_detection:

[vision_msgs.msg.Detection2D(header=std_msgs.msg.Header(stamp=builtin_interfaces.msg.Time(sec=0, nanosec=0), frame_id=''), results=[vision_msgs.msg.ObjectHypothesisWithPose(id='Car', score=0.4975374639034271, pose=geometry_msgs.msg.PoseWithCovariance(pose=geometry_msgs.msg.Pose(position=geometry_msgs.msg.Point(x=0.0, y=0.0, z=0.0), orientation=geometry_msgs.msg.Quaternion(x=0.0, y=0.0, z=0.0, w=1.0)), covariance=array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0.])))], bbox=vision_msgs.msg.BoundingBox2D(center=geometry_msgs.msg.Pose2D(x=733.5, y=70.3125, theta=0.0), size_x=627.0, size_y=303.75), source_img=sensor_msgs.msg.Image(header=std_msgs.msg.Header(stamp=builtin_interfaces.msg.Time(sec=0, nanosec=0), frame_id=''), height=0, width=0, encoding='', is_bigendian=0, step=0, data=[]), is_tracking=False, tracking_id='')]

The infer_detection topic publishes messages in the vision_msgs Detection2DArray type.
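A Detection2D bounding box is expressed as a center point plus a size; converting it to corner coordinates is a common first step downstream. A minimal stand-in in plain Python (not the actual vision_msgs classes), fed with the values from the example message above:

```python
def bbox_corners(center_x, center_y, size_x, size_y):
    """Convert a vision_msgs-style center/size box to
    (x_min, y_min, x_max, y_max) corner coordinates."""
    return (center_x - size_x / 2, center_y - size_y / 2,
            center_x + size_x / 2, center_y + size_y / 2)

# Values from the example message: center=(733.5, 70.3125), size=(627.0, 303.75)
print(bbox_corners(733.5, 70.3125, 627.0, 303.75))
```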

Message received by the node subscribing to topic infer_classification:

[vision_msgs.msg.ObjectHypothesis(id='blue', score=0.9575958847999573), vision_msgs.msg.ObjectHypothesis(id='bmw', score=0.6080179214477539), vision_msgs.msg.ObjectHypothesis(id='sedan', score=0.8021238446235657)]

The infer_classification topic publishes messages in the vision_msgs Classification2D type. These messages contain information about the color, make and type of detected cars along with their confidence scores.
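Since the classifier emits one hypothesis per attribute category, a subscriber can pair labels with attribute names by position. The helper below is illustrative only, uses plain (label, score) tuples instead of vision_msgs ObjectHypothesis objects, and assumes the color/make/type ordering seen in the sample output above:

```python
def describe_vehicle(hypotheses, attributes=("color", "make", "type")):
    """Map positional (label, score) pairs onto named attributes."""
    return {attr: {"label": label, "score": score}
            for attr, (label, score) in zip(attributes, hypotheses)}

# Values from the example infer_classification message (rounded):
sample = [("blue", 0.9576), ("bmw", 0.6080), ("sedan", 0.8021)]
print(describe_vehicle(sample))
```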

Multi input publisher node

For applications that take videos from multiple input sources, we have provided the multi_stream node. This takes multiple H.264/H.265 video streams as input and performs the same inference tasks (detection and classification). Output is published on topics multi_detection and multi_classification in Detection2DArray and Classification2D types respectively.

Run the multi_stream publisher using the following command (check that the workspace is sourced by following steps 3 and 4 above). This command will take some time to start and print log messages to the console.

ros2 run multi_stream_pkg multi_stream --ros-args -p input_sources:="['file://<absolute path to file1.mp4>', 'file://<absolute path to file2.mp4>']"

For instance, you can use some sample videos that come with the DeepStream installation:

ros2 run multi_stream_pkg multi_stream --ros-args -p input_sources:="['file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4', 'file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_qHD.mp4']"

The command above takes input from two sources. This can be modified to take input from one or more sources by specifying the input file names in the list input_sources.
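The input_sources list expects file:// URIs built from absolute paths. A small convenience helper (hypothetical, not part of the package) that generates them with pathlib, assuming the paths given are already absolute:

```python
from pathlib import Path

def to_uris(paths):
    """Convert absolute local video file paths to the file:// URIs
    expected by the input_sources parameter."""
    # Path.as_uri() raises ValueError for relative paths, which is a
    # useful guard: input_sources requires absolute locations.
    return [Path(p).as_uri() for p in paths]

print(to_uris([
    "/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4",
]))
```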

To run the sample subscribers, open separate terminals, navigate to your ROS 2 workspace and repeat step 4 above in each.

sub_multi_detection subscribes to topic multi_detection.

ros2 run subscriber_pkg sub_multi_detection

sub_multi_classification subscribes to topic multi_classification.

ros2 run subscriber_pkg sub_multi_classification

An example output:

[example output image]

Performance

Fps of stream 1 is 36.6
Fps of stream 0 is 36.6

Fps of stream 1 is 40.4
Fps of stream 0 is 40.0

For the multi_stream node, FPS was observed to be between 30-40 with one input video source, between 20-30 with two sources, and between 20-25 with three sources (with the Jetson AGX Xavier in 15 W power mode).

To see the rate at which data is being published on topic multi_detection, open a separate terminal and source it (Step 3 above). Make sure the publisher node is running in another terminal and run the following command:

ros2 topic hz multi_detection

Replace multi_detection with multi_classification to see the publishing rate on topic multi_classification.
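ros2 topic hz estimates the rate from the intervals between received messages. The same computation in plain Python, as a simplified stand-in for the tool (not its actual implementation):

```python
def average_rate(stamps):
    """Average publish rate in Hz from a list of arrival timestamps (seconds)."""
    if len(stamps) < 2:
        return 0.0
    # Mean inter-arrival time, inverted to get messages per second.
    deltas = [b - a for a, b in zip(stamps, stamps[1:])]
    return 1.0 / (sum(deltas) / len(deltas))

# Ten messages spaced 25 ms apart correspond to roughly 40 Hz.
print(average_rate([i * 0.025 for i in range(10)]))
```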

Sample average rate for multi_detection: 75.96 Hz

Sample average rate for inference: 46.599-118.048 Hz

Contact Us

Please let us know if you run into any issues here.

Related ROS 2 Projects

  • ros2_torch_trt : ROS 2 Real Time Classification and Detection
  • ros2_jetson_stats : ROS 2 package for monitoring and controlling NVIDIA Jetson Platform resources
  • ros2_trt_pose : ROS 2 package for "trt_pose": real-time human pose estimation on NVIDIA Jetson Platform

ros2_deepstream's People

Contributors

ak-nv, asawareebhide


ros2_deepstream's Issues

Import Error pyds

Device: Jetson Nano 4GB
OS: Ubuntu 20.04
Ros 2: Foxy
SDK: 6.0
Python: 3.8

When running the ros2 run single_stream_pkg single_stream --ros-args -p input_source:="/dev/video0" command I receive the following error message "ImportError: libpython3.6m.so.1.0: cannot open shared object file: No such file or directory". I fully installed the deepstream SDK and successfully ran the given examples. I also installed the deepstream python apps and ran that successfully.

The workspace is located in /home/ros2_ws/src/ros2_deepstream

Changing the input streamer or file path to ros2 node publisher

Hey all, and thank you for this incredible project!
I would like to know what changes need to be made in order to convert the node's input from a camera/mp4 file path to a topic from a node that publishes frames from a bag file.
Which class should be changed, or what kind of topic should I use? Are there other modifications I should take into consideration?
Best regards,
AK

~

Apologies for accidentally opening this issue.

No module named 'vision_msgs'

When I installed the dependencies and ran the command 'ros2 run single_stream_pkg single_stream --ros-args -p input_source:="/dev/video0"', an error occurred: No module named 'vision_msgs'.
What should I do?

in single_stream_class.py: bbox y coordinate wrong

In single_stream_class.py, line 156, I think the minus sign is wrong:

bounding_box.center.y = float(top - (height/2))

should be

bounding_box.center.y = float(top + (height/2))

Please, correct me, if I'm wrong.
Otherwise the package works fine for me.
Thanks for your awesome work!
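The fix the reporter suggests follows from image coordinates growing downward: the box center sits half the height below the top edge, not above it. As a plain-Python check (the top/left/width/height naming follows the issue; this is an illustrative helper, not the package's code):

```python
def bbox_center(left, top, width, height):
    """Center of a box given its top-left corner and size
    (image coordinates, where y grows downward)."""
    return (float(left + width / 2), float(top + height / 2))

# A 100x50 box whose top-left corner is at (10, 20) is centered at (60.0, 45.0).
print(bbox_center(10, 20, 100, 50))
```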

How to use custom trained model?

Hi,

Great work! How to use this repo with my own custom trained model? (Detection, Classification, etc) any examples?

Best,
Hooman

Error when running single_stream sample application in Docker container created using DockerFile.deepstream.ros2.eloquent

I've built a Docker container using this Dockerfile from the repo:
https://github.com/NVIDIA-AI-IOT/ros2_jetson/blob/main/docker/DockerFile.deepstream.ros2.eloquent

I then follow steps here (to run an example) inside the container:
https://github.com/NVIDIA-AI-IOT/ros2_deepstream

  1. cd /workspace/ros2_ws/src/ros2_deepstream/
  2. colcon build
  3. source /opt/ros/eloquent/setup.bash
  4. . install/setup.bash
  5. ros2 run single_stream_pkg single_stream --ros-args -p input_source:="/dev/video0"

Steps 1-4 run fine, but running step #5 I receive this error:

** (process:139): WARNING **: 18:44:29.567: Failed to load shared library 'libgstreamer-1.0.so.0' referenced by the typelib: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0)
Traceback (most recent call last):
File "/workspace/ros2_ws/src/ros2_deepstream/install/single_stream_pkg/lib/single_stream_pkg/single_stream", line 11, in
load_entry_point('single-stream-pkg==0.0.0', 'console_scripts', 'single_stream')()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 480, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2693, in load_entry_point
return ep.load()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2324, in load
return self.resolve()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2330, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/workspace/ros2_ws/src/ros2_deepstream/install/single_stream_pkg/lib/python3.6/site-packages/single_stream_pkg/single_stream.py", line 24, in
from single_stream_pkg.single_stream_class import InferencePublisher
File "/workspace/ros2_ws/src/ros2_deepstream/install/single_stream_pkg/lib/python3.6/site-packages/single_stream_pkg/single_stream_class.py", line 39, in
from gi.repository import GObject, Gst
File "", line 971, in _find_and_load
File "", line 955, in _find_and_load_unlocked
File "", line 656, in _load_unlocked
File "", line 626, in _load_backward_compatible
File "/usr/lib/python3/dist-packages/gi/importer.py", line 146, in load_module
dynamic_module = load_overrides(introspection_module)
File "/usr/lib/python3/dist-packages/gi/overrides/__init__.py", line 125, in load_overrides
override_mod = importlib.import_module(override_package_name)
File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/lib/python3/dist-packages/gi/overrides/Gst.py", line 58, in
class Bin(Gst.Bin):
File "/usr/lib/python3/dist-packages/gi/module.py", line 181, in __getattr__
interfaces = tuple(interface for interface in get_interfaces_for_object(info)
File "/usr/lib/python3/dist-packages/gi/module.py", line 105, in get_interfaces_for_object
interfaces.append(getattr(module, name))
File "/usr/lib/python3/dist-packages/gi/overrides/__init__.py", line 39, in __getattr__
return getattr(self.introspection_module, name)
File "/usr/lib/python3/dist-packages/gi/module.py", line 220, in __getattr__
wrapper = metaclass(name, bases, dict_)
File "/usr/lib/python3/dist-packages/gi/types.py", line 234, in __init__
register_interface_info(cls.info.get_g_type())
TypeError: must be an interface

This shared object file (libgstreamer-1.0.so.0) is located here (inside the container):
root@ubuntu:/# find . -name libgstreamer-1.0.so.0
./usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0
./usr/lib/aarch64-linux-gnu/tegra/libgstreamer-1.0.so.0

So I tried:

  • export LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0:$LD_LIBRARY_PATH
  • (receive same error as above)
  • export LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/tegra/libgstreamer-1.0.so.0:$LD_LIBRARY_PATH
  • (receive same error as above)

I'm not sure where to go from here. It seems the container is not built correctly to run code from the ros2_deepstream repo, even though that is the intended purpose of these containers?

Docker container build process fails on Xavier NX

On an Xavier NX, I've checked the repo out and run "sh docker_build.sh", but it fails on step #27 (RUN pip3 install pycuda --verbose) with the following error:

.
.
.
Updating cache with response from "https://files.pythonhosted.org/packages/bf/10/ff66fea6d1788c458663a84d88787bae15d45daa16f6b3ef33322a51fc7e/MarkupSafe-2.0.1.tar.gz"
Caching due to etag
Running setup.py (path:/tmp/pip-build-8glu0gil/MarkupSafe/setup.py) egg_info for package MarkupSafe
Running command python setup.py egg_info
Traceback (most recent call last):
File "", line 1, in
File "/tmp/pip-build-8glu0gil/MarkupSafe/setup.py", line 61, in
run_setup(True)
File "/tmp/pip-build-8glu0gil/MarkupSafe/setup.py", line 44, in run_setup
ext_modules=ext_modules if with_binary else [],
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 129, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.6/distutils/core.py", line 121, in setup
dist.parse_config_files()
File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 494, in parse_config_files
ignore_option_errors=ignore_option_errors)
File "/usr/lib/python3/dist-packages/setuptools/config.py", line 106, in parse_configuration
meta.parse()
File "/usr/lib/python3/dist-packages/setuptools/config.py", line 382, in parse
section_parser_method(section_options)
File "/usr/lib/python3/dist-packages/setuptools/config.py", line 355, in parse_section
self[name] = value
File "/usr/lib/python3/dist-packages/setuptools/config.py", line 173, in __setitem__
value = parser(value)
File "/usr/lib/python3/dist-packages/setuptools/config.py", line 430, in _parse_version
version = self._parse_attr(value)
File "/usr/lib/python3/dist-packages/setuptools/config.py", line 305, in _parse_attr
module = import_module(module_name)
File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 994, in _gcd_import
File "", line 971, in _find_and_load
File "", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'markupsafe'
Cleaning up...
Removing source in /tmp/pip-build-8glu0gil/pycuda
Removing source in /tmp/pip-build-8glu0gil/pytools
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-8glu0gil/MarkupSafe/
Exception information:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 353, in run
wb.build(autobuilding=True)
File "/usr/lib/python3/dist-packages/pip/wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 634, in _prepare_file
abstract_dist.prep_for_dist()
File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 129, in prep_for_dist
self.req_to_install.run_egg_info()
File "/usr/lib/python3/dist-packages/pip/req/req_install.py", line 439, in run_egg_info
command_desc='python setup.py egg_info')
File "/usr/lib/python3/dist-packages/pip/utils/__init__.py", line 725, in call_subprocess
% (command_desc, proc.returncode, cwd))
pip.exceptions.InstallationError: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-8glu0gil/MarkupSafe/
The command '/bin/sh -c pip3 install pycuda --verbose' returned a non-zero code: 1

Initially I thought maybe this was due to building on an Xavier, rather than an x86 platform. But the base image used in the Dockerfile is "nvcr.io/nvidia/deepstream-l4t:5.0.1-20.09-samples", which should be for arm64.
