ml6team / deepstream-python
NVIDIA Deepstream 6.1 Python boilerplate
License: MIT License
I tried to run your code on a Jetson Xavier and got an error that I cannot understand:
Traceback (most recent call last):
  File "run.py", line 10, in <module>
    run_pipeline(args.source_uri)
  File "/mnt/ext/deepstream-python/deepstream/app/core.py", line 15, in run_pipeline
    output_format="mp4",
  File "/mnt/ext/deepstream-python/deepstream/app/pipeline.py", line 106, in __init__
    self._create_elements()
  File "/mnt/ext/deepstream-python/deepstream/app/pipeline.py", line 357, in _create_elements
    self.sink_bin = self._create_mp4_sink_bin()
  File "/mnt/ext/deepstream-python/deepstream/app/pipeline.py", line 284, in _create_mp4_sink_bin
    mp4_sink_bin.add_pad(Gst.GhostPad("sink", nvvidconv3.get_static_pad("sink")))
TypeError: GObject.__init__() takes exactly 0 arguments (2 given)
It's hard to understand why this error occurs. Any suggestions?
I'm still learning Docker, DeepStream, etc., so maybe this is very obvious or not relevant, but...
In these 3 files:
deepstream/app/config.py
deepstream/configs/pgies/pgie.txt
deepstream/configs/pgies/segmentation.txt
there is a reference to
/opt/nvidia/deepstream/deepstream-6.1
but if that were changed to:
/opt/nvidia/deepstream/deepstream
(which is a symbolic link to the versioned directory, e.g. 6.1 or 6.2)
then (most or all of) the scripts would also work with 6.2 (or 6.0).
Maybe this should be a comment rather than an issue?
(Sorry for one more issue, but I hope it's useful.)
To make the Dockerfile work with DS 6.2, three important changes are needed.
Obviously, the first line should change from:
FROM nvcr.io/nvidia/deepstream:6.1-devel
to
FROM nvcr.io/nvidia/deepstream:6.2-devel
But there are two more things:
Add this line after line 7:
ENV CUDA_MODULE_LOADING=LAZY
That's new in 6.2 and I presume it will be ignored in 6.1 and before
Add this line between current line 15 and 16:
RUN bash /opt/nvidia/deepstream/deepstream/user_additional_install.sh
That will install the extra libraries that are no longer part of the default package since 6.2.
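Taken together, a sketch of the relevant Dockerfile lines (the positions are approximate; the steps in between depend on the current Dockerfile):

```dockerfile
# 1. Bump the base image to DS 6.2
FROM nvcr.io/nvidia/deepstream:6.2-devel

# 2. New in 6.2 (presumably ignored by 6.1 and earlier)
ENV CUDA_MODULE_LOADING=LAZY

# ... existing build steps ...

# 3. Since 6.2, some libraries are no longer part of the default
#    package and must be installed explicitly
RUN bash /opt/nvidia/deepstream/deepstream/user_additional_install.sh
```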
Thanks for the great repo! I am a beginner with DeepStream, learning how to convert a PyTorch-based pipeline to a DeepStream-based pipeline to improve runtime performance on Jetson devices. I want to implement functions such as line-cross counting, similar to this video, and I also want to stream outputs to a WebSocket server, via Kafka or directly. I am struggling to integrate these functions into the DeepStream app and am kind of lost on where to start right now. Can you please guide me on this? Thanks in advance!
Hello,
I changed the pipeline so it works on a Jetson Nano. I integrated the yolov7-tiny model and osnet_x1_0_msmt17, both as TRT and ONNX files. However, when I run the code with the reid_pipeline function, it detects a person with one ID, but when the person leaves the camera view and re-enters a few seconds later, the ID changes. What should I do? When I examined the repo, I found that reid_search.py is under the scripts folder. Should I add that Python file to the main code in order to reassign the wrong ID while the code is running?
I just built the Docker image, and when I run the command I get this error:
docker run -it --gpus all -v ~/deepstream-python/output:/app/output deepstream python3 run.py 'file:///app/data/videos/sample_720p.h264'
INFO:app.pipeline.Pipeline:Playing from URI file:///app/data/videos/sample_720p.h264
(gst-plugin-scanner:7): GStreamer-WARNING **: 05:47:11.221: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
(gst-plugin-scanner:7): GStreamer-WARNING **: 05:47:11.223: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
INFO:app.pipeline.Pipeline:Creating Pipeline
INFO:app.pipeline.Pipeline:Creating Source bin
INFO:app.pipeline.Pipeline:Creating URI decode bin
INFO:app.pipeline.Pipeline:Creating Stream mux
INFO:app.pipeline.Pipeline:Creating PGIE
INFO:app.pipeline.Pipeline:Creating Tracker
INFO:app.pipeline.Pipeline:Creating Converter 1
INFO:app.pipeline.Pipeline:Creating Caps filter 1
INFO:app.pipeline.Pipeline:Creating Tiler
INFO:app.pipeline.Pipeline:Creating Converter 2
INFO:app.pipeline.Pipeline:Creating OSD
INFO:app.pipeline.Pipeline:Creating Queue 1
INFO:app.pipeline.Pipeline:Creating Converter 3
INFO:app.pipeline.Pipeline:Creating Caps filter 2
INFO:app.pipeline.Pipeline:Creating Encoder
INFO:app.pipeline.Pipeline:Creating Parser
INFO:app.pipeline.Pipeline:Creating Container
INFO:app.pipeline.Pipeline:Creating Sink
INFO:app.pipeline.Pipeline:Linking elements in the Pipeline: source-bin-00 -> stream-muxer -> primary-inference -> tracker -> convertor1 -> capsfilter1 -> nvtiler -> convertor2 -> onscreendisplay -> queue1 -> mp4-sink-bin
INFO:app.pipeline.Pipeline:Starting pipeline
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
INFO:app.pipeline.Pipeline:Decodebin child added: source
INFO:app.pipeline.Pipeline:Decodebin child added: decodebin0
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Resource not found. (3): gstfilesrc.c(532): gst_file_src_start (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstFileSrc:source:
No such file "/app/data/videos/sample_720p.h264"
INFO:app.pipeline.Pipeline:Exiting pipeline
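The final error above is just a missing file: /app/data/videos/sample_720p.h264 does not exist inside the container. One possible workaround (the host-side path `deepstream/data` is an assumption about the checkout layout, and the sample videos are tracked with Git LFS, so they must have been pulled first) is to mount the data directory into the container as well:

```shell
docker run -it --gpus all \
  -v ~/deepstream-python/output:/app/output \
  -v ~/deepstream-python/deepstream/data:/app/data \
  deepstream python3 run.py 'file:///app/data/videos/sample_720p.h264'
```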
Hi! Can I replace the cosine_metric model provided by DeepSORT with the OSNet model provided here?
I'm not quite sure, but I think there is an error in line 65 of deepstream/configs/pgies/pgie.txt
It reads:
model-engine-file=../opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
which should probably be:
model-engine-file=/opt/nvidia/deepstream/deepstream-/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
However, when I try this with DS 6.2 (I'm still new to DS), I don't have the engine file there either. So I'm a bit confused, but based on other lines in that same file, it seems likely to be a typo?
It would be very useful to have MQTT functionality to publish metadata in DeepStream.
Hardware Platform: RTX A4000
NVIDIA-SMI 535.54.03
CUDA Version: 12.2
OS: Ubuntu 22.04
Python: 3.10.6
I see a warning/error ("WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1482") when executing run.py. It complains about not being able to open "resnet10.caffemodel_b1_gpu0_fp16.engine", and that should be a problem, but everything ends OK: inference is done correctly, and so is the output. I expected it not to work, yet it works. Why? And how do I avoid this warning?
user@user-Default-string:~/flavio/deepstream-python/deepstream$ sudo docker run -it --gpus all -v ~/flavio/deepstream-python/output:/app/output 9d2546e50693 python3 run.py 'file:///app/data/videos/sample_720p.h264'
[sudo] password for user:
INFO:app.pipeline.Pipeline:Playing from URI file:///app/data/videos/sample_720p.h264
(gst-plugin-scanner:7): GStreamer-WARNING **: 18:43:36.486: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
(gst-plugin-scanner:7): GStreamer-WARNING **: 18:43:36.519: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
INFO:app.pipeline.Pipeline:Creating Pipeline
INFO:app.pipeline.Pipeline:Creating Source bin
INFO:app.pipeline.Pipeline:Creating URI decode bin
INFO:app.pipeline.Pipeline:Creating Stream mux
INFO:app.pipeline.Pipeline:Creating PGIE
INFO:app.pipeline.Pipeline:Creating Tracker
INFO:app.pipeline.Pipeline:Creating Converter 1
INFO:app.pipeline.Pipeline:Creating Caps filter 1
INFO:app.pipeline.Pipeline:Creating Tiler
INFO:app.pipeline.Pipeline:Creating Converter 2
INFO:app.pipeline.Pipeline:Creating OSD
INFO:app.pipeline.Pipeline:Creating Queue 1
INFO:app.pipeline.Pipeline:Creating Converter 3
INFO:app.pipeline.Pipeline:Creating Caps filter 2
INFO:app.pipeline.Pipeline:Creating Encoder
INFO:app.pipeline.Pipeline:Creating Parser
INFO:app.pipeline.Pipeline:Creating Container
INFO:app.pipeline.Pipeline:Creating Sink
INFO:app.pipeline.Pipeline:Linking elements in the Pipeline: source-bin-00 -> stream-muxer -> primary-inference -> tracker -> convertor1 -> capsfilter1 -> nvtiler -> convertor2 -> onscreendisplay -> queue1 -> mp4-sink-bin
INFO:app.pipeline.Pipeline:Starting pipeline
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1482 Deserialize engine failed because file path: /app/configs/pgies/../opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine open error
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
INFO:app.pipeline.Pipeline:Decodebin child added: source
INFO:app.pipeline.Pipeline:Decodebin child added: decodebin0
INFO:app.pipeline.Pipeline:Decodebin child added: h264parse0
INFO:app.pipeline.Pipeline:Decodebin child added: capsfilter0
INFO:app.pipeline.Pipeline:Decodebin child added: nvv4l2decoder0
INFO:app.pipeline.Pipeline:Decodebin pad added
INFO:app.pipeline.Pipeline:Frame Number=0 Number of Objects=0 Vehicle_count=0 Person_count=0
INFO:app.pipeline.Pipeline:Frame Number=1 Number of Objects=0 Vehicle_count=0 Person_count=0
INFO:app.pipeline.Pipeline:Frame Number=2 Number of Objects=0 Vehicle_count=0 Person_count=0
INFO:app.pipeline.Pipeline:Frame Number=3 Number of Objects=0 Vehicle_count=0 Person_count=0
INFO:app.pipeline.Pipeline:Frame Number=4 Number of Objects=0 Vehicle_count=0 Person_count=0
INFO:app.pipeline.Pipeline:Frame Number=5 Number of Objects=10 Vehicle_count=4 Person_count=6
INFO:app.pipeline.Pipeline:Frame Number=6 Number of Objects=10 Vehicle_count=3 Person_count=7
INFO:app.pipeline.Pipeline:Frame Number=7 Number of Objects=10 Vehicle_count=4 Person_count=6
INFO:app.pipeline.Pipeline:Frame Number=8 Number of Objects=8 Vehicle_count=4 Person_count=4
INFO:app.pipeline.Pipeline:Frame Number=9 Number of Objects=9 Vehicle_count=4 Person_count=5
INFO:app.pipeline.Pipeline:Frame Number=10 Number of Objects=10 Vehicle_count=3 Person_count=7
INFO:app.pipeline.Pipeline:Frame Number=11 Number of Objects=11 Vehicle_count=3 Person_count=8
INFO:app.pipeline.Pipeline:Frame Number=12 Number of Objects=11 Vehicle_count=3 Person_count=8
INFO:app.pipeline.Pipeline:Frame Number=13 Number of Objects=9 Vehicle_count=2 Person_count=7
INFO:app.pipeline.Pipeline:Frame Number=14 Number of Objects=7 Vehicle_count=2 Person_count=5
INFO:app.pipeline.Pipeline:Frame Number=15 Number of Objects=10 Vehicle_count=2 Person_count=8
... and so on.
Hi,
I am working with DeepStream and I found your work. It is really interesting, and I want to apply the ReID to my app. I have added it to the pipeline and I want to obtain the features; however, it always enters the if statement:
if user_meta.base_meta.meta_type != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
Why is this happening? Can you help me?
I attach the script and the configuration files I am using.
Thank you!
Mikel
config_infer_primary_yoloV5.txt
config_tracker_NvDCF_perf.txt
deepstream_demux_multi_in_multi_out_ReID_pipeline.txt
osnet.txt
Hi, thanks for your great work. I have a query regarding the multi-camera stream pipeline. In my case, I have a config file with 4 video sources to which I am applying object detection using YOLOv5. After detecting objects, I have to calculate distances and do other customization on the bounding boxes and acquired frames. How can I acquire this data in my Python script? Any suggestions, please? @joxis
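The usual way to get at this data in Python is a GStreamer buffer probe that walks the batch metadata. A sketch only, based on the standard deepstream_python_apps probe pattern: it runs only inside the DeepStream container where `pyds` is available, and the element name (`tracker`) is an assumption based on this repo's pipeline:

```python
import pyds
from gi.repository import Gst

def tracker_src_pad_buffer_probe(pad, info, _user_data):
    """Walk NvDsBatchMeta and collect per-object bounding boxes."""
    buf = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buf))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            rect = obj_meta.rect_params
            # Custom logic (distance calculations etc.) goes here:
            print(frame_meta.frame_num, obj_meta.object_id,
                  rect.left, rect.top, rect.width, rect.height)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach to e.g. the tracker's src pad after building the pipeline:
# tracker.get_static_pad("src").add_probe(
#     Gst.PadProbeType.BUFFER, tracker_src_pad_buffer_probe, None)
```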
Hi, hope you're doing well.
First and foremost, I am kind of new to GStreamer. Please give me a roadmap for mastering GStreamer, e.g. tutorials for Python-based GStreamer or any other good learning material. It would help me a lot, thanks.
Goal
def run_pipeline(video_uri: str):
    pipeline = Pipeline(
        video_uri=video_uri,
        pgie_config_path=os.path.join(CONFIGS_DIR, "pgies/yolov4_saftey.txt"),  # here i made changes <--------------
        tracker_config_path=os.path.join(CONFIGS_DIR, "trackers/nvdcf.txt"),
        output_format="mp4",
    )
    pipeline.run()
docker run -it --gpus all -v ~/deepstream-python/output:/app/output deepstream python3 run.py 'file:///app/data/videos/sample_720p.h264'
Error
/Desktop/farid/deepstream-python/deepstream$ docker run -it --gpus all -v ~/deepstream-python/output:/app/output deepstream python3 run.py 'file:///app/data/videos/sample_720p.h264'
INFO:app.pipeline.Pipeline:Playing from URI file:///app/data/videos/sample_720p.h264
(gst-plugin-scanner:7): GStreamer-WARNING **: 05:11:46.981: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
(gst-plugin-scanner:7): GStreamer-WARNING **: 05:11:46.983: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
INFO:app.pipeline.Pipeline:Creating Pipeline
INFO:app.pipeline.Pipeline:Creating Source bin
INFO:app.pipeline.Pipeline:Creating URI decode bin
INFO:app.pipeline.Pipeline:Creating Stream mux
INFO:app.pipeline.Pipeline:Creating PGIE
INFO:app.pipeline.Pipeline:Creating Tracker
INFO:app.pipeline.Pipeline:Creating Converter 1
INFO:app.pipeline.Pipeline:Creating Caps filter 1
INFO:app.pipeline.Pipeline:Creating Tiler
INFO:app.pipeline.Pipeline:Creating Converter 2
INFO:app.pipeline.Pipeline:Creating OSD
INFO:app.pipeline.Pipeline:Creating Queue 1
INFO:app.pipeline.Pipeline:Creating Converter 3
INFO:app.pipeline.Pipeline:Creating Caps filter 2
INFO:app.pipeline.Pipeline:Creating Encoder
INFO:app.pipeline.Pipeline:Creating Parser
INFO:app.pipeline.Pipeline:Creating Container
INFO:app.pipeline.Pipeline:Creating Sink
INFO:app.pipeline.Pipeline:Linking elements in the Pipeline: source-bin-00 -> stream-muxer -> primary-inference -> tracker -> convertor1 -> capsfilter1 -> nvtiler -> convertor2 -> onscreendisplay -> queue1 -> mp4-sink-bin
INFO:app.pipeline.Pipeline:Starting pipeline
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:30 Could not open lib: /app/data, error string: /app/data: cannot open shared object file: No such file or directory
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: /app/app/../configs/pgies/yolov4_saftey.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
INFO:app.pipeline.Pipeline:Exiting pipeline
My config file yolov4_saftey.txt:
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=/app/data/pgies/yolov4/export_retrain/labels.txt
model-engine-file=/app/data/pgies/yolov4/export_retrain/trt.engine
int8-calib-file=/app/data/pgies/yolov4/export_retrain/cal.bin
tlt-encoded-model = /app/data/pgies/yolov4/yolov4_resnet18_epoch_080.etlt
tlt-model-key=NGpmbHN0ZTNrZHFkOGRxNnFsbW9rbXNxbnU6Yzc5NWM5MjQtZDE1YS00NTYxLTg3YzgtNTU2MWVhNDg1M2M3
infer-dims=3;384;1248
force-implicit-batch-dim=1
maintain-aspect-ratio=1
batch-size=1
network-mode=0
uff-input-order=0
uff-input-blob-name=Input
num-detected-classes=6
interval=0
gie-unique-id=1
network-type=0
cluster-mode=3
process-mode=1
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/app/data/pgies/libnvds_infercustomparser_tao.so
[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
First of all, thanks for the work. I'm trying to run your code with DS 6.1.1 in a Docker container created from nvcr.io/nvidia/deepstream:6.1.1-triton.
When the output is RTSP it works properly, but when I try to write mp4 files it does not work:
INFO:app.pipeline.Pipeline:Creating Encoder
ERROR:app.pipeline.Pipeline:Unable to create Encoder
ERROR:app.pipeline.Pipeline:If the following error is encountered:
/usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
Preload the offending library:
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1
Traceback (most recent call last):
  File "run.py", line 10, in <module>
    run_pipeline(args.source_uri)
  File "/root/deepstream-python/deepstream/app/core.py", line 11, in run_pipeline
    pipeline = Pipeline(
  File "/root/deepstream-python/deepstream/app/pipeline.py", line 106, in __init__
    self._create_elements()
  File "/root/deepstream-python/deepstream/app/pipeline.py", line 357, in _create_elements
    self.sink_bin = self._create_mp4_sink_bin()
  File "/root/deepstream-python/deepstream/app/pipeline.py", line 266, in _create_mp4_sink_bin
    encoder.set_property("bitrate", 33000000)
AttributeError: 'NoneType' object has no attribute 'set_property'
The error is not explicit. I think I'm missing some codec or something like that in my container; maybe you can point out what I'm missing.
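One thing worth checking, as a guess rather than a confirmed fix: starting with the 6.1.1 images, NVIDIA removed several audio/video codec packages from the DeepStream containers and ships a script to reinstall them, which would explain an encoder element failing to be created (the exact encoder name used by the pipeline may differ). Inside the container:

```shell
# Does the H.264 encoder element exist at all?
gst-inspect-1.0 nvv4l2h264enc

# If not, reinstall the packages dropped from the default image
# (script path as documented by NVIDIA):
bash /opt/nvidia/deepstream/deepstream/user_additional_install.sh
```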
Hi! I have this problem:
Git LFS: (0 of 8 files) 0 B / 246.55 MB
batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
error: failed to fetch some objects from 'https://github.com/ml6team/deepstream-python.git/info/lfs'
Can you help me?
Hardware Platform: RTX A4000
NVIDIA-SMI 535.54.03
CUDA Version: 12.2
OS: Ubuntu 22.04
Python: 3.10.6
I am trying to create the Docker image from the Dockerfile, and I kept the original base image (FROM nvcr.io/nvidia/deepstream:6.1-devel). It seems that when compiling the Python bindings, there is an error:
/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings/src/bindnvosd.cpp:36:37:
error: 'MODE_NONE' was not declared in this scope; did you mean 'pydsdoc::NvOSD::NvOSD_Mode::MODE_NONE'?
   36 |         .value("MODE_NONE", MODE_NONE, pydsdoc::NvOSD::NvOSD_Mode::MODE_NONE)
      |                             ^~~~~~~~~
      |                             pydsdoc::NvOSD::NvOSD_Mode::MODE_NONE
The complete log is in this link: https://drive.google.com/file/d/1k8p9qYR1NwJagE_0manqSacghIO_KIek/view?usp=sharing.
How can I get around this?
Traceback (most recent call last):
  File "run.py", line 10, in <module>
    run_pipeline(args.source_uri)
  File "/home/ubuntu/deepstream-python/deepstream/app/core.py", line 11, in run_pipeline
    pipeline = Pipeline(
  File "/home/ubuntu/deepstream-python/deepstream/app/pipeline.py", line 106, in __init__
    self._create_elements()
  File "/home/ubuntu/deepstream-python/deepstream/app/pipeline.py", line 357, in _create_elements
    self.sink_bin = self._create_mp4_sink_bin()
  File "/home/ubuntu/deepstream-python/deepstream/app/pipeline.py", line 284, in _create_mp4_sink_bin
    mp4_sink_bin.add_pad(Gst.GhostPad("sink", nvvidconv3.get_static_pad("sink")))
TypeError: GObject.__init__() takes exactly 0 arguments (2 given)
Hi, thanks for the great work.
I ran the pipeline on a Jetson Xavier NX with DeepStream 6.1.1. The results are weird: in every frame there are 8-10 persons, but ReID is generated for only 3-4 of them. I tried both NvDCF and DeepSORT with osnet_x0_25_msmt17.