marcoslucianops / deepstream-yolo-pose

NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 application for YOLO-Pose models

License: MIT License

Makefile 6.20% C 41.69% C++ 16.54% Python 35.56%
deepstream nvidia nvidia-deepstream-sdk object-detection pose-estimation pytorch tensorrt ultralytics yolo yolov8


deepstream-yolo-pose's Issues

Python version (deepstream.py) throws TypeErrors for keypoints with negative coordinates

In some cases the estimated joint keypoints returned by the model have negative coordinates.

When that happens, the function parse_pose_from_meta() throws a TypeError while setting the circle_params (xc, yc) and the line_params (x1, y1, x2, y2), because these params may not be set to negative values.

My suggestion is to clamp the coordinates so they cannot be negative, e.g.:
xc = max(0, int((data[i * 3 + 0] - pad_x) / gain))
yc = max(0, int((data[i * 3 + 1] - pad_y) / gain))
instead of:
xc = int((data[i * 3 + 0] - pad_x) / gain)
yc = int((data[i * 3 + 1] - pad_y) / gain)

The same applies to the calculation of x1, y1, x2, y2.
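As an illustration only, the suggested clamping could be factored into a small helper (the name scaled_keypoint and the flat (x, y, confidence) layout are assumptions based on the snippet above, not the project's actual API):

```python
def scaled_keypoint(data, i, pad_x, pad_y, gain):
    """Rescale the i-th keypoint from model space to frame space and
    clamp to non-negative values, since the display-meta circle/line
    params reject negative coordinates. `data` is assumed to be a flat
    sequence of (x, y, confidence) triples, as in the snippet above."""
    xc = max(0, int((data[i * 3 + 0] - pad_x) / gain))
    yc = max(0, int((data[i * 3 + 1] - pad_y) / gain))
    return xc, yc
```

The same max(0, ...) guard would apply to the x1, y1, x2, y2 skeleton line endpoints.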

The engine only detects one class

Our organization uses a YOLOv8n model, which we converted to ONNX with DeepStream-Yolo, and we followed the YOLOv8-Pose usage steps in https://github.com/marcoslucianops/DeepStream-Yolo-Pose/blob/master/docs/YOLOv8_Pose.md to set up DeepStream, but only one class is detected.
Our process was:

  1. python3 export_yoloV8_pose.py -w yolov8s-pose.pt --dynamic --simplify
  2. Copy the ONNX file to the DeepStream folder.
  3. Build the engine from the ONNX:
    ./trtexec --onnx=yolov8npose.onnx --saveEngine=yolov8npose.engine
  4. Compile the nvdsinfer_custom_impl_Yolo plugin.
  5. Edit config_infer_primary_yoloV8_pose.txt:
    ...
    onnx-file=yolov8npose.onnx
    model-engine-file=yolov8npose.engine
    network-mode=0
    num-detected-classes=3
    network-type=3
    cluster-mode=4
    ...
    parse-bbox-func-name=NvDsInferParseYoloPose
    ...
    [class-attrs-all]
    pre-cluster-threshold=0.25
    topk=300
  6. Edit label.txt.
  7. Run the RTSP program.
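Worth noting as general background (not a confirmed diagnosis of this setup): the stock YOLOv8-Pose checkpoints are trained on COCO keypoints with a single class (person), so detecting only one class is the expected behavior for them, and the usual baseline config sets num-detected-classes=1. A sketch of the relevant [property] keys under that assumption:

```ini
[property]
onnx-file=yolov8npose.onnx
model-engine-file=yolov8npose.engine
network-mode=0
network-type=3
cluster-mode=4
# 1 for the stock person-only pose models; a custom multi-class pose
# model would need this (and the label file) to match its trained head.
num-detected-classes=1
parse-bbox-func-name=NvDsInferParseYoloPose
```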

Using webcam as source

Is it possible to use a webcam as the source? It is possible in DeepStream-Yolo by changing the deepstream-app config file to use a V4L2 camera, but I didn't find this possibility in this project.
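Not supported out of the box here; as an untested sketch, the URI source branch could be swapped for v4l2src. The hypothetical helper below only assembles a gst-launch-1.0 pipeline string for prototyping — the element names are the standard DeepStream plugins, but the caps and sink are assumptions that may need adjusting per camera and platform:

```python
def webcam_pipeline(device="/dev/video0", width=1280, height=720,
                    config="config_infer_primary_yoloV8_pose.txt"):
    """Build a gst-launch-1.0 pipeline string that uses a V4L2 webcam
    as the source instead of a file/RTSP URI. Hypothetical helper; the
    element chain mirrors a typical DeepStream inference pipeline."""
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,width={width},height={height} ! "
        "nvvideoconvert ! video/x-raw(memory:NVMM) ! "
        f"mux.sink_0 nvstreammux name=mux batch-size=1 "
        f"width={width} height={height} ! "
        f"nvinfer config-file-path={config} ! "
        "nvvideoconvert ! nvdsosd ! nveglglessink"
    )

print(webcam_pipeline())
```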

Segmentation fault (core dumped)

When I run the command ./deepstream -s file:///home/wyh/DeepStream-Yolo-Pose/bodypose.mp4 -c config_infer_primary_yoloV8_pose.txt -w 1280 -e 720 inside the deepstream 6.2 Docker container, it fails with:
Segmentation fault (core dumped)

Backend has maxBatchSize 1 whereas 2 has been requested, model_b2_gpu0_fp32.engine failed to match config params

  • Started from a .pt trained YOLOv5L 6.1 model.
  • The .onnx was exported with python3 export_yoloV5.py -w model.pt --simplify.
  • It works for batch size 1 but not for batch size 2.
  • The .engine file is generated: model_b2_gpu0_fp32.engine.
  • The environment is DeepStream 6.1.1 on a Jetson Orin NX.

The run fails with the messages below. Thank you!

0:00:07.082318833 29 0xffff7cd19490 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/model_b2_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 25200x4
2 OUTPUT kFLOAT scores 25200x1
3 OUTPUT kFLOAT classes 25200x1

0:00:07.232329468 29 0xffff7cd19490 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1841> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:07.234135304 29 0xffff7cd19490 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2018> [UID = 1]: deserialized backend context :/model_b2_gpu0_fp32.engine failed to match config params, trying rebuild
0:00:07.287486636 29 0xffff7cd19490 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
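For context, the "maxBatchSize 1 whereas 2 has been requested" warning means the serialized engine was built for batch 1 while nvinfer is configured for batch 2. A hedged guess at a fix (the --dynamic flag is the one documented by the DeepStream-Yolo exporters): re-export the ONNX with dynamic batch support, delete the stale model_b2_gpu0_fp32.engine, and let DeepStream rebuild it to match the configured batch size in the [property] group:

```ini
[property]
onnx-file=model.onnx
model-engine-file=model_b2_gpu0_fp32.engine
# Requested batch size; the engine is rebuilt to match only if the ONNX
# was exported with dynamic batch (e.g. export_yoloV5.py ... --dynamic).
batch-size=2
```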

Building the TensorRT Engine

ERROR: decodebin did not pick NVIDIA decoder plugin

1) Run:
~/work/yolo_deepstream/DeepStream-Yolo-Pose$ ./deepstream -s file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -c config_infer_primary_yoloV8_pose.txt

2) Error logs:
SOURCE: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
CONFIG_INFER: config_infer_primary_yoloV8_pose.txt
STREAMMUX_BATCH_SIZE: 1
STREAMMUX_WIDTH: 1920
STREAMMUX_HEIGHT: 1080
GPU_ID: 0
PERF_MEASUREMENT_INTERVAL_SEC: 5
JETSON: FALSE

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /home/work/yolo_deepstream/DeepStream-Yolo-Pose/yolov8s-pose.onnx_b1_gpu0_fp32.engine open error
0:00:03.835371860 17386 0x5629562a3600 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/home/work/yolo_deepstream/DeepStream-Yolo-Pose/yolov8s-pose.onnx_b1_gpu0_fp32.engine failed
0:00:03.836187181 17386 0x5629562a3600 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/home/work/yolo_deepstream/DeepStream-Yolo-Pose/yolov8s-pose.onnx_b1_gpu0_fp32.engine failed, try rebuild
0:00:03.836206038 17386 0x5629562a3600 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
0:01:19.504473333 17386 0x5629562a3600 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 1]: serialize cuda engine to file: /home/work/yolo_deepstream/DeepStream-Yolo-Pose/yolov8s-pose.onnx_b1_gpu0_fp32.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output0 56x8400

0:01:19.513500180 17386 0x5629562a3600 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8_pose.txt sucessfully

DEBUG: FPS of stream 1: 0.00 (0.00)
ERROR: decodebin did not pick NVIDIA decoder plugin
DEBUG: FPS of stream 1: 0.00 (0.00)
DEBUG: FPS of stream 1: 0.00 (0.00)
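As a hedged troubleshooting sketch (not a confirmed fix for this report): "decodebin did not pick NVIDIA decoder plugin" usually means the NVIDIA decoder element is not visible to GStreamer, often because of a stale per-user registry cache. Checking for the plugin and clearing the default cache location is a common first step:

```shell
# Check whether the NVIDIA V4L2 decoder element is registered.
gst-inspect-1.0 nvv4l2decoder || echo "nvv4l2decoder not found"
# Clear the per-user GStreamer registry cache so it is rebuilt on the
# next run (default cache path; harmless if it does not exist).
rm -rf ~/.cache/gstreamer-1.0
```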

Receiving an error while running demo on Jetson

Traceback (most recent call last):
File "deepstream.py", line 178, in tracker_src_pad_buffer_probe
parse_pose_from_meta(frame_meta, obj_meta)
File "deepstream.py", line 129, in parse_pose_from_meta
x1 = int((data[(skeleton[i][0] - 1) * 3 + 0] - pad_x) / gain)
IndexError: index 45 is out of bounds for axis 0 with size 0
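The "size 0" in the traceback suggests the object's keypoint array is empty. A defensive sketch (the helper name and the 17-keypoint COCO (x, y, conf) layout are assumptions, not the project's actual function) would skip objects whose keypoint data is missing before indexing with the skeleton table:

```python
def skeleton_lines(data, skeleton, pad_x, pad_y, gain):
    """Rescale skeleton line endpoints, skipping objects whose keypoint
    array is empty or shorter than 17 (x, y, conf) triples, which would
    otherwise raise IndexError as in the traceback above.
    Hypothetical helper for illustration only."""
    if data is None or len(data) < 17 * 3:
        return []  # no keypoints attached to this object; draw nothing
    lines = []
    for a, b in skeleton:  # skeleton entries are 1-based keypoint pairs
        x1 = int((data[(a - 1) * 3 + 0] - pad_x) / gain)
        y1 = int((data[(a - 1) * 3 + 1] - pad_y) / gain)
        x2 = int((data[(b - 1) * 3 + 0] - pad_x) / gain)
        y2 = int((data[(b - 1) * 3 + 1] - pad_y) / gain)
        lines.append(((x1, y1), (x2, y2)))
    return lines
```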

Error: no module "ultralytics.yolo"

export_yoloV8_pose.py error: the submodule name is written incorrectly in the import, which fails with: no module "ultralytics.yolo".
I changed the code to "from ultralytics.utils.torch_utils import select_device" and it runs successfully!

LLVM

My laptop still has more than 7 GB of free RAM, but I got this error.
Do you know why?
Thank you

What about multi-class estimation

I'm trying to use this model for multiple classes, but I get a core dump.
When I tested it on a model with only one class, it works.
What changes are needed in order to work with more than one class?
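For what it's worth (general nvinfer knowledge, not verified against this repo's custom parser): a multi-class model needs the config and label file to agree with the trained head, roughly along these lines. The file name labels.txt and class count 3 below are placeholders:

```ini
[property]
# Must match the number of classes the model was trained with.
num-detected-classes=3
# One label per line, in the same order as the training classes.
labelfile-path=labels.txt
```

Whether NvDsInferParseYoloPose itself handles multi-class pose output is a separate question; a mismatch between the parser's expected tensor layout and the model head could also explain the crash.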
