
zhouyuchong / face-recognition-deepstream

48 stars, 0 watchers, 10 forks, 582 KB

DeepStream app that uses RetinaFace and ArcFace for face recognition.

License: MIT License

Python 89.89% Makefile 2.55% C++ 7.56%
deepstream retinaface arcface face-recognition

face-recognition-deepstream's People

Contributors

zhouyuchong


face-recognition-deepstream's Issues

Where are these files? Please help me

Hello!
I want to do face recognition with DeepStream Python. I have built DeepStream-Python and created the YOLOv5, RetinaFace and ArcFace TensorRT engines just as you described.
But where are these files referenced in the config txt files?

  1. ../../models/retinaface/nvdsinfer_customparser/libnvdsinfer_custom_impl_retinaface.so in config_retinaface.txt
  2. ../../models/yolov5/yolov5s/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so in config_yolov5.txt
  3. and your test file; I don't have file:///opt/nvidia/deepstream/deepstream-6.0/sources/pythonapps/videos/samplevideo.h264

Can you provide me with more information for reference? Please help me, thank you.

[Question] File Missing (libplugin_rface.so) during Installation

During installation, I couldn't find a related file, "libplugin_rface.so".

Installation step 2
   Modify line 24 in src/kbds/app/face.py (actually src/kbdes/app/face/face.py)
   related code line

I downloaded the pre-trained weights and other files from the Google Drive link in the Pre-trained section of the README, but "libplugin_rface.so" isn't included.

Where could I find the file?

Or should I compile it myself from the retinaface repository linked in the README?

Where is the output?

Hello,

Thank you for creating this repository!

I can't find the output, and I'm not sure the script is actually reading the video. I get the following log (for one stream):


localhost:9092
--------- start app ------------------
pipeline ++++++++++
/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_python_apps/apps/face-recognition-deepstream/src/kbds/configs/face/config_retinaface.txt
/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_python_apps/apps/face-recognition-deepstream/src/kbds/configs/face/config_arcface.txt
localhost;9092;deepstream

Before thread++++++++++++++++++++++++++++
After thread++++++++++++++++++++++++++++
(True, 'success, state <enum GST_STATE_CHANGE_SUCCESS of type Gst.StateChangeReturn>')
-----------------------------------------------------------------------------
set index of src task-0 to 0
add set return : <enum GST_STATE_CHANGE_ASYNC of type Gst.StateChangeReturn>
Starting pipeline 

GstMessageError, gerror=(GError)NULL, debug=(string)"gstnvmsgbroker.cpp\(401\):\ legacy_gst_nvmsgbroker_start\ \(\):\ /GstPipeline:pipeline0/GstNvMsgBroker:nvmsg-broker:\012unable\ to\ connect\ to\ broker\ library"; ##########################
what is search index:    None
Traceback (most recent call last):
  File "/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_python_apps/apps/face-recognition-deepstream/test/../src/kbds/core/pipeline.py", line 124, in bus_call_abs
    search_index = search_index[0].split('-')[2][:-1]
TypeError: 'NoneType' object is not subscriptable
GstMessageError, gerror=(GError)NULL, debug=(string)"gstbasesink.c\(5367\):\ gst_base_sink_change_state\ \(\):\ /GstPipeline:pipeline0/GstNvMsgBroker:nvmsg-broker:\012Failed\ to\ start"; ##########################
what is search index:    None
Traceback (most recent call last):
  File "/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_python_apps/apps/face-recognition-deepstream/test/../src/kbds/core/pipeline.py", line 124, in bus_call_abs
    search_index = search_index[0].split('-')[2][:-1]
TypeError: 'NoneType' object is not subscriptable
(True, 'success')
src: <Stream task-0 task-0-Stream> 
source_bin: <__gi__.GstURIDecodeBin object at 0x7f5bd2a80800 (GstURIDecodeBin at 0x3d0c760)>
<enum GST_STATE_CHANGE_ASYNC of type Gst.StateChangeReturn>
++++++++++++++++++++++++++++++++++++++++++
doneeeeeeeeeeeeeeeeeeee
gstname= video/x-raw
pad name:  sink_0
Decodebin linked to pipeline
GstMessageError, gerror=(GError)NULL, debug=(string)"gstmultiqueue.c\(2065\):\ gst_multi_queue_loop\ \(\):\ /GstPipeline:pipeline0/GstURIDecodeBin:source-bin-0/GstDecodeBin:decodebin1/GstMultiQueue:multiqueue0:\012streaming\ stopped\,\ reason\ error\ \(-5\)", details=(structure)"details\,\ flow-return\=\(int\)-5\;"; ##########################
what is search index:    <re.Match object; span=(147, 160), match='source-bin-0/'>
After change search index:    0
delete finished
id: task-0,  error: gst-stream-error-quark: Internal data stream error. (1), delete.

**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 

The folder test/images, with its subdirectories (aligned and original), remains unchanged.

After this log there is no visualization or saved result. Please let me know what I'm doing wrong and how I can access the results.
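The repeated TypeError above comes from splitting a regex result that can be None: broker and sink errors carry no "source-bin-N/" token in their debug string, so re.search returns None before the split at pipeline.py line 124. A minimal defensive sketch (the function name and message format here are assumptions inferred from the log, not the repository's actual code):

```python
import re

def extract_source_index(debug_info):
    """Pull the source-bin index out of a GStreamer debug string,
    returning None when the message is not tied to a source bin."""
    match = re.search(r"source-bin-\d+/", debug_info or "")
    if match is None:
        return None  # e.g. nvmsgbroker connection errors have no source id
    # 'source-bin-0/' -> ['source', 'bin', '0/'] -> '0'
    return match.group(0).split('-')[2][:-1]

print(extract_source_index("/GstURIDecodeBin:source-bin-0/GstDecodeBin"))
print(extract_source_index("unable to connect to broker library"))
```

With a guard like this, the broker failure would be logged on its own instead of raising 'NoneType' object is not subscriptable.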

How to do alignment if we use retinaface as SGIE

Thank you for sharing your code.
I would like to use a different detector as the PGIE. The PGIE detects faces and passes each detected face to the SGIE, which runs the RetinaFace network. However, when I draw the RetinaFace output, the bbox and landmarks end up in the wrong place in the image. Do you know how to solve this problem?
My pipeline is PGIE (face, human, car, ...) -> SGIE (only face objects passed to the retinaface network) -> alignment -> recognition.
Thank you in advance !!!
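A common cause of misplaced SGIE boxes and landmarks: the secondary network sees only the cropped, resized face, so its outputs are in network-input pixels, not frame pixels, and must be scaled back by the crop size and offset by the parent object's corner. A minimal sketch of that mapping (the 640x480 input size is an assumption; it must match your RetinaFace engine, and maintain-aspect-ratio padding would add an extra correction not shown here):

```python
def map_sgie_point_to_frame(x, y, obj_rect, net_w=640, net_h=480):
    """Map a point predicted by the SGIE (in network-input pixels)
    back into full-frame coordinates.

    obj_rect is the PGIE object's (left, top, width, height) in the frame;
    net_w/net_h are the SGIE input dimensions (assumed values)."""
    left, top, w, h = obj_rect
    sx, sy = w / net_w, h / net_h  # the crop was resized to the net input
    return left + x * sx, top + y * sy

# Example: a landmark at (320, 240) on the net input, for a 200x150
# face crop whose top-left corner sits at (100, 50) in the frame.
print(map_sgie_point_to_frame(320, 240, (100, 50, 200, 150)))  # (200.0, 125.0)
```

The same scale-then-offset step applies to each landmark and to the bbox corners before drawing.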

[Question] Skip Alignment?

Hello;

I am trying to run "test/face_test_demo.py" without the alignment step. I followed all the steps in Usage and was able to execute the code.

Unfortunately, the output includes the error "failed to build network since there is no model file matched.":

isi@D-003:~/project/face-recognition-deepstream/test$ python3 face_test_demo.py 
--------- start app
Unknown or legacy key specified 'alignment' for group [property]
Unknown or legacy key specified 'user-meta' for group [property]
localhost;9092;deepstream
(True, 'success, state <enum GST_STATE_CHANGE_SUCCESS of type Gst.StateChangeReturn>')
-----------------------------------------------------------------------------
set index of src task-0 to 0
add set return : <enum GST_STATE_CHANGE_NO_PREROLL of type Gst.StateChangeReturn>
Starting pipeline 


**PERF:  {'stream0': 0.0, 'stream1': 0.0, 'stream2': 0.0, 'stream3': 0.0, 'stream4': 0.0, 'stream5': 0.0, 'stream6': 0.0, 'stream7': 0.0, 'stream8': 0.0, 'stream9': 0.0, 'stream10': 0.0, 'stream11': 0.0, 'stream12': 0.0, 'stream13': 0.0, 'stream14': 0.0, 'stream15': 0.0} 

WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.

**PERF:  {'stream0': 0.0, 'stream1': 0.0, 'stream2': 0.0, 'stream3': 0.0, 'stream4': 0.0, 'stream5': 0.0, 'stream6': 0.0, 'stream7': 0.0, 'stream8': 0.0, 'stream9': 0.0, 'stream10': 0.0, 'stream11': 0.0, 'stream12': 0.0, 'stream13': 0.0, 'stream14': 0.0, 'stream15': 0.0} 

0:00:13.096044384  7044 0xfffecc008c60 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 2]: deserialized trt engine from :/home/isi/pre_trained/arm/arcface-r100.engine
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT data            3x112x112       
1   OUTPUT kFLOAT prob            512x1x1         

0:00:13.263057216  7044 0xfffecc008c60 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 2]: Use deserialized engine model: /home/isi/pre_trained/arm/arcface-r100.engine
0:00:13.283955808  7044 0xfffecc008c60 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary-nvinference-engine> [UID 2]: Load new model:/home/isi/project/face-recognition-deepstream/src/kbds/configs/face/config_arcface.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/include/modules/NvMultiObjectTracker/NvTrackerParams.hpp, getConfigRoot() @line 52]: [NvTrackerParams::getConfigRoot()] !!![WARNING] Invalid low-level config file caused an exception, but will go ahead with the default config values
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/include/modules/NvMultiObjectTracker/NvTrackerParams.hpp, getConfigRoot() @line 52]: [NvTrackerParams::getConfigRoot()] !!![WARNING] Invalid low-level config file caused an exception, but will go ahead with the default config values
[NvMultiObjectTracker] Initialized
0:00:13.495362912  7044 0xfffecc008c60 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.

**PERF:  {'stream0': 0.0, 'stream1': 0.0, 'stream2': 0.0, 'stream3': 0.0, 'stream4': 0.0, 'stream5': 0.0, 'stream6': 0.0, 'stream7': 0.0, 'stream8': 0.0, 'stream9': 0.0, 'stream10': 0.0, 'stream11': 0.0, 'stream12': 0.0, 'stream13': 0.0, 'stream14': 0.0, 'stream15': 0.0} 

0:00:16.779126688  7044 0xfffecc008c60 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/isi/pre_trained/arm/retina_r50.engine
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT data            3x480x640       
1   OUTPUT kFLOAT prob            189001x1x1      

0:00:16.958094368  7044 0xfffecc008c60 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1841> [UID = 1]: Backend has maxBatchSize 1 whereas 16 has been requested
0:00:16.958145376  7044 0xfffecc008c60 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2018> [UID = 1]: deserialized backend context :/home/isi/pre_trained/arm/retina_r50.engine failed to match config params, trying rebuild
0:00:16.964277568  7044 0xfffecc008c60 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:18.778852512  7044 0xfffecc008c60 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:18.940563936  7044 0xfffecc008c60 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:18.941227200  7044 0xfffecc008c60 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:18.942414528  7044 0xfffecc008c60 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:18.942426016  7044 0xfffecc008c60 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Config file path: /home/isi/project/face-recognition-deepstream/src/kbds/configs/face/config_retinaface.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Traceback (most recent call last):
  File "/home/isi/project/face-recognition-deepstream/test/../src/kbds/core/pipeline.py", line 122, in bus_call_abs
    search_index = search_index[0].split('-')[2][:-1]
TypeError: 'NoneType' object is not subscriptable
[NvMultiObjectTracker] De-initialized
(True, 'success')
src: <Stream task-0 task-0-Stream> 
source_bin: <__gi__.GstURIDecodeBin object at 0xffff3dc4b840 (GstURIDecodeBin at 0x2000780)>
<enum GST_STATE_CHANGE_NO_PREROLL of type Gst.StateChangeReturn>

**PERF:  {'stream0': 0.0, 'stream1': 0.0, 'stream2': 0.0, 'stream3': 0.0, 'stream4': 0.0, 'stream5': 0.0, 'stream6': 0.0, 'stream7': 0.0, 'stream8': 0.0, 'stream9': 0.0, 'stream10': 0.0, 'stream11': 0.0, 'stream12': 0.0, 'stream13': 0.0, 'stream14': 0.0, 'stream15': 0.0} 

set index of src task-5 to 1
add set return : <enum GST_STATE_CHANGE_NO_PREROLL of type Gst.StateChangeReturn>
(True, 'success')
src: <Stream task-5 task-5-Stream> 
source_bin: <__gi__.GstURIDecodeBin object at 0xfffeed98bdc0 (GstURIDecodeBin at 0x2001ce0)>
<enum GST_STATE_CHANGE_SUCCESS of type Gst.StateChangeReturn>
Traceback (most recent call last):
  File "/home/isi/project/face-recognition-deepstream/test/../src/kbds/core/pipeline.py", line 498, in cb_decodebin_child_added
    obj.set_property("gpu_id", self.gpu_id)
TypeError: object of type `nvv4l2decoder' does not have property `gpu_id'
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
gstname= video/x-raw
pad name:  sink_1
Decodebin linked to pipeline

**PERF:  {'stream0': 0.0, 'stream1': 0.0, 'stream2': 0.0, 'stream3': 0.0, 'stream4': 0.0, 'stream5': 0.0, 'stream6': 0.0, 'stream7': 0.0, 'stream8': 0.0, 'stream9': 0.0, 'stream10': 0.0, 'stream11': 0.0, 'stream12': 0.0, 'stream13': 0.0, 'stream14': 0.0, 'stream15': 0.0} 

^CTraceback (most recent call last):
  File "face_test_demo.py", line 84, in <module>
    time.sleep(1)
KeyboardInterrupt
^CException ignored in: <module 'threading' from '/usr/lib/python3.8/threading.py'>
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 1388, in _shutdown
    lock.acquire()
KeyboardInterrupt: 

It looks like the code couldn't read the engine file (the pre-trained network weights) for the first network. That first network is RetinaFace, and the same weights actually work with the author's similar code in this repository.

I built the engine on my own hardware, following the suggested steps.

This is my configuration file for RetinaFace, "config_retinaface.txt":

[property]

gpu-id=0
#0=RGB, 1=BGR
model-color-format=1
model-engine-file=/home/isi/pre_trained/arm/retina_r50.engine
labelfile-path=/home/isi/project/face-recognition-deepstream/models/retinaface/labels.txt

process-mode=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
gie-unique-id=1
network-type=0
output-blob-names=prob
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
#cluster-mode=2
maintain-aspect-ratio=1
batch-size=32
num-detected-classes=1
output-tensor-meta=1

# custom detection parser
parse-bbox-func-name=NvDsInferParseCustomRetinaface
custom-lib-path=/home/isi/project/face-recognition-deepstream/models/retinaface/nvdsinfer_customparser/libnvdsinfer_custom_impl_retinaface.so
net-scale-factor=1.0
offsets=104.0;117.0;123.0
force-implicit-batch-dim=0
# number of consecutive batches to skip for inference
interval=0


[class-attrs-all]
# bbox threshold
pre-cluster-threshold=0.6
# nms threshold
# post-cluster-threshold=0.4
nms-iou-threshold=0.5

Did I miss any step?

Any suggestion will be appreciated.

Thank you.

Runtime environment:
   OS: Ubuntu 20.04 on AArch64 (Arm)
   DeepStream: 6.1
   Python: 3.8
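One likely reading of the warnings in the log above: the serialized engine has maxBatchSize 1, the pipeline requests 16, and the config asks for batch-size=32, so nvinfer discards the engine and tries to rebuild — which fails because no model file (.onnx/.wts) is configured, only a pre-built engine. A hedged sketch of the relevant config lines (the path is copied from the log; treating batch-size=1 as the fix is an assumption — the alternative is rebuilding the engine with a larger max batch):

```
[property]
# Engine was serialized with max batch 1; request the same batch size so
# nvinfer can use it as-is instead of attempting an impossible rebuild.
model-engine-file=/home/isi/pre_trained/arm/retina_r50.engine
batch-size=1
```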

CUDA error 222 at

@zhouyuchong When trying to create the YOLO engine file I get this error in DeepStream 6.1, but the same steps worked in DeepStream 6.0. Any idea?

CUDA error 222 at /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/face-recognition-deepstream/convetor/tensorrtx/yolov5/yololayer.cu:37yolov5: /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/face-recognition-deepstream/convetor/tensorrtx/yolov5/yololayer.cu:37: nvinfer1::YoloLayerPlugin::YoloLayerPlugin(int, int, int, int, const std::vector<Yolo::YoloKernel>&): Assertion `0' failed.
Aborted (core dumped)

Anonymous face

No matter which video I run, it always prints
THIS AN ANONYMOUS FACE, DISCARD
Is there a way to solve it?

Error while running make

The following error is generated when running make in ./face-recognition-deepstream/models/retinaface/nvdsinfer_customparser:

g++ -o libnvdsinfer_custom_impl_retinaface.so nvdsparse_retinaface.cpp -Wall -std=c++11 -Wno-error=deprecated-declarations -shared -fPIC -I/opt/nvidia/deepstream/deepstream-6.1/sources/includes -I/usr/local/cuda-11.6/include -shared -Wl,--start-group -lnvinfer_plugin -lnvinfer -lnvparsers -Wl,--end-group
nvdsparse_retinaface.cpp:23:10: fatal error: nvdsinfer_custom_impl.h: No such file or directory
23 | #include "nvdsinfer_custom_impl.h"
| ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
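The missing header nvdsinfer_custom_impl.h ships with the DeepStream SDK sources, so the usual cause is an include path that doesn't match the installed version. A minimal sketch of making the headers visible before re-running make (DS_ROOT and the directory layout are assumptions; adjust to your install):

```shell
# Point g++ at the DeepStream headers; the default prefix below is the
# versionless symlink NVIDIA installs -- verify it exists on your system.
DS_ROOT=${DS_ROOT:-/opt/nvidia/deepstream/deepstream}
export CPLUS_INCLUDE_PATH="$DS_ROOT/sources/includes${CPLUS_INCLUDE_PATH:+:$CPLUS_INCLUDE_PATH}"
echo "$CPLUS_INCLUDE_PATH"
```

Alternatively, edit the Makefile so its -I flag points at the directory that actually contains nvdsinfer_custom_impl.h.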

[Question] Frame per Second (FPS) Becomes Zero, **PERF: {'stream0': 0.0}

Hello;

I am running the test code, "face_test_demo.py". It appears to execute all the pipeline components (including Kafka), but my frame rate suddenly drops to zero.

Here is the runtime output when it happened (an excerpt; the full log was attached):

.
.
.
:00:43.000497888 15720      0x1ffa0a0 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/isi/pre_trained/arm/retina_r50.engine
0:00:43.004738880 15720      0x1ffa0a0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/isi/project/face-recognition-deepstream/src/kbds/configs/face/config_retinaface.txt sucessfully
(True, 'success')
src: <Stream task-5 task-5-Stream> 
source_bin: <__gi__.GstURIDecodeBin object at 0xffff461d78c0 (GstURIDecodeBin at 0xfffdb4018f00)>
<enum GST_STATE_CHANGE_SUCCESS of type Gst.StateChangeReturn>
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 279 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 279 
gstname= video/x-raw
pad name:  sink_0
/home/isi/project/face-recognition-deepstream/test/../src/kbds/core/pipeline.py:491: Warning: g_hash_table_lookup: assertion 'hash_table != NULL' failed
  sinkpad = self.streammux.get_request_pad(pad_name)
/home/isi/project/face-recognition-deepstream/test/../src/kbds/core/pipeline.py:491: Warning: g_hash_table_insert_internal: assertion 'hash_table != NULL' failed
  sinkpad = self.streammux.get_request_pad(pad_name)
Decodebin linked to pipeline

**PERF:  {'stream0': 14.8} 

======== delete src test =======
[NvMultiObjectTracker] De-initialized
delete finished

**PERF:  {'stream0': 14.79} 


**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 

I suspected a bandwidth limit on my network, so I reduced the image width and height in the "face.py" code (code line), but the situation persists. Neither the image nor the NPY file is created.

I also see the warning "g_hash_table_insert_internal: assertion 'hash_table != NULL' failed". If this is the main cause, how could I fix it? The code line doesn't indicate where the hash table lives.

What could the issue be? Any suggestion is appreciated.

Thank You.

Runtime Environment:
   OS: Ubuntu 20.04 on AArch64
   DeepStream 6.1 with Python 3.8
   NVIDIA Orin

Hi, I am unable to add your face landmarks (face-detect-deepstream) to this pipeline. Please help me!

I tried to include the face landmarks that I asked you about before, @zhouyuchong, into this pipeline, but there seems to be a data-flow error that I can't fix at all.
I don't see result_landmark return any results when I add:
queue4.link(tgie)
tgie.link(queue5)
queue5.link(tiler)
tiler.link(queue6)
main.py.txt
It even loses the bounding-box object when I set process-mode=2 (get-current). How can I fix these errors?
Screenshot from 2022-10-20 15-53-16

Can we use retinaface in a C app

@zhouyuchong I have created another issue; I just want to know how to add only retinaface to deepstream-test3.c. I added config_retinaface.txt to the folder and ran the app, and it gives the error below. I know we need to add libRetinafaceDecoder.so, but I don't know how to add it in C. Any help would be great.

ERROR: [TRT]: 3: getPluginCreator could not find plugin: Decode_TRT version: 1
ERROR: [TRT]: 1: [pluginV2Runner.cpp::load::291] Error Code 1: Serialization (Serialization assertion creator failed.Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::75] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps/models/retinaface/retina_r50_copy1.engine
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:00.885165654 187037 0x56126e524470 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 2]: build engine file failed
0:00:00.885186017 187037 0x56126e524470 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 2]: build backend context failed
0:00:00.885193779 187037 0x56126e524470 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 2]: generate backend failed, check config file settings
0:00:00.885475261 187037 0x56126e524470 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:00.885486402 187037 0x56126e524470 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: config_retinaface.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:dstest3-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: config_retinaface.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline
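On the Decode_TRT error above: TensorRT can only deserialize this engine if the custom decode plugin's IPluginCreator has been registered before loading, which happens when the plugin library itself is loaded into the process. One common workaround is preloading it when launching the C app; a sketch, where the library path is hypothetical and should be the .so you built:

```shell
# Preload the custom decode plugin so its IPluginCreator registers with
# TensorRT before the engine is deserialized. PLUGIN_SO is a placeholder.
PLUGIN_SO=${PLUGIN_SO:-/opt/plugins/libRetinafaceDecoder.so}
echo "LD_PRELOAD=$PLUGIN_SO ./deepstream-test3-app config_retinaface.txt"
```

Linking the app against the plugin library (or dlopen-ing it at startup) achieves the same registration without the environment variable.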

Is there a way to do face alignment in a pad probe?

You modified the gst-nvinfer source code, but couldn't the alignment be done and saved as a surface map instead?
I'm getting a memory access error when using pyds, and I wonder if I'm doing something wrong.

Can you please share the .wts for the arcface?

I am trying to reproduce your newly updated face recognition code exactly, but the problem is that insight-face has been updated a lot since then. Your model on Google Drive was generated for CUDA compute capability 7.5. It would be a great help if you could share the .wts file so I can generate the engine for compute capability 8.6.

Arcface inference results in l_user_meta (of "obj_user_meta_list") always 'None'

Hi,
I am using deepstream 6.3 for running the source code.
From the logs I can see that model loading and inference are working; however, the inference result from the ArcFace model, which is supposed to be in the variable "l_user_meta", is always "None".

The warnings that I am getting are given below:
Screenshot from 2024-02-23 16-37-51

I have added some log statements in the code to see the value of "l_user_meta", as shown below:
Screenshot from 2024-02-23 16-38-27

The log file content is given below
`INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x112x112
1 OUTPUT kFLOAT prob 512x1x1

gstnvtracker: Loading low-level lib at /app/default_tracker_63/libnvds_nvmultiobjecttracker_63.so
[NvMultiObjectTracker] Initialized
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x480x640
1 OUTPUT kFLOAT prob 189001x1x1

[NvMultiObjectTracker] De-initialized
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x112x112
1 OUTPUT kFLOAT prob 512x1x1

gstnvtracker: Loading low-level lib at /app/default_tracker_63/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x480x640
1 OUTPUT kFLOAT prob 189001x1x1

[NvMultiObjectTracker] De-initialized
--------- start app
localhost;9092;deepstream
(True, 'success, state ')


set index of src task-0 to 0
add set return :
Starting pipeline

Warning: gst-library-error-quark: NvInfer output-tensor-meta is enabled but init_params auto increase memory (auto-inc-mem) is disabled. The bufferpool will not be automatically resized. (5): gstnvinfer.cpp(887): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary-nvinference-engine

Warning: gst-library-error-quark: NvInfer output-tensor-meta is enabled but init_params auto increase memory (auto-inc-mem) is disabled. The bufferpool will not be automatically resized. (5): gstnvinfer.cpp(887): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference

**PERF: {'stream0': 0.0}

(True, 'success')
gstname= video/x-raw
pad name: sink_0
Decodebin linked to pipeline
src:
source_bin: <gi.GstURIDecodeBin object at 0x7f498795c380 (GstURIDecodeBin at 0x26b87d0)>

**PERF: {'stream0': 0.0}

set index of src task-5 to 1
add set return :
error, source state change failure
(True, 'success')
src:
source_bin: <gi.GstURIDecodeBin object at 0x7f49879601c0 (GstURIDecodeBin at 0x26b8c90)>

delete finished
id: task-5, error: gst-resource-error-quark: Resource not found. (3), delete.
Log : ------------Inside sgie_sink_pad_buffer_probe------------
Log : ------------Inside sgie_sink_pad_buffer_probe------------
Log : ------------Inside sgie_sink_pad_buffer_probe------------
detect face-1 with resolution 40x49
detect face-0 with resolution 18x24
Log : ------------Inside sgie_sink_pad_buffer_probe------------
Log -----------------l_user_meta = None
Log -----------------l_user_meta = None
id: None encounters error: gst-core-error-quark: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure. (4), delete.

**PERF: {'stream0': 3.37}

get status of task-5: (False, 'source id(task-5) not exist', None)
get status of task-1111: (False, 'source id(task-1111) not exist', None)

**PERF: {'stream0': 0.0}

======== delete src test =======
delete finished

**PERF: {'stream0': 0.0}

======== delete src test =======

**PERF: {'stream0': 0.0}

set index of src task-5 to 0
add set return :
error, source state change failure
Starting pipeline

Warning: gst-library-error-quark: NvInfer output-tensor-meta is enabled but init_params auto increase memory (auto-inc-mem) is disabled. The bufferpool will not be automatically resized. (5): gstnvinfer.cpp(887): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary-nvinference-engine

Warning: gst-library-error-quark: NvInfer output-tensor-meta is enabled but init_params auto increase memory (auto-inc-mem) is disabled. The bufferpool will not be automatically resized. (5): gstnvinfer.cpp(887): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference

**PERF: {'stream0': 0.0}

(True, 'success')
src:
source_bin: <gi.GstURIDecodeBin object at 0x7f498795cec0 (GstURIDecodeBin at 0x26b9150)>

delete finished
id: task-5, error: gst-resource-error-quark: Resource not found. (3), delete.

**PERF: {'stream0': 0.0}

======== delete src test =======

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

`
All the steps mentioned were followed, with the only difference being the DeepStream version; I am using 6.3.
Could that be the issue? Please help.

Embeddings Not Matching

I am extracting embeddings from the image data, but when I compare the resulting embeddings they are very similar to each other even when the faces are different.
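A frequent cause of "everything looks similar": comparing raw ArcFace outputs without L2-normalising them, or using a distance that ignores vector scale. A minimal cosine-similarity sketch (pure Python; the toy 2-d vectors stand in for the 512-d "prob" output, and the idea that preprocessing or normalisation is the culprit is an assumption to verify against your pipeline):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embeddings; dividing by the norms
    makes the score scale-invariant, which raw dot products are not."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

same = cosine_similarity([0.1, 0.9], [0.1, 0.9])      # identical direction
diff = cosine_similarity([0.1, 0.9], [0.9, -0.1])     # orthogonal direction
print(round(same, 3), round(diff, 3))  # 1.0 0.0
```

If cosine scores still cluster tightly for different faces, check that the SGIE preprocessing (color format, offsets, scale factor) matches what the ArcFace engine was trained with; mismatched preprocessing collapses all embeddings toward the same region.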

ERROR: [TRT]: 3: getPluginCreator could not find plugin: Decode_TRT version: 1

Hello @zhouyuchong, I tried to run the latest code with the latest infer plugin installed and I get this error. The old version of the code deserializes the model fine, but the new code does not.

NvMultiObjectTracker] Initialized
0:00:06.060145189 151171 0x7fb1c8006100 WARN                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5

**PERF:  {'stream0': 0.0, 'stream1': 0.0, 'stream2': 0.0, 'stream3': 0.0, 'stream4': 0.0, 'stream5': 0.0, 'stream6': 0.0, 'stream7': 0.0, 'stream8': 0.0, 'stream9': 0.0, 'stream10': 0.0, 'stream11': 0.0, 'stream12': 0.0, 'stream13': 0.0, 'stream14': 0.0, 'stream15': 0.0} 

ERROR: [TRT]: 3: getPluginCreator could not find plugin: Decode_TRT version: 1
ERROR: [TRT]: 1: [pluginV2Runner.cpp::load::291] Error Code 1: Serialization (Serialization assertion creator failed.Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::75] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps/models/retinaface/retina_r50.engine
0:00:06.617339261 151171 0x7fb1c8006100 WARN                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1888> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps/models/retinaface/retina_r50.engine failed
0:00:06.617380495 151171 0x7fb1c8006100 WARN                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1993> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps/models/retinaface/retina_r50.engine failed, try rebuild
0:00:06.617391388 151171 0x7fb1c8006100 INFO                 nvinfer gstnvinfer.cpp:683:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:06.618088857 151171 0x7fb1c8006100 ERROR                nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:06.618108014 151171 0x7fb1c8006100 ERROR                nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:06.618118811 151171 0x7fb1c8006100 ERROR                nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:06.618523031 151171 0x7fb1c8006100 WARN                 nvinfer gstnvinfer.cpp:883:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:06.618533300 151171 0x7fb1c8006100 WARN                 nvinfer gstnvinfer.cpp:883:gst_nvinfer_start:<primary-inference> error: Config file path: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps/apps/fr/face-recognition-deepstream/src/kbds/configs/face/config_retinaface.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
gst-resource-error-quark: Failed to create NvDsInferContext instance (1)
Traceback (most recent call last):
  File "/opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps/apps/fr/face-recognition-deepstream/test/../src/kbds/core/pipeline.py", line 122, in bus_call_abs
    search_index = search_index[0].split('-')[2][:-1]
TypeError: 'NoneType' object is not subscriptable
[NvMultiObjectTracker] De-initialized
(True, 'success')
src: <Stream task-0 task-0-Stream> 
source_bin: <__gi__.GstURIDecodeBin object at 0x7fb23ce07b80 (GstURIDecodeBin at 0x383a820)>
<enum GST_STATE_CHANGE_ASYNC of type Gst.StateChangeReturn>
gstname= video/x-raw
pad name:  sink_0
Decodebin linked to pipeline
gstname= audio/x-raw
gst-stream-error-quark: Internal data stream error. (1)
delete finished
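The `Decode_TRT` error above usually means TensorRT tried to deserialize the engine before the library that registers the `Decode_TRT` plugin creator was loaded. With the tensorrtx retinaface engine, that creator lives in the custom parser/plugin `.so`, so nvinfer must be pointed at it. A hedged sketch of the relevant lines in `config_retinaface.txt` (paths illustrative, following this repo's layout):

```ini
[property]
model-engine-file=../../models/retinaface/retina_r50.engine
# Loading this library registers the Decode_TRT IPluginCreator; without it,
# deserialization fails with "getPluginCreator could not find plugin".
custom-lib-path=../../models/retinaface/nvdsinfer_customparser/libnvdsinfer_custom_impl_retinaface.so
```

Note also that a serialized TensorRT engine is not portable across TensorRT versions: an engine built under one version generally fails to deserialize under another, so rebuilding the engine on the target machine is often the actual fix.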

What does FaceState mean?

Great work!

Can you elaborate on what State means here?

@unique
class FaceState(Enum):
    Init = 0
    State1 = 1
    State2 = 2
    State3 = 3
    State4 = 4

Not all of the objects detected by retinaface are present in the deepstream meta

I added this line to check the retinaface detections:

 
        if(r.score<=VIS_THRESH) continue;
        std::cout << r.score << "\n";

and also added a probe on the primary detector to print the bounding-box info. Some objects are missing from the metadata when compared with the print statements in the .so file

  def dummy_probe(self, pad, info, u_data):

        gst_buffer = info.get_buffer()
        if not gst_buffer:
            logger.error("Unable to get GstBuffer ")
            return Gst.PadProbeReturn.OK

        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        if not batch_meta:
            return Gst.PadProbeReturn.OK

        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            try:
                frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            except StopIteration:
                break  # cast failed; continuing here would loop forever on the same node
            try:
                source_id = frame_meta.source_id
                payload = {}
                payload['objects'] = []
                l_obj = frame_meta.obj_meta_list
                while l_obj is not None:
                    try:
                        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                    except StopIteration:
                        break  # cast failed; continuing here would loop forever on the same node

                    object_ = {}
                    object_["y"] = obj_meta.rect_params.top / self.frame_height
                    object_['x'] = obj_meta.rect_params.left / self.frame_width
                    object_['w'] = obj_meta.rect_params.width / self.frame_width
                    object_['h'] = obj_meta.rect_params.height / self.frame_height
                    object_['class_id'] = self.labels[obj_meta.class_id]
                    object_['trackingId'] = obj_meta.object_id
                    object_['confidence'] = obj_meta.confidence
                    logger.debug(object_)
                    payload['objects'].append(object_)

                    try:
                        l_obj = l_obj.next
                    except StopIteration:
                        break
                print('\n'.join([str(o['confidence']) for o in payload['objects']]))

            except Exception as e:
                logger.error(str(e))
            try:
                l_frame = l_frame.next
            except StopIteration:
                break

        return Gst.PadProbeReturn.OK
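The warning logged earlier by nvinfer ("Using NMS for clustering ... topK = 20 and NMS Threshold = 0.5") is one likely reason for the mismatch: nvinfer clusters detections after the custom parser runs, so boxes printed inside the `.so` can still be dropped by NMS or the topK cap before they reach the metadata. A hedged sketch of the `class-attrs-all` knobs in the nvinfer config that control this (values illustrative):

```ini
[class-attrs-all]
# Detections below pre-cluster-threshold are discarded before NMS runs;
# raise topk and/or relax the NMS threshold to keep more of the boxes
# that the custom parser emitted.
pre-cluster-threshold=0.4
nms-iou-threshold=0.5
topk=100
```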

Which DeepStream and TensorRT versions did you use in this?

Hello @zhouyuchong, I am trying to run the app and am getting this error:

ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::75] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps/models/arcface/arcface-r100.engine

My configuration
Docker
deepstream-6
Tensorrt 8.0
I even tried to generate the engine file as mentioned in the readme. I successfully generated the .wts file, but when I run make it gives me this error:

/tensorrtx/arcface/prelu.h(31): error: member function declared with "override" does not override a base class member
/tensorrtx/arcface/prelu.h(64): error: exception specification for virtual function "nvinfer1::PReluPlugin::detachFromContext" is incompatible with that of overridden function "nvinfer1::IPluginV2Ext::detachFromContext"

Any idea how to solve this or how to generate the engine file for my configuration?
