nndam / deepstream-face-recognition
Face detection -> alignment -> feature extraction with DeepStream
Hi, I tried to run this face detector model alone on a Jetson Xavier NX board. I changed the glibconfig.h include path to -I/usr/lib/aarch64-linux-gnu/glib-2.0/include and the custom parser compiled. Now I get the error below during engine creation:
$ cmake ..
-- The CXX compiler identification is GNU 9.4.0
-- The CUDA compiler identification is unknown
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:4 (project):
No CMAKE_CUDA_COMPILER could be found.
Tell CMake where to find the compiler by setting either the environment
variable "CUDACXX" or the CMake cache entry CMAKE_CUDA_COMPILER to the full
path to the compiler, or to the compiler name if it is in the PATH.
-- Configuring incomplete, errors occurred!
See also "/home/sensormatic/Neo/deepstream-face-recognition/plugins/nms/build/CMakeFiles/CMakeOutput.log".
See also "/home/sensormatic/Neo/deepstream-face-recognition/plugins/nms/build/CMakeFiles/CMakeError.log".
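(The message itself points at the fix: CMake cannot locate nvcc. On JetPack the compiler normally lives under /usr/local/cuda/bin, so passing it explicitly, e.g. cmake -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc .. or exporting CUDACXX=/usr/local/cuda/bin/nvcc before configuring, usually lets configuration proceed; the exact path is an assumption about the default CUDA install location.)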
Hi,
How can I render to the screen instead of using a file sink in main_ff.py?
Thanks.
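A minimal sketch of one way to do it, assuming main_ff.py builds its pipeline with Gst.ElementFactory.make and currently terminates in an encoder/filesink branch; the variable names below (pipeline, nvosd) are assumptions, not the repo's actual code:

from gi.repository import Gst

# Hypothetical fragment: swap the file-sink branch for an on-screen renderer.
# "nveglglessink" renders via EGL on both x86 and Jetson; on Jetson,
# "nv3dsink" is a common alternative.
sink = Gst.ElementFactory.make("nveglglessink", "nv-renderer")
sink.set_property("sync", 0)  # don't block rendering on the stream clock
pipeline.add(sink)

# Link the on-screen-display element straight to the renderer instead of
# encoder -> muxer -> filesink.
nvosd.link(sink)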
Can you share the ONNX file?
error: identifier "saturate" is undefined
bbox_data[index] = saturate(bbox_data[index]);
^
detected during:
instantiation of "void decodeBBoxes_kernel<T_BBOX,nthds_per_cta>(int, nvinfer1::plugin::CodeTypeSSD, __nv_bool, int, __nv_bool, int, int, __nv_bool, const T_BBOX *, const T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, nthds_per_cta=512U]" at line 309
instantiation of "pluginStatus_t decodeBBoxes_gpu<T_BBOX>(cudaStream_t, int, nvinfer1::plugin::CodeTypeSSD, __nv_bool, int, __nv_bool, int, int, __nv_bool, const void *, const void *, void *, __nv_bool) [with T_BBOX=float]" at line 350
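(For context: in the TensorRT OSS plugin sources, saturate is a small device helper that clamps a decoded coordinate into [0, 1], roughly template <typename T> __device__ T saturate(T v) { return max(T(0), min(T(1), v)); }; that one-liner is an assumption about the intended semantics, not a quote of the sources. The error usually means the kernel was compiled without the common header that defines it, so including that header, or adding an equivalent definition, typically resolves it.)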
After following all the steps, I ran both pipelines. They run without throwing any errors, but I don't get any detections either. Any guesses why?
I gave the correct LD_PRELOAD path and got this error while it was trying to build the TensorRT engine.
WARNING: [TRT]: onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: builtin_op_importers.cpp:5221: Attribute scoreBits not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::119] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/optimizationProfile.cpp::setDimensions::119, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; })
)
ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::119] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/optimizationProfile.cpp::setDimensions::119, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; })
)
ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::119] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/optimizationProfile.cpp::setDimensions::119, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; })
)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1432 Explicit config dims is invalid
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1115 Failed to configure builder options
Please let me know how I can solve this. Thanks!
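One common cause, offered as a guess rather than a confirmed diagnosis: the ONNX carries dynamic (-1) axes beyond the batch dimension, and nvinfer passes those straight into the TensorRT optimization profile, which rejects negative dims. If the model came from PyTorch, re-exporting with every dimension fixed avoids this; the input size and tensor names below are placeholders:

import torch

# Hypothetical static-shape re-export; `model` is the face detector and
# 640x640 is a placeholder input size. Omitting `dynamic_axes` keeps
# every dimension, including batch, fixed.
dummy = torch.zeros(1, 3, 640, 640)
torch.onnx.export(model, dummy, "model_static.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)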
I gave the path as LD_PRELOAD=..../plugins/nms/ and I get this error:
Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: 3: getPluginCreator could not find plugin: BatchedNMSCustomDynamic_TRT version: 1
ERROR: [TRT]: ModelImporter.cpp:771: While parsing node number 343 [BatchedNMSCustomDynamic_TRT -> "num_detections"]:
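(Worth checking: LD_PRELOAD takes the path of the shared object itself, not the directory containing it, so it should point at something like .../plugins/nms/build/libnms.so rather than .../plugins/nms/; the exact library filename here is an assumption about what the nms plugin builds.)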
I am using this source and it works very well with face alignment.
But I have a pipeline:
primary GIE (detects objects: person, car, ...) -> secondary GIE (face detection) -> secondary GIE (face encoder)
I want to move face alignment into preprocessing and insert it between the two secondaries. How can I do that? The deepstream-app example doesn't support this.
Also, why did you implement face alignment inside nvinfer rather than some other way?
Thanks!!!
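For what it's worth, one generic way to run alignment between the two secondaries is a buffer probe on the encoder SGIE's sink pad (DeepStream 6.x also offers the Gst-nvdspreprocess plugin with a custom library for exactly this kind of per-ROI preprocessing). A minimal sketch, assuming pyds bindings and that the face detector attaches landmarks as user meta; every name below is a placeholder, not this repo's API:

import pyds
from gi.repository import Gst

def align_probe(pad, info, _user_data):
    buf = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buf))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Read the detector's 5-point landmarks from the object's user
            # meta, build a similarity transform to the encoder's template,
            # and warp the face crop in place. The details depend entirely
            # on how the detector publishes its landmarks.
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach between the two secondaries, just before the face encoder:
sgie_encoder.get_static_pad("sink").add_probe(
    Gst.PadProbeType.BUFFER, align_probe, None)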
Hi, thanks for the awesome work. Can we get the facial landmarks from this model?
Hi, I tried to run inference with the code, but I am not sure which path this is in the Makefile, so I removed it and created the engine file. Then I got an error while running inference:
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed. )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:12.100863066 338608 0xd770f60 WARN nvinfer gstnvinfer.cpp:1357:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
Error: gst-stream-error-quark: Failed to queue input batch for inferencing (1): gstnvinfer.cpp(1357): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference
Exiting app
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed. )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:12.195473559 338608 0xd770f60 WARN nvinfer gstnvinfer.cpp:1357:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed. )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:12.216500136 338608 0xd770f60 WARN nvinfer gstnvinfer.cpp:1357:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed. )
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:12.236711767 338608 0xd770f60 WARN nvinfer gstnvinfer.cpp:1357:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
free(): invalid pointer
Aborted (core dumped)
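(A guess, given the rest of this thread: the engine embeds the custom BatchedNMSCustomDynamic_TRT plugin, so the plugin library also has to be preloaded via LD_PRELOAD at run time, not just while building the engine; an enqueue-time internal error in pluginV2DynamicExtRunner is consistent with the plugin being missing or mismatched against the library the engine was built with.)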
22 | RUN pip3 install --upgrade pip
23 | >>> RUN cd /opt/nvidia/deepstream/deepstream-6.1/sources/apps/deepstream_python_apps/bindings &&
24 | >>> mkdir build &&
25 | >>> cd build &&
26 | >>> cmake -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=8 -DPIP_PLATFORM=linux_x86_64 -DDS_PATH=/opt/nvidia/deepstream/deepstream-6.1 .. &&
27 | >>> make &&
28 | >>> pip3 install pyds-1.1.4-py3-none-linux_x86_64.whl
29 | RUN cd /opt/nvidia/deepstream/deepstream-6.1/sources/apps/deepstream_python_apps &&
--------------------
ERROR: failed to solve: process "/bin/sh -c cd /opt/nvidia/deepstream/deepstream-6.1/sources/apps/deepstream_python_apps/bindings && mkdir build && cd build && cmake -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=8 -DPIP_PLATFORM=linux_x86_64 -DDS_PATH=/opt/nvidia/deepstream/deepstream-6.1 .. && make && pip3 install pyds-1.1.4-py3-none-linux_x86_64.whl" did not complete successfully: exit code: 2
I have an x86 machine with an RTX 3060 GPU, and I have DeepStream 6.2 installed on it.