Comments (11)
> Please see my multipleInferences.md again. I reverted the files and updated them. Now you can use different versions/models with separate gie folders without errors (see especially the Editing yoloPlugin.h section).

It works, THX!!!
from deepstream-yolo.
config_infer_primary.txt
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=yolov3_person.cfg
model-file=yolov3_person_best.weights
model-engine-file=model_b1_gpu0_fp16_personv3.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=4
maintain-aspect-ratio=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
pre-cluster-threshold=0.25
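One detail worth decoding in the config above: the magic number net-scale-factor=0.0039215697906911373 is, to within about 1e-9, simply 1/255. It rescales 8-bit pixel values [0, 255] into [0.0, 1.0] before they reach the network, matching how the Darknet models were trained. A quick sanity check (plain Python, no DeepStream needed):

```python
# net-scale-factor multiplies every input pixel before inference.
# The config value is (to ~1e-9) just 1/255, i.e. it normalizes
# 8-bit pixel values [0, 255] onto [0.0, 1.0].
CONFIG_VALUE = 0.0039215697906911373

scale = 1.0 / 255.0
print(scale)                                # 0.00392156862745098
print(abs(scale - CONFIG_VALUE) < 1e-8)     # True: same scale in practice
print(255 * scale)                          # a white pixel maps to 1.0
```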
config_infer_secondary1.txt
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=custom_yolov4_helmet.cfg
model-file=custom_yolov4_helmet_best.weights
model-engine-file=model_b16_gpu0_fp16_helmet.engine
labelfile-path=labels_helmet.txt
batch-size=16
network-mode=2
num-detected-classes=3
interval=0
gie-unique-id=2
process-mode=2
#operate-on-gie-id=1
#operate-on-class-ids=0
network-type=0
cluster-mode=4
maintain-aspect-ratio=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
pre-cluster-threshold=0.25
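Note the batch-size difference between the two configs: the primary runs with process-mode=1 on full frames (batch-size=1, one stream), while this secondary runs with process-mode=2 on object crops produced by the primary, so its batch fills with detected objects rather than frames. A rough sketch of what batch-size=16 means here (the object counts are made-up examples):

```python
import math

SGIE_BATCH = 16  # batch-size from config_infer_secondary1.txt

# In process-mode=2 the secondary GIE infers on cropped objects,
# so n detections from a frame need ceil(n / batch) engine passes.
for n_objects in (3, 16, 40):
    passes = math.ceil(n_objects / SGIE_BATCH)
    print(f"{n_objects} person crops -> {passes} inference pass(es)")
```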
deepstream_app_config.txt
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0
[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0
[sink0]
enable=1
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=pgie/config_infer_primary.txt
[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=sgie1/config_infer_secondary1.txt
[tests]
file-loop=0
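The wiring between the two inference stages lives in [primary-gie] and [secondary-gie0] above: the secondary's operate-on-gie-id must equal the primary's gie-unique-id, and operate-on-class-ids names a class index the primary actually emits (class 0, the single person class here). A minimal, self-contained sketch of that consistency check (configparser handles the INI-style syntax; the inline snippet copies the relevant sections from this thread's deepstream_app_config.txt):

```python
import configparser

# Inline copy of the relevant sections of deepstream_app_config.txt,
# so the sketch runs without the actual file.
APP_CONFIG = """
[primary-gie]
enable=1
gie-unique-id=1
config-file=pgie/config_infer_primary.txt

[secondary-gie0]
enable=1
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
config-file=sgie1/config_infer_secondary1.txt
"""

cfg = configparser.ConfigParser()
cfg.read_string(APP_CONFIG)

pgie_id = cfg.getint("primary-gie", "gie-unique-id")
sgie_target = cfg.getint("secondary-gie0", "operate-on-gie-id")

# The secondary GIE only runs on objects produced by the GIE whose
# unique id equals operate-on-gie-id, so these two must agree.
assert sgie_target == pgie_id, "secondary-gie0 is not wired to the primary"

# operate-on-class-ids=0: only classify detections of class 0
# (the primary model in this thread detects a single "person" class).
class_ids = [int(c) for c in cfg.get("secondary-gie0", "operate-on-class-ids").split(";")]
print("secondary runs on classes:", class_ids)
```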
I will test this today
Hi @XiangjiBU, sorry for the delay.
I found the problem and updated the repo.
Thanks.
> Hi @XiangjiBU, sorry for the delay.
> I found the problem and updated the repo.
> Thanks.

I tried the new repo, and my config follows "multipleInferences.md" exactly, but it still does not work.
Can you show me where the problem is?
> I tried the new repo, and my config follows "multipleInferences.md" exactly, but it still does not work.
> Can you show me where the problem is?

Put all the files (cfg/weights/labels) in the deepstream/sources/yolo directory (without pgie/sgie folders) and use only one nvdsinfer_custom_impl_Yolo folder for all inference engines.
If it doesn't work, try rebuilding the model with this new folder.
> I tried the new repo, and my config follows "multipleInferences.md" exactly, but it still does not work.
> Can you show me where the problem is?

> Put all the files (cfg/weights/labels) in the deepstream/sources/yolo directory (without pgie/sgie folders) and use only one nvdsinfer_custom_impl_Yolo folder for all inference engines.
> If it doesn't work, try rebuilding the model with this new folder.

Thanks, I tried again, but I still see two issues:
- I put everything in the yolo folder, but the secondary gie detects far too many bboxes. I tried setting "cluster-mode=2", but I still get plenty of bboxes. Can you help me figure out what happened?
- When the primary-gie and secondary-gie are different YOLO versions (for example, pgie is a custom YOLOv4 and sgie1 is a custom YOLOv3), this repo does not seem to work. Did I do something wrong, or does the repo indeed not support that?
> I put everything in the yolo folder, but the secondary gie detects far too many bboxes. I tried setting "cluster-mode=2", but I still get plenty of bboxes. Can you help me figure out what happened?

cluster-mode selects which NMS mode DeepStream uses. In my code, an NMS function is already added to nvdsparsebbox_Yolo.cpp for the YOLOv3 and YOLOv4 models. With cluster-mode=2 you would run another NMS after the coded one, so it is better to use cluster-mode=4.
To decrease the number of bboxes, increase pre-cluster-threshold, where 0 is 0% and 1.0 is 100% confidence required to show a bbox.
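The interplay of the two knobs above can be illustrated with a toy NMS pass (a sketch of the idea only, not the repo's actual nvdsparsebbox_Yolo.cpp code; box coordinates and confidences are invented): candidates below pre-cluster-threshold are dropped first, and NMS then suppresses overlapping survivors, so raising the threshold directly shrinks the number of bboxes.

```python
# Toy illustration of pre-cluster-threshold + NMS.
# Each box is (x1, y1, x2, y2, confidence).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, conf_threshold, iou_threshold=0.45):
    # 1) pre-cluster-threshold: drop low-confidence candidates.
    boxes = [b for b in boxes if b[4] >= conf_threshold]
    # 2) NMS: keep the best box, suppress boxes that overlap it.
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_threshold for k in kept):
            kept.append(b)
    return kept

dets = [(10, 10, 50, 50, 0.9),     # strong detection
        (12, 12, 52, 52, 0.6),     # near-duplicate, suppressed by NMS
        (80, 80, 120, 120, 0.3)]   # weak, distant detection
print(len(nms(dets, 0.25)))  # 2 boxes survive at threshold 0.25
print(len(nms(dets, 0.50)))  # raising the threshold leaves only 1
```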
> When the primary-gie and secondary-gie are different YOLO versions (for example, pgie is a custom YOLOv4 and sgie1 is a custom YOLOv3), this repo does not seem to work.

I believe it will work because the code is the same for all models; it only differs in the kernel, which calls different functions for each model.
I have tested only with YOLOv4, but I will run tests with the other models in the future.
Hi @XiangjiBU
Please see my multipleInferences.md again. I reverted the files and updated them. Now you can use different versions/models with separate gie folders without errors (see especially the Editing yoloPlugin.h section).