
Comments (11)

XiangjiBU commented on May 22, 2024

Please see my multipleInferences.md again. I reverted the files and updated them. Now you can use different versions/models with separate gie folders without errors (see especially the Editing yoloPlugin.h section).

it works, THX !!!


XiangjiBU commented on May 22, 2024

config_infer_primary.txt

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0

custom-network-config=yolov3_person.cfg
model-file=yolov3_person_best.weights
model-engine-file=model_b1_gpu0_fp16_personv3.engine
labelfile-path=labels.txt

batch-size=1
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=4
maintain-aspect-ratio=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
pre-cluster-threshold=0.25


XiangjiBU commented on May 22, 2024

config_infer_secondary1.txt

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0

custom-network-config=custom_yolov4_helmet.cfg
model-file=custom_yolov4_helmet_best.weights
model-engine-file=model_b16_gpu0_fp16_helmet.engine
labelfile-path=labels_helmet.txt

batch-size=16
network-mode=2
num-detected-classes=3
interval=0
gie-unique-id=2
process-mode=2
#operate-on-gie-id=1
#operate-on-class-ids=0
network-type=0
cluster-mode=4
maintain-aspect-ratio=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
pre-cluster-threshold=0.25


XiangjiBU commented on May 22, 2024

deepstream_app_config.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4

num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=pgie/config_infer_primary.txt

[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=sgie1/config_infer_secondary1.txt

[tests]
file-loop=0
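
With the three files above in place, the pipeline can be started with the standard DeepStream reference application, run from the directory that contains deepstream_app_config.txt so that the relative pgie/ and sgie1/ paths in config-file= resolve correctly:

deepstream-app -c deepstream_app_config.txt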


marcoslucianops commented on May 22, 2024

I will test this today


marcoslucianops commented on May 22, 2024

Hi @XiangjiBU, sorry for the delay.

I found the problem and updated the repo.

See multipleInferences.md

Thanks.


XiangjiBU commented on May 22, 2024

Hi @XiangjiBU, sorry for the delay.

I found the problem and updated the repo.

See multipleInferences.md

Thanks.

I tried the new repo and configured it exactly following "multipleInferences.md", but it still doesn't work.
Can you show where the problem is?


marcoslucianops commented on May 22, 2024

I tried the new repo and configured it exactly following "multipleInferences.md", but it still doesn't work.

Can you show where the problem is?

Put all files (cfg/weights/labels) in the deepstream/sources/yolo directory (without the pgie/sgie folders) and use only one nvdsinfer_custom_impl_Yolo folder for all inference engines.

If it doesn't work, try to rebuild the model with this new folder.
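
For reference, rebuilding the custom library usually amounts to something like the following (CUDA_VER must match your installed CUDA toolkit; 10.2 is only an example for the DeepStream 5.0 path used above, and the .engine file names below come from the configs in this thread):

# rebuild libnvdsinfer_custom_impl_Yolo.so against your CUDA version (example value)
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo clean
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo

# delete the old serialized engines so they are regenerated with the new library on the next run
rm -f model_b1_gpu0_fp16_personv3.engine model_b16_gpu0_fp16_helmet.engine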


XiangjiBU commented on May 22, 2024

I tried the new repo and configured it exactly following "multipleInferences.md", but it still doesn't work.
Can you show where the problem is?

Put all files (cfg/weights/labels) in the deepstream/sources/yolo directory (without the pgie/sgie folders) and use only one nvdsinfer_custom_impl_Yolo folder for all inference engines.

If it doesn't work, try to rebuild the model with this new folder.

Thanks, I tried again, but I still see 2 issues:

  1. I put them in the yolo folder, but I find that the secondary gie detects plenty of bboxes. I tried setting "cluster-mode=2", but I still get plenty of bboxes. Can you help me figure out what happened?
  2. When the primary-gie and secondary-gie are different YOLO model versions, this repo does not seem to work (for example: pgie is a custom YOLOv4 and sgie1 is a custom YOLOv3). Did I do something wrong, or does this repo not support that?


marcoslucianops commented on May 22, 2024
  1. I put them in the yolo folder, but I find that the secondary gie detects plenty of bboxes. I tried setting "cluster-mode=2", but I still get plenty of bboxes. Can you help me figure out what happened?

cluster-mode sets which NMS mode will be used in DeepStream. In my code, the NMS function is added to nvdsparsebbox_Yolo.cpp for the YOLOv3 and YOLOv4 models. Using cluster-mode=2, you would add another NMS pass on top of the one already coded; therefore, it is better to use cluster-mode=4 (no clustering).

To decrease the number of bboxes, you need to increase pre-cluster-threshold, where 0 is 0% and 1.0 is 100% of the confidence required to show a bbox.
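
Concretely, in config_infer_secondary1.txt that means keeping cluster-mode=4 in [property] and raising the threshold in [class-attrs-all]; 0.6 below is only an illustrative value:

[property]
cluster-mode=4

[class-attrs-all]
pre-cluster-threshold=0.6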

  2. When the primary-gie and secondary-gie are different YOLO model versions, this repo does not seem to work (for example: pgie is a custom YOLOv4 and sgie1 is a custom YOLOv3).

I believe it will work because the code is the same for all models. It only differs in the kernels, where different functions are called for each model.

I have only tested with YOLOv4, but I will test other models in the future.


marcoslucianops commented on May 22, 2024

Hi @XiangjiBU

Please see my multipleInferences.md again. I reverted the files and updated them. Now you can use different versions/models with separate gie folders without errors (see especially the Editing yoloPlugin.h section).
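
For context, the "Editing yoloPlugin.h" step presumably gives each gie folder's compiled libnvdsinfer_custom_impl_Yolo.so a distinct TensorRT plugin name, so the two libraries can be loaded into the same process without their plugin registrations colliding. A rough sketch of the idea, assuming constant names similar to NVIDIA's objectDetector_Yolo sample (the exact identifiers in the repo may differ):

// pgie/nvdsinfer_custom_impl_Yolo/yoloPlugin.h (hypothetical excerpt; names are placeholders)
namespace {
    const char* YOLOLAYER_PLUGIN_VERSION {"1"};
    const char* YOLOLAYER_PLUGIN_NAME {"YoloLayer_TRT_PGIE"};  // unique name for the primary gie
}

// sgie1/nvdsinfer_custom_impl_Yolo/yoloPlugin.h would use a different name,
// e.g. "YoloLayer_TRT_SGIE1", so each engine binds to its own plugin.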

