
Error reading layer bin · tkdnn · CLOSED (11 comments)

ceccocats commented on June 30, 2024
Error reading layer bin


Comments (11)

zjZSTU commented on June 30, 2024

Hi, when it prints:
Not supported field ...
it means that the field in the cfg is not taken into account: it does not change the structure of the network, and therefore does not change the dimension of the weights to load.
Not all cfg options are supported, see DarknetParser.cpp

hi @ceccocats, when I use the yolov4 pretrained weights from the YOLOv4 model zoo, the "Not supported" messages also appear, but everything afterwards works fine:

$ ./test_yolo4
Not supported field: batch=1
Not supported field: subdivisions=1
Not supported field: momentum=0.949
Not supported field: decay=0.0005
Not supported field: angle=0
Not supported field: saturation = 1.5
Not supported field: exposure = 1.5
Not supported field: hue=.1
Not supported field: learning_rate=0.00261
Not supported field: burn_in=1000
Not supported field: max_batches = 500500
Not supported field: policy=steps
Not supported field: steps=400000,450000
Not supported field: scales=.1,.1
Not supported field: mosaic=1
New NETWORK (tkDNN v0.5, CUDNN v8)

but when I use my own trained weights, it doesn't work:

$ ./test_yolo4
Not supported field: batch=1
Not supported field: subdivisions=1
Not supported field: momentum=0.949
Not supported field: decay=0.0005
Not supported field: angle=0
Not supported field: saturation = 1.5
Not supported field: exposure = 1.5
Not supported field: hue=.1
Not supported field: learning_rate=0.00261
Not supported field: burn_in=1000
Not supported field: max_batches = 500500
Not supported field: policy=steps
Not supported field: steps=400000,450000
Not supported field: scales=.1,.1
Not supported field: mosaic=1
New NETWORK (tkDNN v0.5, CUDNN v8)
Reading weights: I=3 O=32 KERNEL=3x3x1
Reading weights: I=32 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=32 KERNEL=1x1x1
Reading weights: I=32 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=128 O=64 KERNEL=1x1x1
Reading weights: I=64 O=128 KERNEL=3x3x1
Reading weights: I=128 O=64 KERNEL=1x1x1
Reading weights: I=128 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=256 O=128 KERNEL=1x1x1
...
...
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=256 O=255 KERNEL=1x1x1
Error reading file yolo4/layers/c138.bin with n of float: 65280 seek: 0 size: 261120

/home/user/software/tkDNN/src/utils.cpp:58
Aborting...

thank you for your help


ceccocats commented on June 30, 2024

Hi, when it prints:
Not supported field ...
it means that the field in the cfg is not taken into account: it does not change the structure of the network, and therefore does not change the dimension of the weights to load.
Not all cfg options are supported, see DarknetParser.cpp
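
For context, this is roughly what those warnings amount to (a minimal illustrative sketch, not the actual code in DarknetParser.cpp; the key list here is made up for the example): fields that only matter for training are reported and skipped, so they never affect the exported weight sizes.

#include <iostream>
#include <set>
#include <string>

// Illustrative only: a cfg key the parser does not handle is reported and
// ignored; it does not alter the network structure or the number of weights
// that will later be read from the layer .bin files.
void parseCfgField(const std::string &key, const std::string &value) {
    static const std::set<std::string> structural = {
        "filters", "size", "stride", "pad", "activation", "batch_normalize",
        "classes", "num", "mask", "anchors", "layers"};
    if (structural.count(key) == 0) {
        std::cout << "Not supported field: " << key << "=" << value << "\n";
        return;  // harmless: training-only fields like momentum, decay, mosaic...
    }
    // ... otherwise update the layer being built ...
}

int main() {
    parseCfgField("momentum", "0.949");  // -> "Not supported field: momentum=0.949"
    parseCfgField("filters", "255");     // structural: handled, no warning
}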


ASONG0506 commented on June 30, 2024

Hi, I met a similar problem.
I trained my yolov4 model on my own dataset with 10 classes, then converted it using this repo. No error occurred; the output was:

......

n: 136, type 0
Convolutional
weights: 32768, biases: 128, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c136.bin

n: 137, type 0
Convolutional
weights: 294912, biases: 256, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c137.bin

n: 138, type 0
Convolutional
weights: 11520, biases: 45, batch_normalize: 0, groups: 1
write binary ../output_bdd100k/c138.bin

n: 139, type 27
export YOLO
mask: 3
biases: 18
mask 0.000000
mask 1.000000
mask 2.000000
anchor 12.000000
anchor 16.000000
anchor 19.000000
anchor 36.000000
anchor 40.000000
anchor 28.000000
anchor 36.000000
anchor 75.000000
anchor 76.000000
anchor 55.000000
anchor 72.000000
anchor 146.000000
anchor 142.000000
anchor 110.000000
anchor 192.000000
anchor 243.000000
anchor 459.000000
anchor 401.000000
write binary ../output_bdd100k/g139.bin

n: 140, type 9
export ROUTE

n: 141, type 0
Convolutional
weights: 294912, biases: 256, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c141.bin

n: 142, type 9
export ROUTE

n: 143, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c143.bin

n: 144, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c144.bin

n: 145, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c145.bin

n: 146, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c146.bin

n: 147, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c147.bin

n: 148, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c148.bin

n: 149, type 0
Convolutional
weights: 23040, biases: 45, batch_normalize: 0, groups: 1
write binary ../output_bdd100k/c149.bin

n: 150, type 27
export YOLO
mask: 3
biases: 18
mask 3.000000
mask 4.000000
mask 5.000000
anchor 12.000000
anchor 16.000000
anchor 19.000000
anchor 36.000000
anchor 40.000000
anchor 28.000000
anchor 36.000000
anchor 75.000000
anchor 76.000000
anchor 55.000000
anchor 72.000000
anchor 146.000000
anchor 142.000000
anchor 110.000000
anchor 192.000000
anchor 243.000000
anchor 459.000000
anchor 401.000000
write binary ../output_bdd100k/g150.bin

n: 151, type 9
export ROUTE

n: 152, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c152.bin

n: 153, type 9
export ROUTE

n: 154, type 0
Convolutional
weights: 524288, biases: 512, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c154.bin

n: 155, type 0
Convolutional
weights: 4718592, biases: 1024, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c155.bin

n: 156, type 0
Convolutional
weights: 524288, biases: 512, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c156.bin

n: 157, type 0
Convolutional
weights: 4718592, biases: 1024, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c157.bin

n: 158, type 0
Convolutional
weights: 524288, biases: 512, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c158.bin

n: 159, type 0
Convolutional
weights: 4718592, biases: 1024, batch_normalize: 1, groups: 1
write binary ../output_bdd100k/c159.bin

n: 160, type 0
Convolutional
weights: 46080, biases: 45, batch_normalize: 0, groups: 1
write binary ../output_bdd100k/c160.bin

n: 161, type 27
export YOLO
mask: 3
biases: 18
mask 6.000000
mask 7.000000
mask 8.000000
anchor 12.000000
anchor 16.000000
anchor 19.000000
anchor 36.000000
anchor 40.000000
anchor 28.000000
anchor 36.000000
anchor 75.000000
anchor 76.000000
anchor 55.000000
anchor 72.000000
anchor 146.000000
anchor 142.000000
anchor 110.000000
anchor 192.000000
anchor 243.000000
anchor 459.000000
anchor 401.000000
write binary ../output_bdd100k/g161.bin

this is my yolov4.cfg file:


##########################

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=45
activation=linear


[yolo]
mask = 0,1,2
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=10
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
scale_x_y = 1.2
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5


[route]
layers = -4

[convolutional]
batch_normalize=1
size=3
stride=2
pad=1
filters=256
activation=leaky

[route]
layers = -1, -16

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=45
activation=linear


[yolo]
mask = 3,4,5
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=10
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
scale_x_y = 1.1
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5


[route]
layers = -4

[convolutional]
batch_normalize=1
size=3
stride=2
pad=1
filters=512
activation=leaky

[route]
layers = -1, -37

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=45
activation=linear


[yolo]
mask = 6,7,8
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=10
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
scale_x_y = 1.05
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5

Then I used this repo to generate the .rt file and run inference. When I run the command ./test_yolo4, the following error occurs:

.......
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=256 O=255 KERNEL=1x1x1
Error reading file yolo4/layers/c138.bin with n of float: 65280 seek: 0 size: 261120

/home/xavier/test_ws/tkDNN-master/src/utils.cpp:58
Aborting...

Do you know what's wrong with it? Thanks!


ASONG0506 commented on June 30, 2024

I found the solution, thanks

ceccocats commented on June 30, 2024

Hi, can you share your solution? It could be useful.
Was it your mistake, or an error in the code?


Kr4is commented on June 30, 2024

Hi! My error was the one you described.

I had a maxpool layer with maxpool_depth=1, which is not currently supported.
I replaced this layer with:

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

As @AlexeyAB says in this issue.

I regenerated the weights, and it works!

So I think there may be an unsupported parameter in the previous layer?


ceccocats commented on June 30, 2024

Yes, darknet is big and time is limited.
We decided to implement only the official YOLOs and the ones that need only minor changes.


ASONG0506 commented on June 30, 2024

There is no error in your code. The default cfg file used for parsing the darknet network is root/tests/darknet/cfg/yolov4.cfg and the corresponding names file is root/tests/darknet/names/coco.names; I just replaced these two files with my own cfg and names files, and it worked!
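
For anyone hitting the same abort: the numbers in the error message line up with exactly this cfg/weights mismatch. With the default COCO cfg, the conv before the first [yolo] head has filters = (80 + 5) * 3 = 255, so the test expects 255 x 256 = 65280 weight floats (261120 bytes) in c138.bin, while a 10-class export writes only (10 + 5) * 3 = 45 filters (11520 weights + 45 biases, as in the export log above). A quick sanity-check sketch (not tkDNN code; the path and channel count are taken from the logs in this thread):

#include <cstdio>
#include <sys/stat.h>

// Sketch of the mismatch behind "Error reading file ... n of float: 65280":
// the cfg passed to the test must be the same one the weights were exported with.
int main() {
    const int masks = 3;          // masks per [yolo] head
    const int in_channels = 256;  // input channels of the 1x1 conv before the first head
    auto filters = [&](int classes) { return (classes + 5) * masks; };

    long expected_floats = (long)filters(80) * in_channels;                // default cfg: 255*256 = 65280
    long exported_floats = (long)filters(10) * in_channels + filters(10);  // custom cfg: 11520 weights + 45 biases

    struct stat st;
    long file_floats = (stat("yolo4/layers/c138.bin", &st) == 0) ? (long)(st.st_size / 4) : -1;

    std::printf("test expects %ld floats, a 10-class export holds %ld, file on disk has %ld\n",
                expected_floats, exported_floats, file_floats);
    return 0;
}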


mive93 commented on June 30, 2024

Hi @zjZSTU

which cfg are you using in the test?


zjZSTU commented on June 30, 2024

Hi @zjZSTU

which cfg are you using in the test?

hi @mive93, I have solved my problem. The key is to use my own .cfg and .names files for the custom dataset; refer to #99.


mive93 commented on June 30, 2024

Yes, exactly 👍

