Hello author, after converting the ONNX model to RKNN, the input shape of the RKNN model is [1, 1, 224, 224], and the following error occurred while running infer.py:
(1, 1, 224, 224)
E RKNN: [03:51:51.071] rknn_set_input_shapes error, input name = input supported input shapes are as follows:
E RKNN: [03:51:51.071] shape = [1,224,224,1], layout = NHWC
E RKNN: [03:51:51.071] while get rknn_tensor_attr[0].name = input, dims = [1, 1, 224, 224], fmt = NHWC
E Catch exception when setting inputs.
E Traceback (most recent call last):
File "/home/pi/archiconda3/envs/py38/lib/python3.8/site-packages/rknnlite/api/rknn_lite.py", line 200, in inference
self.rknn_runtime.set_inputs(inputs, data_type, data_format, inputs_pass_through=inputs_pass_through)
File "rknnlite/api/rknn_runtime.py", line 1127, in rknnlite.api.rknn_runtime.RKNNRuntime.set_inputs
Exception: Set input shape failed. error code: RKNN_ERR_FAIL
None
By adding print statements, I found that the shape of img_crop in the statement out_y[sy:sy+output_size, sx:sx+output_size] = rknn.inference(inputs=[img_crop])[0][0] is (1, 1, 224, 224), which matches the input of the RKNN model. However, rknn.inference(inputs=[img_crop]) returns None.
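For context, the error log says the runtime only accepts the NHWC layout [1, 224, 224, 1], while img_crop is NCHW (1, 1, 224, 224). A minimal sketch of the layout conversion, assuming img_crop is a NumPy array (the rknn call at the end is illustrative, not verified against your script):

```python
import numpy as np

# Stand-in for the real img_crop, which arrives as NCHW: (1, 1, 224, 224)
img_crop = np.zeros((1, 1, 224, 224), dtype=np.float32)

# The runtime reports it supports NHWC [1, 224, 224, 1],
# so move the channel axis to the end before inference.
img_nhwc = np.transpose(img_crop, (0, 2, 3, 1))
print(img_nhwc.shape)  # (1, 224, 224, 1)

# Hypothetical call against an RKNNLite instance named rknn:
# outputs = rknn.inference(inputs=[img_nhwc])
```

Alternatively, since the traceback shows set_inputs takes a data_format argument, passing data_format='nchw' to inference may let the runtime do the conversion itself; I have not confirmed this on your setup.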