Comments (5)
- First run testMNNFromOnnx.py to make sure the MNN model itself is correct, then use GetMNNInfo to inspect the model information.
- A log message is printed when the output is empty.
- This looks like it is using uint8_t input, which is most likely the problem; the input is normally float.
from mnn.
1: testMNNFromOnnx.py succeeded; its output is as follows:
Dir exist
onnx/test.onnx
2024-04-12 17:26:12.595200 [W:onnxruntime:, graph.cc:3593 CleanUnusedInitializersAndNodeArgs] Removing initializer '370'. It is not used by any node and should be removed from the model.
tensor(float)
['keypoints', 'scores', 'descriptors']
inputs:
image
onnx/
outputs:
onnx/keypoints.txt (1, 564, 2)
onnx/
onnx/scores.txt (1, 564)
onnx/
onnx/descriptors.txt (1, 564, 256)
onnx/
hw.cpufamily: 458787763 , size = 4
The device support i8sdot:1, support fp16:1, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.8
[17:26:13] /Users/yjt/Downloads/MNN-2.8.1/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 8
[17:26:13] /Users/yjt/Downloads/MNN-2.8.1/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 17
[17:26:13] /Users/yjt/Downloads/MNN-2.8.1/tools/converter/source/onnx/onnxConverter.cpp:133: Check it out ==> /Clip_output_0 has empty input, the index is 2
[17:26:13] /Users/yjt/Downloads/MNN-2.8.1/tools/converter/source/onnx/onnxConverter.cpp:133: Check it out ==> /Clip_1_output_0 has empty input, the index is 2
Start to Optimize the MNN Net...
inputTensors : [ image, ]
outputTensors: [ descriptors, keypoints, scores, ]
The model has subgraphs, please use MNN::Express::Module to run it
Converted Success!
Check convert result by onnx, thredhold is 0.01
image
output: keypoints
output: scores
output: descriptors
keypoints: (1, 564, 2, )
scores: (1, 564, )
descriptors: (1, 564, 256, )
TEST_SUCCESS
2: The GetMNNInfo output is as follows:
hw.cpufamily: 458787763 , size = 4
The device support i8sdot:1, support fp16:1, support i8mm: 0
Model default dimensionFormat is NCHW
Model Inputs:
[ image ]: dimensionFormat: NC4HW4, size: [ 1,1,320,640 ], type is float
Model Outputs:
[ descriptors ]
[ keypoints ]
[ scores ]
Model Version: 2.8.1
3: I also changed the relevant code to use float, but the output is still empty. Why?
std::vector<MNN::Express::VARP> getMNNInputs(std::string file_name) {
    int width = 640;
    int height = 320;
    int channels;
    unsigned char *data = stbi_load(file_name.c_str(), &width, &height, &channels, 3);
    if (data == nullptr) {
        std::cout << "Failed to load image: " << file_name << std::endl;
        return {};
    }
    MNN::CV::ImageProcess::Config config;
    config.filterType = MNN::CV::BILINEAR;
    float mean[3] = {0.0f, 0.0f, 0.0f};
    float normals[3] = {1.0f / 255.0f, 1.0f / 255.0f, 1.0f / 255.0f};
    ::memcpy(config.mean, mean, sizeof(mean));
    ::memcpy(config.normal, normals, sizeof(normals));
    config.sourceFormat = RGB;
    config.destFormat = GRAY;
    std::shared_ptr<MNN::CV::ImageProcess> pretreat(MNN::CV::ImageProcess::create(config));
    auto img_tensor = MNN::Express::_Input({1, height, width, 1}, MNN::Express::NHWC, halide_type_of<float>());
    pretreat->convert(data, width, height, 0, img_tensor->writeMap<float>(), width, height, 0, 0, halide_type_of<float>());
    img_tensor = MNN::Express::_Convert(img_tensor, MNN::Express::NC4HW4);
    std::vector<MNN::Express::VARP> inputs;
    inputs.emplace_back(img_tensor);
    stbi_image_free(data);
    return inputs;
}
from mnn.
Add --keepInputFormat=true when converting the model, and change

img_tensor = MNN::Express::_Convert(img_tensor, MNN::Express::NC4HW4);

to

img_tensor = MNN::Express::_Convert(img_tensor, MNN::Express::NCHW);

then try again. Also check the log.
from mnn.
I see you are running the code on iOS; first confirm that the MNN version on iOS matches the one on the PC.
from mnn.
Solved. The problem was that the input was written incorrectly. The fix is as follows:
std::vector<MNN::Express::VARP> getMNNInputs(std::string file_name) {
    int width = 640;
    int height = 320;
    int inputWidth = 0;
    int inputHeight = 0;
    int channels;
    unsigned char *data = stbi_load(file_name.c_str(), &inputWidth, &inputHeight, &channels, 3);
    if (data == nullptr) {
        std::cout << "Failed to load image: " << file_name << std::endl;
        return {};
    }
    MNN::CV::ImageProcess::Config config;
    config.filterType = MNN::CV::BILINEAR;
    float mean[3] = {0.0f, 0.0f, 0.0f};
    float normals[3] = {1.0f / 255.0f, 1.0f / 255.0f, 1.0f / 255.0f};
    ::memcpy(config.mean, mean, sizeof(mean));
    ::memcpy(config.normal, normals, sizeof(normals));
    config.sourceFormat = RGB;
    config.destFormat = GRAY;
    Matrix trans;
    // Set the transform mapping destination (model input) coordinates back to the source image
    trans.setScale((float)(inputWidth - 1) / (width - 1), (float)(inputHeight - 1) / (height - 1));
    std::shared_ptr<MNN::CV::ImageProcess> pretreat(MNN::CV::ImageProcess::create(config));
    pretreat->setMatrix(trans);
    auto img_tensor = MNN::Express::_Input({1, 1, height, width}, MNN::Express::NC4HW4, halide_type_of<float>());
    pretreat->convert(data, inputWidth, inputHeight, 0, img_tensor->writeMap<float>(), width, height, 0, 0, halide_type_of<float>());
    std::vector<MNN::Express::VARP> inputs;
    inputs.emplace_back(img_tensor);
    stbi_image_free(data);
    return inputs;
}
from mnn.