Comments (6)
I followed the document on converting OpenMMLab models and generated a pretrained faster_rcnn model, but when I run this model with the x86 pplnn, there is an error:
[INFO][2021-07-06 08:48:49.498][pplnn.cc:683] ppl.nn version: 7dd75a1077867fc9a762449953417088446ae2f8-dirty
[INFO][2021-07-06 08:48:49.498][pplnn.cc:110] ***** register X86Engine *****
[INFO][2021-07-06 08:48:49.761][simple_graph_partitioner.cc:90] total partition(s) of graph[torch-jit-export]: 1.
[ERROR][2021-07-06 08:48:50.556][kernel.cc:14] reshape kernel[Expand_1100] failed: invalid value
[ERROR][2021-07-06 08:48:50.556][kernel.cc:47] BeforeExecute() of kernel[Expand_1100] failed: invalid value
[ERROR][2021-07-06 08:48:50.556][scheduler_common.cc:153] exec kernel[Expand_1100] failed: invalid value
[ERROR][2021-07-06 08:48:50.556][sequential_scheduler.cc:99] execute kernel[Expand_1100] failed: invalid value
[ERROR][2021-07-06 08:48:50.556][pplnn.cc:784] Run() failed: invalid value
The mobilenet model executes successfully. Is there anything I can do to make this model run correctly?
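For context, the failing kernel is ONNX's Expand, whose output shape is the two-way broadcast of the input shape and the target shape; a pair of shapes that cannot be broadcast produces exactly this kind of "invalid value" failure. Below is a minimal stdlib sketch of the broadcasting rule (`expand_shape` is a hypothetical helper for illustration, not ppl.nn code):

```python
def expand_shape(in_shape, target_shape):
    """Return the broadcast result shape per ONNX Expand semantics,
    or raise ValueError when the shapes cannot be broadcast."""
    # Right-align the two shapes, padding the shorter one with 1s.
    n = max(len(in_shape), len(target_shape))
    a = [1] * (n - len(in_shape)) + list(in_shape)
    b = [1] * (n - len(target_shape)) + list(target_shape)
    out = []
    for x, y in zip(a, b):
        if x == y or x == 1 or y == 1:
            out.append(max(x, y))
        else:
            # Mirrors the "invalid value" error reported by the runtime.
            raise ValueError(f"invalid value: cannot broadcast {x} vs {y}")
    return out

print(expand_shape([1, 256, 1, 1], [1, 256, 50, 68]))  # [1, 256, 50, 68]
```

With `--dynamic-export`, the target shape of an Expand node is computed at run time, so a bug anywhere in the upstream shape inference can surface here as a non-broadcastable pair.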
Can you post the command sequence you used to convert the faster-rcnn model?
from ppl.nn.
My commands follow the instructions in your model-convert-guide, the Example: Converting Faster R-CNN:
cd mmdetection && mkdir checkpoints && cd checkpoints
wget https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
python ../tools/deployment/pytorch2onnx.py ../configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
--output-file faster_rcnn.onnx --simplify --dynamic-export
We cannot reproduce your problem. Can you send your converted model to [email protected]?
from ppl.nn.
Thanks for replying! I have sent the model and my environment details to your email.
We have successfully reproduced the problem with your model. The bug has been fixed and we merged a PR:
#15
In our tests, your model can now be inferred successfully, and its results match onnxruntime.
Please try again.
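As an aside, "results match onnxruntime" typically means element-wise equality within a tolerance rather than bit-exact equality, since different backends order floating-point operations differently. A minimal sketch of such a comparison (hypothetical helper and made-up numbers; ppl.nn's actual test harness is not shown here):

```python
def allclose(a, b, rtol=1e-3, atol=1e-5):
    """numpy.allclose-style check: |a - b| <= atol + rtol * |b| element-wise."""
    return len(a) == len(b) and all(
        abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b)
    )

# Made-up detection scores purely for illustration.
pplnn_out = [0.8721, 0.1034, 0.0245]
ort_out   = [0.8722, 0.1033, 0.0245]
print(allclose(pplnn_out, ort_out))  # True
```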
That worked! Thanks a lot!
Related Issues (20)
- pplnn run mobilenet v2 model failed. (use cuda) HOT 7
- linux compile error protobuf static assertion failed HOT 3
- malloc_consolidate(): invalid chunk size HOT 2
- Incorrect NDARRAY shape obtained from pplnn save-input HOT 1
- How to build ppl.nn together with code that depends on ppl.nn using cmake? HOT 3
- Segmentation fault at ppl::nn::x86::X86Kernel::DumpOutputTensors HOT 5
- Fetching model inference results (GetOutputs) takes a long time HOT 2
- Install Error HOT 1
- The compilation passed, but an error was reported in test phase HOT 2
- Floating point exception (core dumped) ? HOT 4
- Core dump when running a resnet50 fp16 onnx model with the x86 engine
- (Ask) why InferInheritedType handle int8 to fp16 out? HOT 3
- Got wrong output shape when run a Gemm op(transB=0) use cuda HOT 4
- Crash with ONNX Split operator
- Performance degradation when a global engine is referenced from other threads HOT 4
- Investigating inference accuracy errors
- Example of a multi-model pipeline
- Can int8 inference run on the ARM platform?
- cuda build error
- When I run build.sh to compile the project, a compilation error occurred. HOT 1