Comments (36)
The Portal team at Facebook is already using ONNX to deploy Detectron2 models to device.
I added ROIAlign to ONNX, but we're still missing:
- GenerateProposals
- BBoxTransform
- BoxWithNMSLimit
We split the model up, using ONNX where possible and caffe2.python.core.CreateOperator
for these unsupported ops. I will see if we can release our conversion script.
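For context on what one of those ops computes: BBoxTransform is essentially the standard box-regression decode, applying per-box (dx, dy, dw, dh) deltas to proposal boxes. A minimal pure-Python sketch of that math (the function name and signature here are illustrative, not the Caffe2 operator's API):

```python
import math

def apply_bbox_deltas(box, deltas, clip=math.log(1000.0 / 16)):
    """Apply (dx, dy, dw, dh) regression deltas to an (x1, y1, x2, y2) box.

    Illustrative of what a BBoxTransform-style op computes per box;
    the name and defaults are this sketch's, not the Caffe2 operator's.
    """
    x1, y1, x2, y2 = box
    dx, dy, dw, dh = deltas
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + 0.5 * w, y1 + 0.5 * h
    # Shift the box center, then rescale width/height; the exp argument is
    # clamped so extreme predicted deltas cannot overflow.
    pcx, pcy = cx + dx * w, cy + dy * h
    pw = w * math.exp(min(dw, clip))
    ph = h * math.exp(min(dh, clip))
    return (pcx - 0.5 * pw, pcy - 0.5 * ph, pcx + 0.5 * pw, pcy + 0.5 * ph)
```

Zero deltas return the box unchanged; GenerateProposals and BoxWithNMSLimit likewise reduce to well-known anchor-decoding and NMS steps.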
from detectron2.
Why is this closed? lol, the issue is still there
No, it's our internal support.
Why is this closed, then :-D? Adding ONNX support to your proprietary Facebook pipeline is not the same as adding it to open-source detectron2.
Hi,
@jinfagang can you please provide a direction to convert the model to ONNX?
I wanna use it with OpenVINO
ONNX export is now supported.
any docs or examples?
@jiajunshen So Facebook, internally, made the ONNX export work and keeps it closed-source, is that correct? Confusing :)
@sampepose Thanks for this wonderful work. AFAIK the original maskrcnn-benchmark, which detectron2 is largely based on, has already been exported to ONNX by the community, and ONNX opset 11 has all the ops needed for a Mask R-CNN ONNX model.
What we ultimately want is to export to ONNX where possible and then convert the ONNX model to a TensorRT engine to gain massive acceleration.
From this perspective, introducing too many ops outside the ONNX standard is not really necessary, I think.
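For what it's worth, the post-processing these custom ops cover is mostly standard. BoxWithNMSLimit, for instance, is essentially greedy NMS plus a cap on the number of detections, which opset 11's NonMaxSuppression can express. A pure-Python sketch of the core greedy-NMS loop (names and defaults are illustrative, not any library's API):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5, max_dets=100):
    """Greedy NMS: keep highest-scoring boxes, drop overlaps above iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # A box survives only if it overlaps no already-kept box too much.
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
        if len(keep) == max_dets:
            break
    return keep
```

The real op additionally runs this per class and applies a score threshold, but the overlap-suppression loop is the part that exporters need a standard op for.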
I have to say that generating the ONNX model is quite simple. The question is making it runnable on an acceleration framework such as TensorRT.
Things become different when it comes to such a complicated model. I wish the community would start on that next step.
The official repo now supports export; build PyTorch from master source code.
From the comments I assume that exporting detectron2 to Caffe2 or ONNX is simple. Any hint on how to convert the models to TensorRT, or any other way to run prediction on the GPU?
@jinfagang So it's a bit unclear to me. Can detectron2 models now be exported to .onnx and then converted into a GPU-optimized TensorRT engine for inference?
Hi,
Shouldn't this issue stay open as long as the conversion script is not available?
In #243 (comment) and #46 (comment), Yuxin Wu (ppwwyyxx) says that this issue will be updated to follow progress on this.
I would like to subscribe to this issue so I am notified when converting models to ONNX becomes available.
Thanks.
@jinfagang Could you share with us how to convert this model to ONNX? I would highly appreciate that. Thanks in advance.
@jinfagang I would love @fernandorovai's question to be answered, because I am having some issues with the conversion.
The official repo now supports export; build PyTorch from master source code.
I built PyTorch from master source code and successfully exported FASTER_RCNN_R50_FPN_3x to an ONNX model. However, I found that onnxruntime does not know the AliasWithName operator.
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op
Have you experienced this issue?
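One workaround that seems plausible here: AliasWithName is a Caffe2-specific identity op that only renames a tensor, so the exported graph can be rewritten to splice such nodes out before loading it in onnxruntime. The sketch below illustrates the rewiring idea on plain dicts rather than the real onnx.GraphProto API (the dict schema and function name are this sketch's own):

```python
def strip_identity_ops(nodes, op_type="AliasWithName"):
    """Remove identity-style nodes and rewire consumers to the original tensor.

    `nodes` is a list of {"op": str, "inputs": [...], "outputs": [...]} dicts,
    a simplified stand-in for the nodes of an ONNX graph.
    """
    # Map each aliased output name back to the tensor it merely renamed.
    alias = {}
    for n in nodes:
        if n["op"] == op_type:
            alias[n["outputs"][0]] = n["inputs"][0]

    def resolve(name):
        # Follow chains of aliases (a -> b -> c) back to the real producer.
        while name in alias:
            name = alias[name]
        return name

    kept = []
    for n in nodes:
        if n["op"] == op_type:
            continue  # drop the identity node itself
        kept.append({**n, "inputs": [resolve(i) for i in n["inputs"]]})
    return kept
```

Doing the same surgery on a real exported model would mean iterating over `graph.node` in the onnx protobuf and remapping input names the same way; whether that alone makes the detectron2 export runnable in onnxruntime I cannot confirm.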
Answered in https://detectron2.readthedocs.io/modules/export.html#detectron2.export.export_onnx_model
Answered in https://detectron2.readthedocs.io/modules/export.html#detectron2.export.export_onnx_model
Thank you
Hi,
@jinfagang can you please provide a direction to convert the model to ONNX?
I wanna use it with OpenVINO
@fernandorovai Hi, does converting a detectron2 model into OpenVINO IR work?
The official repo now supports export; build PyTorch from master source code.
I built PyTorch from master source code and successfully exported FASTER_RCNN_R50_FPN_3x to an ONNX model. However, I found that onnxruntime does not know the AliasWithName operator.
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op
I had the same problem.
Answered in https://detectron2.readthedocs.io/modules/export.html#detectron2.export.export_onnx_model
Can I add these operators (such as AliasWithName) to ONNX myself and use onnxruntime for inference? How difficult would that be?
Hi,
I can successfully export the ONNX model with MODEL.DEVICE cuda, but when I try to run it with caffe2.python.onnx.backend and device = 'CUDA', it reports the error:
RuntimeError: [enforce fail at operator.cc:275] op. Cannot create operator of type 'BBoxTransform' on the device 'CUDA'. Verify that implementation for the corresponding device exist. It might also happen if the binary is not linked with the operator implementation code. If Python frontend is used it might happen if dyndep.InitOpsLibrary call is missing.
@jinfagang @ppwwyyxx
Can you provide some examples for https://detectron2.readthedocs.io/modules/export.html#detectron2.export.export_onnx_model, and show how to use it to convert DensePose project models to an ONNX model that can run in onnx.js or onnxruntime?
Thanks
@rs9899 How fast does DensePose run on the PyTorch side?
@jinfagang Personally, I am just using it to generate the IUV map of a single image at a time, so it runs in about 5 seconds, but that includes loading the model and setup, so it is not the best benchmark.
But do you know how to get similar output in JS in the browser? That is where I want to deploy the code.
The official repo now supports export; build PyTorch from master source code.
I built PyTorch from master source code and successfully exported FASTER_RCNN_R50_FPN_3x to an ONNX model. However, I found that onnxruntime does not know the AliasWithName operator.
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op
Have you experienced this issue?
Did you find a solution for this issue?
Hi guys, has anybody sorted this issue out by adding a custom plugin? Could you share it, please?
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op
Hi guys, has anybody sorted this issue out by adding a custom plugin? Could you share it, please?
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op
I came across the same issue. Has anyone tried torchvision?
Hi guys, has anybody sorted this issue out by adding a custom plugin? Could you share it, please?
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op
Facing the same issue.
Answered in https://detectron2.readthedocs.io/modules/export.html#detectron2.export.export_onnx_model
I still don't see where it is. Could you please point me to it?
I used Caffe2Tracer to convert to an ONNX model, but when I ran inference it threw an exception:
[ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op
Hi,
@jinfagang can you please provide a direction to convert the model to ONNX?
I wanna use it with OpenVINO
Have you successfully used it with OpenVINO?
Hi,
@jinfagang can you please provide a direction to convert the model to ONNX?
I wanna use it with OpenVINO
@fernandorovai Hi, does converting a detectron2 model into OpenVINO IR work?
I have the same problem. Does it work with OpenVINO?
From what I understand from this link, we cannot run the model via ONNX; it is only feasible with Caffe2. And we don't have any clue about the post-processing needed to run it with ONNX. Am I missing something?
Hey! Is there any update on this issue? It seems like it is still not resolved, and like some others in the thread I am having trouble with the AliasWithName error in ONNX Runtime.
Hi,
@jinfagang can you please provide a direction to convert the model to ONNX?
I wanna use it with OpenVINO
@fernandorovai Hi, does converting a detectron2 model into OpenVINO IR work?
I would also like to export a detectron2 model to OpenVINO.