
Comments (36)

sampepose avatar sampepose commented on May 30, 2024 29

The Portal team at Facebook is already using ONNX to deploy Detectron2 models to device.

I added ROIAlign to ONNX, but we're still missing:

  • GenerateProposals
  • BBoxTransform
  • BoxWithNMSLimit

We split the model up, using ONNX where possible and caffe2.python.core.CreateOperator for these unsupported ops. I will see if we can release our conversion script.
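
Roughly, each unsupported op gets stitched into the Caffe2 part of the graph with something like this (a minimal sketch, not our actual conversion script; blob names and arguments are illustrative):

```python
# Rough sketch of adding a Caffe2-only detection op via CreateOperator.
# Blob names and argument values are illustrative, not the real conversion script.
from caffe2.python import core

# BBoxTransform has no ONNX equivalent: it decodes box regression deltas
# against the RPN proposals using the per-image size info.
bbox_xform = core.CreateOperator(
    "BBoxTransform",
    ["rpn_rois", "bbox_pred", "im_info"],   # blobs produced by the ONNX segment
    ["pred_bbox", "roi_batch_splits"],      # blobs consumed by the next segment
    weights=[10.0, 10.0, 5.0, 5.0],         # standard Faster R-CNN box-coder weights
    apply_scale=False,
    correct_transform_coords=True,
)

net = core.Net("detection_post")
net.Proto().op.extend([bbox_xform])
# caffe2.python.workspace.RunNetOnce(net) runs it once the input blobs are fed.
```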


Traderain avatar Traderain commented on May 30, 2024 9

Why is this closed? lol, the issue is still there


dselivanov avatar dselivanov commented on May 30, 2024 7

No, it's our internal support.

Why is this closed then :-D? Adding ONNX support to your proprietary Facebook pipeline is not equal to adding it to the detectron2 open-source repo.


fernandorovai avatar fernandorovai commented on May 30, 2024 6

Hi,
@jinfagang can you please provide a direction to convert the model to ONNX?
I wanna use it with OpenVINO


mmxuan18 avatar mmxuan18 commented on May 30, 2024 4

ONNX export mode has been supported

Any docs or examples?


beniz avatar beniz commented on May 30, 2024 4

@jiajunshen so Google, internally, made the ONNX export work and keeps it closed-source, is that correct? Confusing :)


lucasjinreal avatar lucasjinreal commented on May 30, 2024 3

@sampepose Thanks for this wonderful work. AFAIK, the original maskrcnn-benchmark, which detectron2 is largely based on, has already been exported to ONNX by the community, and ONNX opset 11 has all the ops needed by maskrcnn.onnx.

What we ultimately want is to export to ONNX where possible and then convert the ONNX model to a TensorRT engine to gain massive acceleration; see the sketch after this comment.

From this perspective, I think introducing too many ops that fall outside the ONNX standard is not really necessary.
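
For the ONNX-to-TensorRT step, the usual flow looks roughly like this (a sketch assuming a TensorRT 7/8-era Python API; it only succeeds if every op in the exported graph is supported by TensorRT, which the Caffe2-style detection ops are not without custom plugins, and "model.onnx" is just a placeholder path):

```python
# Sketch of building a TensorRT engine from an exported ONNX file.
# Assumes TensorRT 7/8-style Python bindings; paths are placeholders.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        # Any op the parser does not know (e.g. the Caffe2 detection ops)
        # shows up here and would need a custom plugin.
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB scratch space (pre-8.4 API)
engine = builder.build_engine(network, config)

with open("model.engine", "wb") as f:
    f.write(engine.serialize())
```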


lucasjinreal avatar lucasjinreal commented on May 30, 2024 2

I have to say generating the ONNX model is quite simple. The question is making it runnable on an acceleration framework such as TensorRT.

Things become different when it comes to such a complicated model. I wish the community could start on the next step, though.


lucasjinreal avatar lucasjinreal commented on May 30, 2024 2

The official repo now supports export; build PyTorch from the master source code to use it.


JavierClearImageAI avatar JavierClearImageAI commented on May 30, 2024 2

From the comments I assume that exporting detectron2 to caffe2 or ONNX is simple. Any hint about how to convert them to TensorRT or any other way to predict using GPU power?


solarflarefx avatar solarflarefx commented on May 30, 2024 1

@jinfagang So it's a bit unclear to me. Can detectron2 models now be exported to .onnx and then converted into a gpu-optimized TensorRT engine for inference?


jillemash avatar jillemash commented on May 30, 2024

Hi,
Shouldn't this issue stay open as long as the conversion script is not available?
In #243 (comment) and #46 (comment), Yuxin Wu (ppwwyyxx) says that this issue will be updated to follow progress on this.
I'd like to subscribe to this one so I'm notified when converting models to ONNX becomes available.

Thanks.


cryptSky avatar cryptSky commented on May 30, 2024

@jinfagang Could you share with us how to convert this model to onnx? Would highly appreciate that. Thanks in advance


Musbell avatar Musbell commented on May 30, 2024

@jinfagang I would love @fernandorovai's question to be answered, because I am having some issues with the conversion.


nattavitk avatar nattavitk commented on May 30, 2024

The official repo now supports export; build PyTorch from the master source code to use it.

I built PyTorch from the master source code and successfully exported FASTER_RCNN_R50_FPN_3x to an ONNX model. However, I found that onnxruntime does not know the AliasWithName operator.

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op

Have you experienced this issue?


ppwwyyxx avatar ppwwyyxx commented on May 30, 2024

Answered in https://detectron2.readthedocs.io/modules/export.html#detectron2.export.export_onnx_model
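
For reference, calling it looks roughly like this (a minimal sketch assuming the export_onnx_model(cfg, model, inputs) signature from the linked docs; the config name, weights URL, and dummy input are placeholders):

```python
# Minimal sketch of detectron2.export.export_onnx_model, per the linked docs.
# Config file, weights, and the dummy input below are placeholders.
import onnx
import torch
from detectron2 import model_zoo
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.export import export_onnx_model
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.DEVICE = "cpu"

model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# detectron2 models take a list of dicts; one dummy image is enough for tracing.
image = torch.zeros(3, 800, 800)
inputs = [{"image": image, "height": 800, "width": 800}]

onnx_model = export_onnx_model(cfg, model, inputs)
onnx.save(onnx_model, "faster_rcnn_R_50_FPN_3x.onnx")
```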


nattavitk avatar nattavitk commented on May 30, 2024

Answered in https://detectron2.readthedocs.io/modules/export.html#detectron2.export.export_onnx_model

Thank you


kotomiDu avatar kotomiDu commented on May 30, 2024

Hi,
@jinfagang can you please provide a direction to convert the model to ONNX?
I wanna use it with OpenVINO

@fernandorovai Hi, Does it work converting detectron2 model into OpenVINO IR?


LeonNerd avatar LeonNerd commented on May 30, 2024

The official repo now supports export; build PyTorch from the master source code to use it.

I built PyTorch from the master source code and successfully exported FASTER_RCNN_R50_FPN_3x to an ONNX model. However, I found that onnxruntime does not know the AliasWithName operator.

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op

I had the same problem


justinyym avatar justinyym commented on May 30, 2024

Answered in https://detectron2.readthedocs.io/modules/export.html#detectron2.export.export_onnx_model

Can I add these operators (such as AliasWithName) to ONNX myself and use onnxruntime for inference? How difficult is that work?


avagreeen avatar avagreeen commented on May 30, 2024

Hi,
I can successfully export the ONNX model with MODEL.DEVICE cuda, but when I try to run it with caffe2.python.onnx.backend with device = 'CUDA', it reports this error:

RuntimeError: [enforce fail at operator.cc:275] op. Cannot create operator of type 'BBoxTransform' on the device 'CUDA'. Verify that implementation for the corresponding device exist. It might also happen if the binary is not linked with the operator implementation code. If Python frontend is used it might happen if dyndep.InitOpsLibrary call is missing.
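
One thing to try: BBoxTransform (and the other detection ops) appear to ship only CPU kernels in Caffe2, so preparing the backend on CPU may work. A rough sketch (the file name and shapes are placeholders, and the Caffe2-style export usually expects both an image blob and an im_info blob):

```python
# Rough sketch: run the exported graph with the Caffe2 ONNX backend on CPU,
# since BBoxTransform appears to have no CUDA kernel. Paths/shapes are placeholders.
import numpy as np
import onnx
import caffe2.python.onnx.backend as backend

onnx_model = onnx.load("model.onnx")
rep = backend.prepare(onnx_model, device="CPU")  # device="CUDA:0" trips on CPU-only ops

# NCHW image blob plus an im_info blob of (height, width, scale) per image.
data = np.zeros((1, 3, 800, 800), dtype=np.float32)
im_info = np.array([[800.0, 800.0, 1.0]], dtype=np.float32)

outputs = rep.run([data, im_info])
print([o.shape for o in outputs])
```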


rs9899 avatar rs9899 commented on May 30, 2024

@jinfagang @ppwwyyxx

Can you provide some examples for https://detectron2.readthedocs.io/modules/export.html#detectron2.export.export_onnx_model, and show how to use it to convert DensePose project models to an ONNX model that can run in onnx.js or onnxruntime?

Thanks


lucasjinreal avatar lucasjinreal commented on May 30, 2024

@rs9899 how fast does DensePose run on the PyTorch side?


rs9899 avatar rs9899 commented on May 30, 2024

@jinfagang, personally I am just using it to generate the IUV map of a single image at a time, so it runs in about 5 seconds, but that includes loading the model and the setup. So it is not the best benchmark.
But do you know how to get similar output in JS in the browser? That's where I want to deploy the code.


nassarofficial avatar nassarofficial commented on May 30, 2024

The official repo now supports export; build PyTorch from the master source code to use it.

I built PyTorch from the master source code and successfully exported FASTER_RCNN_R50_FPN_3x to an ONNX model. However, I found that onnxruntime does not know the AliasWithName operator.

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op

Have you experienced this issue?

Did you find a solution for this issue?


xsidneib avatar xsidneib commented on May 30, 2024

Hi guys, has anybody sorted this issue out by adding a custom plugin? Could you share it, please?

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op


chenynCV avatar chenynCV commented on May 30, 2024

Hi guys, has anybody sorted this issue out by adding a custom plugin? Could you share it, please?

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op

I came across the same issue. Has anyone tried torchvision?


chkda avatar chkda commented on May 30, 2024

Hi guys, has anybody sorted this issue out by adding a custom plugin? Could you share it, please?

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op

Facing the same issue.


hieubz avatar hieubz commented on May 30, 2024

Answered in https://detectron2.readthedocs.io/modules/export.html#detectron2.export.export_onnx_model

I still don't know where it is. Would you please point me to it?


hieubz avatar hieubz commented on May 30, 2024

I used Caffe2Tracer to convert to an ONNX model, but when I ran inference, it threw this exception:

[ONNXRuntimeError] : 1 : FAIL : Fatal error: AliasWithName is not a registered function/op
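
AliasWithName is a Caffe2 operator, so onnxruntime has no kernel for it. If you already go through Caffe2Tracer, one option is to stay on the Caffe2 runtime instead of onnxruntime. A rough sketch, assuming the Caffe2Tracer API (cfg/model/inputs built as in the export example above):

```python
# Rough sketch: export and run through Caffe2 instead of onnxruntime, since
# AliasWithName and the other detection ops only exist as Caffe2 operators.
from detectron2.export import Caffe2Tracer

tracer = Caffe2Tracer(cfg, model, inputs)      # cfg/model/inputs as in the sketch above
caffe2_model = tracer.export_caffe2()          # a model runnable by the Caffe2 runtime
caffe2_model.save_protobuf("./caffe2_export")  # writes the protobuf files for deployment
outputs = caffe2_model(inputs)                 # inference without onnxruntime
```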


zhoujinhai avatar zhoujinhai commented on May 30, 2024

Hi,
@jinfagang can you please provide a direction to convert the model to ONNX?
I wanna use it with OpenVINO

Have you successfully used it with OpenVINO?


zhoujinhai avatar zhoujinhai commented on May 30, 2024

Hi,
@jinfagang can you please provide a direction to convert the model to ONNX?
I wanna use it with OpenVINO

@fernandorovai Hi, Does it work converting detectron2 model into OpenVINO IR?

I have the same problem. Does it work with OpenVINO?


hasansalimkanmaz avatar hasansalimkanmaz commented on May 30, 2024

From what I understand from this link, we can't run the model via ONNX; it is only feasible with Caffe2. And we don't have any clue about the post-processing needed to run it with ONNX. Am I missing something?


ronitsinha avatar ronitsinha commented on May 30, 2024

Hey! Is there any update on this issue? It seems like it's still not resolved, and like some others in the thread I'm having trouble with the AliasWithName error in ONNXRuntime.


bouachalazhar avatar bouachalazhar commented on May 30, 2024

Hi,
@jinfagang can you please provide a direction to convert the model to ONNX?
I wanna use it with OpenVINO

@fernandorovai Hi, Does it work converting detectron2 model into OpenVINO IR?

I would like to export a detectron2 model to OpenVINO.


bouachalazhar avatar bouachalazhar commented on May 30, 2024

Hi,
@jinfagang can you please provide a direction to convert the model to ONNX?
I wanna use it with OpenVINO

@fernandorovai Hi, Does it work converting detectron2 model into OpenVINO IR?

I have the same problem. Does it work with OpenVINO?

I would like to export a detectron2 model to OpenVINO.

