Comments (7)
@Slyne This will be available in the Triton 22.02 release.
@Tabrizian @tanmayv25 Can you look into this?
This issue explains the current limitation and why the output is always on the CPU: triton-inference-server/server#3364
Once triton-inference-server/server#3364 is merged, we will enable output binding to GPUs in the ORT backend.
Any update?
@askhade
@deadeyegoodwin
@askhade Thank you for the update!
from onnxruntime_backend.
@askhade
I am serving an encoder-decoder model (TrOCR) with the Triton ONNX backend and have run into a problem:
First, I call the encoder model on the server and get its output. Then, because the output is on the GPU, I have to transfer it to the CPU and convert it to numpy before calling the decoder model on the server. This creates a bottleneck. I hope you can help me with this issue. Thanks a lot.
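One workaround that is often suggested for chaining two models like this is Triton's CUDA shared-memory client API, which lets the server write the encoder output into a GPU region that the decoder request can then read directly, avoiding the numpy round trip on the host. The sketch below is only illustrative, not the backend's documented solution: it assumes the client and server run on the same machine with access to GPU 0, and the model names (trocr_encoder, trocr_decoder), tensor names (pixel_values, encoder_hidden_states) and shapes are placeholders you would replace with your own.

```python
# Hedged sketch: keep the encoder output on the GPU between the two inference calls
# by routing it through a registered CUDA shared-memory region.
import numpy as np
import tritonclient.http as httpclient
import tritonclient.utils.cuda_shared_memory as cudashm

client = httpclient.InferenceServerClient(url="localhost:8000")

# Size of the encoder output in bytes (placeholder shape: 1 x 577 x 768 float32).
enc_out_shape = [1, 577, 768]
byte_size = int(np.prod(enc_out_shape)) * 4

# Allocate a CUDA shared-memory region on GPU 0 and register it with the server.
shm_handle = cudashm.create_shared_memory_region("enc_out_shm", byte_size, 0)
client.register_cuda_shared_memory(
    "enc_out_shm", cudashm.get_raw_handle(shm_handle), 0, byte_size)

# 1) Encoder call: ask Triton to write its output directly into the shared region.
pixel_values = np.random.rand(1, 3, 384, 384).astype(np.float32)  # placeholder input
enc_in = httpclient.InferInput("pixel_values", pixel_values.shape, "FP32")
enc_in.set_data_from_numpy(pixel_values)
enc_out = httpclient.InferRequestedOutput("encoder_hidden_states")
enc_out.set_shared_memory("enc_out_shm", byte_size)
client.infer("trocr_encoder", inputs=[enc_in], outputs=[enc_out])

# 2) Decoder call: point its input at the same region, so the intermediate tensor
#    never has to be copied back to the host and converted to numpy.
dec_in = httpclient.InferInput("encoder_hidden_states", enc_out_shape, "FP32")
dec_in.set_shared_memory("enc_out_shm", byte_size)
# ... add the other decoder inputs (e.g. input_ids) as regular numpy tensors ...
dec_result = client.infer("trocr_decoder", inputs=[dec_in])

# Clean up the region when finished.
client.unregister_cuda_shared_memory("enc_out_shm")
cudashm.destroy_shared_memory_region(shm_handle)
```

If the two models always run back to back, an ensemble (or a BLS script) on the server side is another way to keep the intermediate tensor out of the client entirely; the shared-memory sketch above is just the smallest change to a client that already calls the two models separately.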
Related Issues (20)
- Onnxruntime Error
- Fatal error: TRT:EfficientNMS_TRT(-1) is not a registered function/op HOT 2
- InvalidArgumentError: The tensor Input (Input) of Slice op is not initialized.
- How to create onnx model for ragged batching?
- Add `enable_dynamic_shapes` To Model Config To Resolve CNN Memory Leaks With OpenVino EP
- GPU memory leak with high load for ONNX model HOT 3
- Onnxruntime backend error when workload is high since Triton uses CUDA 12 HOT 4
- how to use onnxruntime profiling in triton
- Error while Loading YOLOv8 Model with EfficientNMS_TRT Plugin in TRITON HOT 2
- Openvino doesn't work, it degrades inference performance instead HOT 4
- Support arbitrary options for execution providers
- Model failed to create because of output dimensions
- Question: Does ONNX-RT silently fallbacks to CPU? HOT 1
- Request for Supporting minShapes/optShapes/maxShapes for TensorRT HOT 1
- Will onxxruntime backend support INT8 on cpu ? HOT 1
- Enable "trt_build_heuristics_enable" optimization for onnxruntime-TensorRT HOT 2
- CPU Throttling when Deploying Triton with ONNX Backend on Kubernetes HOT 6
- Facing errors when installing onnxruntime backend for triton
- Failed to allocated memory for requested buffer of size X
- Is onnxruntime-genai supported? HOT 1