Comments (7)
Looks similar to #109
For Triton to support dynamic batching, the output shape should be [-1, <dim0>, <dim1>, <dim2>]; however, when reading the onnx model, the "output" shape was [1, -1, -1, -1]. Note, the first dimension should have been -1.
Is there a way to make batch in [batch, Unsqueezeoutput_dim_1, height, width] be treated as -1?
@sarperkilic Are you able to run a batch size > 1 request on your model.onnx outside Triton using ONNX Runtime?
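For context: when max_batch_size > 0 in a Triton model configuration, dims exclude the batch dimension, and Triton requires the model itself to expose that leading dimension as -1. A sketch of what the config.pbtxt could look like for this model (max_batch_size is an illustrative assumption; the tensor names and data types follow the config output quoted later in the thread):

name: "segmentation_model"
platform: "onnxruntime_onnx"
max_batch_size: 8  # illustrative value
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, -1, -1 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_INT64
    dims: [ 1, -1, -1 ]
  }
]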
@tanmayv25 can the batch dimension be changed to -1 using polygraphy?
$ python3 -m pip install polygraphy
$ polygraphy surgeon sanitize <input_model.onnx> -o <output_model.onnx> --override-input-shapes input:[-1,3,height,width]
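Note that --override-input-shapes only rewrites graph inputs, while the problematic dimension in this issue is on the output. A minimal sketch of editing the output's leading dimension directly with the onnx Python package (the file names and the symbolic name "batch" are placeholders; this is a suggestion, not a verified fix for this model):

import onnx

model = onnx.load("input_model.onnx")

# Rewrite the leading output dimension from the fixed value 1 to a
# symbolic name, so shape readers report it as dynamic (-1).
out_dim = model.graph.output[0].type.tensor_type.shape.dim[0]
out_dim.dim_param = "batch"

onnx.save(model, "output_model.onnx")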
Hello,
I tried polygraphy.
First, I inspected my model as follows:
$ polygraphy inspect model model.onnx
[W] 'colored' module is not installed, will not use colors when logging. To enable colors, please install the 'colored' module: python3 -m pip install colored
[I] Loading model: /home/mert/Documents/model.onnx
[I] ==== ONNX Model ====
Name: torch-jit-export | ONNX Opset: 11
---- 1 Graph Input(s) ----
{input [dtype=float32, shape=('batch', 3, 'height', 'width')]}
---- 1 Graph Output(s) ----
{output [dtype=int64, shape=('batch', 'Unsqueezeoutput_dim_1', 'height', 'width')]}
---- 74 Initializer(s) ----
---- 165 Node(s) ----
And then, I tried to change the input shape as you said, but it failed. It says no matches found.
$ polygraphy surgeon sanitize model.onnx -o output_model.onnx --override-input-shapes input:[-1,3,height,width]
zsh: no matches found: input:[-1,3,height,width]
What do you suggest?
Thanks
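("no matches found" is zsh's glob-expansion error: the unquoted [...] in the shape argument is being treated as a filename pattern. Quoting the argument should let the command reach polygraphy; this is an inference from standard zsh behavior, not something confirmed in this thread.)
$ polygraphy surgeon sanitize model.onnx -o output_model.onnx --override-input-shapes 'input:[-1,3,height,width]'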
> @sarperkilic Are you able to run a batch size > 1 request on your model.onnx outside Triton using ONNX Runtime?
Hello,
I have converted my model input and output tensor shapes as follows:
And then I tried to run inference using ONNX Runtime. It works.
I share the input and output logs of ONNX Runtime below:
batch array shape: (7, 3, 524, 870)
output shape: (1, 1, 7, 524, 870)
But when I try to load the model into Triton server, I get the same error:
tensor 'output': for the model to support batching the shape should have at least 1 dimension and the first dimension must be -1; but shape expected by the model is [1,1,-1,-1]
I also tried to load the model into Triton server with the tritonserver --strict-model-config=false command and then ran curl localhost:8000/v2/models/segmentation_model/config. Here is the output:
"input":[{"name":"input","data_type":"TYPE_FP32","format":"FORMAT_NONE","dims":[-1,3,-1,-1],
"output":[{"name":"output","data_type":"TYPE_INT64","dims":[1,1,-1,-1]
How can I solve this problem?
Thanks
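For reference, a minimal sketch of the kind of ONNX Runtime check described above (the input tensor name "input" comes from the polygraphy output earlier; the random data is a placeholder):

import numpy as np
import onnxruntime as ort

# Dummy batch of 7 images, matching the shapes reported above.
batch = np.random.rand(7, 3, 524, 870).astype(np.float32)

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": batch})

print("batch array shape:", batch.shape)   # (7, 3, 524, 870)
print("output shape:", outputs[0].shape)   # reported here as (1, 1, 7, 524, 870)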
Triton is still reading the output tensor shape as [1, 1, -1, -1]
I don't understand these shapes in your comment:
batch array shape: (7, 3, 524, 870)
output shape: (1, 1, 7, 524, 870)
Why is there an extra dimension in the output tensor? And why is the batch size 7 in the 3rd dimension instead of the first?
> And then I tried to run inference using ONNX Runtime. It works.
I presume onnx runtime doesn't apply strict output validation as needed by Triton. Something is wrong with the model; the generated tensor (1, 1, 7, 524, 870) is definitely not compliant with [-1, 1, height, width].
I don't know why the output tensor shape has an extra dimension.
My model was trained with PyTorch. I still have the .pth file.
If you suggest a way, I can re-convert from .pth to .onnx.
When I try batch-size=1 inference on Triton, the output tensor shape is (1, 524, 870).
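If you do re-export, a sketch of the export call with dynamic axes so the batch dimension stays first and symbolic (the way the model is loaded, the file names, and the dummy shape are placeholder assumptions; opset 11 matches the inspect output above):

import torch

# Placeholder: load the trained model however the .pth was saved.
model = torch.load("model.pth")
model.eval()

# Dummy input; the batch/height/width axes are declared dynamic below.
dummy = torch.randn(1, 3, 524, 870)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    opset_version=11,
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={
        "input": {0: "batch", 2: "height", 3: "width"},
        "output": {0: "batch", 2: "height", 3: "width"},
    },
)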
Unfortunately, I don't know why it would be happening either. Have you asked it here?
We obtain the dimension for the tensor in onnxruntime_backend here. This is giving [1, 1, -1, -1].
I am transferring the issue to the onnxruntime_backend project. But I think you may be better off working with the pytorch and onnx teams to fix your model to generate outputs in the expected format.
Related Issues (20)
- InvalidArgumentError: The tensor Input (Input) of Slice op is not initialized.
- How to create onnx model for ragged batching?
- Add `enable_dynamic_shapes` To Model Config To Resolve CNN Memory Leaks With OpenVino EP
- GPU memory leak with high load for ONNX model HOT 3
- Onnxruntime backend error when workload is high since Triton uses CUDA 12 HOT 4
- how to use onnxruntime profiling in triton
- Error while Loading YOLOv8 Model with EfficientNMS_TRT Plugin in TRITON HOT 2
- Openvino doesn't work, it degrades inference performance instead HOT 4
- Support arbitrary options for execution providers
- Model failed to create because of output dimensions
- Question: Does ONNX-RT silently fallbacks to CPU? HOT 1
- Request for Supporting minShapes/optShapes/maxShapes for TensorRT HOT 1
- Will onxxruntime backend support INT8 on cpu ? HOT 1
- Enable "trt_build_heuristics_enable" optimization for onnxruntime-TensorRT HOT 2
- CPU Throttling when Deploying Triton with ONNX Backend on Kubernetes HOT 6
- Facing errors when installing onnxruntime backend for triton
- Failed to allocated memory for requested buffer of size X
- Is onnxruntime-genai supported? HOT 1
- UNAVAILABLE: Unsupported: Triton TRITONBACKEND API version: X does not support 'onnxruntime' TRITONBACKEND API version X
- OpenVINO EP doesn't respect threading parameters