
yolov5-onnxruntime's People

Contributors

itsnine


yolov5-onnxruntime's Issues

How to find onnxruntime_cxx_api.h?

Hi, I have built onnxruntime on macOS, but there is no onnxruntime_cxx_api.h file under the MacOS/Release folder at all.
There is also no lib subfolder under it; there is only MacOS/Release/libonnxruntime.dylib.

How should I set these paths under macOS?

I only found two such files in the source tree:

/libs/onnxruntime//cmake/external/onnxruntime-extensions/includes/onnxruntime/onnxruntime_cxx_api.h
/libs/onnxruntime//include/onnxruntime/core/session/onnxruntime_cxx_api.h

I encountered a bug in detect

Hello, when I ran detection with the same ONNX model in the original yolov5 project and in this project, I got different results. The class results from the original yolov5 project are correct, but only some of the classes from this project are correct. The detection boxes are the same, but the confidence scores also differ. How can this be solved?

CUDA failure 101: invalid device ordinal

After building, it runs successfully on CPU, but not on GPU.

It generates the following error:

root@4dbec2b03d4e:/ssd/liuhao/yolov5-onnxruntime/build# ./yolo_ort --model_path ../models/yolov5m.onnx --image ../images/bus.jpg --class_names ../models/coco.names --gpu
Inference device: GPU
/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true]
/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:115 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true]
CUDA failure 101: invalid device ordinal ; GPU=0 ; hostname=4dbec2b03d4e ; expr=cudaSetDevice(info_.device_id);

My environment is:
Ubuntu 18.04
CUDA 11.03
onnxruntime x64-gpu-1.8.0
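
For what it's worth, "invalid device ordinal" means the requested device index is not visible to the process (for example a container started without GPU access), not a problem with the model itself. A minimal standalone check, not part of this repo, could look like this:

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    // If this prints 0 devices (or an error), the container/host cannot see
    // any CUDA device, and cudaSetDevice(0) inside onnxruntime fails the same way.
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    std::printf("visible CUDA devices: %d (%s)\n", count, cudaGetErrorString(err));
    return 0;
}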

Support for ONNXRuntime 1.12+

Are there any plans to add support for ONNXRuntime 1.12+, i.e. replacing session.GetInputName with session.GetInputNameAllocated and replacing the const char* input and output name vectors with Ort::AllocatedStringPtr vectors?
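
Not from this repo, just a rough sketch of what that migration could look like, assuming the newer onnxruntime C++ API where Session::GetInputNameAllocated returns an owning Ort::AllocatedStringPtr (the helper collectInputNames is made up for illustration):

#include <onnxruntime_cxx_api.h>
#include <vector>

std::vector<const char*> collectInputNames(Ort::Session& session,
                                           std::vector<Ort::AllocatedStringPtr>& holders)
{
    Ort::AllocatorWithDefaultOptions allocator;
    std::vector<const char*> names;
    for (size_t i = 0; i < session.GetInputCount(); ++i)
    {
        // The AllocatedStringPtr owns the string, so it must outlive every
        // use of the raw pointer pushed into `names`.
        holders.push_back(session.GetInputNameAllocated(i, allocator));
        names.push_back(holders.back().get());
    }
    return names;
}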

Add a license to a repository

First of all, thanks for the working C++ code that uses onnxruntime with yolov5 👍
Could you please add a license file to the repository so that it is clear how your code can be used in other projects?

ort batch inference

Hello, I have tested ORT C++ inference successfully, but I could not make batch inference work. Could you please give a batch inference C++ example?
Thank you very much!
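
Not an official example from this repo, just a rough sketch of the batching step, assuming every image has already been preprocessed into a float CHW blob of the same size and the model was exported with a dynamic batch dimension (the helper makeBatchTensor is made up for illustration):

#include <onnxruntime_cxx_api.h>
#include <cstring>
#include <vector>

Ort::Value makeBatchTensor(const std::vector<std::vector<float>>& chwBlobs,
                           std::vector<float>& batchedBuffer,   // must outlive the returned tensor
                           int64_t channels, int64_t height, int64_t width)
{
    const int64_t batch = static_cast<int64_t>(chwBlobs.size());
    const size_t perImage = static_cast<size_t>(channels * height * width);

    // Concatenate the per-image CHW blobs into one contiguous N*C*H*W buffer.
    batchedBuffer.assign(static_cast<size_t>(batch) * perImage, 0.0f);
    for (size_t i = 0; i < chwBlobs.size(); ++i)
        std::memcpy(batchedBuffer.data() + i * perImage,
                    chwBlobs[i].data(), perImage * sizeof(float));

    const std::vector<int64_t> shape{batch, channels, height, width};
    Ort::MemoryInfo memoryInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    return Ort::Value::CreateTensor<float>(memoryInfo, batchedBuffer.data(), batchedBuffer.size(),
                                           shape.data(), shape.size());
}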

How to change imgsz?

Hello, in my experiment all the images are high resolution, so scaling down to 640x640 makes the targets too small. I tried modifying the C++ files in the src folder to change 640 to 1280, but after compiling the model still requires a 640 input. How should I modify the project?

Half precision

The official yolov5 PyTorch repo uses half precision. I tried the ONNX model with half precision in Python and the speed increased. Can this repo support half precision?

Invalid input name: �����U

(CUDA113+CUDNN82) han@han:~/Desktop/hxb_projects/CPP_Instance/10-30/git_3/yolov5-onnxruntime/build$ ./yolo_ort --model_path /home/han/Desktop/hxb_projects/CPP_Instance/10-30/git_3/yolov5-onnxruntime/models/yolov5s.onnx --image /home/han/Desktop/hxb_projects/CPP_Instance/10-30/git_3/yolov5-onnxruntime/images/bus.jpg --class_names /home/han/Desktop/hxb_projects/CPP_Instance/10-30/git_3/yolov5-onnxruntime/coco.names
Inference device: CPU
Input shape: 1
Input shape: 3
Input shape: 640
Input shape: 640
Input name: images
Output name: images
Model was initialized.
Invalid input name: �����U

[Question] Sample with Anchor Box

This is a great reference for C++.
Question:
At https://github.com/itsnine/yolov5-onnxruntime/blob/master/src/detector.cpp#L112, why are we considering only the first element of outputTensors when it has 4 output arrays?
We could request all 4 outputs if we changed the parameters at https://github.com/itsnine/yolov5-onnxruntime/blob/master/src/detector.cpp#L190.

Any particular reason to go this way?

I could not find any reference that includes anchor boxes. Could you please add one?
Thanks.

An error occurs in sessionOptions.AppendExecutionProvider_CUDA(cudaOption)

env:
platform: windows 10 x64
onnxruntime version: gpu 1.7.0

When execution reaches the AppendExecutionProvider_CUDA call, an exception occurs. I found that the cudaOption object's member variables are not right; for example, device_id holds an uninitialized value like -858993460. Can you help me? Thank you very much!
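
One plausible cause, sketched below rather than taken from this repo: in that onnxruntime version OrtCUDAProviderOptions is a plain C struct with no default member initializers, so a local variable declared without an initializer contains stack garbage (MSVC's 0xCCCCCCCC debug fill pattern prints exactly as -858993460). Value-initializing it avoids that:

    // Assumed fix sketch, not the repository's actual code:
    Ort::SessionOptions sessionOptions;
    OrtCUDAProviderOptions cudaOption{};   // value-initialize every field to zero
    cudaOption.device_id = 0;              // then set only the fields you need
    sessionOptions.AppendExecutionProvider_CUDA(cudaOption);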

Inference speed

What is the inference speed? I tried it on an RTX 2060 and only got 10 FPS.

fatal error LNK 1104

Hello,
I am trying to run this example, but when I run "cmake --build ." in the terminal, I always get this error:

LINK : fatal error LNK1104: cannot open file "onnxruntime-win-x64\onnxruntime-win-x64\lib\onnxruntime.lib.lib". [...\build\yolo_ort.vcxproj]

Maybe the "lib.lib" from onnxruntime.lib.lib is the problem, but I don't know how to solve it.
I am using the win-x64-1.10.0 onnxruntime version.

Thanks for your help.

cvtColor does not take effect

In preprocessing

void YOLODetector::preprocessing(cv::Mat &image, float*& blob, std::vector<int64_t>& inputTensorShape)
{
    cv::Mat resizedImage, floatImage;
    cv::cvtColor(image, resizedImage, cv::COLOR_BGR2RGB);
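    // NOTE: letterbox() below reads from the original (still BGR) `image` and
    // overwrites `resizedImage`, so the BGR->RGB conversion above is discarded.
    // That is exactly the "cvtColor does not take effect" problem reported here.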
    utils::letterbox(image, resizedImage, this->inputImageShape,
                     cv::Scalar(114, 114, 114), this->isDynamicInputShape,
                     false, true, 32);

    inputTensorShape[2] = resizedImage.rows;
    inputTensorShape[3] = resizedImage.cols;

    resizedImage.convertTo(floatImage, CV_32FC3, 1 / 255.0);
    blob = new float[floatImage.cols * floatImage.rows * floatImage.channels()];
    cv::Size floatImageSize {floatImage.cols, floatImage.rows};

    // hwc -> chw
    std::vector<cv::Mat> chw(floatImage.channels());
    for (int i = 0; i < floatImage.channels(); ++i)
    {
        chw[i] = cv::Mat(floatImageSize, CV_32FC1, blob + i * floatImageSize.width * floatImageSize.height);
    }
    cv::split(floatImage, chw);
}
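
A minimal sketch of one way to keep the conversion, assuming utils::letterbox reads its first argument and writes the letterboxed result into its second (only the reordered beginning of the function is shown; the rest stays as above):

void YOLODetector::preprocessing(cv::Mat &image, float*& blob, std::vector<int64_t>& inputTensorShape)
{
    cv::Mat resizedImage, floatImage;
    // Letterbox first, then convert the letterboxed copy in place, so the
    // RGB pixels are what actually get normalized and split into the blob.
    utils::letterbox(image, resizedImage, this->inputImageShape,
                     cv::Scalar(114, 114, 114), this->isDynamicInputShape,
                     false, true, 32);
    cv::cvtColor(resizedImage, resizedImage, cv::COLOR_BGR2RGB);

    // ... remainder of the function unchanged ...
}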

How to Run?

Hello, thank you for this project. Now I have a problem: how do I run the code after compiling it with CMake?
(screenshots attached)

When I run it in CLion with the args

--model_path ../models/yolov5s.onnx --image ../images/bus.jpg --class_names ../models/coco.names --gpu

ERROR: Failed to access class name path:

Integrate with TensorRT?

I tried out your sample - very cool! I get 110 FPS with a YOLOv5s running CUDA 11.5 on my 1080ti. I am curious what it would take to evaluate performance with TensorRT. Have you tried this? Any pointers?
Thanks.
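
Not something this repo provides; a rough sketch of the onnxruntime route, assuming a TensorRT-enabled onnxruntime build and a version whose C++ API exposes AppendExecutionProvider_TensorRT (a native TensorRT engine would be a separate, larger effort):

#include <onnxruntime_cxx_api.h>

Ort::SessionOptions makeTensorRTSessionOptions()
{
    Ort::SessionOptions sessionOptions;
    OrtTensorRTProviderOptions trtOptions{};   // zero-initialize, then set fields
    trtOptions.device_id = 0;
    sessionOptions.AppendExecutionProvider_TensorRT(trtOptions);
    // Nodes the TensorRT provider cannot handle fall back to whatever
    // providers are registered after it (e.g. CUDA or the default CPU provider).
    return sessionOptions;
}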

Dynamic input shape

There seems to be an issue handling inputs other than 640x640. When I try to feed a 320x1296 input it throws an error:
Got invalid dimensions for input: images for the following indices index: 2 Got: 1296 Expected: 640 index: 3 Got: 320 Expected: 640

I think it has to do with the dynamic input shape checking in the code, which I think is not doing its job correctly.
Can someone point me to where I should look to make it able to handle images of multiple input shapes?
Thanks!
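
One thing worth checking, sketched here under the assumption of the standard onnxruntime C++ API: if the model itself was exported with fixed 640x640 spatial dimensions, onnxruntime will reject any other size regardless of what the C++ wrapper does; dynamic axes show up as -1 in the reported input shape:

#include <onnxruntime_cxx_api.h>
#include <iostream>

void printInputShape(Ort::Session& session)
{
    Ort::TypeInfo typeInfo = session.GetInputTypeInfo(0);
    auto shapeInfo = typeInfo.GetTensorTypeAndShapeInfo();
    // A value of -1 marks a dynamic (symbolic) dimension; a fixed model
    // reports the literal 1 3 640 640 that the error message complains about.
    for (int64_t dim : shapeInfo.GetShape())
        std::cout << dim << " ";
    std::cout << std::endl;
}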
