PaddleSharp

💗 .NET wrapper for the PaddleInference C API, including PaddleOCR, PaddleDetection, and a rotation detector. It supports Windows (x64), NVIDIA CUDA 10.2+ GPUs, and Linux (Ubuntu 22.04 x64), and currently contains the following main components:

  • PaddleOCR supports 14 OCR language models downloaded on demand, rotated text angle detection, 180-degree text detection, and table recognition.
  • PaddleDetection supports the PP-YOLO and PicoDet detection models.
  • RotationDetection uses Baidu's official text_image_orientation_infer model to detect a text image's rotation angle (0, 90, 180, 270).

NuGet Packages/Docker Images

Release notes

Please check out this page.

Infrastructure packages

  • Sdcb.PaddleInference: Paddle Inference C API .NET binding
  • Sdcb.PaddleInference.runtime.win64.openblas: Paddle Inference native windows-x64-openblas binding
  • Sdcb.PaddleInference.runtime.win64.mkl: Paddle Inference native windows-x64-mkldnn binding

Note: unlike Windows (Sdcb.PaddleInference.runtime.win64.mkl), Linux does not need a native binding NuGet package; instead, you can/should base your development on one of the following Docker images:

  • sdflysha/dotnet6-paddle: PaddleInference 2.4.0, OpenCV 4.6.0, based on the official Ubuntu 20.04 .NET 6 Runtime image
  • sdflysha/dotnet6sdk-paddle: PaddleInference 2.4.0, OpenCV 4.6.0, based on the official Ubuntu 20.04 .NET 6 SDK image

Paddle Inference GPU package

Since the GPU packages are too large (>1.5GB), they cannot be published to nuget.org; GitHub also enforces a 250MB upload limit. There are some related issues about this.

However, you can build your own GPU NuGet package using 01-build-native.linq.

Here are the GPU packages that I compiled (not official Baidu builds):

  • cuda117_cudnn84_tr84_sm86: win64/CUDA 11.7/cuDNN 8.4/TensorRT 8.4/sm86 binding
  • cuda102_cudnn76_sm61_75: win64/CUDA 10.2/cuDNN 7.6/sm61+sm75 binding
  • cuda116_cudnn84_sm86_onnx: win64/CUDA 11.6/cuDNN 8.4/sm86/onnx binding

PaddleOCR packages

  • Sdcb.PaddleOCR: PaddleOCR library (based on Sdcb.PaddleInference); see the usage sketch below
  • Sdcb.PaddleOCR.Models.Online: online PaddleOCR models, downloaded on first use
  • Sdcb.PaddleOCR.Models.LocalV3: full local v3 models, including multiple languages (~130MB)
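
A minimal usage sketch, assuming the Sdcb.PaddleOCR.Models.LocalV3 package and OpenCvSharp4 for image loading (the chosen language model and image path are illustrative):

using System;
using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.LocalV3;

FullOcrModel model = LocalFullModels.EnglishV3;            // any LocalFullModels.* entry works
using PaddleOcrAll all = new(model, PaddleDevice.Mkldnn()) // CPU inference via mkldnn
{
    AllowRotateDetection = true,     // detect rotated text boxes
    Enable180Classification = false, // set true if text may be upside down
};
using Mat src = Cv2.ImRead("sample.jpg");
PaddleOcrResult result = all.Run(src);
Console.WriteLine(result.Text);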

Rotation Detection packages (part of PaddleCls)

  • Sdcb.RotationDetector: RotationDetector library (based on Sdcb.PaddleInference); see the usage sketch below
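
A rough usage sketch (the class and member names here are my best understanding of the Sdcb.RotationDetector API and should be treated as assumptions; check the package documentation if they differ):

using System;
using OpenCvSharp;
using Sdcb.RotationDetector;

using PaddleRotationDetector detector = new(RotationDetectionModel.EmbeddedDefault); // bundled model
using Mat src = Cv2.ImRead("input.jpg");
RotationResult result = detector.Run(src);
Console.WriteLine(result.Rotation); // _0 / _90 / _180 / _270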

PaddleDetection packages

  • Sdcb.PaddleDetection: PaddleDetection library (based on Sdcb.PaddleInference)

Usage

FAQ

Why does my code run fine on my Windows machine but throw DllNotFoundException on another machine?

  1. Please ensure the latest Visual C++ Redistributable is installed on Windows (it is typically installed automatically if you have Visual Studio installed). Otherwise, it will fail with the following error (Windows only):

    DllNotFoundException: Unable to load DLL 'paddle_inference_c' or one of its dependencies (0x8007007E)
    

    If the error is Unable to load DLL OpenCvSharpExtern.dll or one of its dependencies, then most likely Media Foundation is not installed (common on Windows Server 2012 R2 machines).

  2. Many old CPUs do not support AVX instructions. Please ensure your CPU supports AVX, or download the x64-noavx-openblas DLLs and disable Mkldnn: PaddleConfig.Defaults.UseMkldnn = false; (see the sketch after this list).

  3. If you're using Win7-x64 and your CPU does support AVX2, then you might also need to extract the following 3 DLLs into the C:\Windows\System32 folder to make it run:

    • api-ms-win-core-libraryloader-l1-2-0.dll
    • api-ms-win-core-processtopology-obsolete-l1-1-0.dll
    • API-MS-Win-Eventing-Provider-L1-1-0.dll

    You can download these 3 DLLs here: win7-x64-onnxruntime-missing-dlls.zip
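
As mentioned in item 2, a minimal sketch for non-AVX CPUs (assuming the x64-noavx-openblas DLLs are already deployed next to your application):

using Sdcb.PaddleInference;

// Mkldnn requires AVX; fall back to the OpenBLAS backend on CPUs without it.
PaddleConfig.Defaults.UseMkldnn = false;
// ...then create PaddleOcrAll or other predictors as usual.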

How to enable GPU?

Enabling GPU support can significantly improve throughput and lower CPU usage.

Steps to use the GPU on Windows:

  1. (for Windows) Install the Sdcb.PaddleInference.runtime.win64.cuda* package instead of Sdcb.PaddleInference.runtime.win64.mkl; do not install both.
  2. Install CUDA from NVIDIA and add it to PATH (or LD_LIBRARY_PATH on Linux); see the sketch after these steps.
  3. Install cuDNN from NVIDIA and add it to PATH (or LD_LIBRARY_PATH on Linux).
  4. Install TensorRT from NVIDIA and add it to PATH (or LD_LIBRARY_PATH on Linux).
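
If you prefer not to edit the system PATH, here is a minimal sketch (referenced in step 2) that prepends the native library directories at process startup, before any Paddle call; the install paths below are illustrative assumptions, so adjust them to your versions:

using System;

string[] nativeDirs =
{
    @"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin", // assumed CUDA install path
    @"C:\tools\cudnn\bin",                                           // assumed cuDNN location
    @"C:\tools\TensorRT\lib",                                        // assumed TensorRT location
};
Environment.SetEnvironmentVariable("PATH",
    string.Join(';', nativeDirs) + ";" + Environment.GetEnvironmentVariable("PATH"));
// Native CUDA/cuDNN/TensorRT DLLs loaded later in this process will now be found.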

You can refer to this blog post about GPU usage on Windows (in Chinese): 关于PaddleSharp GPU使用 常见问题记录 (notes on common PaddleSharp GPU issues).

If you're using Linux, you need to compile your own OpenCvSharp4 environment following the Docker build scripts, and then complete the CUDA/cuDNN/TensorRT configuration steps above.

After these steps are completed, specify PaddleDevice.Gpu() as the Paddle device configuration parameter and enjoy🚀.
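
For example, with Sdcb.PaddleOCR the only change from the CPU setup is the device parameter (a sketch, assuming the GPU runtime package is installed and using the same LocalV3 model as above):

using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.LocalV3;

FullOcrModel model = LocalFullModels.EnglishV3;
using PaddleOcrAll all = new(model, PaddleDevice.Gpu()); // GPU instead of PaddleDevice.Mkldnn()
// ...then call all.Run(src) as usual.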

TensorRT

To use TensorRT, specify PaddleDevice.Gpu().And(PaddleDevice.TensorRt("shape-info.txt")) instead of PaddleDevice.Gpu().

Please be aware that this shape info text file is bound to your model; different models have different shape info. So if you're using a composite model like Sdcb.PaddleOCR, you should use a different shape info file for each model, like this:

using PaddleOcrAll all = new(model,
   PaddleDevice.Gpu().And(PaddleDevice.TensorRt("det.txt")),  // detection model
   PaddleDevice.Gpu().And(PaddleDevice.TensorRt("cls.txt")),  // 180-degree classification model
   PaddleDevice.Gpu().And(PaddleDevice.TensorRt("rec.txt")))  // recognition model
{
   Enable180Classification = true,
   AllowRotateDetection = true,
};

In this case:

  • DetectionModel will use det.txt
  • 180DegreeClassificationModel will use cls.txt
  • RecognitionModel will use rec.txt

NOTE :

The first TensorRT run will generate a shape info .txt file in this folder: %AppData%\Sdcb.PaddleInference\TensorRtCache. It takes around 100 seconds to finish generating the TensorRT cache; after that, it should be faster than the plain GPU mode.

If something strange happens (for example, you mistakenly created the same shape-info.txt file for different models), you can delete this folder to regenerate the TensorRT cache: %AppData%\Sdcb.PaddleInference\TensorRtCache.
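
A minimal sketch to clear that cache folder programmatically so the shape info is regenerated on the next run:

using System;
using System.IO;

string cacheDir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    "Sdcb.PaddleInference", "TensorRtCache");
if (Directory.Exists(cacheDir))
{
    Directory.Delete(cacheDir, recursive: true); // shape info files are rebuilt on the next TensorRT run
}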

Thanks & Sponsors

Contact

QQ group for C#/.NET computer vision technical discussion (C#/.NET计算机视觉技术交流群): 579060605

