
PaddleSharp's Introduction

PaddleSharp 🌟

English | 简体中文

💗 .NET wrapper for the PaddleInference C API. It supports Windows (x64) 💻, NVIDIA CUDA 10.2+ GPUs 🎮, and Linux (Ubuntu 22.04 x64) 🐧, and currently contains the following main components:

  • PaddleOCR 📖 supports on-demand model download for 14 OCR languages, rotated-text angle detection, 180-degree text detection, and table recognition 📊.
  • PaddleDetection 🎯 supports the PP-YOLO detection model and the PicoDet model 🏹.
  • RotationDetection 🔄 uses Baidu's official text_image_orientation_infer model to detect the rotation angle of a text image (0, 90, 180, 270).
  • PaddleNLP ChineseSegmenter 📚 supports the PaddleNLP LAC Chinese segmentation model, with tagging and custom-word support.
  • Paddle2Onnx 🔄 allows exporting ONNX models from C#.

NuGet Packages/Docker Images 📦

Release notes 📝

Please check out this page 📄.

Infrastructure packages 🏗️

Sdcb.PaddleInference: Paddle Inference C API .NET binding ⚙️

Native packages 🏗️

Sdcb.PaddleInference.runtime.win64.mkl: win64 + mkldnn
Sdcb.PaddleInference.runtime.win64.openblas: win64 + openblas
Sdcb.PaddleInference.runtime.win64.openblas-noavx: win64 + openblas (no AVX, for old CPUs)
Sdcb.PaddleInference.runtime.win64.cuda102_cudnn76_tr72_sm61_75: win64 / CUDA 10.2 / cuDNN 7.6 / TensorRT 7.2 / sm61+sm75
Sdcb.PaddleInference.runtime.win64.cuda118_cudnn86_tr85_sm86_89: win64 / CUDA 11.8 / cuDNN 8.6 / TensorRT 8.5 / sm86+sm89

Linux OS packages (preview):

Sdcb.PaddleInference.runtime.linux-loongarch64: Loongnix GCC 8.2 Loongarch64
Sdcb.PaddleInference.runtime.linux64.mkl.gcc82: Linux-x64 GCC 8.2 (tested on Ubuntu 22.04)

Be aware that the Linux operating system cannot modify the value of LD_LIBRARY_PATH at runtime, so dependent dynamic libraries (such as libcommon.so) cannot be located unless they are loadable before the main dynamic library (such as libpaddle_inference_c.so); there is also a protobuf error reported upstream: PaddlePaddle/Paddle#62670

Therefore, all NuGet packages for Linux are in a preview state, and I'm unable to resolve this issue for now. If you use the NuGet packages on Linux, you currently need to set the LD_LIBRARY_PATH environment variable manually before running the program, using the following commands (a C# startup sanity check is also sketched after this list):

  • For x64 CPUs: export LD_LIBRARY_PATH=/<program directory>/bin/Debug/net8.0/runtimes/linux-x64/native:$LD_LIBRARY_PATH

  • For Loongson 5000 or above CPUs (linux-loongarch64): export LD_LIBRARY_PATH=/<program directory>/bin/Debug/net8.0/runtimes/linux-loongarch64/native:$LD_LIBRARY_PATH
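A minimal startup sanity check in C# (a sketch only, for .NET 6+; the runtimes/linux-x64/native path assumes the default NuGet native layout described above, and the "linux-x64" RID is a placeholder you may need to change, e.g. to linux-loongarch64):

using System;
using System.IO;
using System.Linq;
using System.Runtime.InteropServices;

// Warn early if LD_LIBRARY_PATH does not point at the native runtime folder on Linux.
if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
{
    string nativeDir = Path.Combine(AppContext.BaseDirectory, "runtimes", "linux-x64", "native");
    string ldPath = Environment.GetEnvironmentVariable("LD_LIBRARY_PATH") ?? "";
    if (!ldPath.Split(':').Contains(nativeDir))
    {
        Console.Error.WriteLine(
            $"LD_LIBRARY_PATH does not contain {nativeDir}; " +
            "libpaddle_inference_c.so and its dependencies may fail to load.");
    }
}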

Some packages are already deprecated (version <= 2.5.0):

Sdcb.PaddleInference.runtime.win64.cuda117_cudnn84_tr84_sm86: win64 / CUDA 11.7 / cuDNN 8.4 / TensorRT 8.4 / sm86
Sdcb.PaddleInference.runtime.win64.cuda102_cudnn76_sm61_75: win64 / CUDA 10.2 / cuDNN 7.6 / sm61+sm75
Sdcb.PaddleInference.runtime.win64.cuda116_cudnn84_sm86_onnx: win64 / CUDA 11.6 / cuDNN 8.4 / sm86 / onnx

Any other package that starts with Sdcb.PaddleInference.runtime might be deprecated.

Baidu packages were downloaded from here: https://www.paddlepaddle.org.cn/inference/master/guides/install/download_lib.html#windows

All Windows packages were compiled manually by me.

Baidu's official GPU packages are too large (>1.5 GB) to publish to nuget.org, and GitHub has a 250 MB upload limit; there are some related issues about this.

However, you can build your own GPU NuGet package using 01-build-native.linq 🛠️.

Paddle Devices

  • Mkldnn - PaddleDevice.Mkldnn()

    Based on mkldnn; generally fast.

  • Openblas - PaddleDevice.Openblas()

    Based on openblas; slower, but the dependency files are smaller and it consumes less memory.

  • Onnx - PaddleDevice.Onnx()

    Based on onnxruntime; also quite fast and consumes less memory.

  • Gpu - PaddleDevice.Gpu()

    Much faster, but requires an NVIDIA GPU and CUDA.

    If you want to use the GPU, refer to the FAQ section How to enable GPU?; CUDA/cuDNN/TensorRT need to be installed manually.

  • TensorRT - PaddleDevice.Gpu().And(PaddleDevice.TensorRt("shape-info.txt"))

    Even faster than plain GPU, but requires a TensorRT environment to be installed.

    Please refer to the TensorRT section for more details. A minimal device-selection example is shown below.
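A minimal usage sketch for picking one of these devices, assuming the Sdcb.PaddleOCR and local-model packages used elsewhere on this page (LocalFullModels.ChineseV3 also appears in an issue below; the exact namespace of LocalFullModels is from memory and may need adjusting):

using System;
using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.Local;

// Choose the inference device here; any PaddleDevice factory from the list above works,
// e.g. Openblas(), Onnx(), Gpu(), or Gpu().And(PaddleDevice.TensorRt("shape-info.txt")).
FullOcrModel model = LocalFullModels.ChineseV3;
using PaddleOcrAll all = new(model, PaddleDevice.Mkldnn())
{
    AllowRotateDetection = true,
    Enable180Classification = false,
};
using Mat src = Cv2.ImRead("sample.jpg");
PaddleOcrResult result = all.Run(src);
Console.WriteLine(result.Text);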

FAQ ❓

Why does my code run fine on my Windows machine but throw DllNotFoundException on another machine? 💻

  1. Please ensure the latest Visual C++ Redistributable is installed on Windows (it is typically installed automatically if you have Visual Studio installed) 🛠️. Otherwise, it will fail with the following error (Windows only):

    DllNotFoundException: Unable to load DLL 'paddle_inference_c' or one of its dependencies (0x8007007E)
    

    If it's Unable to load DLL OpenCvSharpExtern.dll or one of its dependencies, then most likely Media Foundation is not installed on the Windows Server 2012 R2 machine.

  2. Many old CPUs do not support AVX instructions. Please ensure your CPU supports AVX, or download the x64-noavx-openblas DLLs and disable Mkldnn: PaddleDevice.Openblas() 🚀 (a minimal AVX check in C# is sketched after this list).

  3. If you're using Win7-x64 and your CPU does support AVX2, you might also need to extract the following 3 DLLs into the C:\Windows\System32 folder to make it run: 💾

    • api-ms-win-core-libraryloader-l1-2-0.dll
    • api-ms-win-core-processtopology-obsolete-l1-1-0.dll
    • API-MS-Win-Eventing-Provider-L1-1-0.dll

    You can download these 3 DLLs here: win7-x64-onnxruntime-missing-dlls.zip ⬇️
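For item 2, a minimal sketch of a startup check using .NET's hardware intrinsics API (.NET Core 3.0+), assuming, as the Action-based PaddleDevice usage later on this page suggests, that the PaddleDevice factory methods return interchangeable configuration delegates:

using System;
using System.Runtime.Intrinsics.X86;
using Sdcb.PaddleInference;

// On CPUs without AVX2, fall back to openblas (install the noavx runtime package) instead of mkldnn.
var device = Avx2.IsSupported ? PaddleDevice.Mkldnn() : PaddleDevice.Openblas();
Console.WriteLine($"AVX: {Avx.IsSupported}, AVX2: {Avx2.IsSupported}, selected: {(Avx2.IsSupported ? "Mkldnn" : "Openblas")}");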

How to enable GPU? 🎮

Enabling GPU support can significantly improve throughput and lower CPU usage. 🚀

Steps to use GPU in Windows:

  1. (for Windows) Install the package Sdcb.PaddleInference.runtime.win64.cuda* instead of Sdcb.PaddleInference.runtime.win64.mkl; do not install both. 📦
  2. Install CUDA from NVIDIA, and add it to the PATH (Windows) or LD_LIBRARY_PATH (Linux) environment variable 🔧
  3. Install cuDNN from NVIDIA, and add it to the PATH (Windows) or LD_LIBRARY_PATH (Linux) environment variable 🛠️
  4. Install TensorRT from NVIDIA, and add it to the PATH (Windows) or LD_LIBRARY_PATH (Linux) environment variable ⚙️ (a quick PATH check is sketched at the end of this section)

You can refer to this blog page for GPU on Windows: 关于PaddleSharp GPU使用 常见问题记录 (FAQ notes on using PaddleSharp with GPU) 📝

If you're using Linux, you need to compile your own OpenCvSharp4 environment following the Docker build scripts, and complete the CUDA/cuDNN/TensorRT configuration yourself. 🐧

After these steps are completed, you can try specifying PaddleDevice.Gpu() as the Paddle device configuration parameter and enjoy the performance boost! 🎉
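For steps 2-4, a minimal sketch of a quick environment check before switching to PaddleDevice.Gpu() (Windows-oriented; the "cuda"/"cudnn"/"tensorrt" substrings are only heuristics for typical install directory names):

using System;
using System.IO;
using System.Linq;

// Rough check that CUDA/cuDNN/TensorRT directories are on PATH before enabling GPU inference.
string[] pathDirs = (Environment.GetEnvironmentVariable("PATH") ?? "")
    .Split(Path.PathSeparator, StringSplitOptions.RemoveEmptyEntries);

foreach (string keyword in new[] { "cuda", "cudnn", "tensorrt" })
{
    bool found = pathDirs.Any(d => d.Contains(keyword, StringComparison.OrdinalIgnoreCase));
    Console.WriteLine($"{keyword}: {(found ? "found on PATH" : "NOT found on PATH")}");
}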

TensorRT 🚄

To use TensorRT, just specify PaddleDevice.Gpu().And(PaddleDevice.TensorRt("shape-info.txt")) instead of PaddleDevice.Gpu() to make it work. 💡

Please be aware that this shape-info text file is bound to your model. Different models have different shape info, so if you're using a composite pipeline like Sdcb.PaddleOCR, you should use a different shape-info file for each model, like this:

using PaddleOcrAll all = new(model,
   PaddleDevice.Gpu().And(PaddleDevice.TensorRt("det.txt")),
   PaddleDevice.Gpu().And(PaddleDevice.TensorRt("cls.txt")),
   PaddleDevice.Gpu().And(PaddleDevice.TensorRt("rec.txt")))
{
   Enable180Classification = true,
   AllowRotateDetection = true,
};

In this case:

  • DetectionModel will use det.txt 🔍
  • 180DegreeClassificationModel will use cls.txt 🔃
  • RecognitionModel will use rec.txt 🔡

NOTE 📝:

The first round of TensorRT inference will generate a shape-info .txt file in this folder: %AppData%\Sdcb.PaddleInference\TensorRtCache. TensorRT cache generation takes around 100 seconds; after that, it should be faster than the plain GPU device. 🚀

In this case, if something strange happens (for example, you mistakenly create the same shape-info.txt file for different models), you can delete this folder to regenerate the TensorRT cache: %AppData%\Sdcb.PaddleInference\TensorRtCache. 🗑️
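A minimal sketch of that cleanup step, using only standard .NET APIs (the folder name is taken from the note above; deleting it is safe because the cache is regenerated on the next run):

using System;
using System.IO;

// Delete the TensorRT shape-info cache so it gets regenerated on the next run.
string cacheDir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    "Sdcb.PaddleInference", "TensorRtCache");

if (Directory.Exists(cacheDir))
{
    Directory.Delete(cacheDir, recursive: true);
    Console.WriteLine($"Deleted TensorRT cache: {cacheDir}");
}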

Thanks & Sponsors 🙏

Contact 📞

QQ group of C#/.NET computer vision technical communication (C#/.NET计算机视觉技术交流群): 579060605

PaddleSharp's People

Contributors

jeremywu917, luguangguang, n0099, sdcb


PaddleSharp's Issues

How to compress the size of dependency?

Thanks for the great work!

It seems your project packages all of the officially provided models into DLLs.

Is it possible to reference a model path manually, load the models via FileRecognizationModel, and then delete some of the DLLs?

Memory usage is very high; stress testing shows it stays above 1 GB

Memory usage is very high; stress testing shows it stays above 1 GB.
For a machine with only 8 GB of RAM this is too much; with a business system also running, memory usage goes above 80%.
Could memory usage be reduced? The model used is the full server-side model.

Support Paddle-Lite

It would be awesome if this project also packaged and exposed Paddle-Lite models for low memory systems and OCR at the edge.

shutdown of the container

Good afternoon,
we use the following Dockerfile:

FROM mcr.microsoft.com/dotnet/aspnet:6.0-focal as base

ENV DEBIAN_FRONTEND=noninteractive
ENV OPENCV_VERSION=4.6.0

WORKDIR /

# Install opencv dependencies

RUN apt-get update && apt-get -y install --no-install-recommends \
apt-transport-https \
software-properties-common \
wget \
unzip \
ca-certificates \
build-essential \
cmake \
git \
libtbb-dev \
libatlas-base-dev \
libgtk2.0-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libdc1394-22-dev \
libxine2-dev \
libv4l-dev \
libtheora-dev \
libvorbis-dev \
libxvidcore-dev \
libopencore-amrnb-dev \
libopencore-amrwb-dev \
libavresample-dev \
x264 \
libgdiplus \
tesseract-ocr \
tesseract-ocr-rus \
imagemagick \
libtesseract-dev \
apt-utils \
&& apt-get -y clean \
&& rm -rf /var/lib/apt/lists/*

# Setup opencv and opencv-contrib source

RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip && \
unzip ${OPENCV_VERSION}.zip && \
rm ${OPENCV_VERSION}.zip && \
mv opencv-${OPENCV_VERSION} opencv && \
wget https://github.com/opencv/opencv_contrib/archive/${OPENCV_VERSION}.zip && \
unzip ${OPENCV_VERSION}.zip && \
rm ${OPENCV_VERSION}.zip && \
mv opencv_contrib-${OPENCV_VERSION} opencv_contrib

# Build OpenCV

RUN cd opencv && mkdir build && cd build && \
cmake \
-D OPENCV_EXTRA_MODULES_PATH=/opencv_contrib/modules \
-D CMAKE_BUILD_TYPE=RELEASE \
-D BUILD_SHARED_LIBS=OFF \
-D ENABLE_CXX11=ON \
-D BUILD_EXAMPLES=OFF \
-D BUILD_DOCS=OFF \
-D BUILD_PERF_TESTS=OFF \
-D BUILD_TESTS=OFF \
-D BUILD_JAVA=OFF \
-D BUILD_opencv_app=OFF \
-D BUILD_opencv_barcode=OFF \
-D BUILD_opencv_java_bindings_generator=OFF \
-D BUILD_opencv_js_bindings_generator=OFF \
-D BUILD_opencv_python_bindings_generator=OFF \
-D BUILD_opencv_python_tests=OFF \
-D BUILD_opencv_ts=OFF \
-D BUILD_opencv_js=OFF \
-D BUILD_opencv_bioinspired=OFF \
-D BUILD_opencv_ccalib=OFF \
-D BUILD_opencv_datasets=OFF \
-D BUILD_opencv_dnn_objdetect=OFF \
-D BUILD_opencv_dpm=OFF \
-D BUILD_opencv_fuzzy=OFF \
-D BUILD_opencv_gapi=OFF \
-D BUILD_opencv_intensity_transform=OFF \
-D BUILD_opencv_mcc=OFF \
-D BUILD_opencv_objc_bindings_generator=OFF \
-D BUILD_opencv_rapid=OFF \
-D BUILD_opencv_reg=OFF \
-D BUILD_opencv_stereo=OFF \
-D BUILD_opencv_structured_light=OFF \
-D BUILD_opencv_surface_matching=OFF \
-D BUILD_opencv_videostab=OFF \
-D BUILD_opencv_wechat_qrcode=ON \
-D WITH_GSTREAMER=OFF \
-D WITH_ADE=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
.. && make -j$(nproc) && make install && ldconfig

# Download OpenCvSharp

RUN git clone https://github.com/shimat/opencvsharp.git && cd opencvsharp

# Install the Extern lib.

RUN mkdir /opencvsharp/make && cd /opencvsharp/make && \
cmake -D CMAKE_INSTALL_PREFIX=/opencvsharp/make /opencvsharp/src && \
make -j$(nproc) && make install && \
rm -rf /opencv && \
rm -rf /opencv_contrib && \
cp /opencvsharp/make/OpenCvSharpExtern/libOpenCvSharpExtern.so /usr/lib/

# set noninteractive installation

#RUN export DEBIAN_FRONTEND=noninteractive
#install tzdata package
RUN apt-get install -y tzdata

# set your timezone

RUN ln -fs /usr/share/zoneinfo/Europe/Moscow /etc/localtime
RUN dpkg-reconfigure --frontend noninteractive tzdata
RUN echo Europe/Moscow > /etc/timezone
#RUN apk del tzdata
RUN wget -q https://paddle-inference-lib.bj.bcebos.com/2.3.2/cxx_c/Linux/CPU/gcc8.2_avx_mkl/paddle_inference_c.tgz && \
tar -xzf /paddle_inference_c.tgz && \
find /paddle_inference_c -mindepth 2 -name "*.so" -print0 | xargs -0 -I {} mv {} /usr/lib && \
ls /usr/lib/*.so && \
rm -rf /paddle_inference_c && \
rm paddle_inference_c.tgz

FROM mcr.microsoft.com/dotnet/sdk:6.0-focal AS build

WORKDIR /src
COPY ["Services/Image/Services.Image/Services.Image.csproj", "Services/Image/Services.Image/"]
COPY ["Core/Common/Common.csproj", "Core/Common/"]
RUN dotnet restore "Services/Image/Services.Image/Services.Image.csproj"
COPY . .
WORKDIR "/src/Services/Image/Services.Image"
RUN dotnet build "Services.Image.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "Services.Image.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .

WORKDIR /app/x64
RUN ln -s /usr/lib/x86_64-linux-gnu/libdl-2.31.so libdl.so
RUN ln -s /usr/lib/x86_64-linux-gnu/liblept.so.5 liblept.so.5
RUN ln -s /usr/lib/x86_64-linux-gnu/liblept.so.5 libleptonica-1.80.0.so
RUN ln -s /usr/lib/x86_64-linux-gnu/libtesseract.so.4.0.1 libtesseract41.so

WORKDIR /app
ENTRYPOINT ["dotnet", "Services.Image.dll"]

And we get a complete shutdown of the container when a large file is transferred during text recognition.
If a small file is sent for processing, everything works and the container does not crash.
There are errors in the container logs
Error in boxClipToRectangle: box outside rectangle
Error in pixScanForForeground: invalid box


1. How can we reduce memory consumption while it is running?
2. How can we prevent the container from crashing?

Deployment in a Linux environment

Hello, can this be deployed to a Linux environment? I see there is a Windows runtime, and my understanding is that the current packages can only run on Windows x64. Is that correct? Thanks.

One or more errors occurred. (External component has thrown an exception.)

1. Test environment
1.1. A WebAPI service that integrates both PaddleOCR and PaddleDetection; a singleton pattern instantiates a single object on the first request.
1.2. CUDA 11.7, cuDNN 8.4, TensorRT 8.4, inference in GPU mode.
1.3. Windows 11 operating system

3. An exception is thrown during execution:
at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)

[Questions]
1. When using queued mode, no logs are written to the console.
2. When an exception occurs during a call, can the underlying cause be recorded in the log?

No symbol file loaded for paddle_inference.dll

It works fine locally, but after deploying to an Alibaba Cloud server it fails: Unhandled exception at 0x00007FFDA784F3F1 (paddle_inference.dll) in w3wp.exe: a fatal program exit was requested. No symbol file loaded for paddle_inference.dll. The error occurs when recognizing an image. What could be the cause? I have not been able to resolve it.

System.AccessViolationException exception

Running under IIS on Windows Server 2019; it happens intermittently, with both .NET 6 and .NET 7.

Application: w3wp.exe
CoreCLR Version: 7.0.523.17405
.NET Version: 7.0.5
Description: The process was terminated due to an unhandled exception.
Exception Info: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Stack:
   at Sdcb.PaddleInference.Native.PaddleNative.PD_PredictorRun(IntPtr)
   at Sdcb.PaddleInference.Native.PaddleNative.PD_PredictorRun(IntPtr)
   at Sdcb.PaddleOCR.PaddleOcrClassifier.ShouldRotate180(OpenCvSharp.Mat)
   at Sdcb.PaddleOCR.PaddleOcrClassifier.Run(OpenCvSharp.Mat)
   at Sdcb.PaddleOCR.PaddleOcrAll+<>c__DisplayClass24_0.<Run>b__0(OpenCvSharp.RotatedRect)
   at System.Linq.Enumerable+SelectArrayIterator`2[[OpenCvSharp.RotatedRect, OpenCvSharp, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6adad1e807fea099],[System.__Canon, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].ToArray()
   at Sdcb.PaddleOCR.PaddleOcrAll.Run(OpenCvSharp.Mat, Int32)
   at Sdcb.PaddleOCR.QueuedPaddleOcrAll.ProcessQueue()
   at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(System.Threading.Thread, System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
   at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef, System.Threading.Thread)
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool+WorkerThread.WorkerThreadStart()

Support for osx

Hi, I spent a few days trying to make it work on a Mac M1.
The problematic part is building paddle_inference_c, as I ran into an issue with duplicate gflags, so I changed some CMake files to make it work.
However, I see there are warning outputs:
Warn: OSPlatform is not windows or linux, platform might not supported.
Would you be willing to release an osx runtime package, and remove that warning from the output? :)
I made a simple package: Sdcb.PaddleInference.runtime.osx_m1

Source code of paddle_inference_c.dll

Is the code of paddle_inference_c.dll open source? I'd like to learn how to compile a C++ dynamic library for C# to call;
the dllexport functions I write myself that return string always produce garbled text.

ExternalError: CUDA error(3), initialization error.

1. Test environment
1.1. A WebAPI service that integrates both PaddleOCR and PaddleDetection; a singleton pattern instantiates a single object on the first request.
1.2. CUDA 11.7, cuDNN 8.4, TensorRT 8.4, inference in GPU mode.
1.3. Windows 11 operating system

2. Intermittent exception information:

C++ Traceback (most recent call last):

Not support stack backtrace yet.


Error Message Summary:

ExternalError: CUDA error(3), initialization error.
[Hint: Please search for the error code(3) on website (https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038) to get Nvidia's official solution and advice about CUDA Error.] (at C:_\code\Paddle\paddle\fluid\platform\device\gpu\gpu_info.cc:289)

3. Located the problem through repeated testing (screenshots attached in the original issue).

4. Added a concurrency lock to the business code; the problem remains (screenshot attached in the original issue).

Using GPU on Windows: is GPU memory not being released after OCR causing errors?

The code is roughly as in the attached screenshot: a single static instance; after calling Run twice it reports
System.AccessViolationException: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
or occasionally CUDNN error(4), CUDNN_STATUS_INTERNAL_ERROR.
There is no such problem when using the CPU (single-threaded). How should the GPU be used correctly for OCR?


Field not found: "Sdcb.PaddleInference.PaddleConfig.Defaults"

The following configuration items cannot be called:

PaddleConfig.Defaults.EnableGLog = EnableGLog; PaddleConfig.Defaults.UseMkldnn = UseMkldnn; PaddleConfig.Defaults.MkldnnCacheCapacity = MkldnnCacheCapacity;

As soon as they are called, this error is reported, but the subsequent OCR functionality works fine.

Error when running on Linux: System.DllNotFoundException

Unhandled exception. System.DllNotFoundException: Unable to load shared library 'paddle_inference_c' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libpaddle_inference_c: cannot open shared object file: No such file or directory
   at Sdcb.PaddleInference.Native.PaddleNative.PD_ConfigCreate()
   at Sdcb.PaddleInference.PaddleConfig..ctor()
   at Sdcb.PaddleOCR.PaddleOcrDetector..ctor(String modelDir)
   at Sdcb.PaddleOCR.PaddleOcrAll..ctor(String modelPath, String labelFilePath)
   at LocalPaddleOcr.Program.Main(String[] args) in /home/cuiliang/dotnetocr/Program.cs:line 30
   at LocalPaddleOcr.Program.<Main>(String[] args)

System.IndexOutOfRangeException: Index was outside the bounds of the array.

My System:
Win10 64bit, Sdcb.PaddleOCR 2.5.0.0, C# console app
——————————————————————————————
My Code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;

namespace PaddleOCR_2._6
{
class Program
{
static void Main(string[] args)
{
string det_model_path = "./model_2/ch_ppocr_mobile_v2.0_cls_infer/";
string cls_model_path = "./model_2/ch_PP-OCRv3_det_infer/";
string rec_model_path = "./model_2/ch_PP-OCRv3_rec_infer/";
string label_path = "./ppocr_keys_v1.txt";

        DetectionModel detectModel = DetectionModel.FromDirectory(det_model_path);
        ClassificationModel classifyModel = ClassificationModel.FromDirectory(cls_model_path);
        RecognizationModel recognizeModel = RecognizationModel.FromDirectory(rec_model_path,
                                                                                        label_path, ModelVersion.V3);

        FullOcrModel model = new FullOcrModel(detectModel, classifyModel, recognizeModel);
        PaddleConfig.Defaults.UseMkldnn = true;
        using (PaddleOcrAll all = new PaddleOcrAll(model)
        {
            AllowRotateDetection = true, /* allow detection of rotated text */
            Enable180Classification = false, /* allow recognition of text rotated more than 90 degrees */
        })
        {
            // Load local file by following code:
            using (Mat src = Cv2.ImRead("./A.jpg"))
            {
                PaddleOcrResult result = all.Run(src);
                Console.WriteLine("Detected all texts: \n" + result.Text);
                foreach (PaddleOcrResultRegion region in result.Regions)
                {
                    Console.WriteLine($"Text: {region.Text}, Score: {region.Score}, RectCenter: {region.Rect.Center}, RectSize:    {region.Rect.Size}, Angle: {region.Rect.Angle}");
                }
            }
        }
    }
}

}
——————————————————————————————
Error Message as follows:

--- fused 0 elementwise_mul with abs activation
--- fused 0 elementwise_mul with clip activation
--- fused 0 elementwise_mul with gelu activation
--- fused 0 elementwise_mul with relu6 activation
--- fused 0 elementwise_mul with sigmoid activation

Unhandled exception: System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at Sdcb.PaddleOCR.PaddleOcrDetector.Run(Mat src)
   at Sdcb.PaddleOCR.PaddleOcrAll.Run(Mat src)
   at PaddleOCR_2._6.Program.Main(String[] args) in E:\Practice\C#\OpenCVSharp_Test\PaddleOCR_2.6\PaddleOCR_2.6\Program.cs:line 30
Press any key to continue . . .

Recognition results are unsatisfactory. How can I improve the recognition rate? How can I train a model?

Win10, recognizing a license-plate image (the plate image is attached in the original issue).
Code:

OCRModel.GlobalModelDirectory = Directory.GetCurrentDirectory();
            OCRModel model = KnownOCRModel.PPOcrV2;
            await model.EnsureAll();

            byte[] sampleImageData = File.ReadAllBytes("images/车牌.jpg");
            //string sampleImageUrl = @"https://www.tp-link.com.cn/content/images/detail/2164/TL-XDR5450易展Turbo版-3840px_03.jpg";
            //using (HttpClient http = new HttpClient())
            //{
            //    Console.WriteLine("Download sample image from: " + sampleImageUrl);
            //    sampleImageData = await http.GetByteArrayAsync(sampleImageUrl);
            //}



            using (PaddleOcrAll all = new PaddleOcrAll(model.RootDirectory, model.KeyPath)
            {
                AllowRotateDetection = true, /* allow detection of rotated text */
                Enable180Classification = false, /* allow recognition of text rotated more than 90 degrees */
            })
            {
                all.Detector.MaxSize = null;
                // Load local file by following code:
                // using (Mat src2 = Cv2.ImRead(@"C:\test.jpg"))
                using (Mat src = Cv2.ImDecode(sampleImageData, ImreadModes.Color))
                {
                    PaddleOcrResult result = all.Run(src);
                    Console.WriteLine("Detected all texts: \n" + result.Text);
                    foreach (PaddleOcrResultRegion region in result.Regions)
                    {
                        Console.WriteLine($"Text: {region.Text}, Score: {region.Score}, RectCenter: {region.Rect.Center}, RectSize: {region.Rect.Size}, Angle: {region.Rect.Angle}");
                    }
                }
            }

Result:

Detected all texts:
政KBT355
Text: 政KBT355, Score: 0.845895, RectCenter: (x:648.5 y:633), RectSize: (width:102 height:311), Angle: 90

How can I train a model to improve accuracy? Even simple Chinese text was not recognized. 😢

The type initializer for "Sdcb.PaddleInference.PaddleConfig" threw an exception

Code that throws the error:
FullOcrModel model = LocalFullModels.ChineseV3;
Action<PaddleConfig> action = PaddleDevice.Mkldnn(cpuMathThreadCount: 1);
using (PaddleOcrAll all = new PaddleOcrAll(model, action)

Error message:
at Sdcb.PaddleInference.PaddleConfig.FromMemoryModel(Byte[] programBuffer, Byte[] paramsBuffer)
at Sdcb.PaddleOCR.Models.LocalV3.Details.Utils.LoadLocalModel(String key)
at Sdcb.PaddleOCR.Models.LocalV3.LocalTableRecognitionModel.CreateConfig()
at Sdcb.PaddleOCR.PaddleOcrTableRecognizer..ctor(TableRecognitionModel model, Action`1 configure)
The type initializer for "Sdcb.PaddleInference.PaddleConfig" threw an exception.

My Word VSTO add-in runs fine in Word on my own computer, but it throws this error when running in WPS; on a colleague's computer it throws the error in both Word and WPS.

Continuously calling `PaddleOCRAll.Run()` on the same/different images for about 10 minutes makes stdout show `'dnnl:error' could not execute a primitive` and the process exits

PaddlePaddle/Paddle#34492
PaddlePaddle/PaddleClas#2024
https://aistudio.baidu.com/paddle/forum/topic/show/1917027
The documentation of PaddleOCRSharp, another .NET wrapper, also mentions this: https://github.com/raoyutian/PaddleOCRSharp/blob/8f3f71ece472583f035975835f4f40bfb0cf4612/doc/README_question_en.md?plain=1#L6-L8

Q: WebAPI deployment runs into the error: could not execute a primitive

It happens when CPU acceleration is enabled; it is currently an upstream bug. Temporarily disable CPU acceleration: oCRParameter.Enable_mkldnn = 0;

But all of the above only suggest dropping mkldnn entirely to work around the problem; openblas is too slow (even though it uses less memory than mkldnn), and I don't have a GPU to use cudnn/tensorrt.

Following oneapi-src/oneDNN#914 (comment) and setting the DNNL_VERBOSE=2 environment variable, stdout is as follows:

onednn_verbose,create:cache_hit,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:Acdb8a:f0,,,8x3x3x3,0.00317383
onednn_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:Acdb8a:f0,,,8x3x3x3,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit:avx2,forward_inference,src_f32::blocked:abcd:f0 wei_f32::blocked:Acdb8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic3oc8_ih800oh400kh3sh2dh0ph1_iw1216ow608kw3sw2dw0pw1,0.00390625
onednn_verbose,exec,cpu,convolution,jit:avx2,forward_inference,src_f32::blocked:abcd:f0 wei_f32::blocked:Acdb8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic3oc8_ih800oh400kh3sh2dh0ph1_iw1216ow608kw3sw2dw0pw1,2.67505
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,8x8x1x1,0.00292969
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,8x8x1x1,0.00292969
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic8oc8_ih400oh400kh1sh1dh0ph0_iw608ow608kw1sw1dw0pw0,0.00488281
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic8oc8_ih400oh400kh1sh1dh0ph0_iw608ow608kw1sw1dw0pw0,0.668945
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,8x1x1x3x3,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,8x1x1x3x3,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g8mb1_ic8oc8_ih400oh400kh3sh1dh0ph1_iw608ow608kw3sw1dw0pw1,0.00512695
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g8mb1_ic8oc8_ih400oh400kh3sh1dh0ph1_iw608ow608kw3sw1dw0pw1,0.722168
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,8x8x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,8x8x1x1,0.000976562
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic8oc8_ih400oh400kh1sh1dh0ph0_iw608ow608kw1sw1dw0pw0,0.00390625
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic8oc8_ih400oh400kh1sh1dh0ph0_iw608ow608kw1sw1dw0pw0,0.663086
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,32x8x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,32x8x1x1,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic8oc32_ih400oh400kh1sh1dh0ph0_iw608ow608kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic8oc32_ih400oh400kh1sh1dh0ph0_iw608ow608kw1sw1dw0pw0,2.01392
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,32x1x1x3x3,0.00292969
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,32x1x1x3x3,0.00292969
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g32mb1_ic32oc32_ih400oh200kh3sh2dh0ph1_iw608ow304kw3sw2dw0pw1,0.00390625
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g32mb1_ic32oc32_ih400oh200kh3sh2dh0ph1_iw608ow304kw3sw2dw0pw1,1.69189
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,16x32x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,16x32x1x1,0.00292969
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic32oc16_ih200oh200kh1sh1dh0ph0_iw304ow304kw1sw1dw0pw0,0.00415039
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic32oc16_ih200oh200kh1sh1dh0ph0_iw304ow304kw1sw1dw0pw0,0.714111
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x16x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x16x1x1,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic16oc40_ih200oh200kh1sh1dh0ph0_iw304ow304kw1sw1dw0pw0,0.00390625
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic16oc40_ih200oh200kh1sh1dh0ph0_iw304ow304kw1sw1dw0pw0,1.03687
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,40x1x1x3x3,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,40x1x1x3x3,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g40mb1_ic40oc40_ih200oh200kh3sh1dh0ph1_iw304ow304kw3sw1dw0pw1,0.00390625
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g40mb1_ic40oc40_ih200oh200kh3sh1dh0ph1_iw304ow304kw3sw1dw0pw1,0.904053
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,16x40x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,16x40x1x1,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic40oc16_ih200oh200kh1sh1dh0ph0_iw304ow304kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic40oc16_ih200oh200kh1sh1dh0ph0_iw304ow304kw1sw1dw0pw0,0.853027
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x16x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x16x1x1,0.00219727
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic16oc40_ih200oh200kh1sh1dh0ph0_iw304ow304kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic16oc40_ih200oh200kh1sh1dh0ph0_iw304ow304kw1sw1dw0pw0,0.97998
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,40x1x1x5x5,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,40x1x1x5x5,0.00317383
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g40mb1_ic40oc40_ih200oh100kh5sh2dh0ph2_iw304ow152kw5sw2dw0pw2,0.00292969
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g40mb1_ic40oc40_ih200oh100kh5sh2dh0ph2_iw304ow152kw5sw2dw0pw2,0.62793
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,24x40x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,24x40x1x1,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic40oc24_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,0.00317383
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic40oc24_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,0.3479
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,64x24x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,64x24x1x1,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic24oc64_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic24oc64_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,0.569092
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,64x1x1x5x5,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,64x1x1x5x5,0.00292969
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g64mb1_ic64oc64_ih100oh100kh5sh1dh0ph2_iw152ow152kw5sw1dw0pw2,0.00415039
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g64mb1_ic64oc64_ih100oh100kh5sh1dh0ph2_iw152ow152kw5sw1dw0pw2,0.737061
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,24x64x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,24x64x1x1,0.00292969
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic64oc24_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,0.00317383
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic64oc24_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,0.522217
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,64x24x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,64x24x1x1,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic24oc64_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic24oc64_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,0.571045
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,64x1x1x5x5,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,64x1x1x5x5,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g64mb1_ic64oc64_ih100oh100kh5sh1dh0ph2_iw152ow152kw5sw1dw0pw2,0.00292969
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,g64mb1_ic64oc64_ih100oh100kh5sh1dh0ph2_iw152ow152kw5sw1dw0pw2,0.73999
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,24x64x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,24x64x1x1,0.00292969
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic64oc24_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,0.00317383
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic64oc24_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,0.591064
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,120x24x1x1,0.00292969
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,120x24x1x1,0.00390625
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic24oc120_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,0.00415039
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic24oc120_ih100oh100kh1sh1dh0ph0_iw152ow152kw1sw1dw0pw0,1.16284
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,120x1x1x3x3,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,120x1x1x3x3,0.00292969
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g120mb1_ic120oc120_ih100oh50kh3sh2dh0ph1_iw152ow76kw3sw2dw0pw1,0.00292969
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g120mb1_ic120oc120_ih100oh50kh3sh2dh0ph1_iw152ow76kw3sw2dw0pw1,0.297852
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x120x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x120x1x1,0.00512695
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic120oc40_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic120oc40_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.412109
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,104x40x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,104x40x1x1,0.00415039
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic40oc104_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00415039
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic40oc104_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.405029
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,104x1x1x3x3,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,104x1x1x3x3,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g104mb1_ic104oc104_ih50oh50kh3sh1dh0ph1_iw76ow76kw3sw1dw0pw1,0.00292969
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g104mb1_ic104oc104_ih50oh50kh3sh1dh0ph1_iw76ow76kw3sw1dw0pw1,0.194824
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x104x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x104x1x1,0.00512695
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic104oc40_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00317383
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic104oc40_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.328857
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,96x40x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,96x40x1x1,0.00415039
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic40oc96_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic40oc96_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.362061
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,96x1x1x3x3,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,96x1x1x3x3,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g96mb1_ic96oc96_ih50oh50kh3sh1dh0ph1_iw76ow76kw3sw1dw0pw1,0.0209961
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g96mb1_ic96oc96_ih50oh50kh3sh1dh0ph1_iw76ow76kw3sw1dw0pw1,0.169189
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x96x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x96x1x1,0.00488281
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic96oc40_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00390625
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic96oc40_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.322021
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,96x40x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,96x40x1x1,0.00292969
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic40oc96_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic40oc96_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.359863
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,96x1x1x3x3,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,96x1x1x3x3,0.00195312
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g96mb1_ic96oc96_ih50oh50kh3sh1dh0ph1_iw76ow76kw3sw1dw0pw1,0.00195312
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g96mb1_ic96oc96_ih50oh50kh3sh1dh0ph1_iw76ow76kw3sw1dw0pw1,0.197998
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x96x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,40x96x1x1,0.00512695
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic96oc40_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00317383
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic96oc40_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.335938
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,240x40x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,240x40x1x1,0.00610352
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic40oc240_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic40oc240_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.872803
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,240x1x1x3x3,0.00292969
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,240x1x1x3x3,0.00317383
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g240mb1_ic240oc240_ih50oh50kh3sh1dh0ph1_iw76ow76kw3sw1dw0pw1,0.00317383
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g240mb1_ic240oc240_ih50oh50kh3sh1dh0ph1_iw76ow76kw3sw1dw0pw1,0.414062
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,56x240x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,56x240x1x1,0.0109863
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic240oc56_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00390625
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic240oc56_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,1.06396
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,336x56x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,336x56x1x1,0.0109863
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic56oc336_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic56oc336_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,1.729
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,336x1x1x3x3,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,336x1x1x3x3,0.00390625
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g336mb1_ic336oc336_ih50oh50kh3sh1dh0ph1_iw76ow76kw3sw1dw0pw1,0.00415039
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g336mb1_ic336oc336_ih50oh50kh3sh1dh0ph1_iw76ow76kw3sw1dw0pw1,0.610107
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,56x336x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,56x336x1x1,0.0141602
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic336oc56_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00390625
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic336oc56_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,1.48608
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,336x56x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,336x56x1x1,0.0100098
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic56oc336_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic56oc336_ih50oh50kh1sh1dh0ph0_iw76ow76kw1sw1dw0pw0,1.66309
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,336x1x1x5x5,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,336x1x1x5x5,0.00610352
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g336mb1_ic336oc336_ih50oh25kh5sh2dh0ph2_iw76ow38kw5sw2dw0pw2,0.00415039
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g336mb1_ic336oc336_ih50oh25kh5sh2dh0ph2_iw76ow38kw5sw2dw0pw2,0.430176
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,80x336x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,80x336x1x1,0.0168457
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic336oc80_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.00390625
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic336oc80_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.540039
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,480x80x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,480x80x1x1,0.0219727
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic80oc480_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.00390625
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic80oc480_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.825928
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,480x1x1x5x5,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,480x1x1x5x5,0.0090332
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g480mb1_ic480oc480_ih25oh25kh5sh1dh0ph2_iw38ow38kw5sw1dw0pw2,0.00317383
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g480mb1_ic480oc480_ih25oh25kh5sh1dh0ph2_iw38ow38kw5sw1dw0pw2,0.428955
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,80x480x1x1,0.00219727
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,80x480x1x1,0.0209961
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic480oc80_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic480oc80_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.76001
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,480x80x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,480x80x1x1,0.0249023
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic80oc480_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.00415039
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic80oc480_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.822998
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,480x1x1x5x5,0.0012207
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:Abcde8a:f0,,,480x1x1x5x5,0.00585938
onednn_verbose,create:cache_hit,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g480mb1_ic480oc480_ih25oh25kh5sh1dh0ph2_iw38ow38kw5sw1dw0pw2,0.00292969
onednn_verbose,exec,cpu,convolution,jit_dw:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:Abcde8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,g480mb1_ic480oc480_ih25oh25kh5sh1dh0ph2_iw38ow38kw5sw1dw0pw2,0.432861
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,80x480x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,80x480x1x1,0.0231934
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic480oc80_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.00195312
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:sum ,alg:convolution_direct,mb1_ic480oc80_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.75293
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,480x80x1x1,0.00512695
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,480x80x1x1,0.0249023
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic80oc480_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_hardswish ,alg:convolution_direct,mb1_ic80oc480_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.826904
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,96x480x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,96x480x1x1,0.0270996
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_undef::undef::f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic480oc96_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.00415039
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_undef::undef::f0 dst_f32::blocked:aBcd8b:f0,,alg:convolution_direct,mb1_ic480oc96_ih25oh25kh1sh1dh0ph0_iw38ow38kw1sw1dw0pw0,0.939941
onednn_verbose,create:cache_hit,cpu,pooling_v2,jit:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 dst_f32::blocked:aBcd8b:f0 ws_undef::undef::f0,,alg:pooling_avg_exclude_padding,mb1ic96_ih25oh1kh25sh25dh0ph0_iw38ow1kw38sw38dw0pw0,0.00195312
onednn_verbose,exec,cpu,pooling_v2,jit:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 dst_f32::blocked:aBcd8b:f0 ws_undef::undef::f0,,alg:pooling_avg_exclude_padding,mb1ic96_ih25oh1kh25sh25dh0ph0_iw38ow1kw38sw38dw0pw0,0.0161133
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,24x96x1x1,0.000976562
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,24x96x1x1,0.00390625
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic96oc24_ih1oh1kh1sh1dh0ph0_iw1ow1kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_relu ,alg:convolution_direct,mb1_ic96oc24_ih1oh1kh1sh1dh0ph0_iw1ow1kw1sw1dw0pw0,0.00390625
onednn_verbose,create:cache_hit,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,96x24x1x1,0.00195312
onednn_verbose,exec,cpu,reorder,jit:blk,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd8b8a:f0,,,96x24x1x1,0.00317383
onednn_verbose,create:cache_hit,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_linear:0.2:0.5+eltwise_clip:0:1 ,alg:convolution_direct,mb1_ic24oc96_ih1oh1kh1sh1dh0ph0_iw1ow1kw1sw1dw0pw0,0.00292969
onednn_verbose,exec,cpu,convolution,jit_1x1:avx2,forward_inference,src_f32::blocked:aBcd8b:f0 wei_f32::blocked:ABcd8b8a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd8b:f0,attr-post-ops:eltwise_linear:0.2:0.5+eltwise_clip:0:1 ,alg:convolution_direct,mb1_ic24oc96_ih1oh1kh1sh1dh0ph0_iw1ow1kw1sw1dw0pw0,0.00292969
onednn_verbose,create:cache_hit,cpu,binary,jit:uni,undef,src_f32::blocked:aBcd8b:f0 src_f32::blocked:aBcd8b:f0 dst_f32::blocked:aBcd8b:f0,,alg:binary_mul,1x96x25x38:1x96x1x1,0.000976562
onednn_verbose,exec,cpu,binary,jit:uni,undef,src_f32::blocked:aBcd8b:f0 src_f32::blocked:aBcd8b:f0 dst_f32::blocked:aBcd8b:f0,,alg:binary_mul,1x96x25x38:1x96x1x1,0.045166
onednn_verbose,create:cache_hit,cpu,binary,jit:uni,undef,src_f32::blocked:aBcd8b:f0 src_f32::blocked:aBcd8b:f0 dst_f32::blocked:aBcd8b:f0,,alg:binary_add,1x96x25x38:1x96x25x38,0.000976562

There was an error creating your Issue: body is too long (maximum is 65536 characters).

QQ group is full

Could you share contact details for a new group? The QQ group is full and I'd like to join to discuss. My QQ: 1023382453

NuGet package with binary deps for linux x64

Hey,
I haven't found a NuGet package containing the binary dependencies for linux x64 platform. I've created one (based on the provided Dockerfile) for internal use as the provided docker images are sufficient for deployment, but not development scenarios. I'm now dockerizing the build of the package. Would you be interested in a PR? What are you thoughts on this?

CPU 100%

nuget package

   <PackageReference Include="OpenCvSharp4.runtime.win" Version="4.7.0.20230115" />
    <PackageReference Include="Sdcb.PaddleInference" Version="2.4.1.2" />
    <PackageReference Include="Sdcb.PaddleInference.runtime.win64.mkl" Version="2.4.1" />
    <PackageReference Include="Sdcb.PaddleOCR" Version="2.6.0.4" />
    <PackageReference Include="Sdcb.PaddleOCR.Models.LocalV3" Version="2.6.0.3" />

test code

// See https://aka.ms/new-console-template for more information

using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR.Models.LocalV3;
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR;

FullOcrModel model = LocalFullModels.ChineseV3;
//byte[] sampleImageData;
//string sampleImageUrl = @"https://www.tp-link.com.cn/content/images2017/gallery/4288_1920.jpg";
//using (HttpClient http = new HttpClient())
//{
//    Console.WriteLine("Download sample image from: " + sampleImageUrl);
//    sampleImageData = await http.GetByteArrayAsync(sampleImageUrl);
//}

using (PaddleOcrAll all = new PaddleOcrAll(model, PaddleDevice.Mkldnn())
{
    AllowRotateDetection = true, /* allow detection of rotated text */
    Enable180Classification = false, /* allow detection of text rotated more than 90 degrees */
})
{
    var fs = Directory.GetFiles(@"D:\Pictures\发票\新发票");
    foreach (var item in fs)
    {
        // Load local file by following code:
        using (Mat src = Cv2.ImRead(item))
        // using (Mat src = Cv2.ImDecode(sampleImageData, ImreadModes.Color))
        {
            PaddleOcrResult result = all.Run(src);
            Console.WriteLine("Detected all texts: \n" + result.Text);
            // foreach (PaddleOcrResultRegion region in result.Regions)
            //{
            //    Console.WriteLine($"Text: {region.Text}, Score: {region.Score}, RectCenter: {region.Rect.Center}, RectSize:    {region.Rect.Size}, Angle: {region.Rect.Angle}");
            //}
        }
    }
}
Console.WriteLine("Hello, World!");

(screenshot)

I don't really understand OCR; I just need to recognize text, and I'm not sure whether this is normal behavior.
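One possible mitigation for the 100% CPU is capping the MKL-DNN math-library thread count so inference does not fan out across every core. The sketch below is only a guess, and it relies on two assumptions not shown in this issue: that the second PaddleOcrAll constructor argument is an Action<PaddleConfig> (which the sample above suggests), and that PaddleConfig exposes a CpuMathThreadCount property; please verify both against your installed version. model is the same LocalFullModels.ChineseV3 instance from the sample.

using PaddleOcrAll limited = new PaddleOcrAll(model, config =>
{
    PaddleDevice.Mkldnn()(config);  // apply the MKL-DNN preset used in the sample above
    config.CpuMathThreadCount = 2;  // assumed property name: caps math-library threads
})
{
    AllowRotateDetection = true,
    Enable180Classification = false,
};

If the CPU stays pinned even with a small thread count, profiling a single Run() call would show where the time actually goes.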

Memory leak on Linux

Hi, first of all thanks for the great work. I spent a lot of time trying to optimise the recognition model with the C# OnnxRuntime version, and execution was still slow on GPU. With Paddle Inference it is insanely fast. I used your library and everything works perfectly on Windows, with unmanaged memory consumption around 3 GB per instance. However, I experienced a serious memory leak on Linux with the same code.
Environment:
Ubuntu 18.04
OpenCV 4.6.0 (OpenCvSharpExtern)
cuDNN 8.4
CUDA 11.6
Paddle inference lib: https://paddle-inference-lib.bj.bcebos.com/2.3.2/cxx_c/Linux/GPU/x86-64_gcc8.2_avx_mkl_cuda11.6_cudnn8.4.0-trt8.4.0.6/paddle_inference_c.tgz
Everything runs inside Docker with the image nvidia/cuda:11.6.0-cudnn8-runtime-ubuntu18.04.
I had to spend some time getting Paddle Inference to work; it turned out I had to create soft links for libcudnn.so and libcublas.so in /usr/lib before it would run.
Now the issue: everything runs as fast as on Windows, but after hundreds of iterations it consumed the entire 64 GB of RAM plus 32 GB of swap and crashed with an out-of-memory error.
Do you have an idea where the issue lies? I read the sources and checked the pinned objects, and it seems to me everything should be fine; all pinned objects are released.
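A minimal sketch of a way to narrow the leak down rather than fix it (the model choice and image path below are placeholders, not taken from this report): keep a single PaddleOcrAll alive, dispose every Mat per iteration, and log the process working set so you can tell whether the growth tracks the number of Run() calls.

using System;
using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.LocalV3;

FullOcrModel model = LocalFullModels.ChineseV3;                       // placeholder model
using PaddleOcrAll all = new PaddleOcrAll(model, PaddleDevice.Gpu());

for (int i = 0; i < 1000; i++)
{
    using Mat src = Cv2.ImRead("sample.jpg");                         // placeholder image path
    PaddleOcrResult _ = all.Run(src);

    if (i % 100 == 0)
    {
        // if this climbs roughly linearly with i, the growth tracks Run() calls;
        // if it stays flat here, look instead at how many PaddleOcrAll/Mat instances stay alive
        Console.WriteLine($"iteration {i}: working set = {Environment.WorkingSet / (1024 * 1024)} MB");
    }
}

Running the same loop with PaddleDevice.Mkldnn() on the same Linux box would also tell you whether the growth is specific to the GPU build.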

System.AccessViolationException

I get the following error when trying to run code similar to the Quick Start

Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Repeat 2 times:

at Sdcb.PaddleInference.Native.PaddleNative.PD_PredictorRun(IntPtr)

at Sdcb.PaddleOCR.PaddleOcrDetector.Run(OpenCvSharp.Mat)
at Sdcb.PaddleOCR.PaddleOcrAll.Run(OpenCvSharp.Mat, Int32)
at ImageLibrary.Clients.Getty.GettyImageClient.PopulateResultsV2(System.Object, SegmentType, ImageLibrary.DataSchemas.HeroImage.ImageSearchRequest)
at ImageLibrary.Clients.Getty.GettyImageClient.GetEditorialImagesExecute(Segment, ImageLibrary.DataSchemas.HeroImage.ImageSearchRequest, System.Collections.Generic.IList`1<ImageFilterOptions>, ImageSortOption, SegmentType)
at ImageLibrary.Clients.Getty.GettyImageClient.GetEditorialImages(Segment, ImageLibrary.DataSchemas.HeroImage.ImageSearchRequest, System.Collections.Generic.IList`1, ImageSortOption, SegmentType)
at ImageLibrary.ImageScrapper.NonPeopleImageScrapper.SearchEditorialImages(System.Collections.Generic.List`1<ImageLibrary.DataSchemas.HeroImage.ImageSearchResult>, ImageLibrary.DataSchemas.HeroImage.ImageSearchRequest, Int32, System.Collections.Generic.List`1<System.String>, SegmentType, Int32 ByRef)
at ImageLibrary.ImageScrapper.NonPeopleImageScrapper.GetTopImages(ImageLibrary.DataSchemas.HeroImage.ImageSearchRequest, Int32, ImageSortOption, SegmentType, Boolean)
at ImageScrapper.Program+<>c__DisplayClass7_9.b__0(System.String)
at System.Threading.Tasks.Parallel+<>c__DisplayClass33_0`2[[System.__Canon, System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e],[System.__Canon, System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].<ForEachWorker>b__0(Int32)
at System.Threading.Tasks.Parallel+<>c__DisplayClass19_0`1[[System.__Canon, System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].b__1(System.Threading.Tasks.RangeWorker ByRef, Int32, Boolean ByRef)
at System.Threading.Tasks.TaskReplicator+Replica.Execute()
at System.Threading.Tasks.TaskReplicator+Replica+<>c.b__7_0(System.Object)
at System.Threading.Tasks.Task.InnerInvoke()
at System.Threading.Tasks.Task+<>c.<.cctor>b__272_0(System.Object)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(System.Threading.Thread, System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef, System.Threading.Thread)
at System.Threading.Tasks.Task.ExecuteEntryUnsafe(System.Threading.Thread)
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading.PortableThreadPool+WorkerThread.WorkerThreadStart()
at System.Threading.Thread.StartCallback()

What steps can I now take to get to the cause of the problem?
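The stack trace shows Run being called from inside Parallel.ForEach. A sketch of one thing worth ruling out, not a confirmed cause: sharing a single PaddleOcrAll (and therefore a single native predictor) across threads. The snippet below gives each worker thread its own instance via ThreadLocal; the model choice and image folder are placeholders for your own inputs.

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.LocalV3;

FullOcrModel model = LocalFullModels.ChineseV3;      // placeholder model
string[] imagePaths = Directory.GetFiles("images");  // placeholder folder

// one OCR pipeline per worker thread, so no native predictor is shared between threads
using ThreadLocal<PaddleOcrAll> ocrPerThread = new(
    () => new PaddleOcrAll(model, PaddleDevice.Mkldnn()), trackAllValues: true);

Parallel.ForEach(imagePaths, path =>
{
    using Mat src = Cv2.ImRead(path);
    PaddleOcrResult result = ocrPerThread.Value.Run(src);
    Console.WriteLine($"{path}: {result.Text}");
});

// dispose each per-thread instance once the loop is done
foreach (PaddleOcrAll ocr in ocrPerThread.Values)
{
    ocr.Dispose();
}

If the crash goes away with per-thread instances, the original failure was most likely concurrent calls into the same predictor; if it persists even single-threaded, it points elsewhere (for example at the input Mats or the native runtime).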

How can I modify the configuration?

(screenshot: the configuration items I already know how to change)
At the moment I only know how to change these few configuration items, but I haven't found where the other settings shown in the image below can be set. How can I modify the configuration items below?
(screenshot: the remaining configuration items)

Win Server 2022 "Sdcb.PaddleInference.PaddleConfig" Error

System.TypeInitializationException: The type initializer for 'Sdcb.PaddleInference.PaddleConfig' threw an exception. ---> System.DllNotFoundException: Unable to load DLL 'dll\x64\paddle_inference_c.dll': A dynamic link library (DLL) initialization routine failed. (Exception from HRESULT: 0x8007045A).
at Sdcb.PaddleInference.Native.PaddleNative.PD_GetVersion()
at Sdcb.PaddleInference.PaddleConfig.AutoLoad()
at Sdcb.PaddleInference.PaddleConfig..cctor()

Checked paddle_inference_c.dll with Dependency Walker; no missing dependencies were reported.
VC++ Runtime Library 2015-2022 is installed.
It runs fine on Win10 LTSC 2021.

Can't await all.Run() when using GPU

When using PaddleConfig.Defaults.UseGpu = true,
await result = all.Run(src2)
throws "System.AccessViolationException",
but with PaddleConfig.Defaults.UseGpu = false,
await result = all.Run(src2) seems OK to me.

(screenshot: Snipaste_2022-11-09_21-25-05)

Anyway, it simply can't be awaited; as soon as the call sits inside an await it always throws.
I'll try later whether running it on a separate thread works instead.
It's already getting painful...

Of course, without await, using the GPU works fine. But I can't let the user interface freeze, which is hard to accept; a 500 ms delay per image is unbearable.
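A minimal workaround sketch rather than an official fix (all and src2 are the objects from the report above): Run() itself is synchronous, so instead of awaiting it directly, push the call onto the thread pool with Task.Run and await that task. That keeps the UI thread free during the roughly 500 ms call.

// offload the synchronous Run() call so the UI thread is not blocked
PaddleOcrResult result = await Task.Run(() => all.Run(src2));
Console.WriteLine(result.Text);

If the AccessViolationException still appears on the GPU path even when the call runs off the UI thread, that would suggest the crash is not about await itself but about how and where the GPU predictor is being invoked.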

About all.Run(img).Regions.Rect.Size

var v = all.Run(img);

foreach (var item in v.Regions)
{

    // Inside item:
    // item.Rect.Size.Height
    // item.Rect.Size.Width
    // Aren't these two swapped?
    // Please check the code.
}

PaddleDevice.Gpu() returns junk characters in OCR results

Hello,

I am using following packages:

OpenCvSharp4.runtime.win 4.7.0.20230115
Sdcb.PaddleInference 2.4.1.3
Sdcb.PaddleInference.runtime.win64.cuda102_cudnn76_sm61_75 2.4.0
Sdcb.PaddleOCR 2.6.0.5
Sdcb.PaddleOCR.Models.LocalV3 2.6.0.5

I followed the tutorial "How to enable GPU?" as mentioned.

CUDA 10.2 and cuDNN (cudnn-10.2-windows10-x64-v7.6.5.32) are installed. TensorRT-7.0.0.11.Windows10.x86_64.cuda-10.2.cudnn7.6 is also installed, and all PATH entries are set correctly.

The code does work, but result.Text gives junk/special characters instead of the actual text. There are no errors, and Task Manager shows the GPU is being used.

Example: A simple "Hello World" jpg file gives following result:

6$:L#U^2py
-_hqN

Can you please suggest what is wrong? Changing PaddleDevice.Gpu() back to PaddleDevice.Mkldnn() or PaddleDevice.Openblas()
works fine and gives the correct result "Hello World", without changing any NuGet packages.

  1. Does this have something to do with OpenCvSharp? Does it require a GpuMat instead of a Mat when using PaddleDevice.Gpu()?
  2. Does this require a specific graphics card with a particular compute capability? Mine has compute capability 3.5, which I think is supported by CUDA 10.2.

Thanks!
