tencent / forward

A library for high performance deep learning inference on NVIDIA GPUs.

forward's Introduction

Forward: Deep Learning Inference Acceleration Framework

What is Forward

Forward is a high-performance GPU inference acceleration framework developed by Tencent. It provides a parsing scheme that loads models from mainstream frameworks (TensorFlow / PyTorch / Keras / ONNX) directly and converts them into TensorRT inference engines, sparing users the tedious intermediate steps of model conversion or network construction. Compared with using TensorRT directly, Forward is easier to use and easier to extend with support for additional models and operators. Besides covering mainstream deep learning models in CV, NLP, and recommendation, Forward also supports advanced models such as BERT, FaceSwap, and StyleTransfer.

Why choose Forward

  • Highly optimized performance: layer-level support is implemented directly on the TensorRT API, keeping inference performance for common network layers at the top tier;
  • Wide model coverage: besides common CV, NLP, and recommendation models, advanced models such as BERT, FaceSwap, and StyleTransfer are also supported;
  • Multiple inference modes: FLOAT / HALF / INT8 inference modes are supported;
  • Simple, easy-to-use interface: trained TensorFlow (.pb) / PyTorch (.pth) / Keras (.h5) / ONNX (.onnx) model files are imported directly and implicitly converted into high-performance inference engines (see the sketch after this list);
  • Custom extensions: support for custom network layers can be added for business-specific models;
  • Both C++ and Python APIs are provided.
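
To illustrate how simple the interface is, here is a minimal Python sketch based on the usage shown in the issues further down this page. It assumes the Forward Python bindings (the forward module, e.g. a Fwd-Python-Tf build) are importable, and uses an illustrative model path model.pb whose single input is named x:

    import numpy as np
    import forward

    # Build a TensorRT inference engine directly from a frozen TensorFlow graph.
    builder = forward.TfBuilder()
    builder.set_mode('float32')  # infer mode: float32 / float16 / int8_calib / int8

    # Dummy inputs are keyed by the model's input names.
    dummy_inputs = {'x': np.ones((1, 299, 299, 3), dtype='float32')}
    engine = builder.build('model.pb', dummy_inputs)

    # Run inference; real inputs use the same names and shapes.
    outputs = engine.forward(dummy_inputs)
    print(outputs)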

Getting Started with Forward

Prerequisites

  • NVIDIA CUDA >= 10.0, cuDNN >= 7 (CUDA 10.2 or above recommended)
  • TensorRT >= 7.0.0.11 (TensorRT-7.2.1.6 recommended)
  • CMake >= 3.12.2
  • GCC >= 5.4.0, ld >= 2.26.1
  • PyTorch >= 1.7.0
  • TensorFlow >= 1.15.0 (on Linux, additionally download TensorFlow 1.15.0 and copy the extracted .so files into Forward/source/third_party/tensorflow/lib)
  • Keras HDF5 (built from the sources under Forward/source/third_party/hdf5)

Building the Project

Use CMake to generate Makefiles or a Visual Studio project. Depending on the intended use, Forward can be built as a library for different frameworks, such as Fwd-Torch, Fwd-Python-Torch, Fwd-Tf, Fwd-Python-Tf, Fwd-Keras, Fwd-Python-Keras, Fwd-Onnx, and Fwd-Python-Onnx.

Taking building Fwd-Tf on Linux as an example:

Step 1: Clone the project

git clone https://github.com/Tencent/Forward.git

Step 2: Download TensorFlow 1.15.0 (needed only when using the TensorFlow framework for inference on Linux)

cd Forward/source/third_party/tensorflow/
wget https://github.com/neargye-forks/tensorflow/releases/download/v1.15.0/libtensorflow-cpu-linux-x86_64-1.15.0.tar.gz
tar -xvf libtensorflow-cpu-linux-x86_64-1.15.0.tar.gz

Step 3: Create the build directory

cd ~/Forward/
rm -rf build
mkdir -p build
cd build/

Step 4: Run cmake to generate the build configuration; the TensorRT installation path must be specified with TensorRT_ROOT

cmake .. -DTensorRT_ROOT=<path_to_TensorRT> -DENABLE_TENSORFLOW=ON -DENABLE_UNIT_TESTS=ON

Step 5: Build the project with make

make -j

Step 6: Run unit_test to verify that the build succeeded

cd bin/
./unit_test --gtest_filter=TestTfNodes.*

# Output like the following indicates that the project was built successfully
# [       OK ] TestTfNodes.ZeroPadding (347 ms)
# [----------] 22 tests from TestTfNodes (17555 ms total)

# [----------] Global test environment tear-down
# [==========] 22 tests from 1 test case ran. (17555 ms total)
# [  PASSED  ] 22 tests.

For more build workflows, refer to the CMake build guide.

Using Forward-Cpp

See the Demo for using Forward-Cpp in Linux.

Using Forward-Python

See the Demo for using Forward-Python.
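
As a complement to the demo, the sketch below is assembled from the Python snippets in the issues further down this page (the model and engine paths are illustrative). It shows how a built engine can be saved and later reloaded, so the TensorRT build step only has to run once:

    import os
    import numpy as np
    import forward

    model_file = 'tf_model.pb'  # illustrative frozen TensorFlow graph
    dummy_inputs = {'x': np.ones((1, 299, 299, 3), dtype='float32')}

    # Build the engine once and save it next to the model file.
    builder = forward.TfBuilder()
    builder.set_mode('float32')
    engine = builder.build(model_file, dummy_inputs)
    engine_path = os.path.splitext(model_file)[0] + '.engine'
    engine.save(engine_path)

    # Later: load the saved engine and run inference without rebuilding.
    tf_engine = forward.TfEngine()
    tf_engine.load(engine_path)
    outputs = tf_engine.forward(dummy_inputs)
    print(outputs)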

Using Forward-Bert

Refer to Demo for using Forward-Bert

More Usage

Note: model input names can be inspected with a model viewer such as Netron.
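
For example, the MobileNet model in the first issue below has a single input tensor that Netron shows as input with shape 1x224x224x3; that name becomes the key of the dummy-input dictionary passed to the builder and the engine (a small sketch reusing the Python interface shown above):

    import numpy as np

    # The dictionary keys must match the input names shown by the model viewer.
    dummy_inputs = {'input': np.ones((1, 224, 224, 3), dtype='float32')}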

Logging

Forward uses easylogging++ for logging, with forward_log.conf as the logging configuration file.

  • If a forward_log.conf file exists in the working directory, Forward uses that configuration; see Using-configuration-file for more details
  • If no forward_log.conf file exists in the working directory, Forward uses the default configuration and writes logs to logs/myeasylog.log

Example forward_log.conf configuration

* GLOBAL:
  FORMAT               =  "[%level] %datetime %fbase(%line): %msg"
  FILENAME             =  "Forward.log"
  ENABLED              =  true
  TO_FILE              =  true
  TO_STANDARD_OUTPUT   =  true
  PERFORMANCE_TRACKING =  true
  MAX_LOG_FILE_SIZE    =  2097152 ## 2MB - Comment starts with two hashes (##)
  LOG_FLUSH_THRESHOLD  =  100 ## Flush after every 100 logs

Supported Models and Operators

The models and operators currently supported by Forward are listed below. If you need support for more, feel free to open an Issue. To extend the supported operators yourself, refer to the guide Open Source Collaboration: Process for Extending Operator Support.

Models

Operators

References

  1. How the inference pipeline is built
  2. How to use the inference engines
  3. Tools and tests
  4. FAQ

Contributing

  1. Contact us to join the open-source collaboration discussion group, QQ group: 776314438
  2. See CONTRIBUTING.md to take part in the open-source collaboration.

Aster JIAN

Zexi YUAN

Ao LI

Paul LU

Zhaoyi LUO

Jett Hu

Ryosuke1eep

Thanks to all contributors; more people are welcome to join and contribute.

License

See LICENSE for details.

forward's Issues

[TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input

dl frame: tensorflow 1.15
model: mobilenet_v2_1.4_224_frozen.pb
input name: input
input size: 1,224,224,3

error info:
[INFO ] 2021-04-08 14:38:59,844 tf_graph_parser.cpp(116): Input = input : input
[INFO ] 2021-04-08 14:38:59,844 tf_shuffle_creator.h(50): TrtShuffleDesc::Create
[INFO ] 2021-04-08 14:38:59,844 tf_softmax_creator.h(49): TrtSoftmaxDesc::Create
[INFO ] 2021-04-08 14:38:59,844 tf_shuffle_creator.h(50): TrtShuffleDesc::Create
[INFO ] 2021-04-08 14:38:59,845 tf_shuffle_creator.h(50): TrtShuffleDesc::Create
[INFO ] 2021-04-08 14:38:59,845 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,916 tf_pooling_creator.h(51): TrtPoolingDesc::Create
[INFO ] 2021-04-08 14:38:59,916 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,916 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,929 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,941 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,941 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,941 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,941 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,941 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,941 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,946 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,947 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,954 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,954 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,954 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,954 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,954 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,954 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,960 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,961 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,966 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,966 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,966 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,966 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,967 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,967 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,973 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,977 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,977 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,977 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,977 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,977 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,977 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,979 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,980 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,982 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,982 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,982 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,982 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,982 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,982 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,984 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,985 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,988 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,988 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,988 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,988 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,989 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,989 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,991 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,992 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,992 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,992 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,992 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,992 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,992 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,993 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,994 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,995 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,995 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,995 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,995 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,995 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,995 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,996 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,997 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,998 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,998 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,998 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,998 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,998 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,998 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,999 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,999 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,000 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,000 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,001 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,001 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,001 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,001 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,002 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,002 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,002 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,002 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,002 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,003 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,003 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,004 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,005 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,007 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,007 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,588 trt_input_creator.h(44): TrtInputDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,588 trt_convolution_creator.h(44): TrtConvolutionNdDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_clamp_creator.h(44): TrtClampDesc::CreateLayer
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[INFO ] 2021-04-08 14:39:00,589 trt_convolution_creator.h(44): TrtConvolutionNdDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_scale_creator.h(43): TrtScaleDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_clamp_creator.h(44): TrtClampDesc::CreateLayer
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[INFO ] 2021-04-08 14:39:00,589 trt_convolution_creator.h(44): TrtConvolutionNdDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_convolution_creator.h(44): TrtConvolutionNdDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_clamp_creator.h(44): TrtClampDesc::CreateLayer
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[INFO ] 2021-04-08 14:39:00,589 trt_convolution_creator.h(44): TrtConvolutionNdDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_scale_creator.h(43): TrtScaleDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_clamp_creator.h(44): TrtClampDesc::CreateLayer
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.

Missing header files when building Fwd-PyTorch

Describe the bug
Building Forward-PyTorch fails with the following error:
[screenshot]

cmake command:
cmake .. -DENABLE_TORCH=ON -DBUILD_PYTHON_LIB=ON -DPYTHON_EXECUTABLE="/usr/local/bin/python"

Is something wrong with how PyTorch is installed?

Environment

TensorRT Version: 7.2.3.4
NVIDIA GPU: T4
NVIDIA Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version:
Operating System: ubuntu 18.04
Python Version (if applicable): 3.6
Tensorflow Version (if applicable): 2.3.4
PyTorch Version (if applicable): 1.10.1

Error when running make -j

TensorRT version: 6.0.1.5
CUDA: 10.1

Below is the log from build/CMakeFiles/CMakeError.log. How should this be handled?
Performing C SOURCE FILE Test CMAKE_HAVE_LIBC_PTHREAD failed with the following output:
Change Dir: /opt/Forward/build/CMakeFiles/CMakeTmp

Run Build Command(s):/usr/bin/gmake cmTC_fe5dd/fast && /usr/bin/gmake -f CMakeFiles/cmTC_fe5dd.dir/build.make CMakeFiles/cmTC_fe5dd.dir/build
gmake[1]: Entering directory '/opt/Forward/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_fe5dd.dir/src.c.o
/opt/rh/devtoolset-7/root/usr/bin/cc -fPIC -DCMAKE_HAVE_LIBC_PTHREAD -o CMakeFiles/cmTC_fe5dd.dir/src.c.o -c /opt/Forward/build/CMakeFiles/CMakeTmp/src.c
Linking C executable cmTC_fe5dd
/usr/local/bin/cmake -E cmake_link_script CMakeFiles/cmTC_fe5dd.dir/link.txt --verbose=1
/opt/rh/devtoolset-7/root/usr/bin/cc -fPIC -DCMAKE_HAVE_LIBC_PTHREAD CMakeFiles/cmTC_fe5dd.dir/src.c.o -o cmTC_fe5dd
CMakeFiles/cmTC_fe5dd.dir/src.c.o: In function 'main':
src.c:(.text+0x2f): undefined reference to 'pthread_create'
src.c:(.text+0x3b): undefined reference to 'pthread_detach'
src.c:(.text+0x47): undefined reference to 'pthread_cancel'
src.c:(.text+0x58): undefined reference to 'pthread_join'
src.c:(.text+0x6c): undefined reference to 'pthread_atfork'
collect2: error: ld returned 1 exit status
gmake[1]: *** [cmTC_fe5dd] Error 1
gmake[1]: Leaving directory '/opt/Forward/build/CMakeFiles/CMakeTmp'
gmake: *** [cmTC_fe5dd/fast] Error 2

Source file was:
#include <pthread.h>

void* test_func(void* data)
{
return data;
}

int main(void)
{
pthread_t thread;
pthread_create(&thread, NULL, test_func, NULL);
pthread_detach(thread);
pthread_cancel(thread);
pthread_join(thread, NULL);
pthread_atfork(NULL, NULL, NULL);
pthread_exit(NULL);

return 0;
}

Determining if the function pthread_create exists in the pthreads failed with the following output:
Change Dir: /opt/Forward/build/CMakeFiles/CMakeTmp

Run Build Command(s):/usr/bin/gmake cmTC_cbc42/fast && /usr/bin/gmake -f CMakeFiles/cmTC_cbc42.dir/build.make CMakeFiles/cmTC_cbc42.dir/build
gmake[1]: Entering directory '/opt/Forward/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_cbc42.dir/CheckFunctionExists.c.o
/opt/rh/devtoolset-7/root/usr/bin/cc -fPIC -DCHECK_FUNCTION_EXISTS=pthread_create -o CMakeFiles/cmTC_cbc42.dir/CheckFunctionExists.c.o -c /usr/local/share/cmake-3.17/Modules/CheckFunctionExists.c
Linking C executable cmTC_cbc42
/usr/local/bin/cmake -E cmake_link_script CMakeFiles/cmTC_cbc42.dir/link.txt --verbose=1
/opt/rh/devtoolset-7/root/usr/bin/cc -fPIC -DCHECK_FUNCTION_EXISTS=pthread_create CMakeFiles/cmTC_cbc42.dir/CheckFunctionExists.c.o -o cmTC_cbc42 -lpthreads
/opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: cannot find -lpthreads
collect2: error: ld returned 1 exit status
gmake[1]: *** [cmTC_cbc42] Error 1
gmake[1]: Leaving directory '/opt/Forward/build/CMakeFiles/CMakeTmp'
gmake: *** [cmTC_cbc42/fast] Error 2

TensorRT implementation of keras.layers.Embedding

Thanks for this project! It has helped a lot; already starred.
I ran into a problem while working on a TensorRT implementation of the keras.layers.Embedding layer. Can TensorRT's built-in embLayerNormPlugin be used to implement it? My Python and C++ source code is below:
Python code:
[image: 微信图片_20210624171827]
Python output (a length-32 array becomes a 32x128 tensor):
[image: 微信图片_20210624171831]

Questions about several fwd-torch paths

[screenshot]
Hello, when building fwd-torch I keep getting package-not-found errors, as shown above. What Torch paths do CMAKE_PREFIX_PATH, TORCH_DIR, and TORCH_CMAKE_PATH refer to? Are they related to the libtorch downloaded from the official PyTorch site, and which directories inside it should they point to?

Help to support the TensorFlow Slim "Flatten" pattern in TensorRT.

There is a layer in TensorFlow Slim named "Flatten"; it consists of several TensorFlow operations such as "Shape", "StridedSlice", and "Reshape". The test code is as follows.

import numpy as np
import os


def create_tf_flatten(model_file):
    import tensorflow as tf
    import tf_slim as slim
    with tf.Session() as sess:
        x1 = tf.placeholder(shape=(None,299,299,3),dtype=tf.float32, name='x')

        op = slim.flatten(x1)

        sess.run(tf.global_variables_initializer())
        feed_dict = {x1: np.ones((1,299,299,3))}
        print(sess.run(op, feed_dict))

        graphdef = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['Flatten/flatten/Reshape'])
        tf.train.write_graph(graphdef, './', model_file, as_text=False)
        return feed_dict

def forward_transfer(model_file, dummy_input):
    import forward
    # 1. Build the engine
    builder = forward.TfBuilder()
    infer_mode = 'float32'

    builder.set_mode(infer_mode)
    tf_engine = builder.build(model_file, dummy_input)

    # save engine
    engine_path = os.path.splitext(model_file)[0] + '.engine'
    tf_engine.save(engine_path)


def test_forward(model_file, dummy_input):
    import forward
    engine_path = os.path.splitext(model_file)[0] + '.engine'
    # load saved engine
    tf_engine = forward.TfEngine()
    tf_engine.load(engine_path)

    inputs = dummy_input
    outputs = tf_engine.forward(inputs)
    print(outputs)


model_file = 'tf_model.pb'
create_tf_flatten(model_file)

x = {'x':np.ones([1,299,299,3],dtype='float32')}

forward_transfer(model_file, x)
test_forward(model_file, x)

Environment

TensorRT Version: 7.1.3.4
NVIDIA GPU: T4
NVIDIA Driver Version: 410.104
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System: ubuntu 18.04
Python Version (if applicable): 3.6.9
Tensorflow Version (if applicable): 1.15.0
PyTorch Version (if applicable): 1.7.0

@aster2013 @yuanzexi

Error when loading a model with the win_python_keras build

A simple test fails with an error.
The steps are as follows:
1. First, save the pretrained ResNet50 model with Keras:

from keras.applications.resnet50 import ResNet50

model = ResNet50()
model.save('./resnet50.h5')

2. Run it following the example:

    # 1. Build the engine
    builder = fwd.KerasBuilder()
    infer_mode = 'float32'  # Infer Mode: float32 / float16 / int8_calib / int8
    batch_size = 1
    max_workspace_size = 1 << 32

    builder.set_mode(infer_mode)
    engine = builder.build(r'./resnet50.h5', batch_size)

    # need_save = True
    # if need_save:
    #     engine_path = 'path/to/out/engine'
    #     engine.save(engine_path)
    #     engine = fwd.KerasEngine()
    #     engine.load(engine_path)

    # 2. Run inference
    inputs = np.random.randn(1, 224, 224, 3)
    outputs = engine.forward([inputs])  # list_type output
    print(outputs)

The error output is as follows:

[ERROR] 2021-06-17 09:29:47,713 trt_keras_parser.cpp(90): Load Model failed.
[ERROR] 2021-06-17 09:29:47,713 keras_engine.cpp(129): Parse Keras Graph failed
Traceback (most recent call last):
  File "D:/Projects/tencent_forward/workspace/resnet_forward.py", line 26, in <module>
    outputs = engine.forward([inputs])  # list_type output
AttributeError: 'NoneType' object has no attribute 'forward'

The model-loading step failed; any guidance would be appreciated, thanks!

Environment

TensorRT Version: 7.2.1.6
NVIDIA GPU: RTX 2080 SUPER
NVIDIA Driver Version: 441.22
CUDA Version: 10.2
CUDNN Version: 8.2.0.53
Operating System: Windows 10 Professional
Python Version: 3.8.5
Keras: 2.4.3
h5py: 2.10.0

Errors when building the Visual Studio project

Build errors
I first ran cmake following the guide; there were some failed tests, but configuration completed.

Selecting Windows SDK version  to target Windows 10.0.19042.
CMake Deprecation Warning at CMakeLists.txt:28 (cmake_policy):
  The OLD behavior for policy CMP0074 will be removed from a future version
  of CMake.

  The cmake-policies(7) manual explains that the OLD behaviors of all
  policies are deprecated and that a policy should be set to OLD only under
  specific short-term circumstances.  Projects should be ported to the NEW
  behavior and not rely on setting a policy to OLD.


Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2 (found version "10.2") 
CUDA_NVCC_FLAGS:  -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75
Found TensorRT: D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvinfer.lib;D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvinfer_plugin.lib;D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvonnxparser.lib;D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvparsers.lib (found version "7.2.1") 
Using the single-header code from D:/Projects/tencent_forward/Forward/source/third_party/json/single_include/
Use HDF5 on third_party: D:/Projects/tencent_forward/Forward/source/third_party/hdf5
Warnings Configuration: default:   /DWIN32 /D_WINDOWS /W3 :  /DWIN32 /D_WINDOWS /W3 /GR /EHsc /W3 /WX-
Check for STD namespace
Check for STD namespace - found
Performing CXX Test OLD_HEADER_FILENAME - Failed
Performing CXX Test HDF_NO_NAMESPACE - Failed
Performing CXX Test HDF_NO_STD - Failed
Performing CXX Test BOOL_NOTDEFINED - Failed
Performing CXX Test NO_STATIC_CAST - Failed
Performing CXX Test CXX_HAVE_OFFSETOF - Failed
Configuring done
Generating done

Then, when building the generated project, the following errors are reported:

Severity	Code	Description	Project	File	Line
错误	C2664	“void std::vector<fwd::NamedTensor,std::allocator<_Ty>>::push_back(const fwd::NamedTensor &)”: 无法将参数 1 从“initializer list”转换为“fwd::NamedTensor &&”	trt_engine	D:\Projects\tencent_forward\Forward\source\trt_engine\trt_engine\trt_buffer_manager.h	116
错误	C1083	无法打开源文件: “D:\Projects\tencent_forward\Forward\build\source\third_party\hdf5\H5Tinit.c”: No such file or directory	hdf5	D:\Projects\tencent_forward\Forward\build\source\third_party\hdf5\src\c1	1
错误	C2664	“nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer> nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::parse(nlohmann::detail::input_adapter &&,const std::function<bool (int,nlohmann::detail::parser<nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>>::parse_event_t,BasicJsonType &)>,const bool)”: 无法将参数 1 从“std::string”转换为“nlohmann::detail::input_adapter &&”	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp	72
错误	C2679	二进制“=”: 没有找到接受“std::string”类型的右操作数的运算符(或没有可接受的转换)	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp	147
错误	C2679	二进制“=”: 没有找到接受“bool”类型的右操作数的运算符(或没有可接受的转换)	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp	166
错误	C2679	二进制“=”: 没有找到接受“std::string”类型的右操作数的运算符(或没有可接受的转换)	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp	175
错误	C2679	二进制“=”: 没有找到接受“std::basic_string<char,std::char_traits<char>,std::allocator<char>>”类型的右操作数的运算符(或没有可接受的转换)	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp	205
错误	C2679	二进制“=”: 没有找到接受“const char [11]”类型的右操作数的运算符(或没有可接受的转换)	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp	206
错误	C2679	二进制“=”: 没有找到接受“const std::string”类型的右操作数的运算符(或没有可接受的转换)	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp	209
错误	C2679	二进制“=”: 没有找到接受“std::vector<std::vector<std::vector<std::string,std::allocator<_Ty>>,std::allocator<std::vector<_Ty,std::allocator<_Ty>>>>,std::allocator<std::vector<std::vector<_Ty,std::allocator<_Ty>>,std::allocator<std::vector<_Ty,std::allocator<_Ty>>>>>>”类型的右操作数的运算符(或没有可接受的转换)	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp	213
错误	C2666	“fwd::operator ==”: 3 个重载有相似的转换	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.h	62
错误	C2666	“fwd::operator ==”: 3 个重载有相似的转换	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.h	62
错误	C2666	“fwd::operator ==”: 3 个重载有相似的转换	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.h	62
错误	C2664	“void std::vector<fwd::TrtLayerOutput,std::allocator<_Ty>>::push_back(const fwd::TrtLayerOutput &)”: 无法将参数 1 从“initializer list”转换为“fwd::TrtLayerOutput &&”	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp	157
错误	C2679	二进制“=”: 没有找到接受“initializer list”类型的右操作数的运算符(或没有可接受的转换)	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp	177
错误	C2664	“void std::vector<fwd::TrtLayerOutput,std::allocator<_Ty>>::push_back(const fwd::TrtLayerOutput &)”: 无法将参数 1 从“initializer list”转换为“fwd::TrtLayerOutput &&”	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp	185
错误	C2664	“void std::vector<fwd::TrtLayerOutput,std::allocator<_Ty>>::push_back(const fwd::TrtLayerOutput &)”: 无法将参数 1 从“initializer list”转换为“fwd::TrtLayerOutput &&”	fwd_keras	D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp	201
错误	C4579	'nlohmann::detail::static_const<nlohmann::detail::from_json_fn>::value': in-class initialization for type 'const T' is not yet implemented; static member will remain uninitialized at runtime but use in constant-expressions is supported	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2235
错误	C2131	表达式的计算结果不是常数	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2235
错误	C4579	'nlohmann::detail::static_const<nlohmann::detail::to_json_fn>::value': in-class initialization for type 'const T' is not yet implemented; static member will remain uninitialized at runtime but use in constant-expressions is supported	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2235
错误	C4579	'nlohmann::detail::static_const<nlohmann::detail::from_json_fn>::value': in-class initialization for type 'const T' is not yet implemented; static member will remain uninitialized at runtime but use in constant-expressions is supported	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2235
错误	C2131	表达式的计算结果不是常数	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2235
错误	C4579	'nlohmann::detail::static_const<nlohmann::detail::to_json_fn>::value': in-class initialization for type 'const T' is not yet implemented; static member will remain uninitialized at runtime but use in constant-expressions is supported	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2235
错误	C4579	'nlohmann::detail::static_const<nlohmann::detail::from_json_fn>::value': in-class initialization for type 'const T' is not yet implemented; static member will remain uninitialized at runtime but use in constant-expressions is supported	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2235
错误	C2131	表达式的计算结果不是常数	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2235
错误	C4579	'nlohmann::detail::static_const<nlohmann::detail::to_json_fn>::value': in-class initialization for type 'const T' is not yet implemented; static member will remain uninitialized at runtime but use in constant-expressions is supported	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2235
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2780	“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”: 应输入 2 个参数,却提供了 1 个	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
错误	C2893	未能使函数模板“unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)”专用化	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2513
Error	C2783	"unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept const": could not deduce template argument for "__formal"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2516
Error	C2783	"unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept": could not deduce template argument for "__formal"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2516
Error	C2783	"ValueType nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept(<expr>) const": could not deduce template argument for "__formal"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2516
Error	C2783	"BasicJsonType nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) const": could not deduce template argument for "__formal"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2516
Error	C2783	"nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer> nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) const": could not deduce template argument for "__formal"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2516
Error	C2893	Failed to specialize function template "unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept const"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2516
Error	C2893	Failed to specialize function template "unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	2516
(the above group of errors at json.hpp line 2516 is reported three times)
Error	C2784	"const _Ty *std::begin(const std::valarray<_Ty> &)": could not deduce template argument for "const std::valarray<_Ty> &" from "add_rvalue_reference<const ContiguousContainer>::type"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	4290
Error	C2784	"_Ty *std::begin(std::valarray<_Ty> &)": could not deduce template argument for "std::valarray<_Ty> &" from "add_rvalue_reference<const ContiguousContainer>::type"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	4290
Error	C2784	"_Ty *std::begin(_Ty (&)[_Size]) noexcept": could not deduce template argument for "_Ty (&)[_Size]" from "add_rvalue_reference<const ContiguousContainer>::type"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	4290
Error	C2893	Failed to specialize function template "unknown-type std::begin(const _Container &)"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	4290
Error	C2893	Failed to specialize function template "unknown-type std::begin(_Container &)"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	4290
Error	C2784	"const _Elem *std::begin(std::initializer_list<_Elem>) noexcept": could not deduce template argument for "std::initializer_list<_Elem>" from "add_rvalue_reference<const ContiguousContainer>::type"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	4290
Error	C2039	"iterator_category": is not a member of "nlohmann::detail::iterator_traits<unknown-type,void>"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	4290
Error	C2146	syntax error: missing ">" before identifier "iterator_category"	fwd_keras	D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp	4290
(the above group of errors at json.hpp line 4290 is reported three times)

In total, 3 projects failed to build.

10>------ Rebuild All started: Project: ALL_BUILD, Configuration: Debug x64 ------
10>  Building Custom Rule D:/Projects/tencent_forward/Forward/CMakeLists.txt
========== Rebuild All: 7 succeeded, 3 failed, 0 skipped ==========

What is causing this, and how can it be fixed?

Environment

TensorRT Version: 7.2.1.6
NVIDIA GPU: RTX 2080 SUPER
NVIDIA Driver Version: 441.22
CUDA Version: 10.2
CUDNN Version: 8.2.0.53
Operating System: Windows 10 Professional
Python Version: 3.8.5

Support for the Flatten layer in Keras

[ERROR] 2021-06-30 13:49:56,627 trt_keras_parser.cpp(197): Creating FlattenDesc failed! Please Check implementation and inputs.
[ERROR] 2021-06-30 13:49:56,627 keras_engine.cpp(129): Parse Keras Graph failed

As shown above. Please take a look, thanks!
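One possible workaround, sketched below purely as an assumption (it is not a documented solution): re-export the Keras model with an equivalent Reshape layer instead of Flatten, since Reshape((-1,)) collapses every non-batch dimension and may avoid the failing FlattenDesc creation. The layer sizes here are illustrative only.

import keras.layers as layers
import keras.models as models

def tiny_model(input_shape=(24, 24, 3)):
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(8, 3, activation='relu')(inp)
    # Equivalent to layers.Flatten()(x): collapse all non-batch dimensions.
    x = layers.Reshape((-1,))(x)
    out = layers.Dense(10, activation='softmax')(x)
    return models.Model(inp, out)

tiny_model().save('model_without_flatten.h5')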

Cannot find: aten::gelu

There is no problem with my environment, and the demo runs fine.
But when using BERT, I encounter a bug. Do I need to implement aten::gelu myself?


The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
[ERROR] 2021-04-01 22:03:55,866 torch_desc_manager.cpp(128): Could not find layer create for node aten::gelu: output input.4
[ERROR] 2021-04-01 22:03:55,866 torch_engine.cpp(235): Parse torch module failed
Traceback (most recent call last):
  File "tensorrt_forward.py", line 48, in <module>
    outputs = engine.forward_with_name(dummy_inputs)
AttributeError: 'NoneType' object has no attribute 'forward_with_name'
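The log shows that parsing stopped at aten::gelu, so builder.build returned no engine and forward_with_name was then called on None. One possible workaround, sketched here as an assumption rather than a documented solution: if the BERT implementation uses torch.nn.GELU modules, swap them for an explicit tanh approximation before exporting the TorchScript module, so the exported graph contains only basic ops instead of aten::gelu.

import math
import torch
import torch.nn as nn

class TanhGELU(nn.Module):
    # Tanh approximation of GELU; exports as tanh/mul/pow ops instead of aten::gelu.
    def forward(self, x):
        return 0.5 * x * (1.0 + torch.tanh(
            math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

def replace_gelu(module: nn.Module) -> None:
    # Recursively replace every nn.GELU child with the explicit approximation.
    for name, child in module.named_children():
        if isinstance(child, nn.GELU):
            setattr(module, name, TanhGELU())
        else:
            replace_gelu(child)

# Hypothetical usage with a BERT model object and example inputs:
# replace_gelu(bert_model)
# traced = torch.jit.trace(bert_model.eval(), example_inputs)
# traced.save('bert_without_gelu.pt')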

Segmentation fault (core dumped) when transferring a Keras model to TRT.

Describe the bug
There is neither an error message nor a warning; the process simply core dumps when transferring the Keras model to TRT. The Keras model is DenseNet121 from keras.
The code is as follows:

import numpy as np
import os

def save_keras_model(model_file):
    from keras.models import Model
    from keras.layers import Dense, Dropout
    import keras.backend as K
    from keras.applications.densenet import DenseNet121
    import keras.layers as layers
    import keras.models as models
    import keras.utils as utils

    class densenet121(object):
        def __init__(self, image_size):
            self.base_model = DenseNet121(input_shape=(image_size, image_size, 3),
                                          include_top=False, pooling='avg',
                                          backend=K,
                                          layers=layers,
                                          models=models,
                                          utils=utils,
                                          weights=None)
            x = Dropout(0.75)(self.base_model.output)
            x = Dense(3, activation='softmax', name='top_layer')(x)
            self.model = Model(self.base_model.input, x)
            print("Densenet121")

    model = densenet121(512).model
    model.save(model_file)

def forward_transfer(model_file):
    import forward
    # 1. Build the Engine
    builder = forward.KerasBuilder()
    infer_mode = 'float32'  # Infer Mode: float32 / float16 / int8_calib / int8
    batch_size = 1
    max_workspace_size = 1 << 32

    builder.set_mode(infer_mode)
    engine = builder.build(model_file, batch_size)

    engine_path = os.path.splitext(model_file)[0] + '.engine'
    engine.save(engine_path)

def test_forward(model_file, inputs):
    import forward
    engine_path = os.path.splitext(model_file)[0] + '.engine'
    engine = forward.KerasEngine()
    engine.load(engine_path)

    # inputs = np.ones((1, 24, 24, 3))
    outputs = engine.forward([inputs])  # list-type output
    print(outputs)

model_path = 'densenet121.h5'
save_keras_model(model_path)
x = np.ones((1,512,512,3))
forward_transfer(model_path)
test_forward(model_path,x)
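When chasing this kind of crash it can also help to make the conversion step fail loudly before the engine is used. A minimal sketch, assuming (as observed with the Torch builder in the aten::gelu report above) that a failed build may return an invalid/None engine instead of raising:

import forward

def build_keras_engine(model_file, batch_size=1, infer_mode='float32'):
    builder = forward.KerasBuilder()
    builder.set_mode(infer_mode)
    engine = builder.build(model_file, batch_size)
    # Assumption: a failed parse returns None rather than a usable engine.
    if engine is None:
        raise RuntimeError('Forward failed to build an engine from ' + model_file)
    return engine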

Environment

TensorRT Version: 7.1.3.4
NVIDIA GPU: T4
NVIDIA Driver Version: 410.104
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System: ubuntu 18.04
Python Version (if applicable): 3.6.9
Tensorflow Version (if applicable): 1.15.0
PyTorch Version (if applicable): 1.7.0

print info:

[INFO ] 2021-05-06 16:16:20,516 trt_keras_parser.cpp(153): Parser::CreateNHWC2NCHWLayerDesc
[INFO ] 2021-05-06 16:16:20,516 keras_activation_creator.h(50): TrtActivationDesc::Create
Segmentation fault (core dumped)

Build error: cannot find the cublas_device library

Hi, during the build the linker reports that cublas_device cannot be found. Details below:

Environment

TensorRT Version: 7.2.3.4
CUDA Version: 10.2
CUDNN Version: 7.4
Operating System: ubuntu18.04
Python Version (if applicable): 3.7
PyTorch Version (if applicable): 1.8

Error message:
[ 97%] Linking CXX shared library ../../bin/libfwd_torch.so
[ 97%] Built target fwd_torch
Scanning dependencies of target forward
[ 98%] Building CXX object source/py_fwd/CMakeFiles/forward.dir/py_forward.cpp.o
[100%] Linking CXX shared module ../../bin/forward.cpython-37m-x86_64-linux-gnu.so
/usr/bin/x86_64-linux-gnu-ld: cannot find -lCUDA_cublas_device_LIBRARY-NOTFOUND
collect2: error: ld returned 1 exit status
source/py_fwd/CMakeFiles/forward.dir/build.make:117: recipe for target 'bin/forward.cpython-37m-x86_64-linux-gnu.so' failed
make[2]: *** [bin/forward.cpython-37m-x86_64-linux-gnu.so] Error 1
CMakeFiles/Makefile2:687: recipe for target 'source/py_fwd/CMakeFiles/forward.dir/all' failed
make[1]: *** [source/py_fwd/CMakeFiles/forward.dir/all] Error 2
Makefile:83: recipe for target 'all' failed

The cublas_device library cannot be found; it was deprecated and removed after CUDA 10, wasn't it?

make error

I've completed the 'cmake' step successfully, but when I run 'make', errors like the following occur:
[ 5%] Building NVCC (Device) object source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/trt_engine_generated_emb_layer_norm_kernel.cu.o
In file included from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/block_discontinuity.cuh:37:0,
from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/block_histogram_sort.cuh:37,
from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/block_histogram.cuh:36,
from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/cub.cuh:38,
from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/common/bert_plugin_util.h:33,
from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/emb_layer_norm_plugin/emb_layer_norm_kernel.cu:36:
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:238:61: warning: missing terminating " character
asprmt"bfi.b32 %0, %1, %2, %3;" : "=r"(ret) : ar"(x), "r"(x), b, in) - 1;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:282:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:284:2: error: #endif without #if
#endif
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:294:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:296:2: error: #endif without #if
#endif
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:306:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:308:2: error: #endif without #if
#endif}
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:319:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:321:46: warning: missing terminating " character
3;" : "word(ret) : word("(x), "rc_bit-of("(x), flags() + z;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:322:2: error: #endif without #if
#endif
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:334:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:336:46: warning: missing terminating " character
3;" : "word(ret) : word("(x), "rc_bit-of("(x), flags() + z;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:337:2: error: #endif without #if
#endif
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:349:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:351:44: warning: missing terminating " character
3;" : "word(ret) : word("(x), "rc_lane("(x), flags() + z;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:352:2: error: #endif without #if
#endif
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:371:61: warning: missing terminating " character
asfma.rz.ffi.b32 %0, %1, %2, %3;" f"(d(ret)f: ar"(xf, "r"(xf, cr) - 1;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:375:2: error: #endif without #if
#endif // DOXYGEN_SHOULD_SKIP_THIS
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:383:20: warning: missing terminating " character
vo orileasexit;")>())
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:395:20: warning: missing terminating " character
vo orileasllup;")>()x;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:406:18: warning: missing terminating " character
tef adIdx"urn x;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:475:1: error: unterminated comment
/**
^
In file included from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/common/bert_plugin_util.h:33:0,
from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/emb_layer_norm_plugin/emb_layer_norm_kernel.cu:36:
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/cub.cuh:54:10: fatal error: device/device_run_length_encode.cuh: No such file or directory
#include "device/device_run_length_encode.cuh"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
CMake Error at trt_engine_generated_emb_layer_norm_kernel.cu.o.cmake:220 (message):
Error generating
/home/agx/SCW/Forward-master/build/source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/./trt_engine_generated_emb_layer_norm_kernel.cu.o

source/trt_engine/CMakeFiles/trt_engine.dir/build.make:1591: recipe for target 'source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/trt_engine_generated_emb_layer_norm_kernel.cu.o' failed
make[2]: *** [source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/trt_engine_generated_emb_layer_norm_kernel.cu.o] Error 1
CMakeFiles/Makefile2:556: recipe for target 'source/trt_engine/CMakeFiles/trt_engine.dir/all' failed
make[1]: *** [source/trt_engine/CMakeFiles/trt_engine.dir/all] Error 2
Makefile:90: recipe for target 'all' failed
make: *** [all] Error 2

Wonder what's wrong...

Cannot build the pb model (TensorFlow): got "tf_graph_parser.cpp(48): Creating input desc is failed."

Describe the bug
I can convert my PyTorch model (a ResNet classification model) with Forward. However, when I try to convert a TensorFlow .pb model, the build fails, even though the forward.**.so was built successfully and I can import forward. When I run `engine = builder.build('./test_tfmodel.pb', dummy_inputs)`, the error "tf_graph_parser.cpp(48): Creating input desc is failed." appears and the process aborts with a core dump (已放弃(吐核), i.e. "Aborted (core dumped)").

Environment

TensorRT Version: 7.2.1.6
NVIDIA GPU: GTX1080TI
NVIDIA Driver Version: 450.80.02
CUDA Version: 11.0
CUDNN Version: 8.0.4
Operating System: 7.5
Python Version (if applicable): 3.6.13
Tensorflow Version (if applicable): tensorflow==1.15.0(cpu)
PyTorch Version (if applicable): 1.7.1

Relevant Files

Just testing the add model.

To Reproduce
Steps to reproduce the behavior:
1.
cmake .. \
    -DTensorRT_ROOT=/data/wind/TensorRT-7.2.1.6/ \
    -DENABLE_LOGGING=OFF \
    -DENABLE_PROFILING=OFF \
    -DENABLE_DYNAMIC_BATCH=OFF \
    -DBUILD_PYTHON_LIB=ON \
    -DPYTHON_EXECUTABLE=/root/anaconda3/envs/xyang/bin/python \
    -DENABLE_TORCH=OFF \
    -DENABLE_TENSORFLOW=ON \
    -DENABLE_KERAS=OFF

make -j
After that, 'import forward' works without any error.

import numpy as np
import forward

# 1. Build the Engine
builder = forward.TfBuilder()

# img = torch.randn(1, 784)
img = np.ones([1,784], dtype='float32')
dummy_inputs = {'inputs': img}
infer_mode = 'float32'  #  float32 / float16 / int8_calib / int8

builder.set_mode(infer_mode)
engine = builder.build('./test_tfmodel.pb', dummy_inputs)

Then the last line fails (see the screenshots).
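One thing worth double-checking here (only a guess): the key 'inputs' in dummy_inputs has to match the actual placeholder name inside test_tfmodel.pb. A small sketch for listing the placeholders of a frozen GraphDef with TensorFlow 1.15; the file name follows the snippet above:

import tensorflow as tf

def list_placeholders(pb_path):
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    # Placeholder nodes are the graph inputs that have to be bound by name.
    return [n.name for n in graph_def.node if n.op == 'Placeholder']

print(list_placeholders('./test_tfmodel.pb'))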

Screenshots
When I run ./unit_test --gtest_filter=TestTfNodes.*, the following error (已放弃(吐核), i.e. "Aborted (core dumped)") happens, although I can still import forward successfully.

make reports an error

Describe the bug
Command executed:
cmake .. -DTensorRT_ROOT=/home/soft/wp/TensorRT-8.2.0.6 -DENABLE_TORCH=ON -DENABLE_TORCH_PLUGIN=ON -DCMAKE_PREFIX_PATH=/home/soft/wp/libtorch

Error log:
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda (found version "11.1")
-- CUDA_NVCC_FLAGS: -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75
-- Using the single-header code from /home/soft/wp/Forward/source/third_party/json/single_include/
-- Found TensorRT: /home/soft/wp/TensorRT-8.2.0.6/lib/libnvinfer.so;/home/soft/wp/TensorRT-8.2.0.6/lib/libnvinfer_plugin.so;/home/soft/wp/TensorRT-8.2.0.6/lib/libnvonnxparser.so;/home/soft/wp/TensorRT-8.2.0.6/lib/libnvparsers.so (found version "8.2.0")
-- Found CUDA: /usr/local/cuda (found version "11.1")
-- Caffe2: CUDA detected: 11.1
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 11.1
-- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so
-- Found cuDNN: v? (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
CMake Error at /home/soft/wp/libtorch/share/cmake/Caffe2/public/cuda.cmake:174 (message):
PyTorch requires cuDNN 7 and above.
Call Stack (most recent call first):
/home/soft/wp/libtorch/share/cmake/Caffe2/Caffe2Config.cmake:88 (include)
/home/soft/wp/libtorch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
CMakeLists.txt:248 (find_package)

-- Configuring incomplete, errors occurred!

cuDNN is in fact already installed:

ls /usr/local/cuda/lib64 | grep cudnn

Environment

TensorRT Version: TensorRT-8.2.0.6
NVIDIA GPU: P8
NVIDIA Driver Version: 465.19.01
CUDA Version: 11.1
CUDNN Version: 8.2.1.32
Operating System: Ubuntu 16.04.2 LTS
Python Version (if applicable): 3.7
Tensorflow Version (if applicable): 2.6.0
PyTorch Version (if applicable): 1.9.0+cu111


Error when optimizing a model

Describe the bug

An error occurs when using Fwd-Torch to optimize a model; take resnest50d as an example (see the attached screenshot).
The code is as follows:
import os
import timm
import torch
import forward

batch_size = 1   # assumed values: the report does not show how batch_size and half were set
half = False

origion_model = timm.create_model('resnest50d', pretrained=True)

infer_mode = 'float16'  # float32 / float16 / int8_calib / int8
jit_model = torch.jit.script(origion_model).cpu().eval()

model_path = "resnest50d_bs_{}-half_{}_jit_cpu.pt".format(batch_size, half)
jit_model.save(model_path)

dummy = torch.randn(batch_size, 3, 244, 244)

builder = forward.TorchBuilder()
builder.set_mode(infer_mode)

engine = builder.build(model_path, dummy)
print(engine)
outputs = engine.forward(dummy)
print(outputs)
os._exit(-1)
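If scripting the full timm model produces a graph that Forward cannot parse, one alternative worth trying is exporting the model by tracing with a fixed-size dummy input, which usually yields a flatter graph. This is only a sketch; whether Forward parses the traced graph is not guaranteed, and the 224x224 input size is an assumption.

import timm
import torch

model = timm.create_model('resnest50d', pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)  # assumed standard ImageNet input size
traced = torch.jit.trace(model, dummy)
traced.save('resnest50d_traced_cpu.pt')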

Environment

TensorRT Version: 7.2.3.4
NVIDIA GPU: T4
NVIDIA Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version: 7
Operating System: ubuntu 18.04
Python Version (if applicable): 3.6
Tensorflow Version (if applicable): 2.3.4
PyTorch Version (if applicable): 1.10.1


free(): invalid pointer

Describe the bug
An error is reported at the end of the program, free(): invalid pointer.

Environment

TensorRT Version: 7.2.3.4
NVIDIA GPU: TITAN Xp
NVIDIA Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version: 8.0.4
Operating System: CentOS 8
Python Version (if applicable): 3.6.8
Tensorflow Version (if applicable): --
PyTorch Version (if applicable): 1.3.1 (libtorch cpu)

To Reproduce
Build Forward:
cmake .. \
    -DENABLE_LOGGING=OFF \
    -DENABLE_PROFILING=OFF \
    -DENABLE_DYNAMIC_BATCH=OFF \
    -DENABLE_TORCH=ON \
    -DBUILD_PYTHON_LIB=OFF \
    -DPYTHON_EXECUTABLE=/usr/bin/python3 \
    -DENABLE_TENSORFLOW=OFF -DENABLE_KERAS=OFF \
    -DTORCH_CMAKE_PATH=/usr/local/lib/libtorch/share/cmake/Torch/
Steps to reproduce the behavior:

  1. Include the headers:
     #include "fwd_torch/torch_engine/torch_engine.h"
     #include "fwd_torch/torch_engine/torch_infer.h"
  2. Create a main function that only prints "Hello World":
     std::cout << "Hello World" << std::endl;
  3. After cmake and make, run the program; the error appears when it exits.

Expected behavior
None

Screenshots
(screenshot)

Additional context
The problem occurs when Forward is built with CMake against libtorch.
When Forward is built with the Python bindings enabled, the problem does not come up.

Pros and cons compared with projects such as TRTorch

After reading the documentation and code, I understand that Forward uses TensorRT's network class directly to translate each layer of the original model one by one, effectively implementing a parser for TF and Torch (just like the ONNX parser). I also know that NVIDIA itself has similar open-source projects such as TRTorch and TF-TRT. What are the pros and cons of Forward compared with these projects?
Thanks!

Two problems with reflectPad

Describe the bug
The first problem: even though the plugin inherits from IPluginV2DynamicExt, it cannot be used with dynamic inputs, because getOutputDimensions() is implemented incorrectly.
The second problem: on some GPUs the plugin's padding produces incorrect output, for reasons unknown. For example, the results are correct on a 2080 Ti but wrong on an A100.

Environment

TensorRT Version: 7.2.1
NVIDIA GPU: 2080TI & A100
NVIDIA Driver Version:
CUDA Version: 11.0
CUDNN Version:
Operating System:
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):


Undefined reference to 'fwd::TrtForwardEngine::Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'

  1. I have built the project and obtained libtrt_engine.so and libfwd_torch.so.
  2. When I build demo/fwd_cpp, I get the following error:
     [ 50%] Linking CXX executable test_fwd_engine
     CMakeFiles/test_fwd_engine.dir/test_fwd_engine.cpp.o: in function 'main':
     test_fwd_engine.cpp:(.text+0x1c8): undefined reference to 'fwd::TrtForwardEngine::Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
     collect2: error: ld returned 1 exit status
     CMakeFiles/test_fwd_engine.dir/build.make:110: recipe for target 'test_fwd_engine' failed
     make[2]: *** [test_fwd_engine] Error 1
     CMakeFiles/Makefile2:96: recipe for target 'CMakeFiles/test_fwd_engine.dir/all' failed
     make[1]: *** [CMakeFiles/test_fwd_engine.dir/all] Error 2
     Makefile:102: recipe for target 'all' failed
     make: *** [all] Error 2

Segmentation fault (core dumped)

Describe the bug
After finishing the cmake build, I copied forward.cpython-36m-aarch64-linux-gnu.so to a new directory. When I run 'python test_forward.py' or 'import forward', a 'Segmentation fault (core dumped)' error appears.

Environment

Device : Jetson Xavier NX
System: Jetpack4.4 [L4T 32.4.4]
TensorRT Version: 7.1.3
CUDA Version: 10.2.89
CUDNN Version: 8.0.0.180
Python Version (if applicable): 3.6.9
Tensorflow Version (if applicable): 1.15.2
Keras Version (if applicable): 2.1.5

Relevant Files

cmake successful (screenshot)

error information (screenshot)

some gdb information (screenshots)

What causes this error?
Looking forward to your answer!

Would you consider rewriting the model serialization?

I see that the project uses TrtNetworkDesc for model conversion. Have you considered serializing TrtNetworkDesc, so that model conversion and inference are decoupled? The same TrtNetworkDesc could then be reused across different TensorRT versions.
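For what it's worth, the Python API used in the examples on this page already separates the two stages to some degree: a built engine can be saved and then loaded for inference in another process, although the serialized file is still a TensorRT engine and therefore tied to the TensorRT version and GPU it was built on. A minimal sketch with placeholder paths and batch size:

import forward

# Conversion step (run once, on the deployment TensorRT version / GPU):
builder = forward.KerasBuilder()
builder.set_mode('float32')
engine = builder.build('model.h5', 1)
engine.save('model.engine')

# Inference step (a separate script or process):
engine = forward.KerasEngine()
engine.load('model.engine')
# outputs = engine.forward([inputs])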
