nvidia / tensorrt

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

Home Page: https://developer.nvidia.com/tensorrt

License: Apache License 2.0

Topics: tensorrt, nvidia, deep-learning, inference, gpu-acceleration

tensorrt's Introduction


TensorRT Open Source Software

This repository contains the Open Source Software (OSS) components of NVIDIA TensorRT. It includes the sources for the TensorRT plugins and the ONNX parser, as well as sample applications demonstrating the usage and capabilities of the TensorRT platform. These open source components are a subset of the TensorRT General Availability (GA) release with some extensions and bug fixes.

Need enterprise support? NVIDIA global support is available for TensorRT with the NVIDIA AI Enterprise software suite. Check out NVIDIA LaunchPad for free access to a set of hands-on labs with TensorRT hosted on NVIDIA infrastructure.

Join the TensorRT and Triton community and stay current on the latest product updates, bug fixes, content, best practices, and more.

Prebuilt TensorRT Python Package

We provide the TensorRT Python package for easy installation.
To install:

pip install tensorrt

If you only need the Python package, you can skip the Build section.
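
Once installed, the package can be exercised end to end through the TensorRT Python API. A minimal sketch, assuming TensorRT 10 and a placeholder ONNX model at model.onnx:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(0)  # explicit-batch network
    parser = trt.OnnxParser(network, logger)

    # "model.onnx" is a placeholder path, not a file shipped with this repo.
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))

    config = builder.create_builder_config()
    engine_bytes = builder.build_serialized_network(network, config)
    with open("model.engine", "wb") as f:
        f.write(engine_bytes)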

Build

Prerequisites

To build the TensorRT-OSS components, you will first need the following software packages.

TensorRT GA build

  • TensorRT v10.3.0.26
    • Available from direct download links listed below

System Packages

Optional Packages

Downloading TensorRT Build

  1. Download TensorRT OSS

    git clone -b main https://github.com/nvidia/TensorRT TensorRT
    cd TensorRT
    git submodule update --init --recursive
  2. (Optional - if not using TensorRT container) Specify the TensorRT GA release build path

    If using the TensorRT OSS build container, TensorRT libraries are preinstalled under /usr/lib/x86_64-linux-gnu and you may skip this step.

    Otherwise, download and extract the TensorRT GA build from the NVIDIA Developer Zone using the direct links below:

    Example: Ubuntu 20.04 on x86-64 with cuda-12.5

    cd ~/Downloads
    tar -xvzf TensorRT-10.3.0.26.Linux.x86_64-gnu.cuda-12.5.tar.gz
    export TRT_LIBPATH=`pwd`/TensorRT-10.3.0.26

    Example: Windows on x86-64 with cuda-12.5

    Expand-Archive -Path TensorRT-10.3.0.26.Windows.win10.cuda-12.5.zip
    $env:TRT_LIBPATH="$pwd\TensorRT-10.3.0.26\lib"

Setting Up The Build Environment

For Linux platforms, we recommend that you generate a docker container for building TensorRT OSS as described below. For native builds, please install the prerequisite System Packages.

  1. Generate the TensorRT-OSS build container.

    The TensorRT-OSS build container can be generated using the supplied Dockerfiles and build scripts. The build containers are configured for building TensorRT OSS out-of-the-box.

    Example: Ubuntu 20.04 on x86-64 with cuda-12.5 (default)

    ./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda12.5

    Example: Rockylinux8 on x86-64 with cuda-12.5

    ./docker/build.sh --file docker/rockylinux8.Dockerfile --tag tensorrt-rockylinux8-cuda12.5

    Example: Ubuntu 22.04 cross-compile for Jetson (aarch64) with cuda-12.5 (JetPack SDK)

    ./docker/build.sh --file docker/ubuntu-cross-aarch64.Dockerfile --tag tensorrt-jetpack-cuda12.5

    Example: Ubuntu 22.04 on aarch64 with cuda-12.5

    ./docker/build.sh --file docker/ubuntu-22.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu22.04-cuda12.5
  2. Launch the TensorRT-OSS build container.

    Example: Ubuntu 20.04 build container

    ./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.5 --gpus all

    NOTE:
    1. Use the --tag corresponding to the build container generated in Step 1.
    2. The NVIDIA Container Toolkit is required for GPU access (running TensorRT applications) inside the build container.
    3. The sudo password for Ubuntu build containers is 'nvidia'.
    4. Specify a port number using --jupyter <port> to launch Jupyter notebooks.

Building TensorRT-OSS

  • Generate Makefiles and build.

    Example: Linux (x86-64) build with default cuda-12.5

     cd $TRT_OSSPATH
     mkdir -p build && cd build
     cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
     make -j$(nproc)

    Example: Linux (aarch64) build with default cuda-12.5

     cd $TRT_OSSPATH
     mkdir -p build && cd build
     cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64-native.toolchain
     make -j$(nproc)

    Example: Native build on Jetson (aarch64) with cuda-12.5

     cd $TRT_OSSPATH
     mkdir -p build && cd build
     cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64 -DCUDA_VERSION=12.5
     CC=/usr/bin/gcc make -j$(nproc)

    NOTE: C compiler must be explicitly specified via CC= for native aarch64 builds of protobuf.

    Example: Ubuntu 22.04 Cross-Compile for Jetson (aarch64) with cuda-12.5 (JetPack)

     cd $TRT_OSSPATH
     mkdir -p build && cd build
     cmake .. -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64.toolchain -DCUDA_VERSION=12.5 -DCUDNN_LIB=/pdk_files/cudnn/usr/lib/aarch64-linux-gnu/libcudnn.so -DCUBLAS_LIB=/usr/local/cuda-12.5/targets/aarch64-linux/lib/stubs/libcublas.so -DCUBLASLT_LIB=/usr/local/cuda-12.5/targets/aarch64-linux/lib/stubs/libcublasLt.so -DTRT_LIB_DIR=/pdk_files/tensorrt/lib
     make -j$(nproc)

    Example: Native builds on Windows (x86) with cuda-12.5

     cd $TRT_OSSPATH
     mkdir -p build
     cd build
     cmake .. -DTRT_LIB_DIR="$env:TRT_LIBPATH" -DCUDNN_ROOT_DIR="$env:CUDNN_PATH" -DTRT_OUT_DIR="$pwd\\out"
     msbuild TensorRT.sln /property:Configuration=Release -m:$env:NUMBER_OF_PROCESSORS

    NOTE:
    1. The default CUDA version used by CMake is 12.4.0. To override this, for example to 11.8, append -DCUDA_VERSION=11.8 to the cmake command.

  • Required CMake build arguments are:

    • TRT_LIB_DIR: Path to the TensorRT installation directory containing libraries.
    • TRT_OUT_DIR: Output directory where generated build artifacts will be copied.
  • Optional CMake build arguments:

    • CMAKE_BUILD_TYPE: Specify whether the generated binaries are for release or debug (contain debug symbols). Values consist of [Release] | Debug
    • CUDA_VERSION: The version of CUDA to target, for example [11.7.1].
    • CUDNN_VERSION: The version of cuDNN to target, for example [8.6].
    • PROTOBUF_VERSION: The version of Protobuf to use, for example [3.0.0]. Note: Changing this will not configure CMake to use a system version of Protobuf; instead, CMake will download and attempt to build that version.
    • CMAKE_TOOLCHAIN_FILE: The path to a toolchain file for cross compilation.
    • BUILD_PARSERS: Specify if the parsers should be built, for example [ON] | OFF. If turned OFF, CMake will try to find precompiled versions of the parser libraries to use in compiling samples, first in ${TRT_LIB_DIR} and then on the system. If the build type is Debug, it will prefer debug builds of the libraries over release versions if available.
    • BUILD_PLUGINS: Specify if the plugins should be built, for example [ON] | OFF. If turned OFF, CMake will try to find a precompiled version of the plugin library to use in compiling samples, first in ${TRT_LIB_DIR} and then on the system. If the build type is Debug, it will prefer debug builds of the libraries over release versions if available.
    • BUILD_SAMPLES: Specify if the samples should be built, for example [ON] | OFF.
    • GPU_ARCHS: GPU (SM) architectures to target. By default we generate CUDA code for all major SMs. Specific SM versions can be specified here as a quoted, space-separated list to reduce compilation time and binary size. A table of compute capabilities of NVIDIA GPUs can be found at https://developer.nvidia.com/cuda-gpus; see also the snippet after this list. Examples:
      • NVIDIA A100: -DGPU_ARCHS="80"
      • Tesla T4, GeForce RTX 2080: -DGPU_ARCHS="75"
      • Titan V, Tesla V100: -DGPU_ARCHS="70"
      • Multiple SMs: -DGPU_ARCHS="80 75"
    • TRT_PLATFORM_ID: Platform ID for bare-metal builds (as opposed to containerized cross-compilation). Currently supported options: x86_64 (default).
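
To determine the SM version of a given GPU (the value passed to GPU_ARCHS above), one quick option, assuming PyTorch is installed, is to query the device's compute capability:

    import torch

    # Prints a (major, minor) tuple, e.g. (8, 0) on an A100, i.e. GPU_ARCHS="80"
    print(torch.cuda.get_device_capability(0))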

References

TensorRT Resources

Known Issues

tensorrt's People

Contributors

akhilg-nv, asfiyab-nvidia, azhurkevich, borisfom, brb-nv, char-nvidia, ddelange, dependabot[bot], gcunhase, ilyasher, kevinch-nv, liji-nv, mvnvidia, nvluxiaoz, nvpohanh, oxana-nvidia, pbridger, pranavm-nvidia, rajeevsrao, rmccorm4, samurdhikaru, shuyuelan, simengliu-nv, thehamsta, ttyio, tyler-d, vinhngx, wraveane, yuanyao-nv, zhimengf


tensorrt's Issues

About ONNX INT8 dynamic range method

I want to use ONNX INT8 without the calibration method, and there is something I want to confirm about getting the dynamic range. I train a model in PyTorch, export it to ONNX, and then run it on TensorRT.
First, I run the model in PyTorch as follows:

model = detection_net()
model.load_state_dict(torch.load('detection_net_epoch200.pth'))
...
def get_features_hook(self, input, output):
    print(output.min(), output.max())

model.layer[1].register_forward_hook(get_features_hook)
a = model(input_img)
...

Is the output of get_features_hook the dynamic range?
Second, in the sampleINT8API sample (D:\TensorRT-5.1.5.0\samples), there is resnet50_per_tensor_dynamic_range.txt:

...
gpu_0/conv1_1:5.43116007373
gpu_0/res_conv1_bn_1:8.69735834748
gpu_0/res_conv1_bn_2:8.69735834748
...

In entries of this file such as gpu_0/res_conv1_bn_1:8.69735834748, is 8.69735834748 the dynamic range? Is it max or -max?

I'm a beginner with TensorRT, so these questions may be too elementary.
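
For reference, the per-tensor values in that file are absolute maxima (amax), and sampleINT8API applies them symmetrically via setDynamicRange(-max, max). A minimal sketch of doing the same through the Python API; the apply_dynamic_ranges helper and the ranges dict are hypothetical names:

    import tensorrt as trt

    def apply_dynamic_ranges(network, ranges):
        # ranges: {tensor_name: amax} parsed from a per-tensor range file
        for i in range(network.num_layers):
            layer = network.get_layer(i)
            for j in range(layer.num_outputs):
                tensor = layer.get_output(j)
                if tensor.name in ranges:
                    amax = ranges[tensor.name]
                    # Symmetric INT8 range: [-amax, amax]
                    tensor.set_dynamic_range(-amax, amax)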

Installation on Tegra platforms

The TensorRT release binaries are available for x86 and PowerPC. How can I try TensorRT OSS on the TX1, TX2, and Jetson Nano?

TF-TRT5.1 - Unconverted TensorFlow Ops ArgMax on TensorFlow Container 19.07 (TensorFlow 1.14)

Hi all, I am running the tensorrt.py script using the TensorFlow container 19.07 (TensorFlow 1.14), and I get the following unconverted-TensorFlow-ops message: "There are 4 ops of 3 different types in the graph that are not converted to TensorRT: ArgMax, NoOp, Placeholder, (For more information see https://docs.nvidia.com/deeplearning/dgx/tf-trt-user-guide/index.html#supported-ops)".

Why doesn't the system convert ArgMax operations even though ArgMax is shown as supported in the TensorFlow container 19.07 (TensorFlow 1.14)?
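
For context, in TF 1.14 the TF-TRT entry point is create_inference_graph from tensorflow.contrib.tensorrt; ops that TF-TRT cannot place into a TensorRT segment, or that land in segments smaller than minimum_segment_size, are left as native TensorFlow ops like those listed above. A minimal sketch, assuming a frozen GraphDef and a placeholder output node name:

    from tensorflow.contrib import tensorrt as trt  # TF 1.x contrib API

    converted_graph_def = trt.create_inference_graph(
        input_graph_def=frozen_graph_def,   # assumed: a frozen tf.GraphDef
        outputs=["predictions"],            # placeholder output node name
        max_batch_size=1,
        max_workspace_size_bytes=1 << 30,
        precision_mode="FP32",
        minimum_segment_size=3)             # smaller segments stay in TensorFlow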

Windows support for TensorRT OSS

After some CMake tweaks I was able to make some progress building this library on Windows, but I ran into a linking error where the external symbols for "find_divisor" and "nmsInference" cannot be resolved. I ran dumpbin on nvinfer.lib/dll and I do not see those symbols being exported; however, with nm on Linux I am able to see those symbols in libnvinfer.so.

Is there a way to get around this problem and successfully build on Windows? I only need to build the plugins, nothing else.

I have attached a patch of the tweaks I made; the CMake commands I used are:

    cmake .. -DTRT_LIB_DIR=D:/tmp/TensorRT-5.1.5.0/lib -DTRT_BIN_DIR=D:/tmp/TensorRT/build/out -DCUB_ROOT_DIR=D:/tmp/cub-1.8.0/ -DBUILD_SAMPLES=OFF -DBUILD_PARSERS=OFF -DCUDA_TOOLKIT_ROOT_DIR='C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1' -DCMAKE_CUDA_COMPILER='C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/bin/nvcc.exe' -G 'Visual Studio 15 2017 Win64' -DCUDA_NVCC_FLAGS='nvcc;--std;c++11'
    cmake --build . --config Release

TensorRT-OSS-Win.patch.txt

undefined reference to `nvinfer1::plugin::createSSDPriorBoxPlugin(void const*, unsigned long)'

Hello, I compiled TensorRT 5.1.5 according to the tutorial, but when using it I get: undefined reference to nvinfer1::plugin::createSSDPriorBoxPlugin(void const*, unsigned long).
After checking, include/NvInferPlugin.h does have the declaration of this function, but the corresponding implementation seems to be missing, so the call fails at link time.
However, the prebuilt library from the official website works and does contain the implementation of this function. Is there a problem with my build?

Could IBuilder create multi INetworkDefinition?

Can an IBuilder create multiple INetworkDefinitions? And can an IBuilder build an ICudaEngine from an INetworkDefinition created by another IBuilder?

In the samples, one IBuilder corresponds to one INetworkDefinition and one ICudaEngine,
but I used one IBuilder to create multiple INetworkDefinitions and handed the networks to other builders, and it seems to work fine.
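
A minimal sketch of the pattern in question, written against the current (TensorRT 10) Python API rather than the TensorRT 5 C++ API used at the time:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder_a = trt.Builder(logger)
    builder_b = trt.Builder(logger)

    # One builder creating two independent network definitions.
    net_1 = builder_a.create_network(0)
    net_2 = builder_a.create_network(0)

    # Give net_2 a trivial graph so it is buildable: input -> identity -> output.
    inp = net_2.add_input("x", trt.float32, (1, 3, 4, 4))
    net_2.mark_output(net_2.add_identity(inp).get_output(0))

    # A different builder building an engine from a network it did not create.
    config = builder_b.create_builder_config()
    engine_bytes = builder_b.build_serialized_network(net_2, config)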

These compile errors came out from 'BERT' demo

Hi, thank you for your contribution.
I'm trying to run the 'BERT' demo.

I just followed the guideline to set up my environment
(CentOS/RedHat 7 with cuda-10.0),
but these errors appeared when I tried to compile it.
I would appreciate your help.

                           ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:1749:26: error: 'uint32_t' has not been declared
     virtual void setAxes(uint32_t axes) = 0;
                          ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:1756:13: error: 'uint32_t' does not name a type
     virtual uint32_t getAxes() const = 0;
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2563:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class RNNGateType : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2563:26: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class RNNGateType : int
                          ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2592:13: error: 'int32_t' does not name a type
     virtual int32_t getLayerCount() const = 0;   //< Get the layer count of the RNN
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2593:13: error: 'int32_t' does not name a type
     virtual int32_t getHiddenSize() const = 0;   //< Get the hidden size of the RNN
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2594:13: error: 'int32_t' does not name a type
     virtual int32_t getMaxSeqLength() const = 0; //< Get the maximum sequence length of the RNN
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2595:13: error: 'int32_t' does not name a type
     virtual int32_t getDataLength() const = 0;   //< Get the maximum data length of the RNN
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2789:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class PluginFormat : uint8_t
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2789:1: warning: elaborated-type-specifier for a scoped enum must not use the 'class' keyword
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2789:12: error: use of enum 'PluginFormat' without previous declaration
 enum class PluginFormat : uint8_t
            ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2789:25: error: expected unqualified-id before ':' token
 enum class PluginFormat : uint8_t
                         ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2804:20: error: 'PluginFormat' was not declared in this scope
 inline int EnumMax<PluginFormat>()
                    ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2804:12: error: template-id 'EnumMax<<expression error> >' for 'int nvinfer1::EnumMax()' does not match any template declaration
 inline int EnumMax<PluginFormat>()
            ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2001:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class ElementWiseOperation : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2001:35: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class ElementWiseOperation : int
                                   ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2943:48: error: 'PluginFormat' has not been declared
     virtual bool supportsFormat(DataType type, PluginFormat format) const = 0;
                                                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2961:129: error: 'PluginFormat' has not been declared
     virtual void configureWithFormat(const Dims* inputDims, int nbInputs, const Dims* outputDims, int nbOutputs, DataType type, PluginFormat format, int maxBatchSize) = 0;
                                                                                                                                 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3038:48: error: 'PluginFormat' has not been declared
     virtual bool supportsFormat(DataType type, PluginFormat format) const = 0;
                                                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3056:129: error: 'PluginFormat' has not been declared
     virtual void configureWithFormat(const Dims* inputDims, int nbInputs, const Dims* outputDims, int nbOutputs, DataType type, PluginFormat format, int maxBatchSize) = 0;
                                                                                                                                 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2160:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class RNNOperation : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2160:27: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class RNNOperation : int
                           ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2181:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class RNNDirection : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2181:27: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class RNNDirection : int
                           ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3210:95: error: 'PluginFormat' has not been declared
                                  const bool* inputIsBroadcast, const bool* outputIsBroadcast, PluginFormat floatFormat, int maxBatchSize)
                                                                                               ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2208:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class RNNInputMode : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2208:27: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class RNNInputMode : int
                           ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3259:68: error: 'PluginFormat' has not been declared
                              int /*nbOutputs*/, DataType /*type*/, PluginFormat /*format*/, int /*maxBatchSize*/) _TENSORRT_OVERRIDE _TENSORRT_FINAL {}
                                                                    ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2563:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class RNNGateType : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3313:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class PluginFieldType : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2563:26: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class RNNGateType : int
                          ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3313:30: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class PluginFieldType : int
                              ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2592:13: error: 'int32_t' does not name a type
     virtual int32_t getLayerCount() const = 0;   //< Get the layer count of the RNN
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3353:37: error: 'nullptr' was not declared in this scope
     PluginField(const char* name_ = nullptr, const void* data_ = nullptr, const PluginFieldType type_ = PluginFieldType::kUNKNOWN, int length_ = 0)
                                     ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2593:13: error: 'int32_t' does not name a type
     virtual int32_t getHiddenSize() const = 0;   //< Get the hidden size of the RNN
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3353:66: error: 'nullptr' was not declared in this scope
     PluginField(const char* name_ = nullptr, const void* data_ = nullptr, const PluginFieldType type_ = PluginFieldType::kUNKNOWN, int length_ = 0)
                                                                  ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2594:13: error: 'int32_t' does not name a type
     virtual int32_t getMaxSeqLength() const = 0; //< Get the maximum sequence length of the RNN
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3353:105: error: 'PluginFieldType' is not a class or namespace
     PluginField(const char* name_ = nullptr, const void* data_ = nullptr, const PluginFieldType type_ = PluginFieldType::kUNKNOWN, int length_ = 0)
                                                                                                         ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2595:13: error: 'int32_t' does not name a type
     virtual int32_t getDataLength() const = 0;   //< Get the maximum data length of the RNN
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3473:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class UnaryOperation : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3473:29: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class UnaryOperation : int
                             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2789:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class PluginFormat : uint8_t
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3535:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class ReduceOperation : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2789:1: warning: elaborated-type-specifier for a scoped enum must not use the 'class' keyword
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3535:30: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class ReduceOperation : int
                              ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2789:12: error: use of enum 'PluginFormat' without previous declaration
 enum class PluginFormat : uint8_t
            ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2789:25: error: expected unqualified-id before ':' token
 enum class PluginFormat : uint8_t
                         ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2804:20: error: 'PluginFormat' was not declared in this scope
 inline int EnumMax<PluginFormat>()
                    ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3579:32: error: 'uint32_t' has not been declared
     virtual void setReduceAxes(uint32_t reduceAxes) = 0;
                                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2804:12: error: template-id 'EnumMax<<expression error> >' for 'int nvinfer1::EnumMax()' does not match any template declaration
 inline int EnumMax<PluginFormat>()
            ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3586:13: error: 'uint32_t' does not name a type
     virtual uint32_t getReduceAxes() const = 0;
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2943:48: error: 'PluginFormat' has not been declared
     virtual bool supportsFormat(DataType type, PluginFormat format) const = 0;
                                                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:2961:129: error: 'PluginFormat' has not been declared
     virtual void configureWithFormat(const Dims* inputDims, int nbInputs, const Dims* outputDims, int nbOutputs, DataType type, PluginFormat format, int maxBatchSize) = 0;
                                                                                                                                 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3816:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class TopKOperation : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3816:28: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class TopKOperation : int
                            ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3038:48: error: 'PluginFormat' has not been declared
     virtual bool supportsFormat(DataType type, PluginFormat format) const = 0;
                                                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3056:129: error: 'PluginFormat' has not been declared
     virtual void configureWithFormat(const Dims* inputDims, int nbInputs, const Dims* outputDims, int nbOutputs, DataType type, PluginFormat format, int maxBatchSize) = 0;
                                                                                                                                 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3873:32: error: 'uint32_t' has not been declared
     virtual void setReduceAxes(uint32_t reduceAxes) = 0;
                                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3880:13: error: 'uint32_t' does not name a type
     virtual uint32_t getReduceAxes() const = 0;
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3892:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class MatrixOperation : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3892:30: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class MatrixOperation : int
                              ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3210:95: error: 'PluginFormat' has not been declared
                                  const bool* inputIsBroadcast, const bool* outputIsBroadcast, PluginFormat floatFormat, int maxBatchSize)
                                                                                               ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3259:68: error: 'PluginFormat' has not been declared
                              int /*nbOutputs*/, DataType /*type*/, PluginFormat /*format*/, int /*maxBatchSize*/) _TENSORRT_OVERRIDE _TENSORRT_FINAL {}
                                                                    ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:4509:80: error: 'uint32_t' has not been declared
     virtual IReduceLayer* addReduce(ITensor& input, ReduceOperation operation, uint32_t reduceAxes, bool keepDimensions) = 0;
                                                                                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:4539:74: error: 'uint32_t' has not been declared
     virtual ITopKLayer* addTopK(ITensor& input, TopKOperation op, int k, uint32_t reduceAxes) = 0;
                                                                          ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3313:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class PluginFieldType : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3313:30: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class PluginFieldType : int
                              ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:4670:51: error: 'int32_t' has not been declared
     virtual IRNNv2Layer* addRNNv2(ITensor& input, int32_t layerCount, int32_t hiddenSize, int32_t maxSeqLen, RNNOperation op) = 0;
                                                   ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:4670:71: error: 'int32_t' has not been declared
     virtual IRNNv2Layer* addRNNv2(ITensor& input, int32_t layerCount, int32_t hiddenSize, int32_t maxSeqLen, RNNOperation op) = 0;
                                                                       ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:4670:91: error: 'int32_t' has not been declared
     virtual IRNNv2Layer* addRNNv2(ITensor& input, int32_t layerCount, int32_t hiddenSize, int32_t maxSeqLen, RNNOperation op) = 0;
                                                                                           ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3353:37: error: 'nullptr' was not declared in this scope
     PluginField(const char* name_ = nullptr, const void* data_ = nullptr, const PluginFieldType type_ = PluginFieldType::kUNKNOWN, int length_ = 0)
                                     ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3353:66: error: 'nullptr' was not declared in this scope
     PluginField(const char* name_ = nullptr, const void* data_ = nullptr, const PluginFieldType type_ = PluginFieldType::kUNKNOWN, int length_ = 0)
                                                                  ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3353:105: error: 'PluginFieldType' is not a class or namespace
     PluginField(const char* name_ = nullptr, const void* data_ = nullptr, const PluginFieldType type_ = PluginFieldType::kUNKNOWN, int length_ = 0)
                                                                                                         ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5057:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class CalibrationAlgoType : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3473:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class UnaryOperation : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5057:34: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class CalibrationAlgoType : int
                                  ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3473:29: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class UnaryOperation : int
                             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h: In member function 'virtual nvinfer1::CalibrationAlgoType nvinfer1::IInt8EntropyCalibrator::getAlgorithm()':
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5149:57: error: 'CalibrationAlgoType' is not a class or namespace
     virtual CalibrationAlgoType getAlgorithm() { return CalibrationAlgoType::kENTROPY_CALIBRATION; }
                                                         ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3535:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class ReduceOperation : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3535:30: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class ReduceOperation : int
                              ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h: At global scope:
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5163:38: warning: override controls (override/final) only available with -std=c++11 or -std=gnu++11
     CalibrationAlgoType getAlgorithm() override { return CalibrationAlgoType::kENTROPY_CALIBRATION_2; }
                                      ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h: In member function 'virtual nvinfer1::CalibrationAlgoType nvinfer1::IInt8EntropyCalibrator2::getAlgorithm()':
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5163:58: error: 'CalibrationAlgoType' is not a class or namespace
     CalibrationAlgoType getAlgorithm() override { return CalibrationAlgoType::kENTROPY_CALIBRATION_2; }
                                                          ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3579:32: error: 'uint32_t' has not been declared
     virtual void setReduceAxes(uint32_t reduceAxes) = 0;
                                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3586:13: error: 'uint32_t' does not name a type
     virtual uint32_t getReduceAxes() const = 0;
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h: In member function 'virtual nvinfer1::CalibrationAlgoType nvinfer1::IInt8LegacyCalibrator::getAlgorithm()':
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5178:57: error: 'CalibrationAlgoType' is not a class or namespace
     virtual CalibrationAlgoType getAlgorithm() { return CalibrationAlgoType::kLEGACY_CALIBRATION; }
                                                         ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h: At global scope:
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5228:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class EngineCapability
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5263:28: error: 'allocate' declared as a 'virtual' field
     virtual void* allocate(uint64_t size, uint64_t alignment, uint32_t flags) = 0;
                            ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5263:19: error: expected ';' at end of member declaration
     virtual void* allocate(uint64_t size, uint64_t alignment, uint32_t flags) = 0;
                   ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5263:37: error: expected ')' before 'size'
     virtual void* allocate(uint64_t size, uint64_t alignment, uint32_t flags) = 0;
                                     ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5621:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class WeightsRole : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5621:26: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class WeightsRole : int
                          ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3816:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class TopKOperation : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3816:28: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class TopKOperation : int
                            ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3873:32: error: 'uint32_t' has not been declared
     virtual void setReduceAxes(uint32_t reduceAxes) = 0;
                                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5806:5: warning: scoped enums only available with -std=c++11 or -std=gnu++11
     enum class Severity
     ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3880:13: error: 'uint32_t' does not name a type
     virtual uint32_t getReduceAxes() const = 0;
             ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3892:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class MatrixOperation : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:3892:30: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class MatrixOperation : int
                              ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5902:16: warning: non-static data member initializers only available with -std=c++11 or -std=gnu++11
     T instance{};
                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5902:15: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
     T instance{};
               ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5902:16: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
     T instance{};
                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:4509:80: error: 'uint32_t' has not been declared
     virtual IReduceLayer* addReduce(ITensor& input, ReduceOperation operation, uint32_t reduceAxes, bool keepDimensions) = 0;
                                                                                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:4539:74: error: 'uint32_t' has not been declared
     virtual ITopKLayer* addTopK(ITensor& input, TopKOperation op, int k, uint32_t reduceAxes) = 0;
                                                                          ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:4670:51: error: 'int32_t' has not been declared
     virtual IRNNv2Layer* addRNNv2(ITensor& input, int32_t layerCount, int32_t hiddenSize, int32_t maxSeqLen, RNNOperation op) = 0;
                                                   ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:4670:71: error: 'int32_t' has not been declared
     virtual IRNNv2Layer* addRNNv2(ITensor& input, int32_t layerCount, int32_t hiddenSize, int32_t maxSeqLen, RNNOperation op) = 0;
                                                                       ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:4670:91: error: 'int32_t' has not been declared
     virtual IRNNv2Layer* addRNNv2(ITensor& input, int32_t layerCount, int32_t hiddenSize, int32_t maxSeqLen, RNNOperation op) = 0;
                                                                                           ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5057:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class CalibrationAlgoType : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5057:34: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class CalibrationAlgoType : int
                                  ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h: In member function 'virtual nvinfer1::CalibrationAlgoType nvinfer1::IInt8EntropyCalibrator::getAlgorithm()':
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5149:57: error: 'CalibrationAlgoType' is not a class or namespace
     virtual CalibrationAlgoType getAlgorithm() { return CalibrationAlgoType::kENTROPY_CALIBRATION; }
                                                         ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h: At global scope:
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5163:38: warning: override controls (override/final) only available with -std=c++11 or -std=gnu++11
     CalibrationAlgoType getAlgorithm() override { return CalibrationAlgoType::kENTROPY_CALIBRATION_2; }
                                      ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h: In member function 'virtual nvinfer1::CalibrationAlgoType nvinfer1::IInt8EntropyCalibrator2::getAlgorithm()':
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5163:58: error: 'CalibrationAlgoType' is not a class or namespace
     CalibrationAlgoType getAlgorithm() override { return CalibrationAlgoType::kENTROPY_CALIBRATION_2; }
                                                          ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h: In member function 'virtual nvinfer1::CalibrationAlgoType nvinfer1::IInt8LegacyCalibrator::getAlgorithm()':
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5178:57: error: 'CalibrationAlgoType' is not a class or namespace
     virtual CalibrationAlgoType getAlgorithm() { return CalibrationAlgoType::kLEGACY_CALIBRATION; }
                                                         ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h: At global scope:
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5228:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class EngineCapability
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5263:28: error: 'allocate' declared as a 'virtual' field
     virtual void* allocate(uint64_t size, uint64_t alignment, uint32_t flags) = 0;
                            ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5263:19: error: expected ';' at end of member declaration
     virtual void* allocate(uint64_t size, uint64_t alignment, uint32_t flags) = 0;
                   ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5263:37: error: expected ')' before 'size'
     virtual void* allocate(uint64_t size, uint64_t alignment, uint32_t flags) = 0;
                                     ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5621:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class WeightsRole : int
 ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5621:26: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class WeightsRole : int
                          ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5806:5: warning: scoped enums only available with -std=c++11 or -std=gnu++11
     enum class Severity
     ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5902:16: warning: non-static data member initializers only available with -std=c++11 or -std=gnu++11
     T instance{};
                ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5902:15: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
     T instance{};
               ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:5902:16: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
     T instance{};
                ^
In file included from /usr/include/c++/5/cstdint:35:0,
                 from /workspace/TensorRT/demo/BERT/../../include/NvInfer.h:21,
                 from /workspace/TensorRT/demo/BERT/plugins/emb_layer_norm_plugin.cu:17:
/usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
 #error This file requires compiler and library support \
  ^
In file included from /usr/include/c++/5/cstdint:35:0,
                 from /workspace/TensorRT/demo/BERT/../../include/NvInfer.h:21,
                 from /workspace/TensorRT/demo/BERT/plugins/skip_layer_norm_plugin.cu:17:
/usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
 #error This file requires compiler and library support \
  ^
In file included from /usr/include/c++/5/cstdint:35:0,
                 from /workspace/TensorRT/demo/BERT/../../include/NvInfer.h:21,
                 from /workspace/TensorRT/demo/BERT/plugins/qkv2context_plugin.cu:17:
/usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
 #error This file requires compiler and library support \
  ^
In file included from /usr/include/c++/5/cstdint:35:0,
                 from /workspace/TensorRT/demo/BERT/../../include/NvInfer.h:21,
                 from /workspace/TensorRT/demo/BERT/plugins/gelu_plugin.cu:17:
/usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
 #error This file requires compiler and library support \
  ^
In file included from /workspace/TensorRT/demo/BERT/util/data_utils.cpp:17:0:
/workspace/TensorRT/demo/BERT/util/data_utils.hpp:41:50: error: '>>' should be '> >' within a nested template argument list
     const std::map<std::string, std::vector<float>>& dict, int verbose);
                                                  ^
/workspace/TensorRT/demo/BERT/util/data_utils.hpp:47:44: error: '>>' should be '> >' within a nested template argument list
     std::map<std::string, std::vector<float>>& dict, cudaStream_t stream);
                                            ^
In file included from /workspace/TensorRT/samples/common/logger.h:20:0,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/workspace/TensorRT/samples/common/logging.h:27:7: error: expected nested-name-specifier before 'Severity'
 using Severity = nvinfer1::ILogger::Severity;
       ^
/workspace/TensorRT/samples/common/logging.h:39:52: error: expected ',' or '...' before '&&' token
     LogStreamConsumerBuffer(LogStreamConsumerBuffer&& other)
                                                    ^
/workspace/TensorRT/samples/common/logging.h:39:60: error: invalid constructor; you probably meant 'LogStreamConsumerBuffer (const LogStreamConsumerBuffer&)'
     LogStreamConsumerBuffer(LogStreamConsumerBuffer&& other)
                                                            ^
/workspace/TensorRT/samples/common/logging.h:120:32: error: expected ')' before 'reportableSeverity'
     LogStreamConsumer(Severity reportableSeverity, Severity severity)
                                ^
/workspace/TensorRT/samples/common/logging.h:128:40: error: expected ',' or '...' before '&&' token
     LogStreamConsumer(LogStreamConsumer&& other)
                                        ^
/workspace/TensorRT/samples/common/logging.h:128:48: error: invalid constructor; you probably meant 'LogStreamConsumer (const LogStreamConsumer&)'
     LogStreamConsumer(LogStreamConsumer&& other)
                                                ^
/workspace/TensorRT/samples/common/logging.h:136:32: error: 'Severity' has not been declared
     void setReportableSeverity(Severity reportableSeverity)
                                ^
/workspace/TensorRT/samples/common/logging.h:143:26: error: expected ';' at end of member declaration
     static std::ostream& severityOstream(Severity severity)
                          ^
/workspace/TensorRT/samples/common/logging.h:143:51: error: expected ')' before 'severity'
     static std::ostream& severityOstream(Severity severity)
                                                   ^
/workspace/TensorRT/samples/common/logging.h:148:39: error: 'Severity' has not been declared
     static std::string severityPrefix(Severity severity)
                                       ^
/workspace/TensorRT/samples/common/logging.h:162:5: error: 'Severity' does not name a type
     Severity mSeverity;
     ^
/workspace/TensorRT/samples/common/logging.h: In member function 'void LogStreamConsumer::setReportableSeverity(int)':
/workspace/TensorRT/samples/common/logging.h:138:22: error: 'mSeverity' was not declared in this scope
         mShouldLog = mSeverity <= reportableSeverity;
                      ^
/workspace/TensorRT/samples/common/logging.h: In static member function 'static std::__cxx11::string LogStreamConsumer::severityPrefix(int)':
/workspace/TensorRT/samples/common/logging.h:152:14: error: 'Severity' has not been declared
         case Severity::kINTERNAL_ERROR: return "[F] ";
              ^
/workspace/TensorRT/samples/common/logging.h:153:14: error: 'Severity' has not been declared
         case Severity::kERROR: return "[E] ";
              ^
/workspace/TensorRT/samples/common/logging.h:154:14: error: 'Severity' has not been declared
         case Severity::kWARNING: return "[W] ";
              ^
/workspace/TensorRT/samples/common/logging.h:155:14: error: 'Severity' has not been declared
         case Severity::kINFO: return "[I] ";
              ^
/workspace/TensorRT/samples/common/logging.h:156:14: error: 'Severity' has not been declared
         case Severity::kVERBOSE: return "[V] ";
              ^
/workspace/TensorRT/samples/common/logging.h: At global scope:
/workspace/TensorRT/samples/common/logging.h:201:5: warning: scoped enums only available with -std=c++11 or -std=gnu++11
     enum class TestResult
     ^
/workspace/TensorRT/samples/common/logging.h:227:48: warning: override controls (override/final) only available with -std=c++11 or -std=gnu++11
     void log(Severity severity, const char* msg) override
                                                ^
/workspace/TensorRT/samples/common/logging.h:252:26: error: expected ',' or '...' before '&&' token
         TestAtom(TestAtom&&) = default;
                          ^
/workspace/TensorRT/samples/common/logging.h:252:32: warning: defaulted and deleted functions only available with -std=c++11 or -std=gnu++11
         TestAtom(TestAtom&&) = default;
                                ^
/workspace/TensorRT/samples/common/logging.h:252:32: error: invalid constructor; you probably meant 'Logger::TestAtom (const Logger::TestAtom&)'
/workspace/TensorRT/samples/common/logging.h:192:32: error: 'Severity' is not a class or namespace
     Logger(Severity severity = Severity::kWARNING)
                                ^
/workspace/TensorRT/samples/common/logging.h: In member function 'virtual void Logger::log(nvinfer1::ILogger::Severity, const char*)':
/workspace/TensorRT/samples/common/logging.h:229:56: error: no matching function for call to 'LogStreamConsumer::LogStreamConsumer(nvinfer1::ILogger::Severity&, nvinfer1::ILogger::Severity&)'
         LogStreamConsumer(mReportableSeverity, severity) << "[TRT] " << std::string(msg) << std::endl;
                                                        ^
/workspace/TensorRT/samples/common/logging.h:115:7: note: candidate: LogStreamConsumer::LogStreamConsumer()
 class LogStreamConsumer : protected LogStreamConsumerBase, public std::ostream
       ^
/workspace/TensorRT/samples/common/logging.h:115:7: note:   candidate expects 0 arguments, 2 provided
/workspace/TensorRT/samples/common/logging.h:115:7: note: candidate: LogStreamConsumer::LogStreamConsumer(const LogStreamConsumer&)
/workspace/TensorRT/samples/common/logging.h:115:7: note:   candidate expects 1 argument, 2 provided
/workspace/TensorRT/samples/common/logging.h: In static member function 'static Logger::TestAtom Logger::defineTest(const string&, int, const char* const*)':
/workspace/TensorRT/samples/common/logging.h:296:14: error: 'cmdline' does not name a type
         auto cmdline = genCmdlineString(argc, argv);
              ^
/workspace/TensorRT/samples/common/logging.h:297:33: error: 'cmdline' was not declared in this scope
         return defineTest(name, cmdline);
                                 ^
/workspace/TensorRT/samples/common/logging.h: In static member function 'static void Logger::reportTestStart(Logger::TestAtom&)':
/workspace/TensorRT/samples/common/logging.h:309:36: error: 'TestResult' is not a class or namespace
         reportTestResult(testAtom, TestResult::kRUNNING);
                                    ^
In file included from /usr/include/c++/5/cassert:43:0,
                 from /workspace/TensorRT/samples/common/logging.h:21,
                 from /workspace/TensorRT/samples/common/logger.h:20,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/workspace/TensorRT/samples/common/logging.h: In static member function 'static void Logger::reportTestEnd(const Logger::TestAtom&, Logger::TestResult)':
/workspace/TensorRT/samples/common/logging.h:325:26: error: 'TestResult' is not a class or namespace
         assert(result != TestResult::kRUNNING);
                          ^
In file included from /workspace/TensorRT/samples/common/logger.h:20:0,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/workspace/TensorRT/samples/common/logging.h: In static member function 'static int Logger::reportPass(const Logger::TestAtom&)':
/workspace/TensorRT/samples/common/logging.h:332:33: error: 'TestResult' is not a class or namespace
         reportTestEnd(testAtom, TestResult::kPASSED);
                                 ^
/workspace/TensorRT/samples/common/logging.h:333:16: error: 'EXIT_SUCCESS' was not declared in this scope
         return EXIT_SUCCESS;
                ^
/workspace/TensorRT/samples/common/logging.h: In static member function 'static int Logger::reportFail(const Logger::TestAtom&)':
/workspace/TensorRT/samples/common/logging.h:338:33: error: 'TestResult' is not a class or namespace
         reportTestEnd(testAtom, TestResult::kFAILED);
                                 ^
/workspace/TensorRT/samples/common/logging.h:339:16: error: 'EXIT_FAILURE' was not declared in this scope
         return EXIT_FAILURE;
                ^
/workspace/TensorRT/samples/common/logging.h: In static member function 'static int Logger::reportWaive(const Logger::TestAtom&)':
/workspace/TensorRT/samples/common/logging.h:344:33: error: 'TestResult' is not a class or namespace
         reportTestEnd(testAtom, TestResult::kWAIVED);
                                 ^
/workspace/TensorRT/samples/common/logging.h:345:16: error: 'EXIT_SUCCESS' was not declared in this scope
         return EXIT_SUCCESS;
                ^
/workspace/TensorRT/samples/common/logging.h: In static member function 'static const char* Logger::severityPrefix(nvinfer1::ILogger::Severity)':
/workspace/TensorRT/samples/common/logging.h:366:14: error: 'Severity' is not a class or namespace
         case Severity::kINTERNAL_ERROR: return "[F] ";
              ^
/workspace/TensorRT/samples/common/logging.h:367:14: error: 'Severity' is not a class or namespace
         case Severity::kERROR: return "[E] ";
              ^
/workspace/TensorRT/samples/common/logging.h:368:14: error: 'Severity' is not a class or namespace
         case Severity::kWARNING: return "[W] ";
              ^
/workspace/TensorRT/samples/common/logging.h:369:14: error: 'Severity' is not a class or namespace
         case Severity::kINFO: return "[I] ";
              ^
/workspace/TensorRT/samples/common/logging.h:370:14: error: 'Severity' is not a class or namespace
         case Severity::kVERBOSE: return "[V] ";
              ^
/workspace/TensorRT/samples/common/logging.h: In static member function 'static const char* Logger::testResultString(Logger::TestResult)':
/workspace/TensorRT/samples/common/logging.h:382:14: error: 'TestResult' is not a class or namespace
         case TestResult::kRUNNING: return "RUNNING";
              ^
/workspace/TensorRT/samples/common/logging.h:383:14: error: 'TestResult' is not a class or namespace
         case TestResult::kPASSED: return "PASSED";
              ^
/workspace/TensorRT/samples/common/logging.h:384:14: error: 'TestResult' is not a class or namespace
         case TestResult::kFAILED: return "FAILED";
              ^
/workspace/TensorRT/samples/common/logging.h:385:14: error: 'TestResult' is not a class or namespace
         case TestResult::kWAIVED: return "WAIVED";
              ^
/workspace/TensorRT/samples/common/logging.h: In static member function 'static std::ostream& Logger::severityOstream(nvinfer1::ILogger::Severity)':
/workspace/TensorRT/samples/common/logging.h:395:28: error: 'Severity' is not a class or namespace
         return severity >= Severity::kINFO ? std::cout : std::cerr;
                            ^
/workspace/TensorRT/samples/common/logging.h: In static member function 'static void Logger::reportTestResult(const Logger::TestAtom&, Logger::TestResult)':
/workspace/TensorRT/samples/common/logging.h:403:25: error: 'Severity' is not a class or namespace
         severityOstream(Severity::kINFO) << "&&&& " << testResultString(result) << " " << testAtom.mName << " # "
                         ^
/workspace/TensorRT/samples/common/logging.h: In function 'LogStreamConsumer {anonymous}::LOG_VERBOSE(const Logger&)':
/workspace/TensorRT/samples/common/logging.h:437:62: error: 'Severity' was not declared in this scope
     return LogStreamConsumer(logger.getReportableSeverity(), Severity::kVERBOSE);
                                                              ^
/workspace/TensorRT/samples/common/logging.h: In function 'LogStreamConsumer {anonymous}::LOG_INFO(const Logger&)':
/workspace/TensorRT/samples/common/logging.h:449:62: error: 'Severity' was not declared in this scope
     return LogStreamConsumer(logger.getReportableSeverity(), Severity::kINFO);
                                                              ^
/workspace/TensorRT/samples/common/logging.h: In function 'LogStreamConsumer {anonymous}::LOG_WARN(const Logger&)':
/workspace/TensorRT/samples/common/logging.h:461:62: error: 'Severity' was not declared in this scope
     return LogStreamConsumer(logger.getReportableSeverity(), Severity::kWARNING);
                                                              ^
/workspace/TensorRT/samples/common/logging.h: In function 'LogStreamConsumer {anonymous}::LOG_ERROR(const Logger&)':
/workspace/TensorRT/samples/common/logging.h:473:62: error: 'Severity' was not declared in this scope
     return LogStreamConsumer(logger.getReportableSeverity(), Severity::kERROR);
                                                              ^
/workspace/TensorRT/samples/common/logging.h: In function 'LogStreamConsumer {anonymous}::LOG_FATAL(const Logger&)':
/workspace/TensorRT/samples/common/logging.h:486:62: error: 'Severity' was not declared in this scope
     return LogStreamConsumer(logger.getReportableSeverity(), Severity::kINTERNAL_ERROR);
                                                              ^
/workspace/TensorRT/samples/common/logger.cpp: At global scope:
/workspace/TensorRT/samples/common/logger.cpp:20:15: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
 Logger gLogger{Logger::Severity::kINFO};
               ^
/workspace/TensorRT/samples/common/logger.cpp:20:24: error: 'Logger::Severity' is not a class or namespace
 Logger gLogger{Logger::Severity::kINFO};
                        ^
/workspace/TensorRT/samples/common/logger.cpp:20:39: error: in C++98 'gLogger' must be initialized by constructor, not by '{...}'
 Logger gLogger{Logger::Severity::kINFO};
                                       ^
/workspace/TensorRT/samples/common/logger.cpp:20:39: error: no matching function for call to 'Logger::Logger(<brace-enclosed initializer list>)'
In file included from /workspace/TensorRT/samples/common/logger.h:20:0,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/workspace/TensorRT/samples/common/logging.h:192:5: note: candidate: Logger::Logger(nvinfer1::ILogger::Severity)
     Logger(Severity severity = Severity::kWARNING)
     ^
/workspace/TensorRT/samples/common/logging.h:192:5: note:   conversion of argument 1 would be ill-formed:
/workspace/TensorRT/samples/common/logging.h:189:7: note: candidate: Logger::Logger(const Logger&)
 class Logger : public nvinfer1::ILogger
       ^
/workspace/TensorRT/samples/common/logging.h:189:7: note:   conversion of argument 1 would be ill-formed:
/workspace/TensorRT/samples/common/logger.cpp:21:30: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
 LogStreamConsumer gLogVerbose{LOG_VERBOSE(gLogger)};
                              ^
/workspace/TensorRT/samples/common/logger.cpp:21:51: error: in C++98 'gLogVerbose' must be initialized by constructor, not by '{...}'
 LogStreamConsumer gLogVerbose{LOG_VERBOSE(gLogger)};
                                                   ^
In file included from /usr/include/c++/5/ios:42:0,
                 from /usr/include/c++/5/ostream:38,
                 from /usr/include/c++/5/iostream:39,
                 from /workspace/TensorRT/samples/common/logging.h:22,
                 from /workspace/TensorRT/samples/common/logger.h:20,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/usr/include/c++/5/bits/ios_base.h: In copy constructor 'std::basic_ios<char>::basic_ios(const std::basic_ios<char>&)':
/usr/include/c++/5/bits/ios_base.h:855:5: error: 'std::ios_base::ios_base(const std::ios_base&)' is private
     ios_base(const ios_base&);
     ^
In file included from /usr/include/c++/5/ios:44:0,
                 from /usr/include/c++/5/ostream:38,
                 from /usr/include/c++/5/iostream:39,
                 from /workspace/TensorRT/samples/common/logging.h:22,
                 from /workspace/TensorRT/samples/common/logger.h:20,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/usr/include/c++/5/bits/basic_ios.h:67:11: error: within this context
     class basic_ios : public ios_base
           ^
In file included from /workspace/TensorRT/samples/common/logger.h:20:0,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/workspace/TensorRT/samples/common/logging.h: In copy constructor 'LogStreamConsumer::LogStreamConsumer(const LogStreamConsumer&)':
/workspace/TensorRT/samples/common/logging.h:115:7: note: synthesized method 'std::basic_ios<char>::basic_ios(const std::basic_ios<char>&)' first required here
 class LogStreamConsumer : protected LogStreamConsumerBase, public std::ostream
       ^
In file included from /usr/include/c++/5/ios:43:0,
                 from /usr/include/c++/5/ostream:38,
                 from /usr/include/c++/5/iostream:39,
                 from /workspace/TensorRT/samples/common/logging.h:22,
                 from /workspace/TensorRT/samples/common/logger.h:20,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/usr/include/c++/5/streambuf: In copy constructor 'std::__cxx11::basic_stringbuf<char>::basic_stringbuf(const std::__cxx11::basic_stringbuf<char>&)':
/usr/include/c++/5/streambuf:804:7: error: 'std::basic_streambuf<_CharT, _Traits>::basic_streambuf(const std::basic_streambuf<_CharT, _Traits>&) [with _CharT = char; _Traits = std::char_traits<char>]' is private
       basic_streambuf(const basic_streambuf&);
       ^
In file included from /workspace/TensorRT/samples/common/logging.h:24:0,
                 from /workspace/TensorRT/samples/common/logger.h:20,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/usr/include/c++/5/sstream:65:11: error: within this context
     class basic_stringbuf : public basic_streambuf<_CharT, _Traits>
           ^
In file included from /workspace/TensorRT/samples/common/logger.h:20:0,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/workspace/TensorRT/samples/common/logging.h: In copy constructor 'LogStreamConsumerBuffer::LogStreamConsumerBuffer(const LogStreamConsumerBuffer&)':
/workspace/TensorRT/samples/common/logging.h:29:7: note: synthesized method 'std::__cxx11::basic_stringbuf<char>::basic_stringbuf(const std::__cxx11::basic_stringbuf<char>&)' first required here
 class LogStreamConsumerBuffer : public std::stringbuf
       ^
/workspace/TensorRT/samples/common/logging.h: In copy constructor 'LogStreamConsumerBase::LogStreamConsumerBase(const LogStreamConsumerBase&)':
/workspace/TensorRT/samples/common/logging.h:94:7: note: synthesized method 'LogStreamConsumerBuffer::LogStreamConsumerBuffer(const LogStreamConsumerBuffer&)' first required here
 class LogStreamConsumerBase
       ^
/workspace/TensorRT/samples/common/logging.h: In copy constructor 'LogStreamConsumer::LogStreamConsumer(const LogStreamConsumer&)':
/workspace/TensorRT/samples/common/logging.h:115:7: note: synthesized method 'LogStreamConsumerBase::LogStreamConsumerBase(const LogStreamConsumerBase&)' first required here
 class LogStreamConsumer : protected LogStreamConsumerBase, public std::ostream
       ^
/workspace/TensorRT/samples/common/logger.cpp: At global scope:
/workspace/TensorRT/samples/common/logger.cpp:21:51: note: synthesized method 'LogStreamConsumer::LogStreamConsumer(const LogStreamConsumer&)' first required here
 LogStreamConsumer gLogVerbose{LOG_VERBOSE(gLogger)};
                                                   ^
/workspace/TensorRT/samples/common/logger.cpp:22:27: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
 LogStreamConsumer gLogInfo{LOG_INFO(gLogger)};
                           ^
/workspace/TensorRT/samples/common/logger.cpp:22:45: error: in C++98 'gLogInfo' must be initialized by constructor, not by '{...}'
 LogStreamConsumer gLogInfo{LOG_INFO(gLogger)};
                                             ^
/workspace/TensorRT/samples/common/logger.cpp:23:30: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
 LogStreamConsumer gLogWarning{LOG_WARN(gLogger)};
                              ^
/workspace/TensorRT/samples/common/logger.cpp:23:48: error: in C++98 'gLogWarning' must be initialized by constructor, not by '{...}'
 LogStreamConsumer gLogWarning{LOG_WARN(gLogger)};
                                                ^
/workspace/TensorRT/samples/common/logger.cpp:24:28: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
 LogStreamConsumer gLogError{LOG_ERROR(gLogger)};
                            ^
/workspace/TensorRT/samples/common/logger.cpp:24:47: error: in C++98 'gLogError' must be initialized by constructor, not by '{...}'
 LogStreamConsumer gLogError{LOG_ERROR(gLogger)};
                                               ^
/workspace/TensorRT/samples/common/logger.cpp:25:28: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
 LogStreamConsumer gLogFatal{LOG_FATAL(gLogger)};
                            ^
/workspace/TensorRT/samples/common/logger.cpp:25:47: error: in C++98 'gLogFatal' must be initialized by constructor, not by '{...}'
 LogStreamConsumer gLogFatal{LOG_FATAL(gLogger)};
                                               ^
/workspace/TensorRT/samples/common/logger.cpp: In function 'void setReportableSeverity(nvinfer1::ILogger::Severity)':
/workspace/TensorRT/samples/common/logger.cpp:30:47: error: no matching function for call to 'LogStreamConsumer::setReportableSeverity(nvinfer1::ILogger::Severity&)'
     gLogVerbose.setReportableSeverity(severity);
                                               ^
In file included from /workspace/TensorRT/samples/common/logger.h:20:0,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/workspace/TensorRT/samples/common/logging.h:136:10: note: candidate: void LogStreamConsumer::setReportableSeverity(int)
     void setReportableSeverity(Severity reportableSeverity)
          ^
/workspace/TensorRT/samples/common/logging.h:136:10: note:   no known conversion for argument 1 from 'nvinfer1::ILogger::Severity' to 'int'
/workspace/TensorRT/samples/common/logger.cpp:31:44: error: no matching function for call to 'LogStreamConsumer::setReportableSeverity(nvinfer1::ILogger::Severity&)'
     gLogInfo.setReportableSeverity(severity);
                                            ^
In file included from /workspace/TensorRT/samples/common/logger.h:20:0,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/workspace/TensorRT/samples/common/logging.h:136:10: note: candidate: void LogStreamConsumer::setReportableSeverity(int)
     void setReportableSeverity(Severity reportableSeverity)
          ^
/workspace/TensorRT/samples/common/logging.h:136:10: note:   no known conversion for argument 1 from 'nvinfer1::ILogger::Severity' to 'int'
/workspace/TensorRT/samples/common/logger.cpp:32:47: error: no matching function for call to 'LogStreamConsumer::setReportableSeverity(nvinfer1::ILogger::Severity&)'
     gLogWarning.setReportableSeverity(severity);
                                               ^
In file included from /workspace/TensorRT/samples/common/logger.h:20:0,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/workspace/TensorRT/samples/common/logging.h:136:10: note: candidate: void LogStreamConsumer::setReportableSeverity(int)
     void setReportableSeverity(Severity reportableSeverity)
          ^
/workspace/TensorRT/samples/common/logging.h:136:10: note:   no known conversion for argument 1 from 'nvinfer1::ILogger::Severity' to 'int'
/workspace/TensorRT/samples/common/logger.cpp:33:45: error: no matching function for call to 'LogStreamConsumer::setReportableSeverity(nvinfer1::ILogger::Severity&)'
     gLogError.setReportableSeverity(severity);
                                             ^
In file included from /workspace/TensorRT/samples/common/logger.h:20:0,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/workspace/TensorRT/samples/common/logging.h:136:10: note: candidate: void LogStreamConsumer::setReportableSeverity(int)
     void setReportableSeverity(Severity reportableSeverity)
          ^
/workspace/TensorRT/samples/common/logging.h:136:10: note:   no known conversion for argument 1 from 'nvinfer1::ILogger::Severity' to 'int'
/workspace/TensorRT/samples/common/logger.cpp:34:45: error: no matching function for call to 'LogStreamConsumer::setReportableSeverity(nvinfer1::ILogger::Severity&)'
     gLogFatal.setReportableSeverity(severity);
                                             ^
In file included from /workspace/TensorRT/samples/common/logger.h:20:0,
                 from /workspace/TensorRT/samples/common/logger.cpp:17:
/workspace/TensorRT/samples/common/logging.h:136:10: note: candidate: void LogStreamConsumer::setReportableSeverity(int)
     void setReportableSeverity(Severity reportableSeverity)
          ^
/workspace/TensorRT/samples/common/logging.h:136:10: note:   no known conversion for argument 1 from 'nvinfer1::ILogger::Severity' to 'int'
CMakeFiles/common.dir/build.make:62: recipe for target 'CMakeFiles/common.dir/workspace/TensorRT/samples/common/logger.cpp.o' failed
make[2]: *** [CMakeFiles/common.dir/workspace/TensorRT/samples/common/logger.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: In function 'int bert::type2bytes(nvinfer1::DataType)':
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:67:15: error: 'DataType' is not a class or namespace
     if (dt == DataType::kINT8)
               ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:69:15: error: 'DataType' is not a class or namespace
     if (dt == DataType::kHALF)
               ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: At global scope:
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:86:37: error: 'DataType' is not a class or namespace
 const DataType T2DT<float>::value = DataType::kFLOAT;
                                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:87:35: error: 'DataType' is not a class or namespace
 const DataType T2DT<int>::value = DataType::kINT32;
                                   ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: In function 'void bert::load_row(std::__cxx11::string&, T*&, int&, nvinfer1::Dims&, std::ifstream&)':
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:92:5: error: 'int32_t' was not declared in this scope
     int32_t type, n_dims, dim;
     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:93:34: error: 'type' was not declared in this scope
     input >> name >> std::dec >> type >> n_dims;
                                  ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:93:42: error: 'n_dims' was not declared in this scope
     input >> name >> std::dec >> type >> n_dims;
                                          ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: In function 'void bert::load_weights(const string&, std::map<std::__cxx11::basic_string<char>, nvinfer1::Weights>&)':
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:113:56: error: no matching function for call to 'std::basic_ifstream<char>::basic_ifstream(const string&, const openmode&)'
     std::ifstream input(wts_path, std::ios_base::binary);
                                                        ^
In file included from /workspace/TensorRT/demo/BERT/util/data_utils.cpp:21:0:
/usr/include/c++/5/fstream:495:7: note: candidate: std::basic_ifstream<_CharT, _Traits>::basic_ifstream(const char*, std::ios_base::openmode) [with _CharT = char; _Traits = std::char_traits<char>; std::ios_base::openmode = std::_Ios_Openmode]
       basic_ifstream(const char* __s, ios_base::openmode __mode = ios_base::in)
       ^
/usr/include/c++/5/fstream:495:7: note:   no known conversion for argument 1 from 'const string {aka const std::__cxx11::basic_string<char>}' to 'const char*'
/usr/include/c++/5/fstream:481:7: note: candidate: std::basic_ifstream<_CharT, _Traits>::basic_ifstream() [with _CharT = char; _Traits = std::char_traits<char>]
       basic_ifstream() : __istream_type(), _M_filebuf()
       ^
/usr/include/c++/5/fstream:481:7: note:   candidate expects 0 arguments, 2 provided
/usr/include/c++/5/fstream:455:11: note: candidate: std::basic_ifstream<char>::basic_ifstream(const std::basic_ifstream<char>&)
     class basic_ifstream : public basic_istream<_CharT, _Traits>
           ^
/usr/include/c++/5/fstream:455:11: note:   candidate expects 1 argument, 2 provided
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:114:5: error: 'int32_t' was not declared in this scope
     int32_t count;
     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:115:14: error: 'count' was not declared in this scope
     input >> count;
              ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:122:23: error: 'nullptr' was not declared in this scope
         float* data = nullptr;
                       ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:137:20: error: 'DataType' is not a class or namespace
         tmp.type = DataType::kFLOAT;
                    ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:139:13: error: 'class nvinfer1::Weights' has no member named 'count'
         tmp.count = param_size;
             ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:146:16: error: ISO C++ forbids declaration of 'kv' with no type [-fpermissive]
     for (auto& kv : weight_dict)
                ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:146:21: warning: range-based 'for' loops only available with -std=c++11 or -std=gnu++11
     for (auto& kv : weight_dict)
                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:149:22: error: request for member 'first' in 'kv', which is of non-class type 'int'
         int pos = kv.first.find(BQ); // starting pos of BQ
                      ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:152:34: error: request for member 'second' in 'kv', which is of non-class type 'int'
             int hidden_size = kv.second.count;
                                  ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:153:37: error: request for member 'first' in 'kv', which is of non-class type 'int'
             std::string prefix = kv.first.substr(0, pos);
                                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:155:37: error: request for member 'second' in 'kv', which is of non-class type 'int'
             const Weights& Bq_ = kv.second;
                                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:164:69: error: 'malloc' was not declared in this scope
             float* Wall_ptr = (float*) malloc(wcount * sizeof(float));
                                                                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:177:43: error: 'DataType' is not a class or namespace
             weight_dict[prefix + WQKV] = {DataType::kFLOAT, Wall_ptr, wcount};
                                           ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:177:77: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
             weight_dict[prefix + WQKV] = {DataType::kFLOAT, Wall_ptr, wcount};
                                                                             ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:177:40: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
             weight_dict[prefix + WQKV] = {DataType::kFLOAT, Wall_ptr, wcount};
                                        ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:177:40: error: no match for 'operator=' (operand types are 'std::map<std::__cxx11::basic_string<char>, nvinfer1::Weights>::mapped_type {aka nvinfer1::Weights}' and '<brace-enclosed initializer list>')
In file included from /workspace/TensorRT/demo/BERT/util/data_utils.hpp:19:0,
                 from /workspace/TensorRT/demo/BERT/util/data_utils.cpp:17:
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:483:7: note: candidate: nvinfer1::Weights& nvinfer1::Weights::operator=(const nvinfer1::Weights&)
 class Weights
       ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:483:7: note:   no known conversion for argument 1 from '<brace-enclosed initializer list>' to 'const nvinfer1::Weights&'
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:178:43: error: 'DataType' is not a class or namespace
             weight_dict[prefix + BQKV] = {DataType::kFLOAT, Ball_ptr, bcount};
                                           ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:178:77: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
             weight_dict[prefix + BQKV] = {DataType::kFLOAT, Ball_ptr, bcount};
                                                                             ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:178:40: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
             weight_dict[prefix + BQKV] = {DataType::kFLOAT, Ball_ptr, bcount};
                                        ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:178:40: error: no match for 'operator=' (operand types are 'std::map<std::__cxx11::basic_string<char>, nvinfer1::Weights>::mapped_type {aka nvinfer1::Weights}' and '<brace-enclosed initializer list>')
In file included from /workspace/TensorRT/demo/BERT/util/data_utils.hpp:19:0,
                 from /workspace/TensorRT/demo/BERT/util/data_utils.cpp:17:
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:483:7: note: candidate: nvinfer1::Weights& nvinfer1::Weights::operator=(const nvinfer1::Weights&)
 class Weights
       ^
/workspace/TensorRT/demo/BERT/../../include/NvInfer.h:483:7: note:   no known conversion for argument 1 from '<brace-enclosed initializer list>' to 'const nvinfer1::Weights&'
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: In function 'void bert::load_inputs(const string&, int&, int&, std::vector<nvinfer1::Weights>&, std::vector<nvinfer1::Weights>&, std::vector<nvinfer1::Weights>&, std::vector<nvinfer1::Dims>&)':
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:187:56: error: no matching function for call to 'std::basic_ifstream<char>::basic_ifstream(const string&, const openmode&)'
     std::ifstream input(wts_path, std::ios_base::binary);
                                                        ^
In file included from /workspace/TensorRT/demo/BERT/util/data_utils.cpp:21:0:
/usr/include/c++/5/fstream:495:7: note: candidate: std::basic_ifstream<_CharT, _Traits>::basic_ifstream(const char*, std::ios_base::openmode) [with _CharT = char; _Traits = std::char_traits<char>; std::ios_base::openmode = std::_Ios_Openmode]
       basic_ifstream(const char* __s, ios_base::openmode __mode = ios_base::in)
       ^
/usr/include/c++/5/fstream:495:7: note:   no known conversion for argument 1 from 'const string {aka const std::__cxx11::basic_string<char>}' to 'const char*'
/usr/include/c++/5/fstream:481:7: note: candidate: std::basic_ifstream<_CharT, _Traits>::basic_ifstream() [with _CharT = char; _Traits = std::char_traits<char>]
       basic_ifstream() : __istream_type(), _M_filebuf()
       ^
/usr/include/c++/5/fstream:481:7: note:   candidate expects 0 arguments, 2 provided
/usr/include/c++/5/fstream:455:11: note: candidate: std::basic_ifstream<char>::basic_ifstream(const std::basic_ifstream<char>&)
     class basic_ifstream : public basic_istream<_CharT, _Traits>
           ^
/usr/include/c++/5/fstream:455:11: note:   candidate expects 1 argument, 2 provided
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:188:5: error: 'int32_t' was not declared in this scope
     int32_t count;
     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:189:14: error: 'count' was not declared in this scope
     input >> count;
              ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:199:21: error: 'nullptr' was not declared in this scope
         int* data = nullptr;
                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:214:20: error: 'DataType' is not a class or namespace
         tmp.type = DataType::kINT32;
                    ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:216:13: error: 'class nvinfer1::Weights' has no member named 'count'
         tmp.count = param_size;
             ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: In function 'void bert::infer_network_sizes(const std::map<std::__cxx11::basic_string<char>, nvinfer1::Weights>&, int&, int&, int&)':
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:243:16: error: ISO C++ forbids declaration of 'kv' with no type [-fpermissive]
     for (auto& kv : init_dict)
                ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:243:21: warning: range-based 'for' loops only available with -std=c++11 or -std=gnu++11
     for (auto& kv : init_dict)
                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:245:16: error: request for member 'first' in 'kv', which is of non-class type 'int'
         if (kv.first.find("beta") != std::string::npos)
                ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:247:30: error: request for member 'second' in 'kv', which is of non-class type 'int'
             hidden_size = kv.second.count;
                              ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:251:16: error: ISO C++ forbids declaration of 'kv' with no type [-fpermissive]
     for (auto& kv : init_dict)
                ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:251:21: warning: range-based 'for' loops only available with -std=c++11 or -std=gnu++11
     for (auto& kv : init_dict)
                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:253:16: error: request for member 'first' in 'kv', which is of non-class type 'int'
         if (kv.first.find("intermediate_dense_bias") != std::string::npos)
                ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:255:36: error: request for member 'second' in 'kv', which is of non-class type 'int'
             intermediate_size = kv.second.count;
                                    ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:260:16: error: ISO C++ forbids declaration of 'kv' with no type [-fpermissive]
     for (auto& kv : init_dict)
                ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:260:21: warning: range-based 'for' loops only available with -std=c++11 or -std=gnu++11
     for (auto& kv : init_dict)
                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:262:16: error: request for member 'first' in 'kv', which is of non-class type 'int'
         if (kv.first[0] == 'l')
                ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:264:34: error: request for member 'first' in 'kv', which is of non-class type 'int'
             std::string tok = kv.first.substr(1, kv.first.find("_") - 1);
                                  ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:264:53: error: request for member 'first' in 'kv', which is of non-class type 'int'
             std::string tok = kv.first.substr(1, kv.first.find("_") - 1);
                                                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:265:25: error: 'stoi' is not a member of 'std'
             int layer = std::stoi(tok);
                         ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:267:29: error: request for member 'first' in 'kv', which is of non-class type 'int'
             std::cout << kv.first << " " << tok << " " << layer << std::endl;
                             ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: In function 'void bert::alloc_bindings(const nvinfer1::ICudaEngine&, std::vector<void*>&, int, const std::map<std::__cxx11::basic_string<char>, nvinfer1::Weights>&, int)':
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:279:16: error: ISO C++ forbids declaration of 'kv' with no type [-fpermissive]
     for (auto& kv : dict)
                ^
CMakeFiles/bert_plugins.dir/build.make:101: recipe for target 'CMakeFiles/bert_plugins.dir/plugins/emb_layer_norm_plugin.cu.o' failed
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:279:21: warning: range-based 'for' loops only available with -std=c++11 or -std=gnu++11
     for (auto& kv : dict)
                     ^
make[2]: *** [CMakeFiles/bert_plugins.dir/plugins/emb_layer_norm_plugin.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:281:9: error: 'tie' is not a member of 'std'
         std::tie(name, W) = kv;
         ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:287:24: error: 'class nvinfer1::Weights' has no member named 'count'
         int outlen = W.count * type2bytes(W.type);
                        ^
CMakeFiles/bert_plugins.dir/build.make:75: recipe for target 'CMakeFiles/bert_plugins.dir/plugins/skip_layer_norm_plugin.cu.o' failed
make[2]: *** [CMakeFiles/bert_plugins.dir/plugins/skip_layer_norm_plugin.cu.o] Error 1
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: At global scope:
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:295:50: error: '>>' should be '> >' within a nested template argument list
     const std::map<std::string, std::vector<float>>& dict, int verbose)
                                                  ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: In function 'void bert::alloc_bindings(const nvinfer1::ICudaEngine&, std::vector<void*>&, int, const std::map<std::__cxx11::basic_string<char>, std::vector<float> >&, int)':
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:298:16: error: ISO C++ forbids declaration of 'kv' with no type [-fpermissive]
     for (auto& kv : dict)
                ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:298:21: warning: range-based 'for' loops only available with -std=c++11 or -std=gnu++11
     for (auto& kv : dict)
                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:300:51: error: request for member 'first' in 'kv', which is of non-class type 'int'
         const int idx = engine.getBindingIndex(kv.first.c_str());
                                                   ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:302:49: error: request for member 'first' in 'kv', which is of non-class type 'int'
             printf(" idx %d name %s\n", idx, kv.first.c_str());
                                                 ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:305:41: error: request for member 'second' in 'kv', which is of non-class type 'int'
         int outlen = sizeof(float) * kv.second.size();
                                         ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: In function 'void bert::upload(const nvinfer1::ICudaEngine&, std::vector<void*>&, int, const std::map<std::__cxx11::basic_string<char>, nvinfer1::Weights>&, cudaStream_t)':
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:319:15: error: 'kv' does not name a type
     for (auto kv : dict)
               ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:326:1: error: expected ';' before '}' token
 }
 ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:326:1: error: expected primary-expression before '}' token
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:326:1: error: expected ';' before '}' token
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:326:1: error: expected primary-expression before '}' token
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:326:1: error: expected ')' before '}' token
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:326:1: error: expected primary-expression before '}' token
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: At global scope:
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:329:44: error: '>>' should be '> >' within a nested template argument list
     std::map<std::string, std::vector<float>>& dict, cudaStream_t stream)
                                            ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp: In function 'void bert::download(const nvinfer1::ICudaEngine&, std::vector<void*>&, int, std::map<std::__cxx11::basic_string<char>, std::vector<float> >&, cudaStream_t)':
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:331:16: error: ISO C++ forbids declaration of 'kv' with no type [-fpermissive]
     for (auto& kv : dict)
                ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:331:21: warning: range-based 'for' loops only available with -std=c++11 or -std=gnu++11
     for (auto& kv : dict)
                     ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:333:51: error: request for member 'first' in 'kv', which is of non-class type 'int'
         const int idx = engine.getBindingIndex(kv.first.c_str());
                                                   ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:334:22: error: request for member 'second' in 'kv', which is of non-class type 'int'
         int len = kv.second.size() * sizeof(float);
                      ^
/workspace/TensorRT/demo/BERT/util/data_utils.cpp:335:29: error: request for member 'second' in 'kv', which is of non-class type 'int'
         cudaMemcpyAsync(&kv.second[0], buffers[idx], batchSize * len, cudaMemcpyDeviceToHost, stream);
                             ^
CMakeFiles/bert_plugins.dir/build.make:88: recipe for target 'CMakeFiles/bert_plugins.dir/plugins/qkv2context_plugin.cu.o' failed
make[2]: *** [CMakeFiles/bert_plugins.dir/plugins/qkv2context_plugin.cu.o] Error 1
CMakeFiles/bert_plugins.dir/build.make:62: recipe for target 'CMakeFiles/bert_plugins.dir/plugins/gelu_plugin.cu.o' failed
make[2]: *** [CMakeFiles/bert_plugins.dir/plugins/gelu_plugin.cu.o] Error 1
CMakeFiles/Makefile2:147: recipe for target 'CMakeFiles/bert_plugins.dir/all' failed
make[1]: *** [CMakeFiles/bert_plugins.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
CMakeFiles/common.dir/build.make:75: recipe for target 'CMakeFiles/common.dir/util/data_utils.cpp.o' failed
make[2]: *** [CMakeFiles/common.dir/util/data_utils.cpp.o] Error 1
CMakeFiles/Makefile2:110: recipe for target 'CMakeFiles/common.dir/all' failed
make[1]: *** [CMakeFiles/common.dir/all] Error 2

Rules about dynamic range setting and quantization

We are trying to figure out how TensorRT actually performs quantization, so we built some simple models in PyTorch, converted them to .onnx, and ran tests. We set dynamic ranges using statistics from our own tools, and found some confusing behavior:

  1. For some tensors, the ranges we set have no effect. For example, in a Conv+Relu combination, the Conv output tensor does not appear to need a range (different ranges produce the same outputs).
  2. Some reshape (shuffle) layers require ranges, which seems unreasonable, while others do not, and we cannot tell the difference between them.
  3. Any tensor marked as a network output is not quantized (nor is the layer that produces it).
  4. A fully connected layer in the .onnx file is converted with extra shuffle layers in TensorRT.
  5. Are the INT8 ranges for a layer's weights simply set to the maximum values in the weights? Our experiments suggest it may not be that simple in TensorRT. And what about the bias?

We would like to know whether the observations above are correct, and we want to know the specific method of range setting.
We use TensorRT 5.1.2.2 and run sample_int8_api for testing.
Thank you very much!
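For reference, a minimal sketch of how per-tensor ranges are typically applied through the Python API, assuming bindings that expose ITensor.set_dynamic_range; the network object and the ranges table are placeholders:

def apply_ranges(network, ranges):
    # 'ranges' maps tensor name -> abs-max value measured offline (placeholder input).
    # TensorRT INT8 uses symmetric ranges, so (-r, r) is passed.
    tensors = [network.get_input(i) for i in range(network.num_inputs)]
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        tensors += [layer.get_output(j) for j in range(layer.num_outputs)]
    for t in tensors:
        if t.name in ranges:
            r = ranges[t.name]
            t.set_dynamic_range(-r, r)

Used together with builder.int8_mode = True and no calibrator, these ranges are what the builder quantizes with. Note that layer fusion can remove intermediate tensors entirely (e.g. a Conv output consumed only by a ReLU), which may explain observation 1 above.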

Different results for INT8 per run

A weird thing:

After INT8 calibration, the calibration table is cached.

Then I repeatedly loaded the calibration table and ran INT8 inference, but got different outputs each time, as shown in the figure.

Does anyone have the same issue? What may be causing it?


How to assign each output of a plugin to different nodes

The NvFasterRCNNPlugin has two outputs: rois and pooled features. The TensorFlow nodes can be replaced by the NvFasterRCNNPlugin using dynamic_graph.collapse_namespaces. However, the UFF parser always assigns outputs[0] of the plugin to both of the next two nodes. I cannot figure out how to assign outputs[0] to the node in branch1 and outputs[1] to the node in branch2.
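One workaround sketch, offered tentatively: graphsurgeon nodes are plain TensorFlow NodeDef protos, so after collapse_namespaces the branch2 consumer can be rewritten by hand to reference the plugin's second output using TensorFlow's node:index convention. Whether the UFF converter honors the :1 suffix is exactly the open question here, and all node and op names below are hypothetical:

import graphsurgeon as gs

dynamic_graph = gs.DynamicGraph("frozen_model.pb")                 # hypothetical path
rproi = gs.create_plugin_node(name="RPROI", op="RPROIFused_TRT")   # hypothetical op
dynamic_graph.collapse_namespaces({"proposal_and_pool": rproi})

# Point the branch2 consumer at the plugin's second output explicitly.
for node in dynamic_graph.find_nodes_by_name("branch2_head"):      # hypothetical node
    node.input[:] = ["RPROI:1" if i.split(":")[0] == "RPROI" else i
                     for i in node.input]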

Details about Int8

Hi, how can we get details about the INT8 implementation or the quantization scheme?

convert pytorch model to tensorrt

First, I converted the PyTorch model resnet50 to ONNX, which runs inference correctly. Then I converted the ONNX file to a TRT file, but when it ran engine = builder.build_cuda_engine(network), I got a None engine and the message [TensorRT] ERROR: Network must have at least one output. Can you tell me what causes this error?
Any response is appreciated. Thank you!
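For what it's worth, this error usually means that parsing silently failed or that no tensor was marked as a network output before building. A minimal sketch of both checks; the model path and output-layer choice are placeholders:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("resnet50.onnx", "rb") as f:           # placeholder path
        if not parser.parse(f.read()):               # parse() returns False on failure
            for i in range(parser.num_errors):
                print(parser.get_error(i))
    # Mark at least one output, otherwise build_cuda_engine() returns None.
    last = network.get_layer(network.num_layers - 1)
    network.mark_output(last.get_output(0))
    engine = builder.build_cuda_engine(network)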

can not create plugin namespace

I create a plugin named "testPlugin" and use REGISTER_TENSORRT_PLUGIN to register the testPluginCreator class.
If the creator's namespace is set to a non-empty string, I get a nullptr from getPluginCreator("testPlugin", "1").
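A note in case the lookup rather than the registration is the problem: getPluginCreator also takes the namespace as a third argument, and it must match the namespace the creator was registered under. A minimal sketch through the Python bindings; the "my_ns" namespace is hypothetical:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")       # registers the built-in creators
registry = trt.get_plugin_registry()

# The namespace defaults to ""; if the creator reports "my_ns", the same
# string has to be passed here, or the lookup returns None.
creator = registry.get_plugin_creator("testPlugin", "1", "my_ns")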

How does the tensorrt.py script calibrate the graph without the calibration dataset?

Hi all, I would like to know how the tensorrt.py script calibrates the graph without the calibration dataset.

def get_trt_graph_from_calib(graph_name, converter, data, input_node, output_node,
                             output_dir, num_loops=100):
  """Convert a TensorRT graph used for calibration to an inference graph."""
  converter.convert()
  def input_fn():
    iterator = get_iterator(data)
    return {input_node: iterator.get_next()}
  trt_graph = converter.calibrate(
    fetch_names=[output_node],
    num_runs=num_loops,
    input_map_fn=input_fn)
  write_graph_to_file(graph_name, trt_graph, output_dir)
  return trt_graph

sample_ssd issue

When sample_ssd is run on an x86 system, the error below is shown:

[E] [TRT] mbox_loc: all concat input tensors must have the same dimensions except on the concatenation axis
[E] [TRT] mbox_conf: all concat input tensors must have the same dimensions except on the concatenation axis
Caffe Parser: Invalid axis in softmax layer - Cannot perform softmax along batch size dimension and expects NCHW input. Negative axis is not supported in TensorRT, please use positive axis indexing
error parsing layer type Softmax index 98
Segmentation fault (core dumped)

I modified the prototxt as below, following the guide document:

layer {
  name: "mbox_conf_reshape"
  type: "Reshape"
  bottom: "mbox_conf"
  top: "mbox_conf_reshape"
  reshape_param {
    shape {
      dim: 0
      dim: -1
      dim: 1
      dim: 1
    }
  }
}
How can I resolve this issue?
Thank you!

About Determinism of TensorRT

Hi,
I'm using TensorRT FP16 precision mode to optimize my deep learning model, which takes an image input and outputs a steering angle. While testing my model, I have observed that the FPS (frames per second) of TensorRT inference differs for the same input. For example, when I run the model at time t for input A, the inference FPS is X; at time t+1, the FPS for input A is Y.

My observation from these results is that the TensorRT inference engine is non-deterministic when FP16 precision mode is used.

I started to think that the source of the non-determinism is floating-point operations when I saw this comment about CUDA:

"If your code uses floating-point atomics, results may differ from run to run because floating-point operations are generally not associative, and the order in which data enters a computation (e.g. a sum) is non-deterministic when atomics are used."

Does the type of precision (FP16, FP32, INT8) affect the determinism of TensorRT?
Why do the FPS values vary even though the computations are the same?

Do you have any thoughts?

Best regards.

NOTES:

  • Hardware : Jetson TX2
  • I expect the same FPS value on every execution.
  • I measure FPS like this:
clock_t beginExecuteEngine = clock();
context->execute(kBatchSize,bindings);
double deltaTimeExecuteEngine = double( clock() - beginExecuteEngine) /double(CLOCKS_PER_SEC);

cuDNN error occurs when I run the examples in TRT

Hello,

&&&& RUNNING TensorRT.sample_uff_mnist # ./sample_uff_mnist
[I] ../../../data/mnist/lenet5.uff
[E] [TRT] cuda/cudaConvolutionLayer.cpp (238) - Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)
[E] [TRT] cuda/cudaConvolutionLayer.cpp (238) - Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)
[E] Unable to create engine
[E] Model load failed
&&&& FAILED TensorRT.sample_uff_mnist # ./sample_uff_mnist

A cuDNN error happens, as shown above, when I run the examples on a Tesla T4 device.
GPU type - Tesla T4
CUDA version - 9.0, V9.0.176
CUDNN version - 7.5.0
TRT version - 5.1.5.0

What is happening?

Thanks

How can I get a PyTorch tensor from GPU memory without copying?

I want to speed up part of Faster R-CNN FPN, namely the feature-map extractor; the feature map is large. I get the output of TensorRT as a mem_alloc object, but I need a PyTorch tensor object. When I convert the mem_alloc object to a PyTorch tensor, too much time is spent on the memcpy from GPU to CPU. How can I convert a cuda.mem_alloc object to a PyTorch tensor object without copying?

My code:

binding = [int(d_input), int(d_output[0]), int(d_output[1]), int(d_output[2]), int(d_output[3])]
cuda.memcpy_htod_async(d_input, input_data_tensor.data.cpu().numpy().astype(NPDTYPE), stream)
context.execute(1, binding)
cuda.memcpy_dtoh_async(output1, d_output[0], stream)
cuda.memcpy_dtoh_async(output2, d_output[1], stream)
cuda.memcpy_dtoh_async(output3, d_output[2], stream)
cuda.memcpy_dtoh_async(output4, d_output[3], stream)
stream.synchronize()

ou1 = torch.tensor(output1, device="cuda")
ou2 = torch.tensor(output2, device="cuda")
ou3 = torch.tensor(output3, device="cuda")
ou4 = torch.tensor(output4, device="cuda") 
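One approach that avoids the device-to-host round trip entirely, assuming TensorRT and PyTorch share the same CUDA context: pre-allocate the outputs as CUDA PyTorch tensors and hand their raw device pointers to TensorRT as bindings, so the results land directly in memory PyTorch already owns. A minimal sketch; the shape is a placeholder:

import torch

# Pre-allocate the output on the GPU; PyTorch owns this memory.
out1 = torch.empty((1, 256, 200, 304), dtype=torch.float32, device="cuda")  # placeholder shape

inp = input_data_tensor.contiguous().cuda()

# data_ptr() exposes the raw device address, which is usable as a binding.
bindings = [inp.data_ptr(), out1.data_ptr()]
context.execute(1, bindings)
torch.cuda.synchronize()
# out1 now holds the TensorRT output with no host copy.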

sample_ssd nms plugin issue

When the code runs the line below, an error occurs.

bool status = context->execute(mParams.batchSize, buffers.getDeviceBindings().data());

The error message is below:

#assertion/home/suhyung/Downloads/TensorRT/plugin/nmsPlugin/nmsPlugin.cpp,118

I modified the detection_out layer as below:

layer {
  name: "detection_out"
  type: "DetectionOutput"
  bottom: "mbox_loc"
  bottom: "mbox_conf_flatten"
  bottom: "mbox_priorbox"
  top: "keep_count"  # add output blob by suhyung
  top: "detection_out"
  include {
    phase: TEST
  }
  detection_output_param {
    num_classes: 21
    share_location: true
    background_label_id: 0
    nms_param {
      nms_threshold: 0.45
      top_k: 400
    }
    save_output_param {
      label_map_file: "data/VOC0712/labelmap_voc.prototxt"
    }
    code_type: CENTER_SIZE
    keep_top_k: 200
    confidence_threshold: 0.01
  }
}

Did I do something wrong?
Thank you

Error creating the nvFasterRCNNPlugin

In the code of RPROIPluginCreator, to correctly read the anchorsScales parameter, anchorsScaleCount must be read first. How can the order of reading parameters be ensured when parsing models?

if (!strcmp(attrName, "anchorsRatios"))
{
    ASSERT(fields[i].type == PluginFieldType::kFLOAT32);
    anchorsRatios.reserve(params.anchorsRatioCount);
    const float* ratios = static_cast<const float*>(fields[i].data);
    for (int j = 0; j < params.anchorsRatioCount; ++j)
    {
        anchorsRatios.push_back(*ratios);
        ratios++;
    }
}

onnx does not contain a CMakeLists.txt file

Configure output:

CMake Error at parsers/CMakeLists.txt:42 (add_subdirectory):
  The source directory

    /home/tom/projects/TensorRT/parsers/onnx

  does not contain a CMakeLists.txt file.

Maybe this is supposed to be in an if (NVINTERNAL OR NVPARTNER) block?

Unpooling and Pooling with indices.

Hi there, any timelines or updates on unpooling? And also pooling with indices?
Pooling with indices and unpooling have been in ONNX since 1.3, and there are many segmentation models that could benefit from them.

I plan to look into it, and now that TRT is open source, are there any similar layers I can check as samples? I think the interesting part would be the storage of those indices. Any help is appreciated!

Error in `python': malloc(): memory corruption: 0x000000002de41180

Hi @rajeevsrao, I used TensorRT to perform INT8 inference and hit a memory-corruption error in the engine-creation step. The code is as follows:

engine = trt.lite.Engine(framework="c1",
                             deployfile=prototxt_path,
                             modelfile=caffemodel_path,
                             max_batch_size=1,
                             max_workspace_size=14737418240,
                             input_nodes={"image": (CHANNEL, HEIGHT, WIDTH)},
                             output_nodes=["net_output"],
                             preprocessors={"image": sub_mean_chw},
                             data_type=trt.infer.DataType.INT8,
                             calibrator=int8_calibrator,
                             logger_severity=trt.infer.LogSeverity.INFO)

The intermediate outputs are as follows:

[TensorRT] INFO: Detecting Framework
[TensorRT] INFO: Parsing Model from caffe
......
......
[TensorRT] INFO: Data initialization and engine generation completed in 0.910275 seconds.
[TensorRT] INFO: Calculating Maxima

Then, performing calibration:

[TensorRT] INFO: Calibrating with batch 0
......
......
[TensorRT] INFO: Calibration completed in 74.9203 seconds.
[TensorRT] INFO: 
[TensorRT] INFO: --------------- Timing <reformat>(9)
[TensorRT] INFO: Tactic 0 time 0.02048
[TensorRT] INFO: 
[TensorRT] INFO: --------------- Timing conv1_1 + relu1_1(3)
[TensorRT] INFO: Tactic 0 time 0.231424
[TensorRT] INFO: Tactic 1 time 0.381952
......
......
[TensorRT] INFO: --------------- Timing conv5_5 (14)
[TensorRT] INFO: Tactic 1079542932488803809 time 0.036864
[TensorRT] INFO: Tactic 1413988017979900355 time 0.03072
[TensorRT] INFO: Tactic 6141827563175036240 time 0.03072
[TensorRT] INFO: Tactic 6668819484868377063 time 0.033792
[TensorRT] INFO: Tactic 8510081743513226211 time 0.033792
[TensorRT] INFO: Tactic 8813960566256626552 time 0.034816
[TensorRT] INFO: Tactic -8873261107183810632 time 0.034816
[TensorRT] INFO: Tactic -6305152426604635505 time 0.036864
[TensorRT] INFO: Tactic -362002645878708127 time 0.031744
[TensorRT] INFO: 
[TensorRT] INFO: --------------- Timing conv5_5 (1)
[TensorRT] INFO: --------------- Chose 14 (1413988017979900355)
[TensorRT] INFO: 
[TensorRT] INFO: --------------- Timing conv5_5 (3)
[TensorRT] INFO: Tactic 0 time 0.033792
[TensorRT] INFO: 
[TensorRT] INFO: --------------- Timing conv5_5 (14)
*** Error in `python': malloc(): memory corruption: 0x000000003ad7aa30 ***
======= Backtrace: =========
......
......
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]
Aborted (core dumped)

The information above shows that engine generation completed, and the calibration step also finished. The memory corruption happens while the builder is timing one of the network layers. Could you tell me what the reason might be? Thanks in advance.

How do I implement R-FCN in TensorRT

As far as I know, TensorRT provides a sample of Faster R-CNN, in which the Proposal and RoI Pooling layers are combined into one plugin. I want to ask how to use the proposal layer without RoI Pooling. How can I get the output of the proposal layer? Do I need to fuse the proposal and PSRoiPooling layers? If so, how should I do it?
Any advice is appreciated. Thank you!

How to make large batch inferences

Hello, I have a YOLOv3 TRT model and a lot of test images. How can I feed multiple images at a time instead of a single image? What should I do?
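For reference, a minimal sketch of batched inference with the implicit-batch API of that era: build the engine with a sufficient max_batch_size, stack the preprocessed images into one contiguous array, and pass the batch size to execute. The names context, d_outputs, and preprocessed_images are placeholders:

import numpy as np
import pycuda.driver as cuda

batch_size = 8                                        # must be <= engine.max_batch_size
batch = np.stack(preprocessed_images[:batch_size])    # shape (8, C, H, W), placeholder
batch = np.ascontiguousarray(batch, dtype=np.float32)

d_input = cuda.mem_alloc(batch.nbytes)
cuda.memcpy_htod(d_input, batch)

# Each output buffer must likewise be sized for batch_size * per-image volume.
context.execute(batch_size, [int(d_input)] + [int(p) for p in d_outputs])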

Build error TENSORRT_LIBRARY_INFER not found

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
TENSORRT_LIBRARY_INFER
    linked by target "nvonnxparser_static" in directory /media/fagangjin/wd/permanent/software/source_codes/dl/TensorRT/parsers/onnx
    linked by target "nvonnxparser" in directory /media/fagangjin/wd/permanent/software/source_codes/dl/TensorRT/parsers/onnx
    linked by target "nvonnxparser_runtime" in directory /media/fagangjin/wd/permanent/software/source_codes/dl/TensorRT/parsers/onnx
    linked by target "nvonnxparser_plugin" in directory /media/fagangjin/wd/permanent/software/source_codes/dl/TensorRT/parsers/onnx
    linked by target "nvonnxparser_runtime_static" in directory /media/fagangjin/wd/permanent/software/source_codes/dl/TensorRT/parsers/onnx
TENSORRT_LIBRARY_INFER_PLUGIN

UFF Parser Error when building TRT Engine for SSD_mobilenet_v2

Env:

  • Xavier
  • Jetpack 4.2

Reproduce:

Error message:
[TensorRT] ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time

Model:
https://drive.google.com/open?id=14r1d8vq7NmnmdW3IUBdhOnPQvYTJAxtm

Error parsing faster_rcnn_test_iplugin.prototxt when running sampleFasterRCNN.

[I] Begin parsing model...
[libprotobuf ERROR E:\Perforce\rboissel_devdt_windows\sw\gpgpu\MachineLearning\DIT\dev\nvmake\externals\protobuf\3.0.0\src\google\protobuf\text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 483:21: Message type "ditcaffe.LayerParameter" has no field named "roi_pooling_param".
[E] [TRT] CaffeParser: Could not parse deploy file
[I] End parsing model...

I downloaded TensorRT-5.1.5.0 GA for Windows from https://developer.nvidia.com/nvidia-tensorrt-5x-download. sampleMNIST runs successfully, but sample_fasterRCNN fails. The error is strange, because my computer does not have an E: drive and my protobuf is v3.8.0. When comparing it with TensorRT OSS, I found that they are very different. Also, how do I build TensorRT OSS on Windows 10?

Losing layers while parsing an .onnx file

I have a weird problem: I use trt.OnnxParser(network, TRT_LOGGER) to parse my well-working ONNX file (a RetinaNet object-detection model). The parser seems to skip the whole regression part, so the produced engine only has the classification output, yet the process finishes without any error. Did anyone have the same problem? What should I do?
My processing code:

def get_engine(onnx_file_path, engine_file_path=""):

    def build_engine():

        with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network,TRT_LOGGER) as parser:
            builder.max_workspace_size = 1 << 30

            with open(onnx_file_path, 'rb') as model:
                parser.parse(model.read())

            print(network.mark_output(network.get_layer(network.num_layers - 1).get_output(0)))
            engine = builder.build_cuda_engine(network)

            with open(engine_file_path, "wb") as f:
                f.write(engine.serialize())
            return engine

    if os.path.exists(engine_file_path):
        with open(engine_file_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
            return runtime.deserialize_cuda_engine(f.read())
    else:
        return build_engine()

The info of the produced engine:

for binding in engine:
    print(binding)

————————————————

0
758

Some doubts about buffers.h

Hello,
https://github.com/NVIDIA/TensorRT/blob/release/5.1/samples/common/buffers.h
line 155 uses malloc() for host memory, and the example
https://github.com/NVIDIA/TensorRT/blob/release/5.1/samples/opensource/sampleMNIST/sampleMNIST.cpp
line 285 uses buffers.copyInputToDeviceAsync(stream), which calls buffers.h line 377:
CHECK(cudaMemcpyAsync(dstPtr, srcPtr, byteSize, memcpyType, stream));
My questions are:
Is malloc() compatible with cudaMemcpyAsync()?
Would cudaHostAlloc() or cudaMallocHost() be better?
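On the general point: cudaMemcpyAsync does accept pageable (malloc'd) host memory, but the copy then degrades to an effectively synchronous one; only page-locked memory from cudaHostAlloc()/cudaMallocHost() can truly overlap with work in the stream. The same distinction is visible from pycuda, as a rough illustration (the buffer size is arbitrary):

import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda

stream = cuda.Stream()
d_buf = cuda.mem_alloc(1024 * 4)

pageable = np.empty(1024, dtype=np.float32)                # ordinary malloc-backed memory
pinned = cuda.pagelocked_empty(1024, dtype=np.float32)     # cudaHostAlloc under the hood

cuda.memcpy_dtoh_async(pinned, d_buf, stream)              # can overlap with the stream
cuda.memcpy_dtoh_async(pageable, d_buf, stream)            # legal, but effectively synchronous
stream.synchronize()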

What is the meaning of Tactic when using trtexec

Hi, I'm trying to profile my model using trtexec.
I got the output below, but I can't understand the meaning of Tactic or the messages.
What is the meaning of Tactic?
Also, how can I calculate the time taken per layer?

--------------- Timing module/slice3/conv3/Conv2D(14)
Tactic 313790760974845026 time 8.02246
Tactic 4840826699149504854 time 11.5004
Tactic 5144900180810042977 time 12.6861
Tactic 7995089803833173579 time 12.6612
Tactic 8967263056866968576 time 11.786
Tactic -9018593586369639207 time 10.8774
Tactic -3998689942157953075 time 10.2664
Tactic -3448915218022119398 time 11.6557

--------------- Timing module/slice3/conv3/Conv2D(1)
Tactic 0 time 12.7072
Tactic 1 time 11.7415
Tactic 2 scratch requested: 301989888, available: 16777216
Tactic 4 scratch requested: 138792665088, available: 16777216
Tactic 5 scratch requested: 1149812736, available: 16777216
Tactic 6 scratch requested: 26216448, available: 16777216
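For context, a tactic is one candidate kernel implementation that the builder times for a layer; the fastest one is chosen. A line like "Tactic 2 scratch requested: 301989888, available: 16777216" means that tactic needs more scratch workspace than the 16 MB (16777216-byte) limit in effect, so it is skipped. Through the Python API the limit can be raised like this (the 1 GiB value is just an example):

builder.max_workspace_size = 1 << 30   # allow tactics needing up to 1 GiB of scratch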

PReLU layer without channel sharing

There is an NvPlugin for PReLU and its creator is defined as

TENSORRTAPI INvPlugin* createPReLUPlugin(float negSlope);

which takes a single floating-point value. The default PReLU layer defined in Caffe does not share the negSlope value across channels; each channel's negSlope is kept in a weight parameter of the layer.

My questions:
1 - Does the NvPlugin for PReLU support a different negSlope for each channel?
2 - If so, how can I create the plugin so that it parses the weights from the caffemodel file?
3 - If not, what is your advice for implementing the default Caffe PReLU layer for TRT with a smooth Caffe-parser integration?

Is RTX 2080 TI supported?

I am using the Python library to build a model:

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network:
    ...
    network.add_input(...)
    etc.

This has worked fine for months on the 1080 Ti card. I've recently added an RTX 2080 Ti to my machine, and it does not work. While building the model, I see this error:

[TensorRT] ERROR: cuda/cudaConvolutionLayer.cpp (238) - Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)

My configuration (using nvidia-docker):

  • OS: ubuntu 16.04
  • cuda: 10.0
  • cudnn: 7.5.0
  • tensorrt: 5.1.5
user@dd17d4e31b32:~$ dpkg -l | grep TensorRT
ii  libnvinfer-dev                                              5.1.5-1+cuda10.0                                      amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                          5.1.5-1+cuda10.0                                      all          TensorRT samples and documentation
ii  libnvinfer5                                                 5.1.5-1+cuda10.0                                      amd64        TensorRT runtime libraries
ii  python-libnvinfer                                           5.1.5-1+cuda10.0                                      amd64        Python bindings for TensorRT
ii  python-libnvinfer-dev                                       5.1.5-1+cuda10.0                                      amd64        Python development package for TensorRT
ii  tensorrt                                                    5.1.5.0-1+cuda10.0                                    amd64        Meta package of TensorRT

user@dd17d4e31b32:~$ dpkg -l | grep cudnn
ii  libcudnn7                                                   7.5.0.56-1+cuda10.0                                   amd64        cuDNN runtime libraries
ii  libcudnn7-dev                                               7.5.0.56-1+cuda10.0                                   amd64        cuDNN development libraries and headers

Relevant part of Dockerfile:

# cuda/cudnn
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04

# Install TensorRT
RUN dpkg -i /debs/nv-tensorrt-repo-ubuntu1604-cuda10.0-trt5.1.5.0-ga-20190427_1-1_amd64.deb
RUN apt-key add /var/nv-tensorrt-repo-cuda10.0-trt5.1.5.0-ga-20190427/7fa2af80.pub
RUN apt-get update && apt-get -y install \
    libcudnn7=7.5.0.56-1+cuda10.0 \
    libcudnn7-dev=7.5.0.56-1+cuda10.0 \
    tensorrt=5.1.5.0-1+cuda10.0 \
    python-libnvinfer-dev=5.1.5-1+cuda10.0 \
    python-libnvinfer=5.1.5-1+cuda10.0 \
    libnvinfer5=5.1.5-1+cuda10.0 \
    libnvinfer-dev=5.1.5-1+cuda10.0

Is there a known issue with the newer cards in TensorRT, or is there just a problem with my environment?
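
A first diagnostic step (only a suggestion, and assuming pycuda is installed in the container) is to confirm the container actually sees the Turing card and reports the expected compute capability, which is 7.5 for an RTX 2080 Ti:

import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    # An RTX 2080 Ti should report compute capability 7.5 (Turing)
    print("GPU %d: %s (compute capability %d.%d)" % (i, dev.name(), major, minor))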

Unable to run sample programs

I am trying to execute the sample sample_onnx_mnist.

But I get an exception at:

float* hostDataBuffer = static_cast<float*>(buffers.getHostBuffer(mParams.inputTensorNames[0]));

There is no buffer with the name "Input3", which is the input name the sample sets by default.
I downloaded the model from https://github.com/onnx/models/tree/master/mnist.

Below I have attached the output log of model parsing:

[screenshot of the parser output omitted]

In that log I cannot see any node named "Input3", and the output node name is also different.

Would you please help me resolve this issue? TIA!
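
One way to find the real tensor names (and point mParams.inputTensorNames at the right one) is to ask the parsed network itself. A minimal sketch with the TRT 5-era Python bindings, assuming the downloaded model is saved as mnist.onnx (the filename is a placeholder; newer releases also require an explicit-batch flag on create_network):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("mnist.onnx", "rb") as f:
        parser.parse(f.read())
    # Print the actual input/output tensor names instead of assuming "Input3"
    for i in range(network.num_inputs):
        print("input : %s" % network.get_input(i).name)
    for i in range(network.num_outputs):
        print("output: %s" % network.get_output(i).name)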

Do I have to implement IPluginFactoryV2 in my plugin?

I have seen how to register a plugin in the code, but do I also have to implement IPluginFactoryV2? I don't see that in the code; is it enough to just call initLibNvInferPlugins()? Can the parser automatically find a plugin I have registered, or do I have to call parser->setPluginFactoryV2(&pluginFactoryV2)?
This confuses me.
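
Not an authoritative answer, but one quick way to see what initLibNvInferPlugins() actually registers is to dump the global plugin registry; the same registry backs the C++ API. A minimal Python sketch:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Registers TensorRT's bundled plugin creators with the global registry
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

for creator in trt.get_plugin_registry().plugin_creator_list:
    print("%s (version %s)" % (creator.name, creator.plugin_version))

A creator that shows up here can be looked up by name through the registry; whether a given parser uses the registry or still needs setPluginFactoryV2 depends on the TensorRT version, so this dump is only a first check.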

Error pulling the image

Hi @rajeevsrao, when I pull the TensorRT container image, I hit the following error:

19.05-py3: Pulling from nvidia/tensorrt
7e6591854262: Pull complete
089d60cb4e0a: Pull complete
9c461696bc09: Pull complete
45085432511a: Pull complete
6ca460804a89: Extracting [==============> ] 2.261MB/7.715MB
2631f04ebf64: Download complete
86f56e03e071: Download complete
234646620160: Downloading [===> ] 42.09MB/615.2MB
7f717cd17058: Download complete
e69a2ba99832: Download complete
bc9bca17b13c: Download complete
1870788e477f: Download complete
603e0d586945: Downloading [===> ] 32.92MB/492.7MB
717dfedf079c: Downloading
1035ef613bc7: Waiting
c5bd7559c3ad: Waiting
d82c679b8708: Waiting
059d4f560014: Waiting
f3f14cff44df: Waiting
96502bde320c: Waiting
bc5bb9379810: Waiting
e4d8bb046bc2: Waiting
4e2187010a7c: Waiting
9d62684b94c3: Waiting
e70e61e48991: Waiting
adecb91612fe: Waiting
ba27dafb70e8: Waiting
16bde716c9b2: Waiting
476faeed0740: Waiting
5af7c8a6b101: Waiting
960591fee98d: Waiting
0dd138c184ff: Waiting
7ef953567062: Waiting
bd9a54f5a193: Waiting
144852c40661: Waiting
171a26eec2d4: Waiting
999acb71c4df: Waiting
3f301e4ba386: Waiting
3fc30e0f9cba: Waiting
38d1459042f4: Waiting
aafa1a9d16eb: Waiting
unauthorized: authentication required

I also hit the same error when I use the Dockerfile to build the environment.
Do you know how to solve it? Thanks.
