
undefined reference to ‘fwd::TrtForwardEngine::Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)’ about forward CLOSED

tencent commented on April 27, 2024

undefined reference to ‘fwd::TrtForwardEngine::Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)’

Comments (9)

chenjun2hao commented on April 27, 2024

Why is it that when I change the code as follows, it builds?

fwd::TrtForwardEngine* fwd_engine = new fwd::TrtForwardEngine();

// Update Step 1: Update the path to pb model
std::string engine_path = "../data/softmax.pb.engine";

////////////  Load Engine  ////////////
if (!fwd_engine->Load(engine_path)) {
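
For reference, a minimal sketch of that snippet as a complete program. The header name fwd_engine.h, the error handling, and the unique_ptr are assumptions for illustration, not taken from the demo source:

#include <iostream>
#include <memory>
#include <string>

#include "fwd_engine.h"  // hypothetical header; include whatever declares fwd::TrtForwardEngine in the demo

int main() {
  // unique_ptr instead of a raw `new`, so the engine is released on every exit path (C++11-safe).
  std::unique_ptr<fwd::TrtForwardEngine> fwd_engine(new fwd::TrtForwardEngine());

  // Update Step 1: Update the path to the pb model's serialized engine
  const std::string engine_path = "../data/softmax.pb.engine";

  ////////////  Load Engine  ////////////
  if (!fwd_engine->Load(engine_path)) {
    std::cerr << "Failed to load engine: " << engine_path << std::endl;
    return -1;
  }

  std::cout << "Engine loaded successfully." << std::endl;
  return 0;
}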


zhaoyiluo commented on April 27, 2024

@chenjun2hao Found the problem. Here is a step-by-step explanation. In your cmake output, note this warning:

-- CUDA_NVCC_FLAGS:  -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75
CMake Warning at CMakeLists.txt:147 (message):
  _GLIBCXX_USE_CXX11_ABIT=0 is set for PyTorch libraries.  Check dependencies
  for this flag.

Since we want to build the Python version of fwd_torch, we need to set -DBUILD_PYTHON_LIB=ON when configuring the project. Enabling BUILD_PYTHON_LIB triggers add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0) at line 93 of CMakeLists.txt. The macro is set to 0 for compatibility with PyTorch, so that the Python version of fwd_torch can run correctly.

A further note on the _GLIBCXX_USE_CXX11_ABI macro: it decides whether, at compile time, the C++ standard library types are built against the old ABI or the new (C++11) ABI. The default is 1, i.e. with _GLIBCXX_USE_CXX11_ABI == 1 the new ABI is enabled, and types such as std::string are mangled under the std::__cxx11 namespace. When we set the macro to 0, the compiler falls back to the old, pre-C++11 ABI.

Since our project is built against the C++11 ABI, a library compiled with _GLIBCXX_USE_CXX11_ABI == 0 exports symbols that no longer match what the demo (compiled with the default value 1) is looking for, and that mismatch is exactly where the undefined reference comes from.
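
To make the mechanism concrete, here is a minimal two-file sketch that reproduces the same class of linker error; the file and function names are invented for illustration and are not from the Forward code base:

// lib.cpp -- compiled the way BUILD_PYTHON_LIB=ON compiles the library:
//   g++ -c -D_GLIBCXX_USE_CXX11_ABI=0 lib.cpp -o lib.o
#include <string>
void Load(const std::string& path) { (void)path; }

// main.cpp -- compiled with the compiler default (_GLIBCXX_USE_CXX11_ABI=1), like the demo:
//   g++ main.cpp lib.o -o main
#include <string>
void Load(const std::string& path);
int main() {
  Load("../data/softmax.pb.engine");
  return 0;
}

// The link step fails with:
//   undefined reference to ‘Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)’
// because lib.o exports the old-ABI symbol (plain std::basic_string) while main.o asks for the std::__cxx11 one.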

If your project needs the C++ version of fwd_torch, I suggest disabling the BUILD_PYTHON_LIB option. If anything is unclear, feel free to follow up.

Thanks to @yuanzexi for the help~

References:

  1. Dual ABI (The GNU C++ Library manual, Chapter 3: Using)
  2. Understanding GCC 5's _GLIBCXX_USE_CXX11_ABI, or the new ABI


zhaoyiluo commented on April 27, 2024

Hi @chenjun2hao,

I will continue watching the issue under this thread.

On my side, after building the Forward-Cpp library and copying libfwd_torch.so and libtrt_engine.so from the Forward/build/bin directory into demo/fwd_cpp/libs, build.sh runs fine and the test model resnet50.pth converts successfully.

From the information you have given so far, I can infer that cmake reported an error while building the demo project. Please provide more details so I can help you debug :)


chenjun2hao commented on April 27, 2024

@zhaoyiluo

cmake=3.12.2
libtorch=1.7.1
cuda=11.2
tensorrt=7.2.2.3

  1. Here is my modified build.sh:
rm -rf build
mkdir build

ENABLE_TORCH=ON
ENABLE_TENSORFLOW=OFF

TensorRT_ROOT=/home/darknet/CM/profile/TensorRT-7.2.2.3
LibTorch=/home/darknet/CM/profile/libtorch-cxx11-abi-shared-with-deps-1.7.1+cu110/libtorch
LibTensorflow=/path/to/tensorflow

cd build
make clean
cmake .. -DENABLE_TORCH=$ENABLE_TORCH -DENABLE_TENSORFLOW=$ENABLE_TENSORFLOW -DTensorRT_ROOT=$TensorRT_ROOT -DCMAKE_PREFIX_PATH=$LibTorch -DTensorflow_ROOT=$LibTensorflow

# make -j
  2. The output from cmake ..:
-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- The CUDA compiler identification is NVIDIA 11.1.74
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr/local/cuda (found version "11.1") 
-- Found TensorRT: /home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvinfer.so;/home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvinfer_plugin.so;/home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvonnxparser.so;/home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvparsers.so (found version "7.2.2") 
-- Configuring done
-- Generating done
CMake Warning:
  Manually-specified variables were not used by the project:

    Tensorflow_ROOT
-- Build files have been written to: /home/darknet/CM/12_tensorrt/Forward3/demo/fwd_cpp/build
  3. The error when compiling:
[100%] Linking CXX executable test_fwd_engine
CMakeFiles/test_fwd_engine.dir/test_fwd_engine.cpp.o: In function ‘main’:
test_fwd_engine.cpp:(.text+0x1c8): undefined reference to ‘fwd::TrtForwardEngine::Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)’
collect2: error: ld returned 1 exit status
CMakeFiles/test_fwd_engine.dir/build.make:90: recipe for target 'test_fwd_engine' failed
make[2]: *** [test_fwd_engine] Error 1
CMakeFiles/Makefile2:72: recipe for target 'CMakeFiles/test_fwd_engine.dir/all' failed
make[1]: *** [CMakeFiles/test_fwd_engine.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2


zhaoyiluo commented on April 27, 2024

@chenjun2hao

Comparing our cmake outputs, I noticed that yours is missing the cuDNN STATUS lines. Can you confirm whether cuDNN is installed in your environment?

-- The C compiler identification is GNU 9.1.0
-- The CXX compiler identification is GNU 9.1.0
-- The CUDA compiler identification is NVIDIA 11.1.105
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc - works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ - works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr/local/cuda (found version "11.1") 
-- Found TensorRT: /data2/zhaoyiluo/libs/TensorRT-7.2.1.6/lib/libnvinfer.so;/data2/zhaoyiluo/libs/TensorRT-7.2.1.6/lib/libnvinfer_plugin.so;/data2/zhaoyiluo/libs/TensorRT-7.2.1.6/lib/libnvonnxparser.so;/data2/zhaoyiluo/libs/TensorRT-7.2.1.6/lib/libnvparsers.so (found version "7.2.1") 
-- Found CUDA: /usr/local/cuda (found version "11.1") 
-- Caffe2: CUDA detected: 11.1
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 11.1
-- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so  
-- Found cuDNN: v8.1.1  (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s):  7.0 7.0 7.0 7.0
-- Added CUDA NVCC flags for: -gencode;arch=compute_70,code=sm_70
-- Found Torch: /data2/zhaoyiluo/libs/libtorch171/lib/libtorch.so  
-- Configuring done
-- Generating done
CMake Warning:
  Manually-specified variables were not used by the project:

    Tensorflow_ROOT


-- Build files have been written to: /data2/zhaoyiluo/Forward/Forward-master/demo/fwd_cpp/build


chenjun2hao commented on April 27, 2024

@zhaoyiluo
I just commented out the test_fwd_torch.cpp part in the cmake. Here is the cmake output after that change:

-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- The CUDA compiler identification is NVIDIA 11.1.74
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr/local/cuda (found version "11.1") 
-- Found TensorRT: /home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvinfer.so;/home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvinfer_plugin.so;/home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvonnxparser.so;/home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvparsers.so (found version "7.2.2") 
-- Found CUDA: /usr/local/cuda (found version "11.1") 
-- Caffe2: CUDA detected: 11.1
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 11.1
-- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so  
-- Found cuDNN: v8.0.5  (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s):  8.6 8.6
-- Added CUDA NVCC flags for: -gencode;arch=compute_86,code=sm_86
-- Found Torch: /home/darknet/CM/profile/libtorch-cxx11-abi-shared-with-deps-1.7.1+cu110/libtorch/lib/libtorch.so  
-- Configuring done
-- Generating done
CMake Warning:
  Manually-specified variables were not used by the project:

    Tensorflow_ROOT


-- Build files have been written to: /home/darknet/CM/12_tensorrt/Forward3/demo/fwd_cpp/build


zhaoyiluo commented on April 27, 2024

@chenjun2hao

I looked over the demo-side cmake output and found nothing suspicious. I have also confirmed on my side that with ENABLE_TORCH=OFF (i.e. building only test_fwd_engine), the demo builds without errors.

Let's move on to checking the build of the libtrt_engine.so and libfwd_torch.so shared libraries. Please provide a bit more information:

  1. The code branch/version you were using when the error occurred
  2. The complete cmake command you used to build the project, e.g. cmake .. -DTensorRT_ROOT=path/to/tensorrt -DENABLE_TORCH=ON -DCMAKE_PREFIX_PATH=/path/to/libtorch ...
  3. The cmake output generated during the build


chenjun2hao commented on April 27, 2024

@zhaoyiluo
If I compile the demo from the same CMakeLists.txt that builds libtrt_engine.so, this problem does not occur.

  1. I have tested both master and 1.1.1; both give the same error.
  2. Here is the cmake command I use when building the libtrt_engine.so and libfwd_torch.so shared libraries:
if [ ! -d "build" ]
then
mkdir build
fi

cd build
make clean
cmake .. -DENABLE_LOGGING=OFF         \
        -DTensorRT_ROOT=/home/darknet/CM/profile/TensorRT-7.2.2.3 \
        -DENABLE_PROFILING=OFF               \
        -DBUILD_PYTHON_LIB=ON                \
        -DENABLE_TORCH=ON                    \
        -DENABLE_TENSORFLOW=OFF               \
        -DENABLE_KERAS=OFF                    \
        -DENABLE_RNN=OFF                     \
        -DPYTHON_EXECUTABLE=$(which python) 
  3. The cmake output:
-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- The CUDA compiler identification is NVIDIA 11.1.74
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr/local/cuda (found version "11.1") 
-- CUDA_NVCC_FLAGS:  -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75
CMake Warning at CMakeLists.txt:147 (message):
  _GLIBCXX_USE_CXX11_ABIT=0 is set for PyTorch libraries.  Check dependencies
  for this flag.


-- Found TensorRT: /home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvinfer.so;/home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvinfer_plugin.so;/home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvonnxparser.so;/home/darknet/CM/profile/TensorRT-7.2.2.3/lib/libnvparsers.so (found version "7.2.2") 
-- Found CUDA: /usr/local/cuda (found version "11.1") 
-- Caffe2: CUDA detected: 11.1
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 11.1
-- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so  
-- Found cuDNN: v8.0.5  (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s):  8.6 8.6
-- Added CUDA NVCC flags for: -gencode;arch=compute_86,code=sm_86
-- Found Torch: /home/darknet/miniconda3/envs/torch170/lib/python3.6/site-packages/torch/lib/libtorch.so  
-- Find Torch VERSION: 1.7.0
-- Found PythonInterp: /home/darknet/miniconda3/envs/torch170/bin/python (found version "3.6.7") 
-- Found PythonLibs: /home/darknet/miniconda3/envs/torch170/lib/libpython3.6m.so
-- pybind11 v2.3.dev0
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- LTO enabled
-- Configuring done
-- Generating done
-- Build files have been written to: /home/darknet/CM/12_tensorrt/Forward3/build


chenjun2hao commented on April 27, 2024

@zhaoyiluo, thanks.
