ihdia / palmira

📜 [ICDAR 2021] "A Deep Deformable Network for Instance Segmentation of Dense and Uneven Layouts in Handwritten Manuscripts", S P Sharan, Sowmya Aitha, Amandeep Kumar, Abhishek Trivedi, Aaron Augustine, Ravi Kiran Sarvadevabhatla

Home Page: https://ihdia.iiit.ac.in/Palmira

License: MIT License

Python 70.72% C++ 3.26% Cuda 26.01%
dataset deep-learning deformable-convolutional-network document-image-segmentation graph-neural-networks historical-document-analysis icdar2021 instance-segmentation pytorch

palmira's Issues

Can't run inference and there are requirement conflicts

Hi there,

  • There is a conflict in the requirements: torchvision==0.7.0 requires pytorch==1.6.0 (a self-consistent set of pins is sketched after the log below).
  • pytorch 1.7.1 requires CUDA 10.1 for both the conda and pip installs.
  • I get the following error when running inference:
(d2) home@home-lnx:~/programs/Palmira$ python demo.py --input ./input/1.jpg --output ./output --config configs/palmira/Palmira.yaml --opts MODEL.WEIGHTS ./Palmira_indiscapes.pth
Using /home/home/.cache/torch_extensions as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/home/.cache/torch_extensions/check_condition_bbox/build.ninja...
Building extension module check_condition_bbox...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=check_condition_bbox -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include -isystem /home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/TH -isystem /home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /home/home/anaconda3/envs/d2/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=sm_61 --compiler-options '-fPIC' -std=c++14 -c /home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu -o check_condition_lattice_for2.cuda.o 
FAILED: check_condition_lattice_for2.cuda.o 
/usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=check_condition_bbox -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include -isystem /home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/TH -isystem /home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /home/home/anaconda3/envs/d2/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=sm_61 --compiler-options '-fPIC' -std=c++14 -c /home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu -o check_condition_lattice_for2.cuda.o 
/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu(39): warning: missing return statement at end of non-void function "check_condition_cuda_divide_non_zero(scalar_t) [with scalar_t=double]"
          detected during:
            instantiation of "scalar_t check_condition_cuda_divide_non_zero(scalar_t) [with scalar_t=double]" 
(137): here
            instantiation of "void check_condition_cuda_forward_kernel_batch(scalar_t *, scalar_t *, scalar_t *, scalar_t *, int, int, int) [with scalar_t=double]" 
(170): here

/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu(39): warning: missing return statement at end of non-void function "check_condition_cuda_divide_non_zero(scalar_t) [with scalar_t=float]"
          detected during:
            instantiation of "scalar_t check_condition_cuda_divide_non_zero(scalar_t) [with scalar_t=float]" 
(137): here
            instantiation of "void check_condition_cuda_forward_kernel_batch(scalar_t *, scalar_t *, scalar_t *, scalar_t *, int, int, int) [with scalar_t=float]" 
(170): here

/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu: In lambda function:
/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu:170:144: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
  AT_DISPATCH_FLOATING_TYPES(grid_bxkx3x2.type(), "check_condition_cuda_forward_batch", ([&] {
                                                                                                                                                ^
/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:363:1: note: declared here
   T * data() const {
 ^ ~~
/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu:170:178: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
  AT_DISPATCH_FLOATING_TYPES(grid_bxkx3x2.type(), "check_condition_cuda_forward_batch", ([&] {
                                                                                                                                                                                  ^
/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:363:1: note: declared here
   T * data() const {
 ^ ~~
/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu:170:214: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
  AT_DISPATCH_FLOATING_TYPES(grid_bxkx3x2.type(), "check_condition_cuda_forward_batch", ([&] {
                                                                                                                                                                                                                      ^
/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:363:1: note: declared here
   T * data() const {
 ^ ~~
/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu:170:247: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
  AT_DISPATCH_FLOATING_TYPES(grid_bxkx3x2.type(), "check_condition_cuda_forward_batch", ([&] {
                                                                                                                                                                                                                                                       ^
/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:363:1: note: declared here
   T * data() const {
 ^ ~~
/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu: In lambda function:
/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu:170:142: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
  AT_DISPATCH_FLOATING_TYPES(grid_bxkx3x2.type(), "check_condition_cuda_forward_batch", ([&] {
                                                                                                                                              ^
/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:363:1: note: declared here
   T * data() const {
 ^ ~~
/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu:170:175: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
  AT_DISPATCH_FLOATING_TYPES(grid_bxkx3x2.type(), "check_condition_cuda_forward_batch", ([&] {
                                                                                                                                                                               ^
/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:363:1: note: declared here
   T * data() const {
 ^ ~~
/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu:170:210: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
  AT_DISPATCH_FLOATING_TYPES(grid_bxkx3x2.type(), "check_condition_cuda_forward_batch", ([&] {
                                                                                                                                                                                                                  ^
/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:363:1: note: declared here
   T * data() const {
 ^ ~~
/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu:170:242: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
  AT_DISPATCH_FLOATING_TYPES(grid_bxkx3x2.type(), "check_condition_cuda_forward_batch", ([&] {
                                                                                                                                                                                                                                                  ^
/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:363:1: note: declared here
   T * data() const {
 ^ ~~
/usr/include/c++/7/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
/usr/include/c++/7/bits/basic_string.tcc:578:28:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/include/c++/7/bits/basic_string.h:5042:20:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/include/c++/7/bits/basic_string.h:5063:24:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/include/c++/7/bits/basic_string.tcc:656:134:   required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/7/bits/basic_string.h:6688:95:   required from here
/usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’ without object
       __p->_M_set_sharable();
       ~~~~~~~~~^~
/usr/include/c++/7/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
/usr/include/c++/7/bits/basic_string.tcc:578:28:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/include/c++/7/bits/basic_string.h:5042:20:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/include/c++/7/bits/basic_string.h:5063:24:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/include/c++/7/bits/basic_string.tcc:656:134:   required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/7/bits/basic_string.h:6693:95:   required from here
/usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’ without object
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1539, in _run_ninja_build
    env=env)
  File "/home/home/anaconda3/envs/d2/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "demo.py", line 14, in <module>
    from defgrid.config import add_defgrid_maskhead_config
  File "/home/home/programs/Palmira/defgrid/__init__.py", line 2, in <module>
    from .mask_head import DefGridHead
  File "/home/home/programs/Palmira/defgrid/mask_head.py", line 17, in <module>
    from defgrid.layers.DefGrid.diff_variance import LatticeVariance
  File "/home/home/programs/Palmira/defgrid/layers/DefGrid/diff_variance.py", line 7, in <module>
    from defgrid.layers.DefGrid.check_condition_lattice_bbox.utils import check_condition_f_bbox
  File "/home/home/programs/Palmira/defgrid/layers/DefGrid/check_condition_lattice_bbox/utils.py", line 22, in <module>
    verbose=True,
  File "/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 997, in load
    keep_intermediates=keep_intermediates)
  File "/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1202, in _jit_compile
    with_cuda=with_cuda)
  File "/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1300, in _write_ninja_file_and_build_library
    error_prefix="Error building extension '{}'".format(name))
  File "/home/home/anaconda3/envs/d2/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1555, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'check_condition_bbox'
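
A note on the version conflict reported above: torchvision 0.7.0 was released for PyTorch 1.6.0, and PyTorch 1.7.1 pairs with torchvision 0.8.2, so the two pins cannot both be satisfied. Below is a minimal sketch of a self-consistent CUDA 10.1 environment; the exact pins and the conda channel are assumptions, not the maintainers' official requirements file.

# Option A: keep torchvision 0.7.0, which pairs with PyTorch 1.6.0 (assumed pins)
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch

# Option B: keep PyTorch 1.7.1, which pairs with torchvision 0.8.2 (assumed pins)
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.1 -c pytorch

Either option keeps torch, torchvision and the CUDA toolkit mutually compatible, which is the mismatch the first two bullets describe.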

Inference Colab notebook not running

Hi, I am running your code. When I run the Colab notebook you shared, the following error appears on executing the script:

!python demo.py --input "/content/Picture1.jpg" --output "/content/outputs" --config configs/palmira/Palmira.yaml --opts MODEL.WEIGHTS "/content/Palmira_pb/Palmira_indiscapes.pth"


Using /root/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py310_cu118/check_condition_bbox/build.ninja...
Building extension module check_condition_bbox...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] c++ -MMD -MF check_condition_lattice.o.d -DTORCH_EXTENSION_NAME=check_condition_bbox -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /usr/local/lib/python3.10/dist-packages/torch/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.10/dist-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -c /content/Palmira_pb/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice.cpp -o check_condition_lattice.o
FAILED: check_condition_lattice.o
c++ -MMD -MF check_condition_lattice.o.d -DTORCH_EXTENSION_NAME=check_condition_bbox -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /usr/local/lib/python3.10/dist-packages/torch/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.10/dist-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -c /content/Palmira_pb/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice.cpp -o check_condition_lattice.o
/content/Palmira_pb/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice.cpp:1:10: fatal error: THC/THC.h: No such file or directory
1 | #include <THC/THC.h>
| ^~~~~~~~~~~
compilation terminated.
[2/3] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=check_condition_bbox -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /usr/local/lib/python3.10/dist-packages/torch/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.10/dist-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' -std=c++17 -c /content/Palmira_pb/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu -o check_condition_lattice_for2.cuda.o
FAILED: check_condition_lattice_for2.cuda.o
/usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=check_condition_bbox -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -isystem /usr/local/lib/python3.10/dist-packages/torch/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.10/dist-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' -std=c++17 -c /content/Palmira_pb/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu -o check_condition_lattice_for2.cuda.o
/content/Palmira_pb/defgrid/layers/DefGrid/check_condition_lattice_bbox/check_condition_lattice_for2.cu:5:10: fatal error: THC/THC.h: No such file or directory
5 | #include <THC/THC.h>
| ^~~~~~~~~~~
compilation terminated.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1893, in _run_ninja_build
    subprocess.run(
  File "/usr/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/content/Palmira_pb/demo.py", line 14, in <module>
    from defgrid.config import add_defgrid_maskhead_config
  File "/content/Palmira_pb/defgrid/__init__.py", line 2, in <module>
    from .mask_head import DefGridHead
  File "/content/Palmira_pb/defgrid/mask_head.py", line 17, in <module>
    from defgrid.layers.DefGrid.diff_variance import LatticeVariance
  File "/content/Palmira_pb/defgrid/layers/DefGrid/diff_variance.py", line 7, in <module>
    from defgrid.layers.DefGrid.check_condition_lattice_bbox.utils import check_condition_f_bbox
  File "/content/Palmira_pb/defgrid/layers/DefGrid/check_condition_lattice_bbox/utils.py", line 10, in <module>
    check_condition = load(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1284, in load
    return _jit_compile(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1509, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1624, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1909, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'check_condition_bbox'
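
For context on the error above: the THC/THC.h header was removed from recent PyTorch releases, so this extension only builds against the older PyTorch versions the repository targets (the 1.6/1.7 series used in the first issue). Below is a minimal, hypothetical sketch of the kind of change involved in porting such a file to the ATen API that replaced THC; the function name run_on_current_stream and the THC calls shown in comments are assumptions about typical usage, not quotes from this repository.

#include <torch/extension.h>
#include <ATen/cuda/CUDAContext.h>   // at::cuda::getCurrentCUDAStream()
#include <ATen/cuda/Exceptions.h>    // AT_CUDA_CHECK

// THC-era pattern this replaces (assumed, typical of old extensions):
//   #include <THC/THC.h>
//   extern THCState *state;
//   cudaStream_t stream = THCState_getCurrentStream(state);
//   THCudaCheck(cudaGetLastError());

void run_on_current_stream() {
  // ATen replacement: fetch the current CUDA stream and check for errors.
  cudaStream_t stream = at::cuda::getCurrentCUDAStream();
  (void)stream;  // the real file would launch its kernels on this stream
  AT_CUDA_CHECK(cudaGetLastError());
}

The simpler alternative, if porting is not an option, is to build in an environment pinned to a PyTorch version that still ships the THC headers, as sketched at the end of the first issue.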
