snuspl / nimble

Lightweight and Parallel Deep Learning Framework

License: Other

Python 32.68% Shell 0.55% C++ 53.56% Dockerfile 0.07% Batchfile 0.04% CMake 1.38% Makefile 0.01% Java 0.22% C 4.08% Cuda 6.10% Assembly 0.33% Starlark 0.20% Metal 0.08% Objective-C++ 0.47% Objective-C 0.01% PureBasic 0.21% LLVM 0.01% Yacc 0.01% CSS 0.01% HTML 0.01%
deep-learning framework gpu-task-scheduling inference parallel training

nimble's People

Contributors

alband, apaszke, bddppq, bwasti, colesbury, ezyang, gchanan, goldsborough, houseroad, jerryzh168, jspark1105, killeent, malfet, mrshenli, onnxbot, peterjc123, pietern, rohan-varma, smessmer, soumith, ssnl, suo, supriyar, vishwakftw, wanchaol, xuhdev, yangqing, zasdfgbnm, zdevito, zou3519

nimble's Issues

DDP compatibility

Hello. First of all, thank you for the awesome project!
Is this framework compatible with DDP (DistributedDataParallel)?
Or are there any results from experiments conducted in a DDP environment?

module 'torch.cuda' has no attribute 'Nimble'

I have installed Nimble following the instructions, but when I try to run the examples (run_inference.py and run_training.py) I get the following error.

Traceback (most recent call last):
  File "run_training.py", line 71, in <module>
    main(args)
  File "run_training.py", line 45, in main
    with closing(get_training_wrapper(model, dummy_input, dummy_label, args.mode, args.use_optimizer)) as training_wrapper:
  File "/home/amir/reps/nimble/experiment/utils.py", line 112, in get_training_wrapper
    wrapper = NimbleTrainingWrapper(model, dummy_input, dummy_label, criterion, optimizer, use_multi_stream=False)
  File "/home/amir/reps/nimble/experiment/utils.py", line 339, in __init__
    nimble_model = torch.cuda.Nimble(model)
AttributeError: module 'torch.cuda' has no attribute 'Nimble'

It seems that I have the original PyTorch installed, which does not have the 'Nimble' class, even though I installed Nimble from source using the instructions in the install.sh file.
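For reference, a quick sanity check (plain Python, nothing Nimble-specific) to confirm which torch build is actually being imported; a source build of Nimble reports a 1.4.0a0+<hash>-style version:

import torch

# A stock PyTorch wheel reports a plain release version (e.g. 1.4.0) and does
# not expose the Nimble wrapper class.
print(torch.__version__)
print(hasattr(torch.cuda, "Nimble"))  # False means a stock PyTorch is on sys.path, not the Nimble build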

ModuleNotFoundError: No module named 'torch._C'

Hi, I'm writing in English in case someone else runs into the same issue.
I built Nimble in a Docker container with an environment identical to the one in the installation guide, except for cuDNN (7605 in my case).

When I run the provided inference code:

import torch
import torchvision

# Instantiate a PyTorch Module and move it to a GPU
model = torchvision.models.resnet50()
model = model.cuda()
model.eval()

# Prepare a dummy input
input_shape = [1, 3, 224, 224]
dummy_input = torch.randn(*input_shape).cuda()

# Create a Nimble object
nimble_model = torch.cuda.Nimble(model)
nimble_model.prepare(dummy_input, training=False)

# Execute the object
rand_input = torch.rand(*input_shape).cuda()
output = nimble_model(rand_input)

I get the following error:

(nimble) root@d137ad00a74b:/workspace/nimble# python3 installation_test.py 
Traceback (most recent call last):
  File "installation_test.py", line 1, in <module>
    import torch
  File "/workspace/nimble/torch/__init__.py", line 81, in <module>
    from torch._C import *
ModuleNotFoundError: No module named 'torch._C'

At first I thought this was because I ran the script inside nimble/, where another torch folder exists, but I assumed I was supposed to run it there because torch.cuda.Nimble exists in that directory.
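A minimal way to check whether that local torch/ folder is the one Python resolves (standard library only, so it works even when importing torch fails):

import sys
import importlib.util

# sys.path[0] is the script's directory (or '' for an interactive shell);
# a torch/ folder there takes precedence over the installed package.
print(sys.path[0])

# Locate the torch package without importing it (importing is what fails when
# the un-built source tree, which lacks the compiled torch._C extension, wins).
spec = importlib.util.find_spec("torch")
print(spec.origin if spec else "torch not found")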

Could you clarify how the code should be run after installation?

My environment is as below (python was executed in the parent directory):
[screenshot of the environment]

Thanks!

CMake error during setup

CMake Error: CMake can not determine linker language for target: dnnl
CMake Error: CMake can not determine linker language for target: dnnl
CMake Error in third_party/ideep/mkl-dnn/src/CMakeLists.txt:
Exporting the target "dnnl" is not allowed since its linker language cannot
be determined

build without conda

📚 Documentation

Can I build this without conda? If yes, please specify the steps.

Thank you.

Updates for PyTorch v1.7.1

Issue description

Nimble was initially developed on top of PyTorch v1.4, which was the latest version at the time of development.
Now that PyTorch has gone through three version updates since v1.4.1 and has started to support CUDA 11, we have upgraded Nimble to be compatible with PyTorch v1.7.1 and CUDA 11.0 (plus torchvision 0.8).
New installation instructions can be found here.

There should be documentation of how to use nimble on Huggingface transformers

🚀 Feature

In much the same way that there is documentation of wrapping Nimble around a torchvision model, there should be documentation (and benchmarks?) around wrapping Nimble around 🤗 language models.
Unclear: whether this is a docs-only change.

Motivation

Nimble looks interesting and I am interested in speeding up my NLP runs, but I only see docs for torchvision.

Pitch

  1. Adapt Nimble to ingest some 🤗 transformers or other models. (This may be a no-op; see the sketch below.)
  2. Write it up in the README.
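As a rough starting point, a sketch of the README's torchvision example adapted to a 🤗 model (untested; the model name, batch size, and sequence length are placeholders, and it assumes Nimble can trace the model's forward with a single fixed-shape input_ids tensor):

import torch
from transformers import AutoModelForSequenceClassification

# Placeholder model and shape choices; any fixed-shape encoder would follow the same pattern.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
model.config.return_dict = False  # tuple outputs are generally friendlier to tracing
model = model.cuda()
model.eval()

batch_size, seq_len = 1, 128
dummy_ids = torch.randint(0, model.config.vocab_size, (batch_size, seq_len)).cuda()

# Same steps as the torchvision example in the README.
nimble_model = torch.cuda.Nimble(model)
nimble_model.prepare(dummy_ids, training=False)

output = nimble_model(dummy_ids)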

Alternatives

Additional context

build error

I tried building Nimble.
I got the following error.
Can you help me?

Thank you.

=========================================================================================

[1581/2921] Building NVCC (Device) object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cuda/torch_generated_UnaryOpsKernel.cu.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "setup.py", line 755, in <module>
    build_deps()
  File "setup.py", line 316, in build_deps
    cmake=cmake)
  File "/root/nimble/tools/build_pytorch_libs.py", line 62, in build_caffe2
    cmake.build(my_env)
  File "/root/nimble/tools/setup_helpers/cmake.py", line 335, in build
    self.run(build_args, my_env)
  File "/root/nimble/tools/setup_helpers/cmake.py", line 141, in run
    check_call(command, cwd=self.build_dir, env=env)
  File "/root/anaconda3/envs/nimble/lib/python3.6/subprocess.py", line 311, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '48']' returned non-zero exit status 1.

Can it be used on Windows, too?

Can it be used on Windows, too?

Looking at the install documentation, it seems to have been written for Ubuntu.
Would it work on Windows if I set up all the environment variables and go through the installation?
I'm asking because TensorRT's official documentation says it does not work with Python on Windows, so I have been looking for an alternative.

How to build the Docker image? fatal: Not a git repository (or any of the parent directories): .git

I am running the following command in the docker/pytorch folder to build the Docker image for Nimble:
docker build -t nimble:latest .
I get the following error:

Step 8/13 : COPY . .
 ---> Using cache
 ---> 9b4c39717094
Step 9/13 : RUN git submodule sync && git submodule update --init --recursive
 ---> Running in 08f892ce7bb6
fatal: Not a git repository (or any of the parent directories): .git
The command '/bin/sh -c git submodule sync && git submodule update --init --recursive' returned a non-zero code: 128

Can I debug in nimble?

Thank you for the great work.

I have a simple question: can I use pdb in the model with Nimble?

When I use pdb in the model and run nimble_model.prepare without stepping through anything, it works. But when I try to inspect a value during debugging (just inspecting, not changing anything), a warning message comes out:

RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results)

and after prepare finishes, a massive number of errors comes out (the log below shows part of them):

- %1403 : int[] = prim::ListConstruct(%37, %37, %37, %37), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5
? -- ^ ^ ^ ^
+ %1398 : int[] = prim::ListConstruct(%32, %32, %32, %32), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5
? ++ ^ ^ ^ ^
- %input.38 : Tensor = aten::constant_pad_nd(%input.37, %1403, %39), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5 # /home/rpmk/anaconda3/envs/nimble/lib/python3.7/site-packages/torch/nn/functional.py:3553:0
? -------
+ %input.38 : Tensor = aten::constant_pad_nd(%input.37, %1398, %34), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5 # /home/rpmk/anaconda3/envs/nimble/lib/python3.7/site-packages/torch/nn/functional.py:3553:0
? +++++++
- %1405 : int[] = prim::ListConstruct(%37, %37), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.0
? ^^^ ^ ^
+ %1400 : int[] = prim::ListConstruct(%32, %32), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.0
? ^^^ ^ ^
- %1406 : int[] = prim::ListConstruct(%39, %39), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.0
? ^^^ ^ ^
+ %1401 : int[] = prim::ListConstruct(%34, %34), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.0
? ^^^ ^ ^
- %1407 : int[] = prim::ListConstruct(%37, %37), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.0
? ^^^ ^ ^
+ %1402 : int[] = prim::ListConstruct(%32, %32), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.0
? ^^^ ^ ^
- %1408 : int[] = prim::ListConstruct(%39, %39), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.0
? ^^^ ^ ^
+ %1403 : int[] = prim::ListConstruct(%34, %34), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.0
? ^^^ ^ ^
- %input.39 : Tensor = aten::_convolution(%input.38, %1018, %35, %1405, %1406, %1407, %38, %1408, %37, %38, %38, %38, %40), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.0 # /home/rpmk/anaconda3/envs/nimble/lib/python3.7/site-packages/torch/nn/modules/conv.py:420:0
- %input.40 : Tensor = aten::batch_norm(%input.39, %1013, %1008, %1003, %998, %38, %33, %34, %38), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.1 # /home/rpmk/anaconda3/envs/nimble/lib/python3.7/site-packages/torch/nn/functional.py:2058:0
+ %input.39 : Tensor = aten::_convolution(%input.38, %1013, %30, %1400, %1401, %1402, %33, %1403, %32, %33, %33, %33, %35), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.0 # /home/rpmk/anaconda3/envs/nimble/lib/python3.7/site-packages/torch/nn/modules/conv.py:420:0
+ %input.40 : Tensor = aten::batch_norm(%input.39, %1008, %1003, %998, %993, %33, %28, %29, %33), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.1 # /home/rpmk/anaconda3/envs/nimble/lib/python3.7/site-packages/torch/nn/functional.py:2058:0
- %input.41 : Tensor = aten::hardtanh_(%input.40, %31, %32), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.2 # /home/rpmk/anaconda3/envs/nimble/lib/python3.7/site-packages/torch/nn/functional.py:1186:0
? ^^ -
+ %input.41 : Tensor = aten::hardtanh_(%input.40, %26, %27), scope: __module.backbone/__module.backbone.high_level_features/__module.backbone.high_level_features.5/__module.backbone.high_level_features.5.conv/__module.backbone.high_level_features.5.conv.2 # /home/rpmk/anaconda3/envs/nimble/lib/python3.7/site-packages/torch/nn/functional.py:1186:0
? ^^ +

Can I get any advice?

Thank you.
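For context, a possible workaround sketch (untested): debug the module in plain eager mode first, and only wrap it with Nimble once the pdb session is done, since inspecting tensors while prepare() traces the model is what triggers the warnings above.

import torch
import torchvision

# Stand-in for the model being debugged (resnet50 here, as in the README example).
model = torchvision.models.resnet50().cuda().eval()
x = torch.randn(1, 3, 224, 224).cuda()

# 1) Debug in plain eager mode: pdb.set_trace() and value inspection inside
#    forward() are fine here because nothing is being traced yet.
with torch.no_grad():
    _ = model(x)

# 2) Only after debugging is finished, wrap and trace with Nimble.
nimble_model = torch.cuda.Nimble(model)
nimble_model.prepare(x, training=False)
out = nimble_model(x)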

An error occurs during Nimble training

Hello,

The inference code runs without errors,
but I get an error when running the training code.

import torch
import torchvision
import os

os.environ["CUDA_VISIBLE_DEVICES"]="1"

BATCH = 32


model = torchvision.models.resnet50(num_classes=10)
model = model.cuda()
model.train()

loss_fn = torch.nn.CrossEntropyLoss().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

input_shape = [BATCH, 3, 32, 32]
dummy_input = torch.randn(*input_shape).cuda()

nimble_model = torch.cuda.Nimble(model)
nimble_model.prepare(dummy_input, training=True)

rand_input = torch.rand(*input_shape).cuda()
output = nimble_model(rand_input)

label = torch.zeros(BATCH, dtype=torch.long).cuda()
loss = loss_fn(output, label)

loss.backward()

optimizer.step()

When I run the code above, prepare produces the following:

TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
With rtol=1e-05 and atol=1e-05, found 297 element(s) (out of 320) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 0.0005762577056884766 (-0.9435920119285583 vs. -0.9441682696342468), which occurred at index (15, 3).

Since I get the error above, I'd like to ask which part is wrong.
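If I understand the message correctly, the trace check uses an allclose-style criterion, roughly |a - b| <= atol + rtol * |b|; plugging in the reported numbers shows why it fails:

rtol, atol = 1e-5, 1e-5
a, b = -0.9435920119285583, -0.9441682696342468  # the pair reported in the warning
diff = abs(a - b)              # ~5.76e-4, the "greatest difference"
tol = atol + rtol * abs(b)     # ~1.94e-5
print(diff, tol, diff <= tol)  # the margin is exceeded, hence the warning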

Environment:
Ubuntu : 18.04
Linux : 5.4.0
Pytorch : 1.7.0
Python : 3.7.10
cuDNN and CUDA are the versions required by Nimble.

Questions about compatible version of torchvision

Hello. Thanks for sharing this project.

I was able to install Nimble following the installation guide.
The torch version appears to be "1.4.0a0+61ec0ca".
To use torch with torchvision, I installed torchvision with the following command (the CUDA 10.2 build):
pip install torchvision==0.5.0 -f https://download.pytorch.org/whl/cu102/torch_stable.html
Since this reinstalls a different version of PyTorch, I removed that PyTorch and rebuilt Nimble.
I'm not sure whether this approach is correct, but I can import both torch==1.4.0a0+61ec0ca and torchvision==0.5.0 anyway.

However, I'm getting an error that seems to be related to torchvision. For example,
import torch
torch.ops.torchvision.nms
generates a runtime error
RuntimeError: No such operator torchvision::nms.

Since the example code in the README uses torchvision, could you let me know how to install a version of torchvision that is compatible with Nimble?

The effect of multiple streams is not obvious

Hi, I'm trying to reproduce Nimble's experimental results. However, I found that multi-stream execution has little effect on inference latency, whereas the paper says the speedup can be up to 1.8×. Maybe I did something wrong; I hope you can give me some advice.
I successfully installed Nimble in Docker:
GPU: 2080S with 8 GB of global memory
Ubuntu 18.04.6 LTS

# inception_v3 [1, 3, 299, 299]
         mean (ms)  stdev (ms)
pytorch   8.212887    0.211187

        mean (ms)  stdev (ms)
nimble    2.24783    0.003427

              mean (ms)  stdev (ms)
nimble-multi    2.31407    0.009554
# inception_v3 [8, 3, 299, 299]
         mean (ms)  stdev (ms)
pytorch  25.678553    0.287919

        mean (ms)  stdev (ms)
nimble  17.354554    0.065831

              mean (ms)  stdev (ms)
nimble-multi  16.428471    0.104019
# densenet201 [1, 3, 224, 224]
         mean (ms)  stdev (ms)
pytorch  29.020667    0.231637

        mean (ms)  stdev (ms)
nimble   5.537937    0.004089

              mean (ms)  stdev (ms)
nimble-multi   5.572467    0.004977
# densenet201 [8, 3, 224, 224]
         mean (ms)  stdev (ms)
pytorch  31.046828    0.164185

        mean (ms)  stdev (ms)
nimble  24.178936    0.032238

              mean (ms)  stdev (ms)
nimble-multi  24.125336    0.060498
# mnasnet0_5 [1, 3, 224, 224]
         mean (ms)  stdev (ms)
pytorch   4.477023    0.025759

        mean (ms)  stdev (ms)
nimble   0.565598    0.002112

              mean (ms)  stdev (ms)
nimble-multi   5.572467    0.004977

# mnasnet0_75 [1, 3, 224, 224]
         mean (ms)  stdev (ms)
pytorch   4.557251    0.037832

        mean (ms)  stdev (ms)
nimble    0.68727    0.002274

              mean (ms)  stdev (ms)
nimble-multi   0.679038    0.002025

# mnasnet1_3 [1, 3, 224, 224]
         mean (ms)  stdev (ms)
pytorch   4.780402     0.02905

        mean (ms)  stdev (ms)
nimble   0.950962     0.00627

              mean (ms)  stdev (ms)
nimble-multi   0.893742     0.06838
# mnasnet1_3 [8, 3, 224, 224]
         mean (ms)  stdev (ms)
pytorch   6.076544    0.567386

        mean (ms)  stdev (ms)
nimble   4.953977    0.023374

              mean (ms)  stdev (ms)
nimble-multi   4.976105    0.025923

Getting the following error when building Nimble in a conda environment: FAILED: caffe2/CMakeFiles/torch_cuda.dir/utils/torch_cuda_generated_math_gpu.cu.o; subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 1.

[46/402] Building NVCC (Device) object...ils/torch_cuda_generated_math_gpu.cu.o
FAILED: caffe2/CMakeFiles/torch_cuda.dir/utils/torch_cuda_generated_math_gpu.cu.o 
cd /home/umair/Desktop/umair/nimble2/nimble/build/caffe2/CMakeFiles/torch_cuda.dir/utils && /home/umair/anaconda3/envs/nimble/bin/cmake -E make_directory /home/umair/Desktop/umair/nimble2/nimble/build/caffe2/CMakeFiles/torch_cuda.dir/utils/. && /home/umair/anaconda3/envs/nimble/bin/cmake -D verbose:BOOL=OFF -D build_configuration:STRING=Release -D generated_file:STRING=/home/umair/Desktop/umair/nimble2/nimble/build/caffe2/CMakeFiles/torch_cuda.dir/utils/./torch_cuda_generated_math_gpu.cu.o -D generated_cubin_file:STRING=/home/umair/Desktop/umair/nimble2/nimble/build/caffe2/CMakeFiles/torch_cuda.dir/utils/./torch_cuda_generated_math_gpu.cu.o.cubin.txt -P /home/umair/Desktop/umair/nimble2/nimble/build/caffe2/CMakeFiles/torch_cuda.dir/utils/torch_cuda_generated_math_gpu.cu.o.Release.cmake
/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(149): warning: the "__visibility__" attribute can only appear on functions and variables with external linkage

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(196): warning: the "__visibility__" attribute can only appear on functions and variables with external linkage

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(231): warning: the "__visibility__" attribute can only appear on functions and variables with external linkage

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(898): error: namespace "thrust" has no member "host_vector"

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(898): error: expected an expression

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(899): error: namespace "thrust" has no member "host_vector"

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(899): error: expected an expression

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(900): error: namespace "thrust" has no member "host_vector"

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(900): error: type name is not allowed

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(900): error: expected an expression

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(902): error: identifier "A_array" is undefined

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(903): error: identifier "B_array" is undefined

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(904): error: identifier "C_array" is undefined

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(907): error: identifier "A_array" is undefined

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(909): error: identifier "B_array" is undefined

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(910): error: identifier "C_array" is undefined

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(1749): warning: the "__visibility__" attribute can only appear on functions and variables with external linkage

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(2211): warning: the "__visibility__" attribute can only appear on functions and variables with external linkage

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(2258): warning: the "__visibility__" attribute can only appear on functions and variables with external linkage

/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu(2814): warning: the "__visibility__" attribute can only appear on functions and variables with external linkage

13 errors detected in the compilation of "/home/umair/Desktop/umair/nimble2/nimble/caffe2/utils/math_gpu.cu".
CMake Error at torch_cuda_generated_math_gpu.cu.o.Release.cmake:281 (message):
  Error generating file
  /home/umair/Desktop/umair/nimble2/nimble/build/caffe2/CMakeFiles/torch_cuda.dir/utils/./torch_cuda_generated_math_gpu.cu.o


[53/402] Building NVCC (Device) object...cuda_generated_elementwise_div_op.cu.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "setup.py", line 760, in <module>
    build_deps()
  File "setup.py", line 315, in build_deps
    cmake=cmake)
  File "/home/umair/Desktop/umair/nimble2/nimble/tools/build_pytorch_libs.py", line 62, in build_caffe2
    cmake.build(my_env)
  File "/home/umair/Desktop/umair/nimble2/nimble/tools/setup_helpers/cmake.py", line 345, in build
    self.run(build_args, my_env)
  File "/home/umair/Desktop/umair/nimble2/nimble/tools/setup_helpers/cmake.py", line 141, in run
    check_call(command, cwd=self.build_dir, env=env)
  File "/home/umair/anaconda3/envs/nimble/lib/python3.7/subprocess.py", line 363, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 1.

How to install torchvision with nimble

I am having trouble installing Nimble's torch together with torchvision. I have tried installing torchvision 0.8.0, 0.8.2, and 0.9.0, but every time I get a torch/torchvision compatibility error during the C++ compilation step saying that torch and torchvision are not compatible. Can you please explain how to install torchvision with Nimble?
