Comments (5)
You can download it from https://github.com/vllm-project/vllm-nccl/releases/tag/v0.1.0. Then place it in /home/username/.config/vllm/nccl/cu12 and rename it to libnccl.so.2.18.1.
from vllm.
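A minimal sketch of the "place and rename" step in Python, assuming the library has already been downloaded from the release page (the downloaded filename below is a placeholder; check the actual asset name on the release):

```python
import pathlib
import shutil

# Placeholder: whatever filename the release asset was saved under locally.
downloaded = pathlib.Path("nccl-download.so")

# Directory vLLM expects, per the comment above.
target_dir = pathlib.Path.home() / ".config" / "vllm" / "nccl" / "cu12"
target_dir.mkdir(parents=True, exist_ok=True)

# Copy it into place under the exact name vLLM looks for.
shutil.copy(downloaded, target_dir / "libnccl.so.2.18.1")
```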
> Out of curiosity, why not depend on nvidia-nccl-cu12==2.18.1?

Because PyTorch already requires nvidia-nccl-cu12==2.19.
from vllm.
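To see the conflict concretely, you can compare the NCCL that PyTorch is linked against with whatever nvidia-nccl-cu12 wheel pip resolved; a quick check (the exact output format varies across torch versions):

```python
import torch
from importlib.metadata import PackageNotFoundError, version

# NCCL version torch itself reports.
print("torch:", torch.__version__)
print("torch NCCL:", torch.cuda.nccl.version())  # e.g. (2, 19, 3) on recent builds

# The nvidia-nccl-cu12 wheel installed in this environment, if any.
try:
    print("nvidia-nccl-cu12:", version("nvidia-nccl-cu12"))
except PackageNotFoundError:
    print("nvidia-nccl-cu12 wheel not installed")
```

Pinning nvidia-nccl-cu12==2.18.1 in vLLM would conflict with torch's ==2.19 requirement at pip resolution time, which is why the library is shipped out-of-band instead.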
Unless either NVIDIA/nccl#1234 or pypi/support#3792 is resolved, we have no choice but to ship libnccl.so this way. Sorry for the trouble. This is not what I want, either.
from vllm.
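For the curious, a specific libnccl.so can be loaded explicitly (independent of whatever the wheels provide) via ctypes; a minimal sketch, assuming the file from the first comment is in place and that CUDA's own shared libraries are resolvable (ncclGetVersion is part of the public NCCL C API):

```python
import ctypes
import pathlib

so_path = pathlib.Path.home() / ".config" / "vllm" / "nccl" / "cu12" / "libnccl.so.2.18.1"
nccl = ctypes.CDLL(str(so_path))

# NCCL encodes version 2.18.1 as 2*10000 + 18*100 + 1 = 21801.
ver = ctypes.c_int()
nccl.ncclGetVersion(ctypes.byref(ver))
print("loaded NCCL version code:", ver.value)
```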
Fair enough, thanks guys 🙏 fingers crossed one of those comes through!
from vllm.
Out of curiosity, why not depend on nvidia-nccl-cu12==2.18.1?
from vllm.
Related Issues (20)
- [Bug]: The tail problem HOT 1
- [New Model]: LLaVA-NeXT-Video support
- [Usage]: extractive question answering using VLLM
- [Feature]: Triton GPTQ
- [Feature]: How to Enable VLLM to Work with PreTrainedModel Objects in my MOE-LoRA? THX
- [Bug]: nsys cannot track the cuda kernel called by the process except rank 0 HOT 2
- [Usage]: Do we have any tutorials for using vllm with tensorrt-LLM? HOT 2
- [Usage]: how should I do data parallelism using vLLM?
- [Bug]: torch.cuda.OutOfMemoryError: CUDA out of memory when Handle inference requests HOT 1
- [Misc]: Should inference with temperature 0 generate the same results for a lora adapter and equivalent merged model? HOT 5
- [Bug] [spec decode] [flash_attn]: CUDA illegal memory access when calling flash_attn_cuda.fwd_kvcache
- [Bug]: The openai deployment model takes twice as long to deploy as fastapi's approach to offline inference. HOT 1
- [Feature]: Linear adapter support for Mixtral
- [Feature]: VLLM support for function calling in Mistral-7B-Instruct-v0.3 HOT 1
- [Bug]: Issue with Token Processing Efficiency and Key-Value Cache Utilization in AsyncLLMEngine
- [Bug]: WSL2(Including Docker) 2 GPU problem --tensor-parallel-size 2 HOT 1
- [Bug]: Unable to Use Prefix Caching in AsyncLLMEngine HOT 10
- [Performance]: What can we learn from OctoAI HOT 7
- [Bug]: Model Launch Hangs with 16+ Ranks in vLLM HOT 2
- [Usage]: Prefix caching in VLLM HOT 1