Comments (7)
from vllm.
command:
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# Serve Qwen1.5-72B-Chat with 8-way tensor parallelism;
# --enforce-eager disables CUDA graphs, and --disable-custom-all-reduce
# falls back to NCCL for all-reduce instead of vLLM's custom kernel.
python3 -m vllm.entrypoints.openai.api_server \
--model Qwen1.5-72B-Chat \
--tensor-parallel-size 8 \
--max-model-len 8192 \
--trust-remote-code \
--disable-custom-all-reduce \
--enable-prefix-caching \
--tokenizer-mode slow \
--enforce-eager \
--gpu-memory-utilization 0.9 \
--port 8861
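For reference, TPOT (time per output token) against a server launched this way can be measured with vLLM's own serving benchmark. A minimal sketch, assuming the benchmarks/benchmark_serving.py script from the vLLM repo is available; exact flag names and required dataset options vary between versions:

python3 benchmarks/benchmark_serving.py \
    --backend openai \
    --model Qwen1.5-72B-Chat \
    --port 8861 \
    --num-prompts 200 \
    --request-rate 4
# The printed report includes mean/median TPOT, the metric
# discussed in this thread.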
from vllm.
If you can bisect to find the commit that leads to the degradation, that would be helpful. Otherwise, it is very difficult to answer a generic report of a performance regression.
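A minimal sketch of how such a bisect could be automated with git bisect run; the known-good commit, the check script, and the 40 ms threshold below are placeholders, not values from this thread:

# Mark the range: current HEAD is slow, some earlier commit was fast.
git bisect start
git bisect bad HEAD
git bisect good <last-known-good-commit>    # placeholder

# check_tpot.sh (hypothetical helper) must exit 0 on a good commit and
# 1 on a bad one; exit code 125 tells bisect to skip the commit.
cat > check_tpot.sh <<'EOF'
#!/bin/bash
pip install -e . >/dev/null 2>&1 || exit 125   # skip unbuildable commits
tpot_ms=$(./run_benchmark.sh)   # placeholder: prints mean TPOT in ms
awk -v t="$tpot_ms" 'BEGIN { exit (t > 40) ? 1 : 0 }'
EOF
chmod +x check_tpot.sh
git bisect run ./check_tpot.sh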
from vllm.
vLLM seems to have been undergoing heavy refactoring recently; I'm not quite sure what's causing TPOT to be slow.
from vllm.
Most commits should be runnable since they pass the CI tests. It's not related to the refactoring; just bisect to find the commit that potentially leads to this regression.
from vllm.
I tested on an L20 GPU; I'm not sure whether that device is the same as the one used in CI.
from vllm.
With CUDA graph, TPOT goes from 35.7 ms to 36.7 ms; without CUDA graph, from 39 ms to 45 ms.
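For scale, that is roughly a 2.8% regression with CUDA graphs (36.7/35.7 ≈ 1.028) but about 15% without them (45/39 ≈ 1.154), so the eager-mode path appears to be the more affected one.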
from vllm.
Related Issues (20)
- [Bug]: HOT 2
- [Usage]: I use llama3. I found that one token is 'Ġor' in tokenizer.get_vocab(). But when I use vllm server, I got ' or' in response. HOT 1
- [Bug]: Command-R incorrect output contains `<EOS_TOKEN>` and seems to do text prediction rather than conversation
- [Misc]: LLM is responding with advertisement HOT 2
- [Bug]: vllm errors out on the latest NVIDIA driver 555.85 HOT 6
- [Feature]: Additional metrics to enable better autoscaling / load balancing of vLLM servers in Kubernetes HOT 6
- [Misc]: Understanding Batching Mechanism in Prefill and Decode Phases HOT 1
- [Installation]:
- [Feature]: Add num_requests_preempted metric HOT 1
- Running Vllm on ray cluster, logging stuck at loading HOT 3
- [Feature]: multi-steps model_runner? HOT 1
- [Bug]: Cannot build cpu docker image
- [Bug]: vllm.engine.async_llm_engine.AsyncEngineDeadError: Background loop has errored already. HOT 5
- [Usage]: not support for mistralai/Mistral-7B-Instruct-v0.3 HOT 3
- [Bug]: When load model weights, there are infinite loading HOT 6
- [Misc]: How to use guided decoding and regex as well? HOT 2
- [Feature]: Integration of transformers past_key_values into the vllm kvcache Function HOT 4
- [Bug]: The VRAM usage of calculating log_probs is not considered in profile run HOT 5
- [Bug]: Build/Install Issues with pip install -e . HOT 1
- [Performance]: A few performance-related questions.