Comments (8)
My friends, how can I fix this?
Insert the following code at line 106 of vllm/config.py, before self.hf_config = get_config(self.model, trust_remote_code, revision, code_revision):

if VLLM_USE_MODELSCOPE:
    # Download the model from the ModelScope hub if it is not a local path.
    from modelscope.hub.snapshot_download import snapshot_download
    if not os.path.exists(model):
        model_path = snapshot_download(model_id=model, revision=revision)
    else:
        model_path = model
    self.model = model_path
    self.download_dir = model_path
    self.tokenizer = model_path
Remember, this is not the best way, just a temporary workaround.
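The effect of the workaround is to redirect every config field that would otherwise trigger a Hugging Face download to the local snapshot path. A minimal stand-in sketch of that logic (the `ModelConfig` class and `apply_modelscope_path` helper here are hypothetical simplifications, not vLLM's actual API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelConfig:
    # Hypothetical stand-in for vLLM's config object, stdlib only.
    model: str
    download_dir: Optional[str] = None
    tokenizer: Optional[str] = None

def apply_modelscope_path(cfg: ModelConfig, local_path: str) -> ModelConfig:
    # Mirror the workaround: point every field that could trigger a
    # Hugging Face download at the local ModelScope snapshot instead.
    cfg.model = local_path
    cfg.download_dir = local_path
    cfg.tokenizer = local_path
    return cfg
```

If any of these three fields still held the hub model id, the later `get_config`/tokenizer loading paths would fall back to huggingface.co, which is exactly the behavior the workaround is trying to avoid.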
Sorry, why would it never work? If the env var is set to VLLM_USE_MODELSCOPE=true, then the statement would evaluate to true?
Ah, I reread the code and found that my earlier debugging conclusion was wrong. What I actually hit is this: when I run the Docker image vllm/vllm-openai:v0.4.1 and pass the env var with --env "VLLM_USE_MODELSCOPE=True", vLLM still visits huggingface.co to download models. It seems vLLM does not take this env var into account.
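One thing worth ruling out (this is a hedged illustration of a common pattern, not a quote of vLLM's actual source): flags like this are often computed once at module import time with a string comparison, so the comparison must be case-insensitive for "True" passed via `--env` to be accepted at all:

```python
import os

def compute_flag(environ) -> bool:
    # Common feature-flag pattern: evaluated once at module import time.
    # The .lower() makes both "true" and "True" count as enabled.
    return environ.get("VLLM_USE_MODELSCOPE", "False").lower() == "true"

# Typically computed at module scope, before any config object exists.
VLLM_USE_MODELSCOPE = compute_flag(os.environ)
```

If a check like this were missing the `.lower()`, or if the code path simply never consulted the flag (as suspected below), setting the variable in Docker would have no visible effect.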
My temporary workaround is adding the original snippet back to vllm/config.py at line 106:

...
if VLLM_USE_MODELSCOPE:
    # download model from ModelScope hub,
    # lazy import so that modelscope is not required for normal use.
    # pylint: disable=C.
    from modelscope.hub.snapshot_download import snapshot_download
    if not os.path.exists(model):
        model_path = snapshot_download(model_id=model, revision=revision)
    else:
        model_path = model
...
Can you send a PR for what worked for you?
Sending a PR is easy, but my workaround reverts some changes from a refactor that I think is the cause. I need to read through the whole refactored code to confirm the best fix.
Related PR: #4097
vllm/config.py, line 107:

self.hf_config = get_config(self.model, trust_remote_code, revision,
                            code_revision)

calls vllm/transformers_utils/config.py, line 23:

config = AutoConfig.from_pretrained(
    model,
    trust_remote_code=trust_remote_code,
    revision=revision,
    code_revision=code_revision)

The code above does not check the environment variable VLLM_USE_MODELSCOPE, so it downloads the config file from huggingface.co by default. I have no clue how to fix it elegantly right now.
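One possible direction (a hedged sketch only; the helper name `resolve_model_path` and its placement are my assumptions, not vLLM's actual fix) is to resolve the model id to a local path before `AutoConfig.from_pretrained` ever runs, so huggingface.co is never contacted when the ModelScope flag is set:

```python
import os

def resolve_model_path(model: str, revision=None) -> str:
    # Hypothetical helper: when VLLM_USE_MODELSCOPE is enabled and the
    # model is not already a local directory, fetch the repo from the
    # ModelScope hub and return its local path. Otherwise return the
    # name unchanged so AutoConfig downloads from Hugging Face as usual.
    if os.environ.get("VLLM_USE_MODELSCOPE", "false").lower() == "true":
        if not os.path.exists(model):
            # Lazy import so modelscope stays an optional dependency.
            from modelscope.hub.snapshot_download import snapshot_download
            return snapshot_download(model_id=model, revision=revision)
    return model
```

Calling this once at the top of the config-loading path, and passing the returned path to `AutoConfig.from_pretrained`, would centralize the flag check instead of scattering it across `config.py` and `transformers_utils/config.py`.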