Comments (3)
It needs to run on a specific version of sglang; see the sglang repo, where there is a WIP PR for it.
from llava-next.
Their fork of sglang works, but the performance is pretty abysmal: I am getting about 1 caption/s on 2x 3090s with the LLaMA 3 8B model. We desperately need quantized versions of the models.
from llava-next.
The announcement blog post indicates inference can be done with sglang, but attempting to load the 7B model with the sglang backend:
python -m sglang.launch_server --model-path ~/models/lmms-lab_LLaVA-NeXT-Video-7B-DPO --port 30000
fails with this KeyError:
/home/user/sglang/venv/lib/python3.11/site-packages/transformers/models/llava/configuration_llava.py:103: FutureWarning: The `vocab_size` argument is deprecated and will be removed in v4.42, since it can be inferred from the `text_config`. Passing this argument has no effect
  warnings.warn(
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
/home/user/sglang/venv/lib/python3.11/site-packages/transformers/models/llava/configuration_llava.py:143: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
  warnings.warn(
Rank 0: load weight begin.
/home/user/sglang/venv/lib/python3.11/site-packages/transformers/models/llava/configuration_llava.py:143: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
  warnings.warn(
/home/user/sglang/venv/lib/python3.11/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
/home/user/sglang/venv/lib/python3.11/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Process Process-1:
router init state: Traceback (most recent call last):
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/managers/router/manager.py", line 68, in start_router_process
    model_client = ModelRpcClient(server_args, port_args)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/managers/router/model_rpc.py", line 619, in __init__
    self.model_server.exposed_init_model(0, server_args, port_args)
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/managers/router/model_rpc.py", line 70, in exposed_init_model
    self.model_runner = ModelRunner(
                        ^^^^^^^^^^^^
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/managers/router/model_runner.py", line 287, in __init__
    self.load_model()
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/managers/router/model_runner.py", line 326, in load_model
    model.load_weights(
  File "/home/user/sglang/venv/lib/python3.11/site-packages/sglang/srt/models/llava.py", line 285, in load_weights
    param = params_dict[name]
            ~~~~~~~~~~~^^^^^^
KeyError: 'model.vision_resampler.mm_projector.0.bias'
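The traceback shows sglang's `load_weights` indexing its parameter dict with each checkpoint tensor name, so a checkpoint key the model class does not declare raises KeyError. A toy sketch of that failure mode (the parameter names below are illustrative, not the real sglang module lists):

```python
# Minimal illustration of the KeyError above: the loader walks the checkpoint
# and does `param = params_dict[name]`, so any checkpoint key absent from the
# model's registered parameters blows up. Names here are hypothetical.
model_params = {"model.mm_projector.0.bias": None}  # what the model class registers
checkpoint = {"model.vision_resampler.mm_projector.0.bias": None}  # video checkpoint layout

def load_weights(params_dict, weights):
    """Copy matching tensors; collect checkpoint keys with no model parameter."""
    missing = []
    for name, tensor in weights.items():
        if name in params_dict:
            params_dict[name] = tensor
        else:
            missing.append(name)  # the real loader raises KeyError at this point
    return missing

missing = load_weights(model_params, checkpoint)
print(missing)  # the stray vision_resampler key is what the server trips on
```

This is why the error disappears once the sglang model definition knows about the `vision_resampler` weights, as in the merged video-support branch mentioned below.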
Hi, video support for sglang has been merged into the main branch!
see: https://github.com/sgl-project/sglang/blob/main/examples/usage/llava_video/srt_example_llava_v.sh
from llava-next.
Related Issues (20)
- output of the demo code HOT 1
- videos of LLaVA-NeXT-interleave HOT 1
- When will mm_use_im_start_end be implemented in pre-training?
- LLaVA-NeXT-Interleave Training Details HOT 3
- how to get results? HOT 1
- Do we have some inference accelerate method for new llava-next-video models? HOT 1
- Eval results HOT 6
- How many A100s used for training? HOT 1
- Is LLaVA-NeXT-interleave 7B model available? HOT 6
- Question about M4-Instruct datasets HOT 3
- Question regarding multi image inference - import vs demo HOT 3
- where is python3 llavavid/eval/eval_activitynet_qa.py? HOT 2
- question about the demo implementation HOT 2
- When will the training code be available? HOT 7
- Training dataset
- Requirement File HOT 3
- Eval Results HOT 5
- Any plans to support vLLM?
- Can we add preprocessor_config.json for llava-next-interleave-qwen-7b model on Huggingface? HOT 1
- Chinese OCR Fine-tuning