qwenlm / qwen-vl
The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud.
License: Other
Sample code:
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from transformers.generation import GenerationConfig
import torch
torch.manual_seed(1234)
model_name = "Qwen/Qwen-VL-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    trust_remote_code=True,
    bf16=True,
    quantization_config=quantization_config,
).eval()
Expected behavior: loading should not raise any error.
Can this model be used for image-text retrieval, similar to CLIP?
How do I call the backbone to extract features? Is there any reference code?
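The repo doesn't document a retrieval API, but based on the module path visible in the tracebacks later in this thread (model.transformer.visual, whose encode() takes a list of image paths), a hedged sketch of pulling image features from the vision tower might look like this; the pooling step is purely an assumption:

import torch

# Hypothetical feature extraction via the ViT + adapter vision tower.
# encode() preprocesses the images itself; per the config quoted in this
# thread, it should yield 256 query embeddings of size 4096 per image.
with torch.no_grad():
    feats = model.transformer.visual.encode(['demo.jpeg'])  # assumed shape [1, 256, 4096]
    image_vec = feats.mean(dim=1)  # naive mean pooling into one vector (assumption)

Note that, unlike CLIP, Qwen-VL has no contrastive text tower, so there is no ready-made text embedding space to match these features against.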
model = AutoModelForCausalLM.from_pretrained(
    args.checkpoint_path,
    device_map=device_map,
    trust_remote_code=True,
    resume_download=True,
).eval()
This code hangs with no output at all, even though checkpoint_path has been checked repeatedly and is correct.
Expected behavior: the model loads successfully.
- OS: Windows 11
- Python: 3.10.11
- Transformers:
- PyTorch: 2.0.1
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.8
Please add a tutorial on fine-tuning with a custom dataset.
Hi,
Can I check how to get the normalisation range so that I can re-project the drawn box coordinates myself on the same image? Or is there any way to access the bbox-drawing function?
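For reference, the Qwen-VL paper states that box coordinates are normalized to the range [0, 1000); assuming that holds for chat outputs too, a minimal sketch for re-projecting the boxes yourself:

import re
from PIL import Image, ImageDraw

def draw_boxes(response: str, image_path: str, out_path: str = "boxed.jpg"):
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    draw = ImageDraw.Draw(img)
    # Match every <box>(x1,y1),(x2,y2)</box> span in the model response.
    for x1, y1, x2, y2 in re.findall(r"\((\d+),(\d+)\),\((\d+),(\d+)\)", response):
        # Rescale from the [0, 1000) normalized grid to pixel coordinates.
        box = (int(x1) * w / 1000, int(y1) * h / 1000,
               int(x2) * w / 1000, int(y2) * h / 1000)
        draw.rectangle(box, outline="red", width=3)
    img.save(out_path)

The built-in equivalent is tokenizer.draw_bbox_on_latest_picture(response, history), as used in the sample code further down this thread.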
Error:
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
After fully setting up the SFT environment, I ran the ModelScope code directly (no changes other than a few hub parameters).
- OS: CentOS
- Python: 3.10
- Transformers: 4.32.1
- PyTorch: 2.0.1
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.7
Any advice would be appreciated. Thanks!
Evaluation of TouchStone
Can you provide the automated evaluation scripts, or more details about them?
Running the following code with bf16 enabled and device_map="auto":
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch
torch.manual_seed(1234)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
# Enable bf16 precision; recommended on A100, H100, RTX 3060, RTX 3070, etc. to save GPU memory
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
query = tokenizer.from_list_format([
    {'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},  # Either a local path or a URL
    {'text': '这是什么?'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
response, history = model.chat(tokenizer, '框出图中击掌的位置', history=history)
print(response)
image = tokenizer.draw_bbox_on_latest_picture(response, history)
if image:
    image.save('1.jpg')
else:
    print("no box")
This produces the following error:
(venv) PS D:\Python\Qwen-VL> python .\debug.py
Loading checkpoint shards: 100%|████████████| 10/10 [00:14<00:00, 1.47s/it]
Traceback (most recent call last):
File "D:\Python\Qwen-VL\debug.py", line 16, in <module>
response, history = model.chat(tokenizer, query=query, history=None)
File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\modeling_qwen.py", line 918, in chat
outputs = self.generate(
File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\modeling_qwen.py", line 1031, in generate
return super().generate(
File "D:\Python\Qwen-VL\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Python\Qwen-VL\venv\lib\site-packages\transformers\generation\utils.py", line 1642, in generate
return self.sample(
File "D:\Python\Qwen-VL\venv\lib\site-packages\transformers\generation\utils.py", line 2724, in sample
outputs = self(
File "D:\Python\Qwen-VL\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\modeling_qwen.py", line 830, in forward
transformer_outputs = self.transformer(
File "D:\Python\Qwen-VL\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\modeling_qwen.py", line 570, in forward
images = self.visual.encode(images)
File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\visual.py", line 426, in encode
return self(images)
File "D:\Python\Qwen-VL\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\visual.py", line 398, in forward
x = self.conv1(x) # shape = [*, width, grid, grid]
File "D:\Python\Qwen-VL\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\hooks.py", line 290, in pre_forward
return send_to_device(args, self.execution_device), send_to_device(
File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\utils\operations.py", line 151, in send_to_device
return honor_type(
File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\utils\operations.py", line 83, in honor_type
return type(obj)(generator)
File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\utils\operations.py", line 152, in <genexpr>
tensor, (send_to_device(t, device, non_blocking=non_blocking, skip_keys=skip_keys) for t in tensor)
File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\utils\operations.py", line 167, in send_to_device
return tensor.to(device, non_blocking=non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
- OS: Windows 11
- Python: 3.10.9
- Transformers: tried both 4.31.0 and 4.32.0
- PyTorch: 2.0.1+cu118
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.8
- GPU: RTX 4080
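A hedged workaround rather than a confirmed fix: on a single-GPU machine, skipping accelerate's "auto" sharding and pinning the whole model to one device avoids weights being left on the meta device:

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat",
    device_map="cuda",   # one explicit device instead of "auto"
    trust_remote_code=True,
    bf16=True,
).eval()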
My GPU server cannot connect to the internet, so I downloaded the model with snapshot_download:
from huggingface_hub import snapshot_download
snapshot_download(repo_id="Qwen/Qwen-VL-Chat")
The model is stored in /root/.cache/huggingface/hub/models--Qwen--Qwen-VL-Chat, which contains the blobs/refs/snapshots folders.
When I load the model to run the demo, an error occurs:
tokenizer = AutoTokenizer.from_pretrained("/root/.cache/huggingface/hub",repo_id="Qwen/Qwen-VL-Chat")
/root/.cache/huggingface/hub/ does not appear to have a file named config.json. Checkout 'https://huggingface.co//root/.cache/huggingface/hub//None' for available files.
I also tried this folder: /root/.cache/huggingface/hub/models--Qwen--Qwen-VL-Chat/snapshots/0eecbfae27b784c8d5e69b1d497d3589874565a8, which gives:
ValueError: Tokenizer class QWenTokenizer does not exist or is not currently imported.
So how should I load a model downloaded with snapshot_download? Thank you!
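A sketch of what should work, assuming the snapshot downloaded completely: snapshot_download returns the resolved snapshot directory, and the custom QWenTokenizer class only resolves when trust_remote_code=True is passed:

from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# Returns e.g. .../models--Qwen--Qwen-VL-Chat/snapshots/<revision>
local_dir = snapshot_download(repo_id="Qwen/Qwen-VL-Chat")

# trust_remote_code=True lets transformers import the QWenTokenizer /
# QWenLMHeadModel classes shipped inside the repo.
tokenizer = AutoTokenizer.from_pretrained(local_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(local_dir, device_map="cuda", trust_remote_code=True).eval()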
Is anyone working on a recursive-folder batch script for generating Stable Diffusion captions? Speed needs to be under 2 seconds per image at a maximum of 50 tokens, in English phrases.
Where can we find the list of generation parameters?
For example: max_length, min_length, temperature, and any others.
It would be great if you could add the parameter list and descriptions to the README or somewhere similar.
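Until they are documented, these knobs can be inspected and overridden on the model's GenerationConfig; a sketch using the standard transformers API (not Qwen-specific documentation):

from transformers.generation import GenerationConfig

model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
print(model.generation_config)                # lists max_new_tokens, top_p, top_k, ...
model.generation_config.max_new_tokens = 256  # override before calling model.chat
model.generation_config.temperature = 0.7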
How can I run the program in parallel across multiple GPUs?
After changing device_map = "cuda" to device_map = "auto", the program does use multiple cards, but it fails with:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:3!
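One way to investigate (a sketch, not a confirmed fix; the module name transformer.visual comes from the tracebacks in this thread): check how accelerate split the model, and if the vision tower ended up on a different card than the input embeddings, pass a manual device_map that keeps them together:

print(model.hf_device_map)  # shows which GPU each submodule landed on

# Hypothetical manual placement: pin the vision tower next to the input side.
device_map = dict(model.hf_device_map)
device_map["transformer.visual"] = 0
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", device_map=device_map, trust_remote_code=True, bf16=True
).eval()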
Could the GPT-4 API support image uploading? Could you share some details about it?
Are there plans to support multi-GPU deployment? It would be fantastic if so!
Can you provide LoRA code for fine-tuning with image and text as input and text as output?
Custom data modelling.
Thanks for your work!
I uploaded a PNG image and asked the model to box an element in it; the output image is entirely black.
Expected behavior: a normal image with a normal box drawn on it.
The cause: the channel data of a PNG loaded via PIL was treated as RGB data. The PNG channels here are floats in 0-1, so force-casting them to int RGB produces an all-black image.
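A minimal sketch of the usual client-side fix, assuming PIL: convert the PNG to 8-bit RGB before handing it to the model, so no float channels get cast to int:

from PIL import Image

img = Image.open("input.png").convert("RGB")  # yields uint8 0-255 RGB, drops alpha
img.save("input_rgb.jpg")                     # pass this path to the model instead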
Is there a way to update the dialogue state?
Hi, thanks for your work.
The download links (https://ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com/QwenVL/xxxx) are not open to the public.
I hit this issue when downloading the evaluation annotation files:
--2023-08-27 08:37:44--  https://ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com/QwenVL/evaluation/vizwiz/vizwiz_val.jsonl
Resolving ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com (ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com)... 39.101.35.33
Connecting to ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com (ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com)|39.101.35.33|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2023-08-27 08:37:44 ERROR 403: Forbidden.
Could you change the permission for the download links?
wget https://ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com/QwenVL/evaluation/nocaps/nocaps_val.json
When computing the ANLS metric on the DocVQA dataset, evaluate_vqa.py executes python infographicsvqa_eval.py, but this file cannot be found anywhere.
Thank you for such wonderful open-source work.
Could you show me a few details about the pre-training data cleaning described in Appendix A.1?
Why is Qwen-VL-Chat giving different outputs for the same query/prompt?
I get different results on every run.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
torch.manual_seed(1234)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cuda", bf16=True, trust_remote_code=True).eval()
model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
query = tokenizer.from_list_format([
    {'image': im2},  # Either a local path or a URL
    {'text': 'Compose a detailed account of the image'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
output1:
The image depicts a medical setting with two surgeons performing surgery in an operating room. Both surgeons are focused on a monitor, which displays an image of the patient's internal organs during the procedure. The monitor is located towards the right side of the room, with the surgeons standing on the left side.
The surgeons are wearing scrubs and appear to be in the middle of the operation. One of the surgeons is standing closer to the front of the room, while the other is positioned towards the back. A bed can be seen in the room, likely where the patient is lying during the surgery.
output2:
The image depicts a medical setting with two surgeons performing surgery in an operating room. Both surgeons are focused on a monitor, which displays an image of the area they are working on. The monitor is located towards the right side of the room, and the surgeons are positioned on the left side.
The surgeons are wearing surgical gowns and masks, indicating their professional attire for the procedure. One of the surgeons is holding a tool, likely a surgical instrument, while the other one is also equipped with a tool, possibly a scalpel. The operating room is equipped with various medical devices, including a monitor, a keyboard, and a mouse. A chair can be seen in the room, possibly for the patient to sit on during the surgery.
output3:
The image depicts a medical setting with two surgeons performing surgery in an operating room. Both surgeons are focused on a monitor, which displays an image of the patient's internal organs during the procedure. The monitor is located in the center of the room, providing the surgeons with real-time information to assist them in their work.
The surgeons are wearing blue gowns and masks, indicating their professional attire for the surgery. One of the surgeons is standing closer to the monitor, while the other is located more towards the right side of the room. The operating room is equipped with various medical devices, including a bed for the patient and a clock on the wall.
Each run should give the same response.
To reproduce, follow the code given above.
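For context, the generation_config.json quoted later in this thread has "do_sample": true, so a single manual_seed at import time does not make successive chat calls identical. Two hedged options:

import torch

# Option 1: greedy decoding, fully deterministic.
model.generation_config.do_sample = False

# Option 2: keep sampling, but re-seed immediately before every call.
torch.manual_seed(1234)
response, history = model.chat(tokenizer, query=query, history=None)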
- OS: AWS SageMaker (Amazon Linux 2, Jupyter Lab 3 (notebook-al2-v2))
- Python: 3.10
- Transformers: 4.31.0
- PyTorch: 2.0.1
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.8
I have also tried different images and different prompts; the issue persists there as well.
Are there plans to support fine-tuning?
The original paper describes the GRIT processing details: "We use the greedy algorithm to clean the caption to make sure each image contains the most box labels with no recursive box labels."
What exactly does this mean? Could you provide some examples to explain this operation?
Thanks.
Is TouchStone a VQA benchmark or a multi-turn dialogue benchmark? Will it be open-sourced?
Thank you for the outstanding work. I would like to understand the reasons behind the exceptional performance of the model. Do you think it's related to the resolution? The resolution of mplug-doc is 1024, while yours is only 448, yet you achieved better performance than mplug-doc on docvqa. Additionally, I noticed that your adapter uses a query quantity of 256. Is this query quantity also a crucial factor?
I look forward to your response!
I notice that only 3.5M GRIT captions are used to train the grounding-related tasks, yet the actual number of captions in GRIT is nearly 20M.
Could the Qwen developers describe the filtering rules applied to the GRIT captions?
Thanks.
After the model is loaded in the web UI, it answers entirely in English.
Expected behavior: answers in Chinese.
generation_config.json
{
"chat_format":"chatml",
"do_sample": true,
"eos_token_id": 151643,
"max_new_tokens": 512,
"max_window_size": 6144,
"pad_token_id": 151643,
"top_k": 0,
"top_p": 0.5,
"transformers_version": "4.31.0"
}
config.json
{
"_name_or_path": "./",
"architectures": [
"QWenLMHeadModel"
],
"attn_dropout_prob": 0.0,
"auto_map": {
"AutoConfig": "configuration_qwen.QWenConfig",
"AutoModelForCausalLM": "modeling_qwen.QWenLMHeadModel"
},
"bf16": false,
"emb_dropout_prob": 0.0,
"fp16": false,
"fp32": false,
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 22016,
"kv_channels": 128,
"layer_norm_epsilon": 1e-06,
"max_position_embeddings": 8192,
"model_type": "qwen",
"no_bias": true,
"num_attention_heads": 32,
"num_hidden_layers": 32,
"onnx_safe": null,
"rotary_emb_base": 10000,
"rotary_pct": 1.0,
"scale_attn_weights": true,
"seq_length": 2048,
"tie_word_embeddings": false,
"tokenizer_type": "QWenTokenizer",
"torch_dtype": "bfloat16",
"transformers_version": "4.31.0",
"use_cache": true,
"use_dynamic_ntk": true,
"use_flash_attn": false,
"use_logn_attn": true,
"visual": {
"heads": 16,
"image_size": 448,
"image_start_id": 151857,
"layers": 48,
"mlp_ratio": 4.9231,
"output_dim": 4096,
"patch_size": 14,
"width": 1664
},
"vocab_size": 151936
}
- OS: Ubuntu
- Python: 3.9
- Transformers: 4.31.0
- PyTorch: 2.0.1
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.7
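A hedged prompt-level workaround, not a config change: explicitly requesting Chinese in the query usually steers the reply language:

query = tokenizer.from_list_format([
    {'image': 'demo.jpeg'},             # hypothetical local image path
    {'text': '请用中文回答：这是什么？'},   # explicitly ask for a Chinese answer
])
response, history = model.chat(tokenizer, query=query, history=None)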
Cause: the code loads the font via try_to_load_from_cache("Qwen/Qwen-VL-Chat", "SimSun.ttf"). This only works when the model was downloaded automatically through the HF framework; it fails when the repo was cloned manually.
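A sketch of one workaround, assuming SimSun.ttf is present in the Qwen/Qwen-VL-Chat hub repo: download just the font into the HF cache, which is exactly where try_to_load_from_cache looks:

from huggingface_hub import hf_hub_download

# Populates the local HF cache so that
# try_to_load_from_cache("Qwen/Qwen-VL-Chat", "SimSun.ttf") resolves,
# even though the model weights themselves were cloned manually.
hf_hub_download(repo_id="Qwen/Qwen-VL-Chat", filename="SimSun.ttf")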
In Qwen-VL-Chat, the model takes a URL or a local path as input to load images. I tried to load images with a pre-defined dataset and build inputs_embeds outside the forward function. When I passed inputs_embeds into the generate function, the following line (the first line in the forward function of QWenModel) raised an error:

if past_key_values is None and torch.any(input_ids == self.config.visual['image_start_id']):

Here input_ids is None, so the comparison evaluates to False (a plain bool). torch.any needs a tensor as input rather than a bool, and that raises the error. I think a quick check on whether input_ids is None could fix it:

if past_key_values is None and input_ids is not None and torch.any(input_ids == self.config.visual['image_start_id']):
I have two questions I'd like to ask:
How can I fine-tune for the downstream task of image-and-text content creation? Could you open-source a fine-tuning tutorial? And should I use Qwen-VL or Qwen-VL-Chat?
(See title.)
This would help the community grow.
Will the training data be open-sourced?
Could streaming output be supported?
It would improve the user experience.
Thank you for such excellent open-source work! The proportion of samples in some parts of the dataset differs significantly. Could you please explain how the dataset weights are set in Pretraining and Multi-task Pretraining?
The data preparation and processing instructions on ModelScope are not explained very clearly.
Is it possible to include it in the documentation?
OS: Ubuntu 20.04
Python: 3.8
Transformers: 4.31.0
PyTorch: 2.0.1
CUDA: 11.4
ValueError: Unrecognized configuration class <class 'transformers_modules.Qwen.Qwen-VL-Chat.a3d284e60f9c8298ed4c7fe6683f6dc1acff4c6c.configuration_qwen.QWenConfig'> to build an AutoTokenizer.
Model type should be one of AlbertConfig, AlignConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, Blip2Config, BloomConfig, BridgeTowerConfig, CamembertConfig, CanineConfig, ChineseCLIPConfig, ClapConfig, CLIPConfig, CLIPSegConfig, CodeGenConfig, ConvBertConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, DPRConfig, ElectraConfig, ErnieConfig, ErnieMConfig, EsmConfig, FlaubertConfig, FNetConfig, FSMTConfig, FunnelConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GPTSanJapaneseConfig, GroupViTConfig, HubertConfig, IBertConfig, InstructBlipConfig, JukeboxConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LiltConfig, LlamaConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MgpstrConfig, MobileBertConfig, MPNetConfig, MraConfig, MT5Config, MvpConfig, NezhaConfig, NllbMoeConfig, NystromformerConfig, OneFormerConfig, OpenAIGPTConfig, OPTConfig, OwlViTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, Pix2StructConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, RagConfig, RealmConfig, ReformerConfig, RemBertConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2TextConfig, Speech2Text2Config, SpeechT5Config, SplinterConfig, SqueezeBertConfig, SwitchTransformersConfig, T5Config, TapasConfig, TransfoXLConfig, UMT5Config, ViltConfig, VisualBertConfig, Wav2Vec2Config, Wav2Vec2ConformerConfig, WhisperConfig, XCLIPConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, YosoConfig.
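This error is what transformers raises when the custom tokenizer class can't be imported; most likely trust_remote_code=True was omitted, which the sample code in this repo always passes:

from transformers import AutoTokenizer

# trust_remote_code=True lets transformers import the QWenTokenizer class
# defined inside the Qwen/Qwen-VL-Chat repository.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)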
When the model is given an image and instructed to answer in a fixed format, its reply does not strictly follow the required answers. For example:
Observed behavior:
Input:
an image
text: 这张图中人物是男性吗?如果是,请直接回答Yes,如果不是,请直接回答No,无法确定请直接回答无法确定,不允许出现其它回答 (Is the person in this picture male? If yes, answer exactly "Yes"; if not, answer exactly "No"; if it cannot be determined, answer exactly "无法确定". No other answers are allowed.)
Output:
Model reply: 不是。("No, it isn't.")
Problem: the reply is something other than Yes/No/无法确定.
This mainly occurs in web_demo_mm.py. Is this a prompt-engineering issue, a model issue, or a post-processing issue in web_demo_mm?
Does the model support video? Why are there video scores on SEED-Bench?
Thanks for the project ❤️ I made a colab. 🥳 I hope you like it. https://github.com/camenduru/Qwen-VL-Chat-colab
Hello, I appreciate your prompt response in providing evaluation data. Upon reviewing it, I've noticed that certain datasets, such as GQA and DocVQA, have not been released yet. Is there a planned schedule for releasing the remaining evaluation data? Thank you.
Some evaluation data is missing from the annotation files in eval_mm/EVALUATION.md.
Congratulations!
I notice that the Kosmos-2 numbers in this table are not a fair comparison: 66.7 on Flickr30k and 45.6 on VQAv2-dev were obtained from the model without instruction tuning.
We have updated the final performance on Flickr30k and VQAv2 on our GitHub page. Specifically, Kosmos-2 achieves 80.5 on Flickr30k and 51.1 on VQAv2 in the zero-shot setting.
Sorry for the confusion; could you update the numbers accordingly?
Thanks!
It would be awesome if you could upload the model to replicate.com so it is more accessible in applications.
Replicate is a popular site for model deployment; hosting the model there would let us call it through an API. https://replicate.com/
Is the LLM part the same as Qwen-7B-Chat?
(See title.) Also, could you share something about how the data was filtered in the first stage?
I want to test the open-vocabulary detection task with the Qwen-VL model, but I can't get it to output all detection boxes for a specific category via instructions like 'all shoes' or 'all clothes'.
How can I make it output all detection boxes for a given category?
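A hedged suggestion rather than a documented feature: phrase the query as a grounding instruction over all instances and parse every box span out of the reply; nothing guarantees one box per instance, but extracting whatever boxes are emitted is straightforward:

import re

query = tokenizer.from_list_format([
    {'image': 'shop.jpg'},          # hypothetical image path
    {'text': '框出图中所有的鞋子'},    # "box all the shoes in the image"
])
response, _ = model.chat(tokenizer, query=query, history=None)

# Collect every (x1,y1),(x2,y2) pair; coordinates are normalized to [0, 1000).
boxes = re.findall(r"\((\d+),(\d+)\),\((\d+),(\d+)\)", response)
print(boxes)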
- OS: Ubuntu
- Python: 3.9
- Transformers: 4.31.0
- PyTorch: 1.12.0
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.4
Will you open-source the training code?