Comments (6)
Hi, hardware requirements for inference are listed here: https://github.com/QwenLM/Qwen-7B#quantization

| Precision | MMLU | Memory |
|---|---|---|
| BF16 | 56.7 | 16.2G |
| Int8 | 52.8 | 10.1G |
| NF4 | 48.9 | 7.4G |

We will publish the hardware requirements for training in a later update.
from qwen.
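The Memory column above mostly reflects the size of the weights themselves. A quick back-of-the-envelope check (a sketch only, assuming roughly 7e9 parameters and ignoring activations, KV cache, and framework overhead) shows why fp32 loading lands near the ~31 GB figure reported later in this thread:

```python
# Rough lower bound on GPU memory for the model weights alone.
# Assumes ~7e9 parameters; activations, KV cache, and framework
# overhead come on top, which is why observed usage is higher.

def weight_mem_gib(n_params: float, bits_per_param: int) -> float:
    """Weight memory in GiB for a given parameter count and precision."""
    return n_params * bits_per_param / 8 / 2**30

N = 7e9  # approximate parameter count of Qwen-7B

for name, bits in [("fp32", 32), ("bf16", 16), ("int8", 8), ("nf4", 4)]:
    print(f"{name}: {weight_mem_gib(N, bits):.1f} GiB")
```

fp32 weights alone come out around 26 GiB, consistent with the ~31 GB fp32 usage once runtime overhead is added.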
I'm on an A40 with 48 GB of VRAM. Using the official default loading precision (fp32), GPU memory usage is around 31 GB.
OK, thanks.
OK. I don't have enough compute; loading with int8 quantization errored out on the first day, so today I'll try fp16 instead.
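For switching between precisions, a minimal sketch of the `from_pretrained` keyword arguments involved (an assumption-laden sketch: string dtype names require a reasonably recent `transformers` release, and `load_in_8bit` additionally requires the `bitsandbytes` package):

```python
# Map a precision name to transformers.from_pretrained kwargs.
# Sketch only: "load_in_8bit" needs bitsandbytes installed, and
# string dtype names assume a recent transformers version.

def load_kwargs(precision: str) -> dict:
    table = {
        "fp32": {"torch_dtype": "float32"},
        "fp16": {"torch_dtype": "float16"},
        "bf16": {"torch_dtype": "bfloat16"},
        "int8": {"load_in_8bit": True},
    }
    return table[precision]

# Usage (needs a GPU and the downloaded checkpoint, so not run here):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "Qwen/Qwen-7B-Chat", trust_remote_code=True, **load_kwargs("fp16")
# )
```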
Hi, could you provide a download link for an int8 Qwen-chat-7B?
I see that the quantization uses AutoGPTQ. If providing the int8 model isn't convenient, could you at least share the datasets used for the GPTQ quantization?
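On the calibration-data question: as I understand it, AutoGPTQ's `quantize()` consumes a list of tokenized examples, each a dict holding `input_ids` and `attention_mask`. A dependency-free sketch of that shape (the token ids below are placeholders, not the actual calibration set this comment is asking the maintainers to share):

```python
# Build calibration examples in the shape AutoGPTQ's quantize() expects:
# a list of dicts, each with "input_ids" and "attention_mask".
# Token ids here are placeholders; a real run would come from the
# model's tokenizer, e.g. tokenizer(text, return_tensors="pt").

def make_examples(tokenized_batches):
    examples = []
    for ids in tokenized_batches:
        examples.append({
            "input_ids": ids,
            "attention_mask": [1] * len(ids),  # no padding in this sketch
        })
    return examples

calib = make_examples([[101, 2023, 102], [101, 3231, 1996, 102]])
# Each entry pairs token ids with a matching attention mask.
```

In a real quantization run these examples would be passed to `AutoGPTQForCausalLM.quantize()`, which is why the choice of calibration dataset matters for the resulting int8/int4 quality.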
Related Issues (20)
- 💡 [REQUEST] - <title> Does the Qwen team have day-to-day or convert-to-full-time internship openings?
- [BUG] <title> Error when loading my own LoRA-finetuned model with the official cuda121 Docker image HOT 1
- Why "ZeRO3 is incompatible with LoRA when finetuning on base model" HOT 1
- Qwen LLM for document reading HOT 1
- When merging models, I only see where to set the LoRA weights path, not the base model path HOT 3
- 💡 [REQUEST] - <title> HOT 2
- Out of GPU memory when loading the finetuned model: RuntimeError: CUDA error: invalid device ordinal CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions HOT 2
- How the loss function is computed (evaluation) HOT 1
- After finetuning with LoRA in bfloat16, inference with the LoRA weights plus the base model gives different results before and after merge_and_unload(); why does this happen? HOT 3
- qwen-14B-chat-int8/4 vllm deployment error: no kernel image is available for execution on the device HOT 1
- During multi-GPU finetuning of Qwen-7B-Chat: ValueError: Expected a string path to an existing deepspeed config, or a dictionary, or a base64 encoded string. Received: finetune/ds_config_zero3.json HOT 1
- [BUG] <title>cannot import name 'allow_in_graph' from partially initialized module 'torch._dynamo' (most likely due to a circular import) (/demo/miniconda3/envs/qwen/lib/python3.9/site-packages/torch/_dynamo/__init__.py) HOT 1
- How to handle invalid token ids generated during 7B model inference? HOT 1
- deepspeed single-node multi-GPU training error HOT 6
- When is vllm-gptq planned to support running Qwen-72B-Chat-INT8? HOT 2
- [BUG] After finetuning the qwen-7b model, output sentences break abnormally, stopping mid-sentence HOT 5
- Why is there no concurrent inference across 4 GPUs? HOT 5
- Calculate language probabilities HOT 1
- Multi-GPU parallel finetuning hangs HOT 4
- The taskType parameter during finetuning HOT 1