SOTA weight-only quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".

Home Page: https://arxiv.org/abs/2309.05516

License: Apache License 2.0


auto-round's Introduction

AutoRound

Advanced Weight-Only Quantization Algorithm for LLMs


AutoRound is an advanced weight-only quantization algorithm for low-bit LLM inference. It is tailored to a wide range of models and consistently delivers noticeable improvements, often significantly outperforming SignRound, at the cost of more tuning time for quantization.

Our method adopts signed gradient descent to fine-tune the rounding values and the min-max values of weights in just 200 steps, which competes impressively against recent methods without introducing any additional inference overhead. The image below presents an overview of AutoRound.
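
To make the signed-gradient idea concrete, here is a minimal, self-contained sketch (not the library's implementation) of tuning a per-weight rounding perturbation with signed gradient descent. The names fake_quant, tune_rounding, and block_loss are hypothetical placeholders, and the symmetric fake quantization with a straight-through estimator is a simplification of what the paper describes.

import torch

def fake_quant(w, scale, v, bits=4):
    # Symmetric fake quantization with a learnable rounding perturbation v in [-0.5, 0.5].
    qmax = 2 ** (bits - 1) - 1
    x = w / scale + v
    # Straight-through estimator: round in the forward pass, identity gradient in the backward pass.
    q = x + (torch.round(x) - x).detach()
    q = torch.clamp(q, -qmax - 1, qmax)
    return q * scale

def tune_rounding(w, scale, block_loss, iters=200, lr=None):
    # block_loss is a hypothetical callable returning the block-wise reconstruction loss.
    lr = lr if lr is not None else 1.0 / iters      # matches the default lr = 1.0/iters
    v = torch.zeros_like(w, requires_grad=True)     # rounding perturbation to be learned
    for _ in range(iters):
        loss = block_loss(fake_quant(w, scale, v))
        loss.backward()
        with torch.no_grad():
            v -= lr * v.grad.sign()                 # signed gradient descent step
            v.clamp_(-0.5, 0.5)
            v.grad.zero_()
    return fake_quant(w, scale, v).detach()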

Prerequisites

  • Python 3.9 or higher

Installation

Build from Source

pip install -r requirements.txt
python setup.py install

Install from PyPI

pip install auto-round

Model quantization

Gaudi2 / CPU / GPU

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

from auto_round import AutoRound

bits, group_size, sym = 4, 128, False
# device: one of "auto", None, "hpu", "cpu", "cuda"
autoround = AutoRound(model, tokenizer, bits=bits, group_size=group_size, sym=sym, device=None)
autoround.quantize()
output_dir = "./tmp_autoround"
autoround.save_quantized(output_dir)
Detailed Hyperparameters
  • model: The PyTorch model to be quantized.

  • tokenizer: An optional tokenizer for processing input data. If none is provided, a dataloader must be supplied.

  • bits (int): Number of bits for quantization (default is 4).

  • group_size (int): Size of the quantization group (default is 128).

  • sym (bool): Whether to use symmetric quantization.

  • enable_quanted_input (bool): Whether to use the output of the previous quantized block as the input for the current block (default is True).

  • enable_minmax_tuning (bool): Whether to enable weight min-max tuning (default is True).

  • iters (int): Number of tuning iterations (default is 200).

  • lr (float): The learning rate for the rounding values (default is None; it is set to 1.0/iters automatically).

  • minmax_lr (float): The learning rate for min-max tuning (default is None; it is set to lr automatically).

  • n_samples (int): Number of samples for tuning (default is 512).

  • seqlen (int): Sequence length of the tuning data (default is 2048).

  • batch_size (int): Batch size for training (default is 8).

  • scale_dtype (str): The data type of the quantization scale (default is "float32"); different kernels support different choices.

  • amp (bool): Whether to use automatic mixed precision (default is True).

  • n_blocks (int): Number of blocks packed together and tuned jointly (default is 1).

  • gradient_accumulate_steps (int): Number of gradient accumulation steps (default is 1).

  • low_gpu_mem_usage (bool): Whether to save GPU memory at the cost of slightly longer tuning time (default is True).

  • dataset (str): The default dataset name for tuning (default is "NeelNanda/pile-10k").

  • dataset_split (str): The split of the dataset to be used for tuning (default is "train").

  • dataloader: The dataloader for tuning data.

  • weight_config (dict): Per-layer configuration for weight quantization (default is an empty dictionary), mainly for mixed bits or mixed precision; see the example after this list.

  • device: The device to be used for tuning. The default is set to 'auto', allowing for automatic detection.
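
For illustration, the sketch below shows one plausible way to express a mixed-bit setup through weight_config. The layer name and the per-layer keys (bits, group_size, sym) are assumptions extrapolated from the top-level hyperparameters above, not a documented schema; check the repository's examples for the exact format supported by your version.

# Hypothetical sketch of a mixed-bit weight_config; the layer name and per-layer
# keys are assumed, the rest mirrors the quantization example above.
weight_config = {
    # give a particularly sensitive projection layer more bits than the global default
    "model.layers.0.self_attn.q_proj": {"bits": 8, "group_size": 128, "sym": False},
}
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=False,
                      weight_config=weight_config)
autoround.quantize()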

Model inference

Please run the quantization code first.

CPU

# Install the latest intel-extension-for-transformers from source first: https://github.com/intel/intel-extension-for-transformers
from intel_extension_for_transformers.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

quantized_model_path = "./tmp_autoround"
model = AutoModelForCausalLM.from_pretrained(quantized_model_path, device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_path, use_fast=True)
text = "There is a girl who likes adventure,"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))

GPU

from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_model_path = "./tmp_autoround"
model = AutoModelForCausalLM.from_pretrained(quantized_model_path, device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_path, use_fast=True)
text = "There is a girl who likes adventure,"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))

Support List

Model Supported
Intel/neural-chat-7b-v3-3 HF-int4-model, accuracy, recipe, example
Intel/neural-chat-7b-v3-1 HF-int4-model, accuracy, recipe, example
mistralai/Mistral-7B-v0.1 HF-int4-model, accuracy, recipe, example
microsoft/phi-2 HF-int4-model, accuracy, recipe, example
tiiuae/falcon-7b HF-int4-model, accuracy, recipe, example
google/gemma-2b HF-int4-model, accuracy, recipe, example
mistralai/Mistral-7B-Instruct-v0.2 HF-int4-model (under review), accuracy, recipe, example
google/gemma-7b HF-int4-model (under review), accuracy, recipe, example
google/gemma-7b-it HF-int4-model (under review), accuracy, recipe, example
mistralai/Mixtral-8x7B-Instruct-v0.1 HF-int4-model (under review), accuracy, recipe, example
mistralai/Mixtral-8x7B-v0.1 HF-int4-model (under review), accuracy, recipe, example
meta-llama/Meta-Llama-3-8B-Instruct accuracy, recipe, example
meta-llama/Llama-2-7b-chat-hf accuracy, recipe, example
Qwen/Qwen1.5-7B-Chat accuracy, sym recipe, asym recipe , example
baichuan-inc/Baichuan2-7B-Chat accuracy, recipe, example
01-ai/Yi-6B-Chat accuracy, recipe, example
facebook/opt-2.7b accuracy, recipe, example
bigscience/bloom-3b accuracy, recipe, example
EleutherAI/gpt-j-6b accuracy, recipe, example
Salesforce/codegen25-7b-multi example
huggyllama/llama-7b example
mosaicml/mpt-7b example
THUDM/chatglm3-6b example
MBZUAI/LaMini-GPT-124M example
EleutherAI/gpt-neo-125m example
databricks/dolly-v2-3b example
stabilityai/stablelm-base-alpha-3b example

Comparison with other methods

We provide a comprehensive comparison with other methods in our accuracy data section. In summary, our approach outperforms GPTQ in 30/32 settings, AWQ in 27/32, HQQ in 15/16, and OmniQuant in 16/16 (a perfect score) across LLaMA-v1/LLaMA-v2/Mistral-7B at W4G-1, W4G128, W3G128, and W2G128, based on the average accuracy over 11 zero-shot tasks.

Tips

1. Consider increasing the number of tuning steps (iters) to achieve better results, albeit with longer tuning time.

2. Setting 'enable_quanted_input' to False has been observed to occasionally yield improved results.

3. Setting 'minmax_lr' to 2.0/iters has been observed to occasionally yield improved results.
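
As a quick illustration, the sketch below maps these tips onto the hyperparameters documented earlier; it reuses the model and tokenizer from the quantization example and is a suggestion, not a prescribed configuration.

# Hedged sketch: applying the three tips via the documented AutoRound arguments.
autoround = AutoRound(
    model, tokenizer,
    bits=4, group_size=128, sym=False,
    iters=1000,                  # Tip 1: more tuning steps, at the cost of more tuning time
    enable_quanted_input=False,  # Tip 2: occasionally improves results
    minmax_lr=2.0 / 1000,        # Tip 3: minmax_lr = 2.0 / iters
)
autoround.quantize()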

Reference

If you find SignRound useful for your research, please cite our paper:

@article{cheng2023optimize,
  title={Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}

auto-round's People

Contributors

chensuyue, hshen14, lkk12014402, pre-commit-ci[bot], pursure-d, rdower, weiweizhang1, wenhuach21, yiliu30, yintong-lu

auto-round's Issues

GPU memory usage

How much GPU memory and CPU RAM are required to quantize the ChatGLM3-6B model? I used an A100-40G but got a "killed" error.

8-bit quantization support

I was wondering if it supports 8-bit quantization. The example command is below.

python main.py \
  --model_name "" \
  --bits 8 \
  --group_size 128 \
  --train_bs 8 \
  --gradient_accumulate_steps 8 \
  --deployment_device 'gpu' \
  --output_dir "./save_ckpt"

Quantization/layer speed is very slow

Currently testing PR #87 and running into very slow quantization for a TinyLlama 1.1B test model.

I am getting ~96 s per layer during quantization on a 4090 GPU with n_blocks = 1 and ~75 s per layer with n_blocks = 2.

Is this the norm? I am new to autoround so don't have a baseline metric to go by.

Is gpu quant faster or slower than cpu quant? Are there optimizations to make it faster other than using a smaller model for testing? Thanks!

Env:
Ubuntu 22.04
Torch 2.2.2
CPU: AMD Zen 3

env CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=5 python3 main.py \
--model_name /local/TinyLlama-1.1B-intermediate-step-1341k-3T \
--device 0 \
--group_size 128 \
--bits 4 \
--iters 1000 \
--use_quant_input \
--quant_lm_head \
--n_blocks 4 \
--deployment_device 'gpu' \
--disable_low_gpu_mem_usage \
--output_dir "./tmp_autoround"
