
Adding quantization · llm-pruner · open · 9 comments

horseee commented on May 17, 2024
Adding quantization


Comments (9)

MarlNox commented on May 17, 2024

I assume the correct way to do it would go something like:

  0. (optional) Increase size and topic breadth of LLM-Pruner Corpus
  1. LLM-Pruner
  2. LoRA/QLoRA
  3. GPTQ

This is completely hypothetical at the moment though, and you'd need to try it out yourself to see if that'd work as intended (a rough code sketch follows below).
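An editorial sketch of what this hypothetical pipeline could look like in code, assuming LLM-Pruner's checkpoint convention (a torch.save dict holding 'model' and 'tokenizer') and peft for the LoRA step; the script name, paths, and hyperparameters are illustrative, not from this repo:

```python
import torch
from peft import LoraConfig, get_peft_model

# Step 1 -- LLM-Pruner: run the repo's pruning script first (e.g. hf_prune.py),
# which saves the structurally pruned model together with its tokenizer.
pruned = torch.load("prune_log/pytorch_model.bin", map_location="cpu")
model, tokenizer = pruned["model"], pruned["tokenizer"]

# Step 2 -- LoRA/QLoRA recovery fine-tuning on the pruned model.
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # illustrative choice of modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
# ... run your usual training loop, then fold the adapters back in:
# model = model.merge_and_unload()

# Step 3 -- GPTQ would come last. Caveat (discussed later in this thread):
# most GPTQ tooling expects a standard checkpoint loadable via
# .from_pretrained(), which a structurally pruned model is not.
```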


horseee commented on May 17, 2024

> I assume the correct way to do it would go something like:
>
> 0. (optional) Increase size and topic breadth of LLM-Pruner Corpus
> 1. LLM-Pruner
> 2. LoRA/QLoRA
> 3. GPTQ
>
> This is completely hypothetical at the moment though, and you'd need to try it out yourself to see if that'd work as intended.

Thanks for your kind response! We also assume that if quantization needs to be applied, the correct path is the one you listed. One reason is that if pruning has to be performed on a CPU, certain operations, such as SiLU, are not supported on the CPU in FP16 and below. If you applied quantization first and then pruned, the quantized weights could end up being readjusted back to FP32.
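For illustration, a minimal check of that CPU limitation; whether this actually raises an error depends on your PyTorch version (newer releases have added FP16 CPU support for more ops):

```python
import torch

# SiLU on a CPU fp16 tensor: on older PyTorch builds this raises
# RuntimeError: "silu_cpu" not implemented for 'Half'.
x = torch.randn(4, dtype=torch.float16)  # CPU tensor
try:
    y = torch.nn.functional.silu(x)
    print("SiLU on CPU fp16 works here:", y.dtype)
except RuntimeError as err:
    print("Not supported, computing in fp32 instead:", err)
    y = torch.nn.functional.silu(x.float())  # fall back to fp32 on CPU
```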


Duncan1115 commented on May 17, 2024

@horseee Hi, thanks for the good suggestion. May I ask why the paper doesn't compare the results between pure quantization and pure pruning?


horseee commented on May 17, 2024

> @horseee Hi, may I ask why you don't compare the results between pure quantization and pure pruning in the paper?

Hi. Quantization is orthogonal to pruning and hence can be readily deployed on top of pruning to further reduce network size. They are two different lines of model compression, targeting different types of redundancy in models. Exactly for this reason, most papers on pruning CNNs or BERT do not compare the performance of the two methods.


Duncan1115 commented on May 17, 2024

Thanks a lot! My question came from the fact that quantization methods such as GPTQ/AWQ can achieve better performance at large compression ratios than pruning methods... Your answer helped me a lot~



77h2l commented on May 17, 2024

@horseee Hi, I have two questions; I hope you could reply, thanks:

  1. Could a model pruned by LLM-Pruner (or other pruning tricks) achieve better inference performance under FP16?
  2. How can one run a model pruned by LLM-Pruner and then use GPTQ or other methods to quantize it to INT8?


horseee commented on May 17, 2024

Hi. We conducted a quick experiment; here are the inference results:

| Model | #Param | Memory | Latency | Speedup | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LLaMA-7B | 6.74B | 12884.5 MiB | 69.32 s | 1x | 73.18 | 78.35 | 72.99 | 67.01 | 67.45 | 41.38 | 42.40 | 63.25 |
| LLM.int8() | 6.74B | 6777.7 MiB | 76.20 s | 0.91x | 73.36 | 78.18 | 73.01 | 66.93 | 67.47 | 40.87 | 41.80 | 63.09 |
| LLaMA-5.4B | 5.47B | 10488.4 MiB | 58.55 s | 1.18x | 76.57 | 77.37 | 66.60 | 65.82 | 70.62 | 40.70 | 38.80 | 62.36 |
| LLaMA-5.4B + LLM.int8() | 5.47B | 5444.37 MiB | 63.10 s | 1.09x | 76.39 | 76.71 | 66.62 | 66.46 | 70.54 | 40.19 | 39.20 | 62.30 |

Latency is measured on the WikiText-2 test set. LLM.int8() slows down inference of the LLaMA-7B model in our case, which is also noted in the LLM.int8() paper for models around the 6.7B size.
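As a rough sanity check of the Memory column (an editorial sketch, not from the original thread): weight storage alone is about #params × bytes per parameter, i.e. 2 bytes in FP16 and roughly 1 byte for LLM.int8() weights; the measured numbers sit slightly above these floors because of buffers and runtime overhead.

```python
# Back-of-the-envelope weight-memory estimates for the table above.
def weight_mib(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in MiB."""
    return n_params * bytes_per_param / 2**20

print(f"{weight_mib(6.74e9, 2):8.1f}")  # LLaMA-7B fp16    ~12855 (measured 12884.5)
print(f"{weight_mib(6.74e9, 1):8.1f}")  # + LLM.int8()      ~6428 (measured 6777.7)
print(f"{weight_mib(5.47e9, 2):8.1f}")  # LLaMA-5.4B fp16  ~10433 (measured 10488.4)
print(f"{weight_mib(5.47e9, 1):8.1f}")  # + LLM.int8()      ~5217 (measured 5444.37)
```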


77h2l commented on May 17, 2024

@horseee Hi, thanks for your kind reply.
Actually I don't intend to compare the performance of pruning and quantization, as they are two different ways to compress a model. I mean: how can we smoothly combine a pruned model with quantization? Could it be done simply and directly?


horseee commented on May 17, 2024

> I mean: how can we smoothly combine a pruned model with quantization? Could it be done simply and directly?

In the experiment above, the pruned model was quantized following the bitsandbytes instructions. I didn't try GPTQ, since it seems more complicated when the model is not a standard model and cannot be loaded via .from_pretrained().
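To make that concrete, here is a hedged sketch of the bitsandbytes route for a model that cannot go through .from_pretrained(): swap every nn.Linear for bnb.nn.Linear8bitLt, as the bitsandbytes documentation describes. The checkpoint path, the dict layout, and the traversal helper are assumptions, not code from this repo:

```python
import torch
import bitsandbytes as bnb

def replace_linear_with_int8(module: torch.nn.Module) -> None:
    """Recursively swap nn.Linear layers for bitsandbytes Linear8bitLt."""
    for name, child in module.named_children():
        if isinstance(child, torch.nn.Linear):
            int8_layer = bnb.nn.Linear8bitLt(
                child.in_features, child.out_features,
                bias=child.bias is not None,
                has_fp16_weights=False,  # store true int8 weights
                threshold=6.0,           # outlier threshold from LLM.int8()
            )
            int8_layer.weight = bnb.nn.Int8Params(
                child.weight.data, requires_grad=False
            )
            if child.bias is not None:
                int8_layer.bias = child.bias
            setattr(module, name, int8_layer)
        else:
            replace_linear_with_int8(child)

# Load the pruned model (assumed LLM-Pruner checkpoint layout), swap the
# linears, and move to GPU; quantization happens when weights hit the GPU.
pruned = torch.load("prune_log/pytorch_model.bin", map_location="cpu")
model = pruned["model"]
replace_linear_with_int8(model)
model.cuda()
```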
