
Comments (5)

LysandreJik commented on July 24, 2024

Hey @SunMarc, would you have some bandwidth to take a look at this? :)


PenutChen commented on July 24, 2024

I found that the PPL issue is related to Llama 3 or llama.cpp itself; it doesn't happen with TinyLlama. I'll open a separate issue to discuss it if needed.


PenutChen commented on July 24, 2024

Supporting GGUF FP16 is easy. Since NumPy does not support BF16, my current workaround is to reinterpret the BF16 data with PyTorch and upcast it to FP32, but it's not ideal to rely on PyTorch at this step.

Reference: main...PenutChen:transformers:main

def load_dequant_gguf_tensor(shape, ggml_type, data):
    if ggml_type == GGML_TYPES["F32"]:
        values = data
    elif ggml_type == GGML_TYPES["F16"]:
        values = data
    elif ggml_type == GGML_TYPES["BF16"]:
        # NumPy has no bfloat16 dtype, so reinterpret the raw bytes with
        # PyTorch and upcast to float32 before handing them back to NumPy.
        import torch

        data_uint8 = data.view(np.uint8)
        tensor_uint8 = torch.from_numpy(data_uint8)
        values = tensor_uint8.view(torch.bfloat16).float().numpy()
    # ... remaining quantized types are handled below (snippet truncated)
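
As an aside, the same reinterpretation should be doable in pure NumPy, since BF16 is just the upper 16 bits of an FP32 bit pattern. A rough, untested sketch (the helper name bf16_to_fp32 is made up here, and it assumes data holds raw little-endian BF16 bytes):

import numpy as np

def bf16_to_fp32(data):
    # Assumes `data` is a NumPy array viewing raw little-endian BF16 bytes.
    # BF16 is the upper half of an FP32 value, so widening each 16-bit
    # element to 32 bits and shifting it left by 16 recovers the FP32 bits.
    bits16 = data.view(np.uint16)
    bits32 = bits16.astype(np.uint32) << 16
    return bits32.view(np.float32)

That would remove the PyTorch dependency at this step entirely, at the cost of a little bit twiddling.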

Note that BF16 support requires modifying some code in gguf-py. Since the latest version of gguf-py from the llama.cpp repo doesn't work with the current HF integration (#31725), I modified the version from PyPI as follows:

class GGMLQuantizationType(IntEnum):
    F32  = 0
    F16  = 1
    BF16 = 30  # matches the GGML_TYPE_BF16 enum value in llama.cpp
    # ...

GGML_QUANT_SIZES = {
    # (block_size, type_size) in bytes
    GGMLQuantizationType.F32:  (1, 4),
    GGMLQuantizationType.F16:  (1, 2),
    GGMLQuantizationType.BF16: (1, 2),
    # ...
}
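
For reference, with those patches in place, loading should follow the usual GGUF flow in transformers. A minimal sketch (the repo id and filename below are placeholders, not a checkpoint I have tested):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id and GGUF filename, for illustration only.
model_id = "some-org/some-model-GGUF"
gguf_file = "model-fp16.gguf"

tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=gguf_file)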


SunMarc commented on July 24, 2024

Hey @PenutChen, thanks for your research! I think we should just support FP16 first, since supporting BF16 would require a new gguf release and the transformers GGUF integration is not yet compatible with it. Let me know what you think! If you have some time, would you like to open a PR? Otherwise, I will do it!


PenutChen commented on July 24, 2024

@SunMarc Sure, I will do the necessary checks and open a PR! By the way, gguf-py on PyPI has not been updated in a long time; most llama.cpp developers seem to use gguf-py from source. If we want to improve this integration, I think we should discuss it with the llama.cpp developers.


