
Comments (5)

SunMarc commented on August 28, 2024

Thanks for this report! Really appreciate it! It makes the review so much easier!


PenutChen commented on August 28, 2024

I compared the two models' weights and found that only q_proj and k_proj differ:

import torch
from safetensors.torch import load_file

# Load the two checkpoints to compare tensor by tensor.
m1 = load_file("llama3-8b-single/model.safetensors")
m2 = load_file("llama3-8b-b3286/model.safetensors")

# Report every tensor that differs between the two checkpoints.
for k in m1:
    flag = torch.allclose(m1[k], m2[k])
    if not flag:
        print(k, flag)

"""
model.layers.0.self_attn.k_proj.weight False
model.layers.0.self_attn.q_proj.weight False
model.layers.1.self_attn.k_proj.weight False
model.layers.1.self_attn.q_proj.weight False
...
"""

k = "model.layers.0.self_attn.k_proj.weight"
for i, _ in enumerate(m1[k]):
    if not torch.allclose(m1[k][i], m2[k][i]):
        print(i)
        print(m1[k][i])
        print(m2[k][i])

"""
32
tensor([-0.0615, -0.0119, -0.0183,  ...,  0.0344, -0.0513,  0.0645],
       dtype=torch.float16)
tensor([-0.0447, -0.0293,  0.0396,  ...,  0.0067,  0.0242, -0.0035],
       dtype=torch.float16)

...

990
tensor([-0.0215, -0.0674,  0.0096,  ...,  0.0200, -0.0063, -0.0147],
       dtype=torch.float16)
tensor([-0.0272, -0.0378,  0.0124,  ...,  0.0242, -0.0052, -0.0052],
       dtype=torch.float16)
991
tensor([-0.0287, -0.0288,  0.0073,  ...,  0.0089, -0.0006, -0.0042],
       dtype=torch.float16)
tensor([-0.0052, -0.0284,  0.0289,  ...,  0.0135,  0.0055, -0.0042],
       dtype=torch.float16)
"""

According to convert-hf-to-gguf.py, a permute operation is applied to these modules during conversion (sketched below). This is likely the root cause of the issue.
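For reference, the conversion-time permutation looks roughly like this (a sketch based on the permute helper in convert-hf-to-gguf.py; exact names and placement may differ by llama.cpp version):

import numpy as np

def permute(weights: np.ndarray, n_head: int, n_head_kv: int) -> np.ndarray:
    # Reorders the q_proj/k_proj rows per head so the rotary-embedding layout
    # matches what llama.cpp expects; for GQA, k_proj is grouped by KV heads
    # rather than attention heads.
    if n_head_kv is not None and n_head != n_head_kv:
        n_head = n_head_kv
    return (
        weights.reshape(n_head, 2, weights.shape[0] // n_head // 2, *weights.shape[1:])
        .swapaxes(1, 2)
        .reshape(weights.shape)
    )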


PenutChen commented on August 28, 2024

After comparing the metadata of the two versions, I think the shape issue is caused by the missing vocab_size entry:

$ python llama-cpp-b3286/gguf-py/examples/reader.py llama3-8b/llama3-8b.b1742.f16.gguf > llama3-8b-b1742.txt
$ python llama-cpp-b3286/gguf-py/examples/reader.py llama3-8b/llama3-8b.b3286.f16.gguf > llama3-8b-b3286.txt
$ diff llama3-8b-b1742.txt llama3-8b-b3286.txt
4c4
< GGUF.kv_count                          : [19]
---
> GGUF.kv_count                          : [21]
6c6
< general.name                           : [46]
---
> general.name                           : [108 108  97 109  97  51  45  56  98]
16a17
> llama.vocab_size                       : [128256]
17a19
> tokenizer.ggml.pre                     : [108 108  97 109  97  45  98 112 101]
19d20
< tokenizer.ggml.scores                  : [0.]
23a25
> general.quantization_version           : [2]

When loading the old Llama 3 GGUF, I think the loader misses the vocab size and falls back to a default of 32000 somewhere.
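A quick way to confirm which keys are present is to read the metadata directly with gguf-py (a sketch; file path as in the commands above):

from gguf import GGUFReader  # gguf-py, bundled with llama.cpp

reader = GGUFReader("llama3-8b/llama3-8b.b1742.f16.gguf")

# Old conversions omit llama.vocab_size, so a loader relying on it has to guess.
print("llama.vocab_size" in reader.fields)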


PenutChen commented on August 28, 2024

The major cause is the q_proj and k_proj permutation. According to convert-hf-to-gguf.py, the permutation considers not only num_attention_heads but also num_key_value_heads, which matters for a GQA model like Llama 3. I tried to fix the reversed permutation like this: main...PenutChen:transformers:gguf-permute-fix

import numpy as np

def reverse_hf_permute(weights: np.ndarray, n_head: int, n_head_kv: int) -> np.ndarray:
    # Undo the conversion-time permutation; for GQA models the k_proj rows are
    # grouped by KV heads, not by attention heads.
    if n_head_kv is not None and n_head != n_head_kv:
        n_head = n_head_kv

    dim = weights.shape[0] // n_head // 2
    w = weights.reshape(n_head, dim, 2, *weights.shape[1:])
    return w.swapaxes(2, 1).reshape(weights.shape)

# ...

if architecture == "llama" and (".attn_k." in name or ".attn_q." in name):
    num_heads = parsed_parameters["config"]["num_attention_heads"]
    n_head_kv = parsed_parameters["config"]["num_key_value_heads"]
    if ".attn_q." in name:
        weights = reverse_hf_permute(weights, num_heads, num_heads)
    elif ".attn_k." in name:
        weights = reverse_hf_permute(weights, num_heads, n_head_kv)
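As a sanity check (not part of the patch), reverse_hf_permute should undo the conversion-time permute sketched in the previous comment. With Llama-3-8B-like shapes (32 attention heads, 8 KV heads, head_dim 128), and assuming both functions are in scope:

import numpy as np

q = np.random.rand(4096, 4096).astype(np.float32)  # 32 heads x 128 dims
k = np.random.rand(1024, 4096).astype(np.float32)  # 8 KV heads x 128 dims

# Round-trip: the convert-style permute followed by the reverse should be a no-op.
assert np.allclose(reverse_hf_permute(permute(q, 32, 32), 32, 32), q)
assert np.allclose(reverse_hf_permute(permute(k, 32, 8), 32, 8), k)

For q_proj both arguments are the attention-head count, while for k_proj the KV-head count determines the grouping, which is exactly the detail that matters for GQA models.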


PenutChen commented on August 28, 2024

After this fix, there is still a minor PPL gap between the original HF model and the GGUF model, which might be caused by the tokenizer; using the HF tokenizer should yield the same PPL.
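One way to check that hypothesis is to compare the HF tokenizer against the one reconstructed from the GGUF metadata (a sketch, assuming a transformers version with GGUF support; the model name and file paths are placeholders):

from transformers import AutoTokenizer

text = "The quick brown fox jumps over the lazy dog."

# Tokenizer from the original HF checkpoint.
hf_tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Tokenizer rebuilt from the GGUF file's metadata.
gguf_tok = AutoTokenizer.from_pretrained("llama3-8b", gguf_file="llama3-8b.b3286.f16.gguf")

# Any mismatch here would shift the PPL measurement independently of the weights.
print(hf_tok(text)["input_ids"])
print(gguf_tok(text)["input_ids"])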

