[2024-05-31] llama.cpp feature update
- Updated base from b1742 to b2249
- Added support for Yuan2.0 and Yuan2.0-M32
- Changed the LFA input cache to 2 tokens
- Fixed several bugs in -i (interactive) mode

Includes the following project:
- llama.cpp adapted for the Yuan2.0 models: https://github.com/IEIT-Yuan/3rd_party/tree/main/llama-cpp
I was trying to convert the Yuan model to GGUF and got the error below.
Vocab info: <VocabLoader with 134953 base tokens and 0 added tokens>
Special vocab info: <SpecialVocab with 0 merges, special tokens {'bos': 77185, 'eos': 77185, 'sep': 77185, 'pad': 77185, 'mask': 77185}, add special tokens {'bos': False, 'eos': False}>
Permuting layer 0
Permuting layer 1
Permuting layer 2
Permuting layer 3
Permuting layer 4
Permuting layer 5
Permuting layer 6
Permuting layer 7
Permuting layer 8
Permuting layer 9
Permuting layer 10
Permuting layer 11
Permuting layer 12
Permuting layer 13
Permuting layer 14
Permuting layer 15
Permuting layer 16
Permuting layer 17
Permuting layer 18
Permuting layer 19
Permuting layer 20
Permuting layer 21
Permuting layer 22
Permuting layer 23
model.embed_tokens.weight -> token_embd.weight | BF16 | [135040, 2048]
model.layers.0.self_attn.v_proj.weight -> blk.0.attn_v.weight | BF16 | [4096, 2048]
model.layers.0.self_attn.o_proj.weight -> blk.0.attn_output.weight | BF16 | [2048, 4096]
model.layers.0.self_attn.lf_gate.conv1.weight -> blk.0.conv1.weight | BF16 | [1024, 2048, 2, 1]
model.layers.0.self_attn.lf_gate.conv1.bias -> blk.0.conv1.bias | BF16 | [1024]
model.layers.0.self_attn.lf_gate.conv2.weight -> blk.0.conv2.weight | BF16 | [2048, 1024, 2, 1]
model.layers.0.self_attn.lf_gate.conv2.bias -> blk.0.conv2.bias | BF16 | [2048]
model.layers.0.self_attn.lf_gate.output_layernorm.weight -> blk.0.lf_output_norm.weight | BF16 | [2048]
model.layers.0.self_attn.q_proj.weight -> blk.0.attn_q.weight | BF16 | [4096, 2048]
model.layers.0.self_attn.k_proj.weight -> blk.0.attn_k.weight | BF16 | [4096, 2048]
Traceback (most recent call last):
  File "3rd_party/llama-cpp/convert.py", line 1343, in <module>
    main()
  File "3rd_party/llama-cpp/convert.py", line 1329, in main
    model = convert_model_names(model, params)  # undo the HF permute/pack and rename the model layers
  File "3rd_party/llama-cpp/convert.py", line 1105, in convert_model_names
    raise Exception(f"Unexpected tensor name: {name}")
Exception: Unexpected tensor name: model.layers.0.mlp.gate.query.weight
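For context, the failure mode here is typical of GGUF converters: they keep a table mapping Hugging Face tensor-name patterns to GGUF names and raise on anything not in the table, so a Yuan-specific tensor such as `model.layers.0.mlp.gate.query.weight` trips the check until the table covers it. The sketch below is illustrative only (the pattern table and function name are assumptions, not the actual convert.py code):

```python
import re

# Hypothetical subset of a name-mapping table; the real converter's
# table is larger and the Yuan fork must extend it with the
# Yuan-specific tensors (lf_gate.*, mlp.gate.*) to avoid this error.
NAME_MAP = {
    r"model\.embed_tokens\.weight": "token_embd.weight",
    r"model\.layers\.(\d+)\.self_attn\.q_proj\.weight": r"blk.\1.attn_q.weight",
    r"model\.layers\.(\d+)\.self_attn\.lf_gate\.conv1\.weight": r"blk.\1.conv1.weight",
}

def convert_name(hf_name: str) -> str:
    """Map an HF tensor name to its GGUF name, or raise like convert.py does."""
    for pattern, gguf_template in NAME_MAP.items():
        m = re.fullmatch(pattern, hf_name)
        if m:
            return m.expand(gguf_template)  # substitute the layer index
    raise Exception(f"Unexpected tensor name: {hf_name}")
```

With this table, `convert_name("model.layers.0.self_attn.q_proj.weight")` returns `blk.0.attn_q.weight`, while the `mlp.gate.query.weight` tensor raises, reproducing the traceback above.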
Model: IEITYuan/Yuan2-2B-Mars-hf
Error:
ggml_metal_graph_compute_block_invoke: error: unsupported op 'REPEAT'
GGML_ASSERT: ggml-metal.m:798: !"unsupported op"