Running `convert-pth-to-ggml.py` on the llama-2-7b weights fails with `TypeError: Got unsupported ScalarType BFloat16`. Full output and environment details below.
{'dim': 4096, 'multiple_of': 256, 'n_heads': 32, 'n_layers': 32, 'norm_eps': 1e-05, 'vocab_size': -1}
Namespace(dir_model='../llama.cpp/models/llama-2-7b/', ftype=1, vocab_only=0)
n_parts = 1
Processing part 0
Processing variable: tok_embeddings.weight with shape: torch.Size([32000, 4096]) and type: torch.bfloat16
Traceback (most recent call last):
File "/Users/vivekv/software/llama-go/./convert-pth-to-ggml.py", line 181, in <module>
main()
File "/Users/vivekv/software/llama-go/./convert-pth-to-ggml.py", line 174, in main
process_and_write_variables(fout, model, ftype)
File "/Users/vivekv/software/llama-go/./convert-pth-to-ggml.py", line 109, in process_and_write_variables
data = datao.numpy().squeeze()
^^^^^^^^^^^^^
TypeError: Got unsupported ScalarType BFloat16
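The root cause appears to be that NumPy has no native bfloat16 dtype, so calling `.numpy()` on a `torch.bfloat16` tensor raises this `TypeError`. A minimal sketch of a workaround (an assumption about how line 109 could be patched, not the script's current code) is to upcast the tensor to float32 before the NumPy round-trip:

```python
import torch

# Reproduce the failure: NumPy cannot represent bfloat16,
# so .numpy() on a bfloat16 tensor raises TypeError.
t = torch.zeros(2, 2, dtype=torch.bfloat16)
try:
    t.numpy()
except TypeError as e:
    print(e)

# Possible workaround (hypothetical patch for line 109 of
# convert-pth-to-ggml.py): upcast to float32 first, then squeeze.
data = t.to(torch.float32).numpy().squeeze()
print(data.dtype)
```

Whether float32 is the right target dtype may depend on the `ftype` selected when running the converter; this only shows how to get past the `ScalarType BFloat16` error itself.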
$ python3 --version
Python 3.11.4
$ make --version
GNU Make 3.81
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
This program built for i386-apple-darwin11.3.0
$ g++ --version
Apple clang version 14.0.3 (clang-1403.0.22.14.1)
Target: arm64-apple-darwin22.4.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin