- Run `ln -s stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors 4bit-128g.safetensors` in the model directory before attempting to load it.
The line above does not have the intended effect on its own. The model file itself should be renamed to `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`.
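A minimal sketch of the rename approach described above, run in a scratch directory. The original filename `4bit-128g.safetensors` is an assumption inferred from the symlink command; substitute whatever your downloaded model file is actually called.

```shell
# Demonstration in a scratch directory; the starting filename is an assumption.
mkdir -p /tmp/model-dir && cd /tmp/model-dir
touch 4bit-128g.safetensors   # stand-in for the downloaded model file
# Rename (rather than symlink) to the name the loader expects:
mv 4bit-128g.safetensors stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors
ls
```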
I had to install the following packages required by main.py. I'd add them to the README myself, but I haven't worked with GitHub before and don't want to mess anything up for you.
Hello, great project — I look forward to trying out KA (Kobold-Assistant).
Below is an excerpt from the install guidance:
- Download the `*.whl` file from the latest release on GitHub
- Run `pip install *.whl`.
I did a `git clone` of the repo and searched thoroughly, but I cannot find a `.whl` file.
Am I missing something?
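One likely explanation, assuming this project follows the usual GitHub layout: release assets such as `*.whl` files are attached to the repository's Releases page, not committed to the repository itself, so a `git clone` will never contain them. The sketch below demonstrates the install step's glob in a scratch directory; the wheel filename is hypothetical.

```shell
# Release assets live on the GitHub Releases page, not in the git tree,
# so `git clone` won't fetch them. After downloading the wheel manually:
mkdir -p /tmp/whl-demo && cd /tmp/whl-demo
touch kobold_assistant-0.1-py3-none-any.whl   # hypothetical wheel filename
ls *.whl   # this is the glob that `pip install *.whl` relies on
```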