gguf_gui's Introduction

GGUF_GUI: An easy way to convert your safetensors to GGUF

Easy installation:

./run.sh

This pulls the required repos and installs the Python requirements. You will also need llama.cpp built with make or cmake; the script above will try to do that for you.
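
If the run.sh script fails partway through, the equivalent manual steps are roughly the following. This is only a sketch: it assumes the project ships a requirements.txt for the Streamlit app, and the convert-requirements filename can differ between llama.cpp versions.

git clone https://github.com/ggerganov/llama.cpp
pip install -r requirements.txt
pip install -r llama.cpp/requirements/requirements-convert-hf-to-gguf.txt
make -C llama.cpp    # or with cmake: cmake -B llama.cpp/build && cmake --build llama.cpp/build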

After the initial run, you can simply run:

streamlit run main.py

This works with a CUDA-enabled llama.cpp build as well.
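
For example, a hedged sketch of a CUDA-enabled setup (GGML_CUDA matches current llama.cpp Makefiles, older builds used LLAMA_CUDA; a local CUDA toolkit is assumed):

make -C llama.cpp GGML_CUDA=1               # rebuild llama.cpp with CUDA support
streamlit run main.py --server.port 8501    # then start the UI, optionally on a specific port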

To use the Hugging Face downloader, enter the repo ID in the form username_or_org/repo_name, for example lysandre/test-model.
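
If you prefer fetching a model outside the GUI, one option is the Hugging Face CLI (a sketch; it assumes huggingface_hub is installed and the target directory is arbitrary):

huggingface-cli download lysandre/test-model --local-dir models/test-model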

Docker

You can also build the app as a container image:

cp Dockerfile.cpu Dockerfile # or Dockerfile.cuda
docker build -t gguf_gui .
docker run -v /path/to/your/models:/app/models -p 8501:8501 gguf_gui
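
To run the CUDA variant instead (a sketch, assuming Dockerfile.cuda builds a GPU-capable image and the NVIDIA Container Toolkit is installed on the host):

cp Dockerfile.cuda Dockerfile
docker build -t gguf_gui:cuda .
docker run --gpus all -v /path/to/your/models:/app/models -p 8501:8501 gguf_gui:cuda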

gguf_gui's People

Contributors: kevkid, sammcj, ronykris


gguf_gui's Issues

Any hints for installing on Win10?

Using MinGW64, I created a conda env, activated it, cloned the repo, and ran bash run.sh. It runs until this error:

ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'llama.cpp/requirements/requirements-convert-hf-to-gguf.txt'

I fixed it by renaming the file (it was actually named requirements_convert_hf_to_gguf.txt), and it runs further.
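
The workaround described above, as a sketch (the filenames follow the report and may differ in your llama.cpp checkout):

cp llama.cpp/requirements/requirements_convert_hf_to_gguf.txt llama.cpp/requirements/requirements-convert-hf-to-gguf.txt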

but then it fails:

Installing collected packages: sentencepiece, mpmath, sympy, safetensors, regex, protobuf, numpy, networkx, torch, gguf, tokenizers, transformers
Attempting uninstall: protobuf
Found existing installation: protobuf 5.27.2
Uninstalling protobuf-5.27.2:
Successfully uninstalled protobuf-5.27.2
Attempting uninstall: numpy
Found existing installation: numpy 2.0.0
Uninstalling numpy-2.0.0:
Successfully uninstalled numpy-2.0.0
Successfully installed gguf-0.6.0 mpmath-1.3.0 networkx-3.3 numpy-1.26.4 protobuf-4.25.3 regex-2024.5.15 safetensors-0.4.3 sentencepiece-0.2.0 sympy-1.13.0 tokenizers-0.19.1 torch-2.2.2+cpu transformers-4.42.3
which: no ccache in (/mingw64/bin:/usr/bin:/c/Users/kallemst/bin:/c/Users/kallemst/.conda/envs/convert:/c/Users/kallemst/.conda/envs/convert/Library/mingw-w64/bin:/c/Users/kallemst/.conda/envs/convert/Library/usr/bin:/c/Users/kallemst/.conda/envs/convert/Library/bin:/c/Users/kallemst/.conda/envs/convert/Scripts:/c/Users/kallemst/.conda/envs/convert/bin:/c/ProgramData/anaconda3/condabin:/bin:/c/Users/kallemst/bin:/mingw64/bin:/usr/local/bin:/usr/bin:/usr/bin:/mingw64/bin:/usr/bin:/c/Users/kallemst/bin:/e/text-generation-webui/installer_files/env:/mingw-w64/bin:/usr/bin:/usr/bin:/e/text-generation-webui/installer_files/env/Scripts:/c/VulkanSDK/1.3.268.0/Bin:/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.2/bin:/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.2/libnvvp:/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.7/bin:/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.7/libnvvp:/c/Program Files (x86)/ffmpeg/bin:/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/bin:/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/libnvvp:/c/Program Files (x86)/Common Files/Intel/Shared Libraries/redist/intel64/compiler:/c/Program Files (x86)/NVIDIA Corporation/PhysX/Common:/c/Program Files (x86)/Common Files/Oracle/Java/javapath:/c/Windows/system32:/c/Windows:/c/Windows/System32/Wbem:/c/Windows/System32/WindowsPowerShell/v1.0:/c/Windows/System32/OpenSSH:/c/Program Files (x86)/Windows Kits/8.1/Windows Performance Toolkit:/c/PROGRA2/TABLEC1.01:/c/PROGRA2/PEAKFI1.12:/c/Program Files/dotnet:/c/Program Files (x86)/dotnet:/c/Program Files/Intel/Intel(R) Memory and Storage Tool:/c/Program Files (x86)/QuickTime/QTSystem:/c/Program Files (x86)/Microsoft Visual Studio:/c/Program Files (x86)/Micros:/c/Program Files/CMake/bin:/c/Program Files/NVIDIA Corporation/Nsight Compute 2023.2.0:/c/ProgramData/chocolatey/bin:/c/Program Files (x86)/Yarn/bin:/c/Program Files/nodejs:/c/Program Files/eSpeak NG:/c/Program Files/Solidigm/Solidigm(TM) Storage Tool:/c/Program Files/Git/cmd:/c/Users/kallemst/.pyenv/pyenv-win/bin:/c/Users/kallemst/.pyenv/pyenv-win/shims:/c/Users/kallemst/AppData/Local/Programs/Python/Python310/Scripts:/c/Users/kallemst/AppData/Local/Programs/Python/Python310:/c/Users/kallemst/AppData/Local/Microsoft/WindowsApps:/c/Users/kallemst/.dotnet/tools:/c/Users/kallemst/AppData/Local/Yarn/bin:/c/Users/kallemst/AppData/Roaming/npm:/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.7/bin:/c/Users/kallemst/AppData/Roaming/pypoetry/venv/Scripts/poetry.exe:/c/Users/kallemst/AppData/Local/Programs/Ollama:/c/Program Files (x86)/EaseUS/Todo Backup/bin/x64:/usr/bin/vendor_perl:/usr/bin/core_perl)
I ccache not found. Consider installing it for faster compilation.
process_begin: CreateProcess(NULL, cc -dumpmachine, ...) failed.
Makefile:439: pipe: Bad file descriptor
process_begin: CreateProcess(NULL, cc --version, ...) failed.
scripts/get-flags.mk:1: pipe: No error
/usr/bin/sh: line 1: cc: command not found
expr: syntax error: unexpected argument ‘070100’
expr: syntax error: unexpected argument ‘080100’
I llama.cpp build info:
I UNAME_S: MINGW64_NT-10.0-19045
I UNAME_P: unknown
I UNAME_M: x86_64
I CFLAGS: -Iggml/include -Iggml/src -Iinclude -Isrc -Icommon -D_XOPEN_SOURCE=600 -DNDEBUG -DGGML_USE_OPENMP -DGGML_USE_LLAMAFILE -DGGML_USE_CUDA -Ic:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2/include -Ic:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2/targets/x86_64-linux/include -DGGML_CUDA_USE_GRAPHS -std=c11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -march=native -mtune=native -fopenmp -Wdouble-promotion
I CXXFLAGS: -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -fopenmp -march=native -mtune=native -Wno-array-bounds -Iggml/include -Iggml/src -Iinclude -Isrc -Icommon -D_XOPEN_SOURCE=600 -DNDEBUG -DGGML_USE_OPENMP -DGGML_USE_LLAMAFILE -DGGML_USE_CUDA -Ic:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2/include -Ic:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2/targets/x86_64-linux/include -DGGML_CUDA_USE_GRAPHS
I NVCCFLAGS: -std=c++11 -O3 -use_fast_math --forward-unknown-to-host-compiler -arch=native -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DK_QUANTS_PER_ITERATION=2 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128
I LDFLAGS: -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -Lc:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2/lib64 -L/usr/lib64 -Lc:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2/targets/x86_64-linux/lib -Lc:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2/lib64/stubs -L/usr/lib/wsl/lib
/usr/bin/sh: line 1: cc: command not found
I CC:
/usr/bin/sh: line 1: c++: command not found
I CXX:
I NVCC: Build cuda_12.2.r12.2/compiler.32965470_0

!!! DEPRECATION WARNING !!!
The following LLAMA_ options are deprecated and will be removed in the future. Use the GGML_ prefix instead

  • LLAMA_CUDA
  • LLAMA_METAL
  • LLAMA_METAL_EMBED_LIBRARY
  • LLAMA_OPENMP
  • LLAMA_RPC
  • LLAMA_SYCL
  • LLAMA_SYCL_F16
  • LLAMA_OPENBLAS
  • LLAMA_OPENBLAS64
  • LLAMA_BLIS
  • LLAMA_NO_LLAMAFILE
  • LLAMA_NO_ACCELERATE
  • LLAMA_NO_OPENMP
  • LLAMA_NO_METAL

c++ -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -fopenmp -march=native -mtune=native -Wno-array-bounds -Iggml/include -Iggml/src -Iinclude -Isrc -Icommon -D_XOPEN_SOURCE=600 -DNDEBUG -DGGML_USE_OPENMP -DGGML_USE_LLAMAFILE -DGGML_USE_CUDA -Ic:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2/include -Ic:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2/targets/x86_64-linux/include -DGGML_CUDA_USE_GRAPHS -c ggml/src/sgemm.cpp -o ggml/src/sgemm.o
process_begin: CreateProcess(NULL, c++ -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -fopenmp -march=native -mtune=native -Wno-array-bounds -Iggml/include -Iggml/src -Iinclude -Isrc -Icommon -D_XOPEN_SOURCE=600 -DNDEBUG -DGGML_USE_OPENMP -DGGML_USE_LLAMAFILE -DGGML_USE_CUDA -Ic:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2/include -Ic:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2/targets/x86_64-linux/include -DGGML_CUDA_USE_GRAPHS -c ggml/src/sgemm.cpp -o ggml/src/sgemm.o, ...) failed.
make (e=2): Das System kann die angegebene Datei nicht finden. [The system cannot find the file specified.]
make: *** [Makefile:972: ggml/src/sgemm.o] Error 2
(convert)
kallemst@TheMaschine MINGW64 /f/models/convert/gguf_gui (main)
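
The root cause in this log is that no C/C++ compiler is visible in the MinGW64 shell (cc: command not found), so make cannot build llama.cpp. One possible fix, offered only as an assumption since the thread does not record a confirmed resolution:

pacman -S --needed mingw-w64-x86_64-gcc make    # install a toolchain inside the MSYS2/MinGW64 environment
make -C llama.cpp CC=gcc CXX=g++                # or point make at the compilers explicitly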
