
one-click-installers's Introduction

This repository has been merged into https://github.com/oobabooga/text-generation-webui -- let's continue working on it there!

One-click installers

These are automated installers for oobabooga/text-generation-webui.

The idea is to allow people to use the program without having to type commands in the terminal, thus making it more accessible.

How it works

The start scripts download miniconda, create a conda environment inside the current folder, and then install the webui using that environment.

After the initial installation, the update scripts are then used to automatically pull the latest text-generation-webui code and upgrade its requirements.
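As a rough illustration of that flow, here is a minimal Python sketch — not the actual installer code; the folder names, the Python version, and the Linux Miniconda URL are assumptions for the example:

```python
# Sketch of what a start script conceptually does: fetch Miniconda, create a conda
# environment inside the current folder, then install the webui into that environment.
import subprocess

INSTALL_DIR = "installer_files"        # hypothetical folder name
ENV_DIR = f"{INSTALL_DIR}/env"

def run(cmd):
    print(f"> {cmd}")
    subprocess.run(cmd, shell=True, check=True)

# 1) Download Miniconda and install it locally (Linux installer used for the example).
run("curl -L -o miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh")
run(f"bash miniconda.sh -b -p {INSTALL_DIR}/conda")

# 2) Create an isolated conda environment inside the current folder.
run(f"{INSTALL_DIR}/conda/bin/conda create -y -p {ENV_DIR} python=3.10")

# 3) Clone text-generation-webui and install its requirements with the env's Python.
run("git clone https://github.com/oobabooga/text-generation-webui")
run(f"{ENV_DIR}/bin/python -m pip install -r text-generation-webui/requirements.txt")
```

The update scripts then amount to a git pull inside the text-generation-webui folder followed by re-running the requirements install with the same environment.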

Limitations

  • The start/update scripts themselves are not automatically updated. To update them, you have to re-download the zips listed on the main README and overwrite your existing files.

one-click-installers's People

Contributors

andmydignity, deevis, gavin660, jllllll, loufe, mongolu, oobabooga, rsxdalv, samfundev, semjon00, xnul


one-click-installers's Issues

llama does not start via start-webui.bat

Hello.
If I try to start decapoda-research/llama-7b-hf-int4 via start-webui.bat, everything crashes.
What do I need to do?

Loading llama-7b-hf-int4...
CUDA extension not installed.
Traceback (most recent call last):
  File "H:\one-click-installers-oobabooga-windows\text-generation-webui\server.py", line 243, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "H:\one-click-installers-oobabooga-windows\text-generation-webui\modules\models.py", line 101, in load_model
    model = load_quantized(model_name)
  File "H:\one-click-installers-oobabooga-windows\text-generation-webui\modules\GPTQ_loader.py", line 64, in load_quantized
    model = load_quant(str(path_to_model), str(pt_path), shared.args.gptq_bits)
TypeError: load_quant() missing 1 required positional argument: 'groupsize'

Error Critical libmamba

I've been searching for information about this error, which stops the installation, and I don't understand what I'm doing wrong.

(screenshot of the error attached)

I had read that it could be a problem with SSL, with Windows Defender, with the servers, etc.

I am new at this. Sorry.

torch.cuda.OutOfMemoryError: CUDA out of memory

 File "D:\AI\oobabooga-windows\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 6.00 GiB total capacity; 5.18 GiB already allocated; 0 bytes free; 5.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
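The message itself points at max_split_size_mb. As a hedged sketch (generic PyTorch usage, not anything specific to this installer), the option is passed through the PYTORCH_CUDA_ALLOC_CONF environment variable and should be set before CUDA is initialized:

```python
# Hedged sketch: limit how large allocator blocks may be split, which can reduce
# fragmentation on small GPUs. Set the variable before importing/initializing torch.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import torch
if torch.cuda.is_available():
    print(torch.cuda.memory_summary())
```

Note that on a 6 GiB card this only mitigates fragmentation; if the model's weights plus activations simply exceed the card's capacity, a smaller model or CPU/disk offloading is still needed.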

Connect oobabooga with SillyTavern

Hi, I'm a total noob, so please help me in the simplest way possible. I've installed oobabooga and SillyTavern, and now I have to connect them together, which is the hard part. What should I do? Do I open the web UI and add something? What do I need to add, and where?

Connection timeout error while collecting the bitsandbytes 0.39.0 whl during installation

I get a "connection timeout error" while downloading the bitsandbytes whl.


WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000025C8107AA70>, 'Connection to raw.githubusercontent.com timed out. (connect timeout=15)')': /jllllll/bitsandbytes-windows-webui/main/bitsandbytes-0.39.0-py3-none-any.whl

ERROR: Could not install packages due to an OSError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /jllllll/bitsandbytes-windows-webui/main/bitsandbytes-0.39.0-py3-none-any.whl (Caused by ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000025C81079E10>, 'Connection to raw.githubusercontent.com timed out. (connect timeout=15)'))

This might very well be an isolated problem, but is there any workaround for installing the whl? I can do a git clone or pip install, but I don't know where to place the .whl file if I download it manually. Any help?
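One hedged workaround sketch (the paths are arbitrary and the environment handling is an assumption): download the wheel by hand, then point pip at the local file. pip accepts a path to a .whl anywhere on disk, so the file does not have to sit in a particular folder; just run the install from the installer's own environment (for example via cmd_windows.bat) so it lands in the right site-packages.

```python
# Download the bitsandbytes wheel manually, then install it from the local path.
import subprocess
import urllib.request

url = ("https://raw.githubusercontent.com/jllllll/bitsandbytes-windows-webui/"
       "main/bitsandbytes-0.39.0-py3-none-any.whl")
local = "bitsandbytes-0.39.0-py3-none-any.whl"  # saved next to this script

urllib.request.urlretrieve(url, local)
subprocess.run(["python", "-m", "pip", "install", local], check=True)
```

If raw.githubusercontent.com is unreachable from the machine, the wheel can be fetched on another machine and copied over; the pip step is the same.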

CUDA Error

Fresh install this morning. Here's the trace.


===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('G')}
  warn(msg)
G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('file'), WindowsPath('/G'), WindowsPath('/AI/one-click-installers-oobabooga-windows/one-click-installers-oobabooga-windows/installer_files/env/etc/xml/catalog')}
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
  warn(msg)
CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine!
G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
  warn(msg)
G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\cuda_setup\main.py:136: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
  warn(msg)
CUDA SETUP: Loading binary G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine!
CUDA SETUP: Loading binary G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine!
CUDA SETUP: Loading binary G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA library was not detected.
CUDA SETUP: Solution 1): Your paths are probably not up-to-date. You can update them via: sudo ldconfig.
CUDA SETUP: Solution 2): If you do not have sudo rights, you can do the following:
CUDA SETUP: Solution 2a): Find the cuda library via: find / -name libcuda.so 2>/dev/null
CUDA SETUP: Solution 2b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_2a
CUDA SETUP: Solution 2c): For a permanent solution add the export from 2b into your .bashrc file, located at ~/.bashrc
Traceback (most recent call last):
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\text-generation-webui\server.py", line 13, in <module>
    from modules import chat, shared, training, ui
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\text-generation-webui\modules\training.py", line 11, in <module>
    from peft import (LoraConfig, get_peft_model, get_peft_model_state_dict,
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\peft\__init__.py", line 22, in <module>
    from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\peft\mapping.py", line 16, in <module>
    from .peft_model import (
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\peft\peft_model.py", line 31, in <module>
    from .tuners import LoraModel, PrefixEncoder, PromptEmbedding, PromptEncoder
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\peft\tuners\__init__.py", line 20, in <module>
    from .lora import LoraConfig, LoraModel
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\peft\tuners\lora.py", line 36, in <module>
    import bitsandbytes as bnb
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\__init__.py", line 7, in <module>
    from .autograd._functions import (
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\autograd\__init__.py", line 1, in <module>
    from ._functions import undo_layout, get_inverse_transform_indices
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\autograd\_functions.py", line 9, in <module>
    import bitsandbytes.functional as F
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\functional.py", line 17, in <module>
    from .cextension import COMPILED_WITH_CUDA, lib
  File "G:\AI\one-click-installers-oobabooga-windows\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\cextension.py", line 22, in <module>
    raise RuntimeError('''
RuntimeError:
        CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs above to fix your environment!
        If you cannot find any issues and suspect a bug, please open an issue with detals about your environment:
        https://github.com/TimDettmers/bitsandbytes/issues
Press any key to continue . . .

Error on 'gallery' load?

Via one-click windows installer, post install (which seemed to complete without a hitch), upon running start_windows.bat (or cmd_windows.bat):

INFO:Gradio HTTP request redirected to localhost :)
bin C:\Users\shogz\SEA\oobabooga\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
INFO:Loading facebook_opt-1.3b...
INFO:Loaded the model in 3.40 seconds.
INFO:Loading the extension "gallery"...
Traceback (most recent call last):
  File "C:\Users\shogz\SEA\oobabooga\text-generation-webui\server.py", line 885, in <module>
    create_interface()
  File "C:\Users\shogz\SEA\oobabooga\text-generation-webui\server.py", line 472, in create_interface
    with gr.Blocks(css=ui.css if not shared.is_chat() else ui.css + ui.chat_css, analytics_enabled=False, title=title, theme=ui.theme) as shared.gradio['interface']:
  File "C:\Users\shogz\SEA\oobabooga\installer_files\env\lib\site-packages\gradio\blocks.py", line 1285, in __exit__
    self.config = self.get_config_file()
  File "C:\Users\shogz\SEA\oobabooga\installer_files\env\lib\site-packages\gradio\blocks.py", line 1261, in get_config_file
    "input": list(block.input_api_info()),  # type: ignore
  File "C:\Users\shogz\SEA\oobabooga\installer_files\env\lib\site-packages\gradio_client\serializing.py", line 40, in input_api_info
    return (api_info["serialized_input"][0], api_info["serialized_input"][1])
KeyError: 'serialized_input'

Done!
Press any key to continue . . .

Share=True

Where do we add this flag so that the one-click install shares the UI publicly? I tried ./start_linux.sh --share, but that did not work.
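Whatever the launcher looks like, the flag has to end up on the server.py command line for Gradio to create a public link. A purely illustrative sketch, not the actual launcher code in this repo:

```python
# Illustrative only: append --share to whatever command ultimately starts server.py.
import subprocess

extra_flags = "--chat --share"  # hypothetical flag string; --share is the relevant part
subprocess.run(f"python server.py {extra_flags}", shell=True, check=True)
```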

ModuleNotFoundError: No module named 'llama_inference_offload'

I am trying to run this program in CPU mode. Whenever I run install.bat, it crashes after downloading the assets; I believe there is something in the installation it is hanging on. If I select option B in install.bat, running start-webui.bat results in ModuleNotFoundError: No module named 'llama_inference_offload'. If I select option A, the install goes according to plan and start-webui.bat starts with no errors, but once I send a prompt the program crashes because there are no CUDA cores, since there is no GPU attached. I am attaching some logs I created to show what the programs are doing while running.

start-webui.bat python flags: call python server.py --cpu --chat --model vicuna-13b-GPTQ-4bit-128g --wbit 4 --groupsize 128

Running install.bat option b:

WARNING: This script relies on Micromamba which may have issues on some systems when installed under a path with spaces.
May also have issues with long paths.

What is your GPU?

A) NVIDIA
B) None (I want to run in CPU mode)

Input> Already up to date.
Collecting git+https://github.com/huggingface/transformers (from -r requirements.txt (line 15))
Cloning https://github.com/huggingface/transformers to c:\users\senpai\appdata\local\temp\pip-req-build-gkpc8g01
Resolved https://github.com/huggingface/transformers to commit 656e869a4523f6a0ce90b3aacbb05cc8fb5794bb
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: accelerate==0.18.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 1)) (0.18.0)
Requirement already satisfied: bitsandbytes==0.37.2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 2)) (0.37.2)
Requirement already satisfied: datasets in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 3)) (2.11.0)
Requirement already satisfied: flexgen==0.1.7 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 4)) (0.1.7)
Requirement already satisfied: gradio==3.24.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 5)) (3.24.1)
Requirement already satisfied: markdown in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 6)) (3.4.3)
Requirement already satisfied: numpy in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 7)) (1.23.5)
Collecting numpy
Using cached numpy-1.24.2-cp310-cp310-win_amd64.whl (14.8 MB)
Requirement already satisfied: peft==0.2.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 8)) (0.2.0)
Requirement already satisfied: requests in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 9)) (2.28.2)
Requirement already satisfied: rwkv==0.7.3 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 10)) (0.7.3)
Requirement already satisfied: safetensors==0.3.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 11)) (0.3.0)
Requirement already satisfied: sentencepiece in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 12)) (0.1.97)
Requirement already satisfied: pyyaml in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 13)) (6.0)
Requirement already satisfied: tqdm in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r requirements.txt (line 14)) (4.65.0)
Requirement already satisfied: psutil in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from accelerate==0.18.0->-r requirements.txt (line 1)) (5.9.4)
Requirement already satisfied: packaging>=20.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from accelerate==0.18.0->-r requirements.txt (line 1)) (23.0)
Requirement already satisfied: torch>=1.4.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from accelerate==0.18.0->-r requirements.txt (line 1)) (2.0.0)
Requirement already satisfied: pulp in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from flexgen==0.1.7->-r requirements.txt (line 4)) (2.7.0)
Requirement already satisfied: attrs in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from flexgen==0.1.7->-r requirements.txt (line 4)) (22.2.0)
Requirement already satisfied: gradio-client>=0.0.5 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (0.0.8)
Requirement already satisfied: typing-extensions in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (4.5.0)
Requirement already satisfied: aiofiles in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (23.1.0)
Requirement already satisfied: pydantic in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (1.10.7)
Requirement already satisfied: pandas in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (2.0.0)
Requirement already satisfied: uvicorn in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (0.21.1)
Requirement already satisfied: jinja2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (3.1.2)
Requirement already satisfied: ffmpy in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (0.3.0)
Requirement already satisfied: aiohttp in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (3.8.4)
Requirement already satisfied: httpx in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (0.23.3)
Requirement already satisfied: pillow in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (9.4.0)
Requirement already satisfied: semantic-version in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (2.10.0)
Requirement already satisfied: websockets>=10.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (11.0.1)
Requirement already satisfied: altair>=4.2.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (4.2.2)
Requirement already satisfied: matplotlib in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (3.7.1)
Requirement already satisfied: python-multipart in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (0.0.6)
Requirement already satisfied: pydub in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (0.25.1)
Requirement already satisfied: markupsafe in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (2.1.2)
Requirement already satisfied: fastapi in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (0.95.0)
Requirement already satisfied: markdown-it-py[linkify]>=2.0.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (2.2.0)
Requirement already satisfied: orjson in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (3.8.9)
Requirement already satisfied: mdit-py-plugins<=0.3.3 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (0.3.3)
Requirement already satisfied: huggingface-hub>=0.13.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from gradio==3.24.1->-r requirements.txt (line 5)) (0.13.4)
Requirement already satisfied: tokenizers>=0.13.2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from rwkv==0.7.3->-r requirements.txt (line 10)) (0.13.3)
Requirement already satisfied: multiprocess in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from datasets->-r requirements.txt (line 3)) (0.70.14)
Requirement already satisfied: pyarrow>=8.0.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from datasets->-r requirements.txt (line 3)) (11.0.0)
Requirement already satisfied: xxhash in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from datasets->-r requirements.txt (line 3)) (3.2.0)
Requirement already satisfied: responses<0.19 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from datasets->-r requirements.txt (line 3)) (0.18.0)
Requirement already satisfied: fsspec[http]>=2021.11.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from datasets->-r requirements.txt (line 3)) (2023.4.0)
Requirement already satisfied: dill<0.3.7,>=0.3.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from datasets->-r requirements.txt (line 3)) (0.3.6)
Requirement already satisfied: idna<4,>=2.5 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->-r requirements.txt (line 9)) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->-r requirements.txt (line 9)) (1.26.15)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->-r requirements.txt (line 9)) (3.1.0)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->-r requirements.txt (line 9)) (2022.12.7)
Requirement already satisfied: colorama in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from tqdm->-r requirements.txt (line 14)) (0.4.6)
Requirement already satisfied: filelock in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from transformers==4.28.0.dev0->-r requirements.txt (line 15)) (3.11.0)
Requirement already satisfied: regex!=2019.12.17 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from transformers==4.28.0.dev0->-r requirements.txt (line 15)) (2023.3.23)
Requirement already satisfied: toolz in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from altair>=4.2.0->gradio==3.24.1->-r requirements.txt (line 5)) (0.12.0)
Requirement already satisfied: entrypoints in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from altair>=4.2.0->gradio==3.24.1->-r requirements.txt (line 5)) (0.4)
Requirement already satisfied: jsonschema>=3.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from altair>=4.2.0->gradio==3.24.1->-r requirements.txt (line 5)) (4.17.3)
Requirement already satisfied: yarl<2.0,>=1.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from aiohttp->gradio==3.24.1->-r requirements.txt (line 5)) (1.8.2)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from aiohttp->gradio==3.24.1->-r requirements.txt (line 5)) (4.0.2)
Requirement already satisfied: frozenlist>=1.1.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from aiohttp->gradio==3.24.1->-r requirements.txt (line 5)) (1.3.3)
Requirement already satisfied: aiosignal>=1.1.2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from aiohttp->gradio==3.24.1->-r requirements.txt (line 5)) (1.3.1)
Requirement already satisfied: multidict<7.0,>=4.5 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from aiohttp->gradio==3.24.1->-r requirements.txt (line 5)) (6.0.4)
Requirement already satisfied: mdurl~=0.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from markdown-it-py[linkify]>=2.0.0->gradio==3.24.1->-r requirements.txt (line 5)) (0.1.2)
Requirement already satisfied: linkify-it-py<3,>=1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from markdown-it-py[linkify]>=2.0.0->gradio==3.24.1->-r requirements.txt (line 5)) (2.0.0)
Requirement already satisfied: tzdata>=2022.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from pandas->gradio==3.24.1->-r requirements.txt (line 5)) (2023.3)
Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from pandas->gradio==3.24.1->-r requirements.txt (line 5)) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from pandas->gradio==3.24.1->-r requirements.txt (line 5)) (2023.3)
Requirement already satisfied: sympy in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from torch>=1.4.0->accelerate==0.18.0->-r requirements.txt (line 1)) (1.11.1)
Requirement already satisfied: networkx in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from torch>=1.4.0->accelerate==0.18.0->-r requirements.txt (line 1)) (3.1)
Requirement already satisfied: starlette<0.27.0,>=0.26.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from fastapi->gradio==3.24.1->-r requirements.txt (line 5)) (0.26.1)
Requirement already satisfied: rfc3986[idna2008]<2,>=1.3 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from httpx->gradio==3.24.1->-r requirements.txt (line 5)) (1.5.0)
Requirement already satisfied: httpcore<0.17.0,>=0.15.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from httpx->gradio==3.24.1->-r requirements.txt (line 5)) (0.16.3)
Requirement already satisfied: sniffio in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from httpx->gradio==3.24.1->-r requirements.txt (line 5)) (1.3.0)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from matplotlib->gradio==3.24.1->-r requirements.txt (line 5)) (1.4.4)
Requirement already satisfied: cycler>=0.10 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from matplotlib->gradio==3.24.1->-r requirements.txt (line 5)) (0.11.0)
Requirement already satisfied: contourpy>=1.0.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from matplotlib->gradio==3.24.1->-r requirements.txt (line 5)) (1.0.7)
Requirement already satisfied: fonttools>=4.22.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from matplotlib->gradio==3.24.1->-r requirements.txt (line 5)) (4.39.3)
Requirement already satisfied: pyparsing>=2.3.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from matplotlib->gradio==3.24.1->-r requirements.txt (line 5)) (3.0.9)
Requirement already satisfied: click>=7.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from uvicorn->gradio==3.24.1->-r requirements.txt (line 5)) (8.1.3)
Requirement already satisfied: h11>=0.8 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from uvicorn->gradio==3.24.1->-r requirements.txt (line 5)) (0.14.0)
Requirement already satisfied: anyio<5.0,>=3.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from httpcore<0.17.0,>=0.15.0->httpx->gradio==3.24.1->-r requirements.txt (line 5)) (3.6.2)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from jsonschema>=3.0->altair>=4.2.0->gradio==3.24.1->-r requirements.txt (line 5)) (0.19.3)
Requirement already satisfied: uc-micro-py in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from linkify-it-py<3,>=1->markdown-it-py[linkify]>=2.0.0->gradio==3.24.1->-r requirements.txt (line 5)) (1.0.1)
Requirement already satisfied: six>=1.5 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from python-dateutil>=2.8.2->pandas->gradio==3.24.1->-r requirements.txt (line 5)) (1.16.0)
Requirement already satisfied: mpmath>=0.19 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from sympy->torch>=1.4.0->accelerate==0.18.0->-r requirements.txt (line 1)) (1.3.0)
Installing collected packages: numpy
Attempting uninstall: numpy
Found existing installation: numpy 1.23.5
Uninstalling numpy-1.23.5:
Successfully uninstalled numpy-1.23.5
Successfully installed numpy-1.24.2
Requirement already satisfied: flask_cloudflared==0.0.12 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\api\requirements.txt (line 1)) (0.0.12)
Requirement already satisfied: requests in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (2.28.2)
Requirement already satisfied: Flask>=0.8 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (2.2.3)
Requirement already satisfied: Jinja2>=3.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (3.1.2)
Requirement already satisfied: click>=8.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (8.1.3)
Requirement already satisfied: itsdangerous>=2.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (2.1.2)
Requirement already satisfied: Werkzeug>=2.2.2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (2.2.3)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (2022.12.7)
Requirement already satisfied: idna<4,>=2.5 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (1.26.15)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (3.1.0)
Requirement already satisfied: colorama in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from click>=8.0->Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (0.4.6)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from Jinja2>=3.0->Flask>=0.8->flask_cloudflared==0.0.12->-r extensions\api\requirements.txt (line 1)) (2.1.2)
Requirement already satisfied: elevenlabslib in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\elevenlabs_tts\requirements.txt (line 1)) (0.4.1)
Requirement already satisfied: soundfile in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\elevenlabs_tts\requirements.txt (line 2)) (0.12.1)
Requirement already satisfied: sounddevice in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\elevenlabs_tts\requirements.txt (line 3)) (0.4.6)
Requirement already satisfied: typing in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from elevenlabslib->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (3.7.4.3)
Requirement already satisfied: requests in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from elevenlabslib->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (2.28.2)
Requirement already satisfied: numpy in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from elevenlabslib->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (1.24.2)
Requirement already satisfied: cffi>=1.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from soundfile->-r extensions\elevenlabs_tts\requirements.txt (line 2)) (1.15.1)
Requirement already satisfied: pycparser in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from cffi>=1.0->soundfile->-r extensions\elevenlabs_tts\requirements.txt (line 2)) (2.21)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->elevenlabslib->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->elevenlabslib->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (2022.12.7)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->elevenlabslib->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests->elevenlabslib->-r extensions\elevenlabs_tts\requirements.txt (line 1)) (3.4)
Requirement already satisfied: deep-translator==1.9.2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\google_translate\requirements.txt (line 1)) (1.9.2)
Requirement already satisfied: beautifulsoup4<5.0.0,>=4.9.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (4.12.2)
Requirement already satisfied: requests<3.0.0,>=2.23.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (2.28.2)
Requirement already satisfied: soupsieve>1.2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from beautifulsoup4<5.0.0,>=4.9.1->deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (2.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (1.26.15)
Requirement already satisfied: idna<4,>=2.5 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (3.4)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (2022.12.7)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions\google_translate\requirements.txt (line 1)) (3.1.0)
Requirement already satisfied: ipython in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\silero_tts\requirements.txt (line 1)) (8.12.0)
Requirement already satisfied: num2words in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\silero_tts\requirements.txt (line 2)) (0.5.12)
Requirement already satisfied: omegaconf in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\silero_tts\requirements.txt (line 3)) (2.3.0)
Requirement already satisfied: pydub in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\silero_tts\requirements.txt (line 4)) (0.25.1)
Requirement already satisfied: PyYAML in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\silero_tts\requirements.txt (line 5)) (6.0)
Requirement already satisfied: decorator in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (5.1.1)
Requirement already satisfied: jedi>=0.16 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.18.2)
Requirement already satisfied: traitlets>=5 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (5.9.0)
Requirement already satisfied: stack-data in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.6.2)
Requirement already satisfied: pygments>=2.4.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (2.14.0)
Requirement already satisfied: colorama in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.4.6)
Requirement already satisfied: prompt-toolkit!=3.0.37,<3.1.0,>=3.0.30 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (3.0.38)
Requirement already satisfied: backcall in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.2.0)
Requirement already satisfied: pickleshare in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.7.5)
Requirement already satisfied: matplotlib-inline in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.1.6)
Requirement already satisfied: docopt>=0.6.2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from num2words->-r extensions\silero_tts\requirements.txt (line 2)) (0.6.2)
Requirement already satisfied: antlr4-python3-runtime==4.9.* in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from omegaconf->-r extensions\silero_tts\requirements.txt (line 3)) (4.9.3)
Requirement already satisfied: parso<0.9.0,>=0.8.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from jedi>=0.16->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.8.3)
Requirement already satisfied: wcwidth in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from prompt-toolkit!=3.0.37,<3.1.0,>=3.0.30->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.2.6)
Requirement already satisfied: pure-eval in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from stack-data->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (0.2.2)
Requirement already satisfied: executing>=1.2.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from stack-data->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (1.2.0)
Requirement already satisfied: asttokens>=2.1.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from stack-data->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (2.2.1)
Requirement already satisfied: six in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from asttokens>=2.1.0->stack-data->ipython->-r extensions\silero_tts\requirements.txt (line 1)) (1.16.0)
Collecting git+https://github.com/Uberi/speech_recognition.git@010382b (from -r extensions\whisper_stt\requirements.txt (line 1))
Cloning https://github.com/Uberi/speech_recognition.git (to revision 010382b) to c:\users\senpai\appdata\local\temp\pip-req-build-4ojingrm
Resolved https://github.com/Uberi/speech_recognition.git to commit 010382b
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Requirement already satisfied: openai-whisper in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\whisper_stt\requirements.txt (line 2)) (20230314)
Requirement already satisfied: soundfile in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\whisper_stt\requirements.txt (line 3)) (0.12.1)
Requirement already satisfied: ffmpeg in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from -r extensions\whisper_stt\requirements.txt (line 4)) (1.4)
Requirement already satisfied: requests>=2.26.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from SpeechRecognition==3.9.0->-r extensions\whisper_stt\requirements.txt (line 1)) (2.28.2)
Requirement already satisfied: numpy in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (1.24.2)
Requirement already satisfied: more-itertools in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (9.1.0)
Requirement already satisfied: tiktoken==0.3.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (0.3.1)
Requirement already satisfied: ffmpeg-python==0.2.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (0.2.0)
Requirement already satisfied: torch in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (2.0.0)
Requirement already satisfied: numba in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (0.56.4)
Requirement already satisfied: tqdm in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (4.65.0)
Requirement already satisfied: future in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from ffmpeg-python==0.2.0->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (0.18.3)
Requirement already satisfied: regex>=2022.1.18 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from tiktoken==0.3.1->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (2023.3.23)
Requirement already satisfied: cffi>=1.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from soundfile->-r extensions\whisper_stt\requirements.txt (line 3)) (1.15.1)
Requirement already satisfied: pycparser in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from cffi>=1.0->soundfile->-r extensions\whisper_stt\requirements.txt (line 3)) (2.21)
Requirement already satisfied: idna<4,>=2.5 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.9.0->-r extensions\whisper_stt\requirements.txt (line 1)) (3.4)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.9.0->-r extensions\whisper_stt\requirements.txt (line 1)) (2022.12.7)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.9.0->-r extensions\whisper_stt\requirements.txt (line 1)) (3.1.0)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from requests>=2.26.0->SpeechRecognition==3.9.0->-r extensions\whisper_stt\requirements.txt (line 1)) (1.26.15)
Requirement already satisfied: setuptools in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from numba->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (67.6.1)
Requirement already satisfied: llvmlite<0.40,>=0.39.0dev0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from numba->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (0.39.1)
Collecting numpy
Using cached numpy-1.23.5-cp310-cp310-win_amd64.whl (14.6 MB)
Requirement already satisfied: filelock in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from torch->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (3.11.0)
Requirement already satisfied: typing-extensions in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from torch->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (4.5.0)
Requirement already satisfied: sympy in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from torch->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (1.11.1)
Requirement already satisfied: networkx in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from torch->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (3.1)
Requirement already satisfied: jinja2 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from torch->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (3.1.2)
Requirement already satisfied: colorama in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from tqdm->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (0.4.6)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from jinja2->torch->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (2.1.2)
Requirement already satisfied: mpmath>=0.19 in c:\users\senpai\downloads\oobabooga-windows\installer_files\env\lib\site-packages (from sympy->torch->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (1.3.0)
Installing collected packages: numpy
Attempting uninstall: numpy
Found existing installation: numpy 1.24.2
Uninstalling numpy-1.24.2:
Successfully uninstalled numpy-1.24.2
Successfully installed numpy-1.23.5
Running command git clone --filter=blob:none --quiet https://github.com/huggingface/transformers 'C:\Users\senpai\AppData\Local\Temp\pip-req-build-gkpc8g01'
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
numba 0.56.4 requires numpy<1.24,>=1.18, but you have numpy 1.24.2 which is incompatible.
Running command git clone --filter=blob:none --quiet https://github.com/Uberi/speech_recognition.git 'C:\Users\senpai\AppData\Local\Temp\pip-req-build-4ojingrm'
WARNING: Did not find branch or tag '010382b', assuming revision or ref.
Running command git checkout -q 010382b
The system cannot find the batch label specified - bandaid

Running start-webui.bat

Starting the web UI...

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

CUDA SETUP: Loading binary C:\Users\senpai\Downloads\oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll...
C:\Users\senpai\Downloads\oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
Loading vicuna-13b-GPTQ-4bit-128g...
Traceback (most recent call last):
  File "C:\Users\senpai\Downloads\oobabooga-windows\text-generation-webui\server.py", line 302, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\Users\senpai\Downloads\oobabooga-windows\text-generation-webui\modules\models.py", line 100, in load_model
    from modules.GPTQ_loader import load_quantized
  File "C:\Users\senpai\Downloads\oobabooga-windows\text-generation-webui\modules\GPTQ_loader.py", line 14, in <module>
    import llama_inference_offload
ModuleNotFoundError: No module named 'llama_inference_offload'
Press any key to continue . . .

Possible installer bug?

Winver: Windows 10 22H2 (Virtualized)
System:
CPU: Xeon(R) CPU E5-2670 v3 (12 cores virtualized)
RAM: 32GB
HDD: 256GB

Missing LICENSE

I see you have no LICENSE file for this project. The default is copyright.

I would suggest releasing the code under the AGPL-3.0-or-later license so that others are encouraged to contribute changes back to your project and to match text-generation-webui.

MacBook M2 runtime MPS issue

I am on the latest macOS (13.4 (22F66)) on a MacBook Air M2 with 16 GB of RAM. I installed the OPT-6.7B model while choosing to run on CPU (I also tried Apple Silicon and got the same results). The issue occurs in the local web UI when I press "Generate" with any prompt. I also enabled the --mlock option; there are no other differences from the default configuration.

What can cause this issue, and are there any fixes for it?

Traceback of the issue:

/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py:719: UserWarning: MPS: no support for int64 repeats mask, casting it to int32 (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1682343686130/work/aten/src/ATen/native/mps/operations/Repeat.mm:236.)
  input_ids = input_ids.repeat_interleave(expand_size, dim=0)
Traceback (most recent call last):
  File "/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/text-generation-webui/modules/callbacks.py", line 73, in gentask
    ret = self.mfunc(callback=_callback, **self.kwargs)
  File "/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/text-generation-webui/modules/text_generation.py", line 277, in generate_with_callback
    shared.model.generate(**kwargs)
  File "/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 1568, in generate
    return self.sample(
  File "/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 2615, in sample
    outputs = self(
  File "/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/models/opt/modeling_opt.py", line 945, in forward
    outputs = self.model.decoder(
  File "/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/models/opt/modeling_opt.py", line 655, in forward
    pos_embeds = self.embed_positions(attention_mask, past_key_values_length)
  File "/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/sadesguy/Downloads/one-click-installers-main/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/models/opt/modeling_opt.py", line 115, in forward
    positions = (torch.cumsum(attention_mask, dim=1).type_as(attention_mask) * attention_mask).long() - 1
RuntimeError: MPS does not support cumsum op with int64 input
Output generated in 4.72 seconds (0.00 tokens/s, 0 tokens, context 63, seed 1888692510)

RuntimeError: Unrecognized CachingAllocator option: max_split_size_mb=512

I can't find another issue like this. Kindly help, thank you.

Gradio HTTP request redirected to localhost :)
bin C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
Traceback (most recent call last):
File "C:\text-generation-webui\one-click-installers-main\text-generation-webui\server.py", line 44, in
from modules import chat, shared, training, ui
File "C:\text-generation-webui\one-click-installers-main\text-generation-webui\modules\training.py", line 13, in
from peft import (LoraConfig, get_peft_model, prepare_model_for_int8_training,
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\peft_init_.py", line 22, in
from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\peft\mapping.py", line 16, in
from .peft_model import (
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\peft\peft_model.py", line 31, in
from .tuners import (
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\peft\tuners_init_.py", line 21, in
from .lora import LoraConfig, LoraModel
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\peft\tuners\lora.py", line 40, in
import bitsandbytes as bnb
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\bitsandbytes_init_.py", line 6, in
from . import cuda_setup, utils, research
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\bitsandbytes\research_init_.py", line 2, in
from .autograd._functions import (
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\bitsandbytes\research\autograd_functions.py", line 10, in
from bitsandbytes.autograd.functions import MatmulLtState, GlobalOutlierPooler
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\bitsandbytes\autograd_init
.py", line 1, in
from .functions import undo_layout, get_inverse_transform_indices
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\bitsandbytes\autograd_functions.py", line 236, in
class MatmulLtState:
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\bitsandbytes\autograd_functions.py", line 258, in MatmulLtState
formatB = F.get_special_format_str()
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\bitsandbytes\functional.py", line 283, in get_special_format_str
major, minor = torch.cuda.get_device_capability()
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\torch\cuda_init
.py", line 381, in get_device_capability
prop = get_device_properties(device)
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\torch\cuda_init
.py", line 395, in get_device_properties
_lazy_init() # will define get_device_properties
File "C:\text-generation-webui\one-click-installers-main\installer_files\env\lib\site-packages\torch\cuda_init
.py", line 247, in _lazy_init
torch._C._cuda_init()
RuntimeError: Unrecognized CachingAllocator option: max_split_size_mb=512

Using an RTX 3060 with 12 GB of VRAM.

Trying to run one-click-installers start_linux.sh on WSL/Ubuntu error: "Too many levels of symbolic links"

I have a symlink between ~/winhome and /mnt/c/Users/kayvan, and the one-click-installers files are in a src/ subdirectory of that top-level directory.

~/winhome/src/one-click-installers$ ./start_linux.sh
Downloading Miniconda from https://repo.anaconda.com/miniconda/Miniconda3-py310_23.1.0-1-Linux-x86_64.sh to /home/kayvan/winhome/src/one-click-installers/installer_files/miniconda_installer.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 70.9M  100 70.9M    0     0  2128k      0  0:00:34  0:00:34 --:--:-- 3923k
PREFIX=/home/kayvan/winhome/src/one-click-installers/installer_files/conda
Unpacking payload ...

Installing base environment...


Downloading and Extracting Packages


Downloading and Extracting Packages

Preparing transaction: done
Executing transaction: done
installation finished.
Miniconda version:
conda 23.1.0
Collecting package metadata (current_repodata.json): done
Solving environment: done


==> WARNING: A newer version of conda exists. <==
  current version: 23.1.0
  latest version: 23.3.1

Please update conda by running

    $ conda update -n base -c defaults conda

Or to minimize the number of packages updated during conda update use

     conda install conda=23.3.1



## Package Plan ##

  environment location: /home/kayvan/winhome/src/one-click-installers/installer_files/env

  added / updated specs:
    - python=3.10


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    libffi-3.4.4               |       h6a678d5_0         142 KB
    openssl-1.1.1t             |       h7f8727e_0         3.7 MB
    pip-23.0.1                 |  py310h06a4308_0         2.6 MB
    python-3.10.11             |       h7a1cb2a_2        26.8 MB
    setuptools-66.0.0          |  py310h06a4308_0         1.2 MB
    sqlite-3.41.2              |       h5eee18b_0         1.2 MB
    tzdata-2023c               |       h04d1e81_0         116 KB
    wheel-0.38.4               |  py310h06a4308_0          64 KB
    xz-5.4.2                   |       h5eee18b_0         642 KB
    ------------------------------------------------------------
                                           Total:        36.4 MB

The following NEW packages will be INSTALLED:

  _libgcc_mutex      pkgs/main/linux-64::_libgcc_mutex-0.1-main
  _openmp_mutex      pkgs/main/linux-64::_openmp_mutex-5.1-1_gnu
  bzip2              pkgs/main/linux-64::bzip2-1.0.8-h7b6447c_0
  ca-certificates    pkgs/main/linux-64::ca-certificates-2023.01.10-h06a4308_0
  ld_impl_linux-64   pkgs/main/linux-64::ld_impl_linux-64-2.38-h1181459_1
  libffi             pkgs/main/linux-64::libffi-3.4.4-h6a678d5_0
  libgcc-ng          pkgs/main/linux-64::libgcc-ng-11.2.0-h1234567_1
  libgomp            pkgs/main/linux-64::libgomp-11.2.0-h1234567_1
  libstdcxx-ng       pkgs/main/linux-64::libstdcxx-ng-11.2.0-h1234567_1
  libuuid            pkgs/main/linux-64::libuuid-1.41.5-h5eee18b_0
  ncurses            pkgs/main/linux-64::ncurses-6.4-h6a678d5_0
  openssl            pkgs/main/linux-64::openssl-1.1.1t-h7f8727e_0
  pip                pkgs/main/linux-64::pip-23.0.1-py310h06a4308_0
  python             pkgs/main/linux-64::python-3.10.11-h7a1cb2a_2
  readline           pkgs/main/linux-64::readline-8.2-h5eee18b_0
  setuptools         pkgs/main/linux-64::setuptools-66.0.0-py310h06a4308_0
  sqlite             pkgs/main/linux-64::sqlite-3.41.2-h5eee18b_0
  tk                 pkgs/main/linux-64::tk-8.6.12-h1ccaba5_0
  tzdata             pkgs/main/noarch::tzdata-2023c-h04d1e81_0
  wheel              pkgs/main/linux-64::wheel-0.38.4-py310h06a4308_0
  xz                 pkgs/main/linux-64::xz-5.4.2-h5eee18b_0
  zlib               pkgs/main/linux-64::zlib-1.2.13-h5eee18b_0



Downloading and Extracting Packages

Preparing transaction: done
Verifying transaction: failed

# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<

    Traceback (most recent call last):
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/exceptions.py", line 1124, in __call__
        return func(*args, **kwargs)
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/cli/main.py", line 69, in main_subshell
        exit_code = do_call(args, p)
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/cli/conda_argparse.py", line 91, in do_call
        return getattr(module, func_name)(args, parser)
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/notices/core.py", line 109, in wrapper
        return func(*args, **kwargs)
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/cli/main_create.py", line 41, in execute
        install(args, parser, 'create')
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/cli/install.py", line 332, in install
        handle_txn(unlink_link_transaction, prefix, args, newenv)
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/cli/install.py", line 361, in handle_txn
        unlink_link_transaction.execute()
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/core/link.py", line 282, in execute
        self.verify()
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/common/io.py", line 84, in decorated
        return f(*args, **kwds)
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/core/link.py", line 243, in verify
        exceptions = self._verify(self.prefix_setups, self.prefix_action_groups)
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/core/link.py", line 637, in _verify
        for exc in self.verify_executor.map(UnlinkLinkTransaction._verify_individual_level,
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/common/io.py", line 547, in map
        yield func(thing)
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/core/link.py", line 454, in _verify_individual_level
        error_result = axn.verify()
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/site-packages/conda/core/path_actions.py", line 357, in verify
        source_size_in_bytes = getsize(self.source_full_path)
      File "/home/kayvan/winhome/src/one-click-installers/installer_files/conda/lib/python3.10/genericpath.py", line 50, in getsize
        return os.stat(filename).st_size
    OSError: [Errno 40] Too many levels of symbolic links: '/home/kayvan/winhome/src/one-click-installers/installer_files/conda/pkgs/ncurses-6.4-h6a678d5_0/share/terminfo/n/ncr260vt300wpp'

`$ /home/kayvan/winhome/src/one-click-installers/installer_files/conda/bin/conda create -y -k --prefix /home/kayvan/winhome/src/one-click-installers/installer_files/env python=3.10`

  environment variables:
                 CIO_TEST=<not set>
               CONDA_ROOT=/home/kayvan/winhome/src/one-click-installers/installer_files/conda
           CURL_CA_BUNDLE=<not set>
               LD_PRELOAD=<not set>
                     PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/game
                          s:/usr/local/games:/usr/lib/wsl/lib:/mnt/c/Program
                          Files/PowerShell/7:/mnt/c/Program Files/NVIDIA GPU Computing
                          Toolkit/CUDA/v12.1/bin:/mnt/c/Program Files/NVIDIA GPU Computing Toolk
                          it/CUDA/v12.1/libnvvp:/mnt/c/Windows/system32:/mnt/c/Windows:/mnt/c/Wi
                          ndows/System32/Wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0/:/m
                          nt/c/Windows/System32/OpenSSH/:/mnt/c/Program
                          Files/PowerShell/7/:/mnt/c/Program Files/Rancher
                          Desktop/resources/resources/win32/bin/:/mnt/c/Program Files/Rancher
                          Desktop/resources/resources/linux/bin/:/mnt/c/Program Files/GitHub
                          CLI/:/mnt/c/Program Files/Git/cmd:/mnt/c/Program Files/NVIDIA
                          Corporation/Nsight Compute 2023.1.1/:/mnt/c/Program Files (x86)/NVIDIA
                          Corporation/PhysX/Common:/mnt/c/Program Files/NVIDIA
                          Corporation/NVIDIA NvDLISR:/mnt/c/Users/kayvan/scoop/shims:/mnt/c/User
                          s/kayvan/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/kayvan/AppDa
                          ta/Local/Programs/Microsoft VS Code/bin:/snap/bin
       REQUESTS_CA_BUNDLE=<not set>
            SSL_CERT_FILE=<not set>

     active environment : None
       user config file : /home/kayvan/.condarc
 populated config files :
          conda version : 23.1.0
    conda-build version : not installed
         python version : 3.10.9.final.0
       virtual packages : __archspec=1=x86_64
                          __cuda=12.1=0
                          __glibc=2.35=0
                          __linux=5.15.90.1=0
                          __unix=0=0
       base environment : /home/kayvan/winhome/src/one-click-installers/installer_files/conda  (writable)
      conda av data dir : /home/kayvan/winhome/src/one-click-installers/installer_files/conda/etc/conda
  conda av metadata url : None
           channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/linux-64
                          https://repo.anaconda.com/pkgs/r/noarch
          package cache : /home/kayvan/winhome/src/one-click-installers/installer_files/conda/pkgs
                          /home/kayvan/.conda/pkgs
       envs directories : /home/kayvan/winhome/src/one-click-installers/installer_files/conda/envs
                          /home/kayvan/.conda/envs
               platform : linux-64
             user-agent : conda/23.1.0 requests/2.28.1 CPython/3.10.9 Linux/5.15.90.1-microsoft-standard-WSL2 ubuntu/22.04.2 glibc/2.35
                UID:GID : 1000:1000
             netrc file : None
           offline mode : False


An unexpected error has occurred. Conda has prepared the above report.

Upload successful.
Conda environment is empty.

CUDA error and libsbitsandbytes_cpu.so

CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
C:\Users\XXXX\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "

After trying to load a model, I get this:

Loading alpaca-native-4bit...
CUDA extension not installed.
Loading model ...
Traceback (most recent call last):
File "D:\oobabooga-windows\text-generation-webui\server.py", line 273, in
shared.model, shared.tokenizer = load_model(shared.model_name)
File "D:\oobabooga-windows\text-generation-webui\modules\models.py", line 101, in load_model
model = load_quantized(model_name)
File "D:\oobabooga-windows\text-generation-webui\modules\GPTQ_loader.py", line 113, in load_quantized
model = load_quant(str(path_to_model), str(pt_path), shared.args.wbits, shared.args.groupsize, kernel_switch_threshold=threshold)
File "D:\oobabooga-windows\text-generation-webui\modules\GPTQ_loader.py", line 45, in _load_quant
model.load_state_dict(torch.load(checkpoint))
File "C:\Users\XXXX\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\torch\serialization.py", line 809, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "C:\Users\XXXX\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\torch\serialization.py", line 1172, in _load
result = unpickler.load()
File "C:\Users\XXXX\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\torch\serialization.py", line 1142, in persistent_load
typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "C:\Users\XXXX\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\torch\serialization.py", line 1116, in load_tensor
wrap_storage=restore_location(storage, location),
File "C:\Users\XXXX\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\torch\serialization.py", line 217, in default_restore_location
result = fn(storage, location)
File "C:\Users\XXXX\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
device = validate_cuda_device(location)
File "C:\Users\XXXX\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\torch\serialization.py", line 166, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
Press any key to continue . . .

I have 0 programming experience so I don't know how to fix this issue. My PC has an RTX 2060 with 6GB VRAM, 16 GB of RAM, and a Ryzen 5 3600
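For reference, the error is pointing at the map_location argument of torch.load. A minimal sketch of what it means, using a hypothetical checkpoint path (for GPTQ models the real fix is usually installing a CUDA-enabled PyTorch rather than loading on CPU):

import torch

# Illustrative only: remap a checkpoint that was saved on a CUDA device onto
# the CPU, which is what the error message above is asking for.
checkpoint_path = "models/alpaca-native-4bit/checkpoint.pt"  # hypothetical path
state_dict = torch.load(checkpoint_path, map_location=torch.device("cpu"))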

Some errors during one-click-installer on windows

I have repeatedly tried to perform the installation following the instructions. The problem is that I always get the same errors, and although I have looked individually into how to solve some of them, I still have not succeeded.
During installation, I get the following errors:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. xformers 0.0.16rc425 requires pyre-extensions==0.0.23, which is not installed. numba 0.56.4 requires numpy<1.24,>=1.18, but you have numpy 1.24.3 which is incompatible. clean-fid 0.1.29 requires requests==2.25.1, but you have requests 2.28.2 which is incompatible.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. xformers 0.0.16rc425 requires pyre-extensions==0.0.23, which is not installed. clean-fid 0.1.29 requires requests==2.25.1, but you have requests 2.28.2 which is incompatible.

Finally, despite the above errors, the installation continues, and the following message appears at the end:

C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\command\easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
running bdist_egg
running egg_info
creating quant_cuda.egg-info
writing quant_cuda.egg-info\PKG-INFO
writing dependency_links to quant_cuda.egg-info\dependency_links.txt
writing top-level names to quant_cuda.egg-info\top_level.txt
writing manifest file 'quant_cuda.egg-info\SOURCES.txt'
reading manifest file 'quant_cuda.egg-info\SOURCES.txt'
writing manifest file 'quant_cuda.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_ext
C:\Users\bissop\AppData\Roaming\Python\Python310\site-packages\torch\utils\cpp_extension.py:358: UserWarning: Error checking compiler version for cl: [WinError 2] El sistema no puede encontrar el archivo especificado
  warnings.warn(f'Error checking compiler version for {compiler}: {error}')
Traceback (most recent call last):
  File "C:\AI\oobabooga\text-generation-webui\repositories\GPTQ-for-LLaMa\setup_cuda.py", line 4, in <module>
    setup(
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\__init__.py", line 87, in setup
    return distutils.core.setup(**attrs)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
    return run_commands(dist)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
    dist.run_commands()
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
    self.run_command(cmd)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\dist.py", line 1208, in run_command
    super().run_command(command)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
    cmd_obj.run()
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\command\install.py", line 74, in run
    self.do_egg_install()
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\command\install.py", line 123, in do_egg_install
    self.run_command('bdist_egg')
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\dist.py", line 1208, in run_command
    super().run_command(command)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
    cmd_obj.run()
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\command\bdist_egg.py", line 165, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\command\bdist_egg.py", line 151, in call_command
    self.run_command(cmdname)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\dist.py", line 1208, in run_command
    super().run_command(command)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
    cmd_obj.run()
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\command\install_lib.py", line 11, in run
    self.build()
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\command\install_lib.py", line 112, in build
    self.run_command('build_ext')
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\dist.py", line 1208, in run_command
    super().run_command(command)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
    cmd_obj.run()
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
    _build_ext.run(self)
  File "C:\AI\oobabooga\installer_files\env\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 346, in run
    self.build_extensions()
  File "C:\Users\bissop\AppData\Roaming\Python\Python310\site-packages\torch\utils\cpp_extension.py", line 499, in build_extensions
    _check_cuda_version(compiler_name, compiler_version)
  File "C:\Users\bissop\AppData\Roaming\Python\Python310\site-packages\torch\utils\cpp_extension.py", line 386, in _check_cuda_version
    raise RuntimeError(CUDA_MISMATCH_MESSAGE.format(cuda_str_version, torch.version.cuda))
RuntimeError:
The detected CUDA version (12.1) mismatches the version that was used to compile
PyTorch (11.7). Please make sure to use the same CUDA versions.

CUDA kernel compilation failed.
Attempting installation with wheel.
Collecting quant-cuda==0.0.0
  Using cached https://github.com/jllllll/GPTQ-for-LLaMa-Wheels/raw/main/quant_cuda-0.0.0-cp310-cp310-win_amd64.whl (398 kB)
Installing collected packages: quant-cuda
Successfully installed quant-cuda-0.0.0

Request for aarch64 Installation Support Without Torchaudio

I am attempting to execute start_linux.sh on an aarch64 system. The initial architecture checks in the script seem to imply that aarch64 should be a compatible architecture. Nevertheless, the installation process fails when attempting to install torchaudio, which, as my research indicates, lacks official aarch64 support.

Further investigation revealed that both torchaudio and its dependency, PyTorch, do not officially support the aarch64 architecture. I understand that this is a limitation inherent to these libraries and not a direct result of the implementation of the script.

Given this situation, I propose the following possible solutions:

Modify the script to make torchaudio installation optional or conditional based on the detected architecture.
Investigate if there are alternative libraries or packages compatible with aarch64 that could serve as substitutes for torchaudio and PyTorch.
If the software functionality is not critically dependent on torchaudio, these modifications could enable aarch64 support, providing a provisional solution until (and if) torchaudio and PyTorch introduce official aarch64 support.
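To illustrate the first option, here is a minimal sketch of an architecture-conditional install, assuming the installer assembles its package list as a plain string (the variable and package names here are illustrative, not the script's actual ones):

import platform

# Skip torchaudio on architectures that have no official wheels, and keep the
# rest of the package list unchanged.
packages = "torch torchvision"
if platform.machine().lower() not in ("aarch64", "arm64"):
    packages += " torchaudio"
print(f"pip install {packages}")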

I'm ready and willing to provide further information, assist in testing, or participate in potential solution implementation. Please let me know if there are any alternative solutions that I should explore, or if there's any other way I can contribute to resolving this issue.

Thank you for your time and consideration.

Linux install script failing to run on AWS Linux

I'm trying to run the webui on an AWS Linux instance (t2.2xlarge) with the following:

vCPU: 8
Memory: 32 GiB

Additionally, I provisioned 100GB storage. I'm running into this issue:

tar: Child died with signal 13
tar: Error is not recoverable: exiting now
curl: (23) Failure writing output to destination
Micromamba version:
Packages to install: pytorch torchvision torchaudio cpuonly conda git
There was a problem while initializing micromamba. Cannot continue.

Are there any dependencies I need to install first?

Traceback at the end of the update since the commit 4babb22f846e74f096af5f487a2b4a6942b3f3c3

Hello,

I did a fresh install with the latest files from commit 4babb22, and I'm noticing a traceback when using update_windows.bat:

Traceback (most recent call last): File "G:\oobabooga_windows\webui.py", line 175, in <module> update_dependencies() File "G:\oobabooga_windows\webui.py", line 133, in update_dependencies os.rename("setup_cuda.py", "setup.py") FileNotFoundError: [WinError 2] Le fichier spécifié est introuvable: 'setup_cuda.py' -> 'setup.py'

It seems the file setup_cuda.py is missing (the French error message means "The specified file cannot be found"). I don't know if this could be related to the failed CUDA kernel compilation during installation (because of the missing Microsoft C++ compiler), but the wheel was downloaded correctly instead.
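One way to make the update step tolerant of this would be to guard the rename; a minimal sketch, not the project's actual fix:

import os

# Only rename GPTQ-for-LLaMa's setup_cuda.py when it is actually present,
# so the update can continue when that file is absent.
if os.path.isfile("setup_cuda.py"):
    os.rename("setup_cuda.py", "setup.py")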

SSL error

One click windows installer produces this error when I try to run it:
(screenshot of the SSL error attached)
It seems to be related to this bug:
python/cpython#79846
I posted a comment with a workaround there when I first encountered this with Stable Diffusion; it was taken from the yt-dlp project.

ModuleNotFoundError: No module named 'gradio'

Hi,
I get the following error:
[axel@fedora oobabooga_linux]$ sh start_linux.sh
Traceback (most recent call last):
  File "/home/axel/Ai/oobabooga_linux/text-generation-webui/server.py", line 17, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'

Steps to reproduce:

  1. Download the linux zip
  2. run start_linux.sh with sh start_linux.sh in terminal
  3. install any model vie the terminal
  4. again run start_linux.sh

It would seem that 'gradio' is not downloaded as part of the env?

My system is:

  • linux fedora
  • AMD CPU
  • Nvidia GPU

An error running text-generation-webu

When I run the command python3 server.py --chat I get this

Gradio HTTP request redirected to localhost :)
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/home/Pitata/.local/lib/python3.9/site-packages/gradio/strings.py", line 38, in get_updated_messaging
    updated_messaging = requests.get(MESSAGING_API_ENDPOINT, timeout=3).json()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 900, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3/dist-packages/simplejson/__init__.py", line 525, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 370, in decode
    obj, end = self.raw_decode(s)
  File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 400, in raw_decode
    return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Illegal instruction

I have installed the requirements without any errors, and I am running this on a Raspberry Pi 4B.

compiling quant_cuda_kernel.o failed

This is the error I got:

/home/test/2TB/GITS/text-generation-webui/one-click-installers/text-generation-webui/repositories/GPTQ-for-LLaMa/quant_cuda_kernel.cu(654): error: identifier "__hfma2" is undefined

Any idea why?

Power off after 3 or 4 inputs to the webui

I have a 3090 that I bought used a month ago. When I start to chat, after the 3rd or 4th message I send, the power goes off on my PC.
I have the same issue with Stable Diffusion, but not as often; it happens more when I do hi-res generations.
Can you help me solve this problem, please?

"Error checking compiler version for cl" / Warning about Microsoft C++ Build tools required when updating

Hello,

I'm using the one-click installer on Windows, and when I use "update_windows.bat" to check for updates and update the install, at the very end I get this warning:

(screenshot: "Error checking compiler version for cl" warning)

I indeed don't have the Microsoft C++ Build Tools installed. Should I install them? Are they really required? I'm using 4-bit quantized GPTQ models with an NVIDIA GPU, and so far it's working. Why is this warning about the C++ Build Tools appearing? Is there a situation or a model type where they need to be installed?

"Conda is not installed"

Every time I've tried to run the one-click installer, it tells me "Conda is not installed" when it very much is.
I've uninstalled and reinstalled conda, and I've deleted the files and redownloaded and reinstalled everything. Still the same "Conda is not installed", even though I've downloaded and run the oobabooga UI through conda itself.
(screenshot attached)

One-click installer: CUDA not working now, was working previously

FYI, this was working fine before.

(C:\Projects\AI\one-click-installers\installer_files\env) C:\Projects\AI\one-click-installers>python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect

MacOS unable to install using start_macos.sh

Hi, I can't install it using start_macos.sh. Once I run it, I get the error: `Cloning into 'text-generation-webui'...
dyld[65256]: Library not loaded: @rpath/libssl.1.1.dylib
Referenced from: /Users/kirillevseev/Downloads/oobabooga_macos/installer_files/env/lib/libcurl.4.dylib
Reason: tried: '/Users/kirillevseev/Downloads/oobabooga_macos/installer_files/env/lib/libssl.1.1.dylib' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')), '/Users/kirillevseev/Downloads/oobabooga_macos/installer_files/env/lib/lib/libssl.1.1.dylib' (no such file), '/Users/kirillevseev/Downloads/oobabooga_macos/installer_files/env/libexec/git-core/../../lib/libssl.1.1.dylib' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')), '/Users/kirillevseev/Downloads/oobabooga_macos/installer_files/env/lib/lib/libssl.1.1.dylib' (no such file), '/Users/kirillevseev/Downloads/oobabooga_macos/installer_files/env/libexec/git-core/../../lib/libssl.1.1.dylib' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')), '/usr/local/lib/libssl.1.1.dylib' (no such file), '/usr/lib/libssl.1.1.dylib' (no such file)
error: git-remote-https died of signal 6
Traceback (most recent call last):
File "/Users/kirillevseev/Downloads/oobabooga_macos/webui.py", line 162, in
install_dependencies()
File "/Users/kirillevseev/Downloads/oobabooga_macos/webui.py", line 60, in install_dependencies
update_dependencies()
File "/Users/kirillevseev/Downloads/oobabooga_macos/webui.py", line 64, in update_dependencies
os.chdir("text-generation-webui")
FileNotFoundError: [Errno 2] No such file or directory: 'text-generation-webui'

Done!
`
So I cloned text-generation-webui manually with git. But when I try running it again, it fails in server.py with the "no gradio module" error (#30). However, I freshly installed Anaconda, freshly installed gradio using pip, and also specified the path to gradio in the server.py file, but I keep getting this error.
Machine: macOS 13.3.1
Apple M1

No GPU support for Apple Silicon

When I start the application, this message shows up.
The disk usage is very high and the RAM usage is very low; I couldn't find out why.

/Volumes/PaulSSD/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/bitsandbytes/cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. 

Using an Apple M1 MacBook
16GB RAM

update_windows error

After an initial installation that appears successful (no errors from what I could see), I ran the update_windows.bat file to make sure it is updated as needed and received the following error at the end of execution:

Requirement already satisfied: sniffio>=1.1 in c:\users\digdug01\desktop\one-click-installers-main\installer_files\env\lib\site-packages (from anyio<5,>=3.4.0->starlette<0.27.0,>=0.26.1->fastapi>=0.85.1->chromadb==0.3.18->-r extensions\superbig\requirements.txt (line 2)) (1.3.0)
Building wheels for collected packages: hnswlib
Building wheel for hnswlib (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for hnswlib (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
running bdist_wheel
running build
running build_ext
building 'hnswlib' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for hnswlib
Failed to build hnswlib
ERROR: Could not build wheels for hnswlib, which is required to install pyproject.toml-based projects
Command '"C:\Users\DigDug01\Desktop\one-click-installers-main\installer_files\conda\condabin\conda.bat" activate "C:\Users\DigDug01\Desktop\one-click-installers-main\installer_files\env" >nul && python -m pip install -r extensions\superbig\requirements.txt --upgrade' failed with exit status code '1'. Exiting...

Done!
Press any key to continue . . .

text-generation-webui Missing

Hi, when I download the one-click installer and follow the instructions in the .txt file, the only folder that appears is installer_files; the text-generation-webui folder does not appear, and I don't know what to do. I would be grateful for any help, thanks.

Fresh install on Windows not working

Error while running start_windows.bat as a fresh install:

Traceback (most recent call last):
  File "D:\repos\one-click-installers\text-generation-webui\server.py", line 1087, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "D:\repos\one-click-installers\text-generation-webui\modules\models.py", line 95, in load_model
    output = load_func(model_name)
  File "D:\repos\one-click-installers\text-generation-webui\modules\models.py", line 263, in llamacpp_loader
    from modules.llamacpp_model import LlamaCppModel
  File "D:\repos\one-click-installers\text-generation-webui\modules\llamacpp_model.py", line 11, in <module>
    from llama_cpp import Llama, LlamaCache
  File "D:\repos\one-click-installers\installer_files\env\lib\site-packages\llama_cpp\__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "D:\repos\one-click-installers\installer_files\env\lib\site-packages\llama_cpp\llama_cpp.py", line 73, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "D:\repos\one-click-installers\installer_files\env\lib\site-packages\llama_cpp\llama_cpp.py", line 52, in _load_shared_library
    os.add_dll_directory(os.path.join(os.environ["CUDA_PATH"], "bin"))
  File "D:\repos\one-click-installers\installer_files\env\lib\os.py", line 1118, in add_dll_directory
    cookie = nt._add_dll_directory(path)
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'D:\\repos\\one-click-installers\\installer_files\\env\\bin'

Not sure why it's referencing bin on Windows. It seems llama-cpp is giving me issues with this one-click install. Note that the Scripts folder does exist here, and yes, this issue isn't exactly from this repo since it's coming from the llama_cpp package, but it is still an issue that arose.

Anyone know a quick fix?
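As a quick diagnostic (an assumption on my part, not an official fix): llama_cpp only tries to register %CUDA_PATH%\bin as a DLL directory, so you can check what that resolves to inside the installer's environment:

import os

# Print what llama_cpp will pass to os.add_dll_directory(). If CUDA_PATH points
# at a folder with no "bin" subdirectory (here it resolves to the conda env
# itself), the call fails exactly as in the traceback above.
cuda_path = os.environ.get("CUDA_PATH")
print("CUDA_PATH =", cuda_path)
if cuda_path is None or not os.path.isdir(os.path.join(cuda_path, "bin")):
    print("No usable bin directory under CUDA_PATH")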

Suggestion: Adding the ability to pass in CLI params from a Windows batch files to `webui.py`

This is a suggestion, I'd be happy to raise a PR if you feel this would be something worth implementing.

At the moment, if you have some models that are fussy (e.g. needing to offload layers, specify the model type, etc.), the default start_windows.bat and webui.py don't make this too easy, as the CLI params for Ooba are hard-coded in the webui.py file as:

CMD_FLAGS = '--chat --model-menu'

I'm no Python coder, but if we change the above line to something like:

import os  # already imported at the top of webui.py

# Fall back to the current hard-coded flags when CMD_FLAGS is not set in the environment
user_cmd_flags = os.getenv('CMD_FLAGS')

if user_cmd_flags is None:
    CMD_FLAGS = '--chat --model-menu'
else:
    CMD_FLAGS = user_cmd_flags

Then you can make a copy of start_windows.bat, e.g. start_fussy_model.bat, and add something like this:
set CMD_FLAGS=--chat --auto-devices --model some-model-name --wbits 4 --groupsize 128 --sdp-attention --api

Or whatever you want, without needing to go through the model selection menu, since the default menu selector may not work for all models, such as models that are larger than your GPU memory allows. Also, you might not want to change the default settings (in webui.py) for all models.

What do you think? Or is there a better way to achieve this?

Error on fresh install

Hello, I performed a fresh install using the one-click installer for Windows, that is to say, I simply ran start_windows.bat and prayed the massive mountain of dependencies would be set up correctly.

I selected to set up with NVIDIA hardware (an RTX 4090), and selected not to download any models, as I'd like to use Vicuna 13B.


bin C:\oobabooga\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
Traceback (most recent call last):
File "C:\oobabooga\text-generation-webui\server.py", line 47, in
from modules import chat, shared, training, ui, utils
File "C:\oobabooga\text-generation-webui\modules\training.py", line 14, in
from peft import (LoraConfig, get_peft_model, prepare_model_for_int8_training,
File "C:\oobabooga\installer_files\env\lib\site-packages\peft_init_.py", line 22, in
from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
File "C:\oobabooga\installer_files\env\lib\site-packages\peft\mapping.py", line 16, in
from .peft_model import (
File "C:\oobabooga\installer_files\env\lib\site-packages\peft\peft_model.py", line 31, in
from .tuners import (
File "C:\oobabooga\installer_files\env\lib\site-packages\peft\tuners_init_.py", line 21, in
from .lora import LoraConfig, LoraModel
File "C:\oobabooga\installer_files\env\lib\site-packages\peft\tuners\lora.py", line 735, in
class Linear4bit(bnb.nn.Linear4bit, LoraLayer):
AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear4bit'. Did you mean: 'Linear8bitLt'?

Done!
Press any key to continue . . .

More information about the downloadable models?

The installer dumps this choice on you, but there's no way to know what any of these are, and Google results are mostly non-technical news stories. A link to a page containing a comparison chart of some kind and a brief summary, with further links, would be nice.

Select the model that you want to download:

A) OPT 6.7B
B) OPT 2.7B
C) OPT 1.3B
D) OPT 350M
E) GALACTICA 6.7B
F) GALACTICA 1.3B
G) GALACTICA 125M
H) Pythia-6.9B-deduped
I) Pythia-2.8B-deduped
J) Pythia-1.4B-deduped
K) Pythia-410M-deduped
L) Manually specify a Hugging Face model
M) Do not download a model

Input> 

AssertionError("Torch not compiled with CUDA enabled")

(screenshot attached)
So I was trying to run Alpaca with the oobabooga webui and got some errors:
1. UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
2. Torch not compiled with CUDA enabled.
After this last one, it crashes. I used the universal one-click installation because the mamba one didn't work.

I am unable to install the webui ERROR: Could not install packages due to an OSError:

WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000024E0EA92980>, 'Connection to raw.githubusercontent.com timed out. (connect timeout=15)')': /jllllll/bitsandbytes-windows-webui/main/bitsandbytes-0.38.1-py3-none-any.whl
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000024E0EA92C20>, 'Connection to raw.githubusercontent.com timed out. (connect timeout=15)')': /jllllll/bitsandbytes-windows-webui/main/bitsandbytes-0.38.1-py3-none-any.whl
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000024E0EA92DA0>, 'Connection to raw.githubusercontent.com timed out. (connect timeout=15)')': /jllllll/bitsandbytes-windows-webui/main/bitsandbytes-0.38.1-py3-none-any.whl
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000024E0EA92E90>, 'Connection to raw.githubusercontent.com timed out. (connect timeout=15)')': /jllllll/bitsandbytes-windows-webui/main/bitsandbytes-0.38.1-py3-none-any.whl
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000024E0EA93010>, 'Connection to raw.githubusercontent.com timed out. (connect timeout=15)')': /jllllll/bitsandbytes-windows-webui/main/bitsandbytes-0.38.1-py3-none-any.whl
ERROR: Could not install packages due to an OSError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /jllllll/bitsandbytes-windows-webui/main/bitsandbytes-0.38.1-py3-none-any.whl (Caused by ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000024E0EA93190>, 'Connection to raw.githubusercontent.com timed out. (connect timeout=15)'))

Command '"D:\AI\oobabooga_windows\installer_files\conda\condabin\conda.bat" activate "D:\AI\oobabooga_windows\installer_files\env" >nul && python -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl' failed with exit status code '1'. Exiting...

Done!
Press any key to continue . . .

Please let me know what other information I need to provide.

CPU : AMD Ryzen 5 5600X
GPU : Nvidia RTX 3060
RAM : 16 GB DDR4 3600 Mhz

CUDA kernel compilation failed when running start_linux.sh

This happens when running start_linux.sh, right before the model download options.

RuntimeError:
The detected CUDA version (12.1) mismatches the version that was used to compile
PyTorch (11.7). Please make sure to use the same CUDA versions.

CUDA kernel compilation failed.

It is unclear to me how to resolve this.
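For what it's worth, a small sketch to confirm the mismatch the error describes, comparing the CUDA version the installed PyTorch wheel was built against with what the system toolkit reports:

import subprocess
import torch

# CUDA version PyTorch was compiled with (11.7 in this case) ...
print("torch was built with CUDA:", torch.version.cuda)
# ... versus the system toolkit that compiles the GPTQ kernel (12.1 here).
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)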

Traceback error

When I select a model, it shows this error:

Traceback (most recent call last):
  File "C:\Users\Desktop\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 442, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "C:\Users\Desktop\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\torch\serialization.py", line 797, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "C:\Users\Desktop\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\torch\serialization.py", line 283, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Desktop\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 446, in load_state_dict
    if f.read(7) == "version":
  File "C:\Users\Desktop\one-click-installers-oobabooga-windows\installer_files\env\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1709: character maps to <undefined>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Desktop\one-click-installers-oobabooga-windows\text-generation-webui\server.py", line 85, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\Users\Desktop\one-click-installers-oobabooga-windows\text-generation-webui\modules\models.py", line 168, in load_model
    model = AutoModelForCausalLM.from_pretrained(checkpoint, **params)
  File "C:\Users\Desktop\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\Desktop\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2795, in from_pretrained
    ) = cls._load_pretrained_model(
  File "C:\Users\Desktop\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 3109, in _load_pretrained_model
    state_dict = load_state_dict(shard_file)
  File "C:\Users\Desktop\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 458, in load_state_dict
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'models\facebook_opt-6.7b\pytorch_model-00001-of-00002.bin' at 'models\facebook_opt-6.7b\pytorch_model-00001-of-00002.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
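The "failed finding central directory" error usually means the checkpoint shard is truncated or corrupted, so re-downloading the model files is the usual remedy. A minimal check, assuming the same shard path as in the traceback; an incomplete download fails here with the same error:

import torch

# Try to open the suspect shard directly; a truncated file raises the
# same PytorchStreamReader error seen above.
shard = r"models\facebook_opt-6.7b\pytorch_model-00001-of-00002.bin"
state_dict = torch.load(shard, map_location="cpu")
print("loaded", len(state_dict), "tensors")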

Can't install on Kubuntu

./start_linux.sh: 3: Bad substitution
./start_linux.sh: 5: [[: not found
./start_linux.sh: 27: [: T: unexpected operator
./start_linux.sh: 23: /home/Yukkun/.local/share/Trash/files/oobabooga_linux/installer_files/conda/bin/conda: not found
./start_linux.sh: 43: /home/Yukkun/.local/share/Trash/files/oobabooga_linux/installer_files/conda/bin/conda: not found
Conda environment is empty.

I just keep getting this error. Running it with or without sudo doesn't seem to change anything.

Torch Cuda error

I am getting the following error after the one-click installer finishes.
Traceback (most recent call last):
File "T:\AI\GPT4ALL\oobabooga_windows\text-generation-webui\server.py", line 39, in
import torch
File "C:\Users\TD FILM STUDIO\AppData\Roaming\Python\Python310\site-packages\torch_init_.py", line 122, in
raise err
OSError: [WinError 127] The specified procedure could not be found. Error loading "C:\Users\TD FILM STUDIO\AppData\Roaming\Python\Python310\site-packages\torch\lib\torch_cuda_cpp.dll" or one of its dependencies.
Done!
Press any key to continue . . .

Can anyone please help?

I also checked the version of the CUDA toolkit installed on my PC:
C:\WINDOWS\system32>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

Windows start unable to locate conda

The original start_windows.bat and the other .bat files fail to locate conda (when it is not installed globally).
So I moved the PATH definition after the setting of %CONDA_ROOT_PREFIX% and appended %CONDA_ROOT_PREFIX%\condabin;%CONDA_ROOT_PREFIX%\Scripts:

set CONDA_ROOT_PREFIX=%cd%\installer_files\conda
set PATH=%PATH%;%SystemRoot%\system32;%CONDA_ROOT_PREFIX%\condabin;%CONDA_ROOT_PREFIX%\Scripts

Help... how to fix this part "unexpected keyword argument 'font'"

D:\School\Quarter3\oobabooga_windows>conda activate textgen
'conda' is not recognized as an internal or external command,
operable program or batch file.

D:\School\Quarter3\oobabooga_windows>cd text-generation-webui

D:\School\Quarter3\oobabooga_windows\text-generation-webui>python server.py
Gradio HTTP request redirected to localhost :)
Traceback (most recent call last):
File "D:\School\Quarter3\oobabooga_windows\text-generation-webui\server.py", line 44, in
from modules import chat, shared, training, ui
File "D:\School\Quarter3\oobabooga_windows\text-generation-webui\modules\training.py", line 16, in
from modules import shared, ui
File "D:\School\Quarter3\oobabooga_windows\text-generation-webui\modules\ui.py", line 18, in
theme = gr.themes.Default(
TypeError: Default.init() got an unexpected keyword argument 'font'
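For context (my assumption, based on the error rather than anything in this repo): gr.themes.Default with a font argument needs a newer gradio than the one being imported here, so it can help to check which gradio installation the script is actually picking up:

import gradio as gr

# If this prints an old 3.x release (or a copy outside the bundled env),
# the webui is running against a stale global gradio install.
print(gr.__version__, gr.__file__)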
