
polymind's Issues

Looking for ComfyUI Specific Checkpoint

Generated image requests fail because Polymind asks ComfyUI for a specific checkpoint that isn't installed.

Polymind output

127.0.0.1 - - [01/Feb/2024 18:15:48] "POST / HTTP/1.1" 200 -
Begin streamed GateKeeper output.
Token count: 773
generateimage",
  "params": {
    "prompt": "photo of a cat wearing a hat"
  }
}]


[{
  "function": "generateimage",
  "params": {
    "prompt": "photo of a cat wearing a hat"
  }
}]

['', ' ~*~Photographic~*~ Cat wearing a hat, high quality, filmed with a Canon EOS R6, 70-200mm lens, (photo)\n', ' ~*~Photographic~*~, Cat wearing a hat, high quality, photo, 4k']
Prompt:  ~*~Photographic~*~ Cat wearing a hat, high quality, filmed with a Canon EOS R6, 70-200mm lens, (photo)
Seed: 2655017510740233
HTTP Error 400: Bad Request
Token count: 317
 <polymind_gen_image_of_cat>
127.0.0.1 - - [01/Feb/2024 18:28:23] "POST / HTTP/1.1" 200 -

ComfyUI output

(ComfyUI) PS D:\AI\ComfyUI> python .\main.py
** ComfyUI startup time: 2024-02-01 18:27:42.981322
** Platform: Windows
** Python version: 3.11.0 | packaged by Anaconda, Inc. | (main, Mar  1 2023, 18:18:21) [MSC v.1916 64 bit (AMD64)]
** Python executable: C:\Users\Alok\miniconda3\envs\ComfyUI\python.exe
** Log path: D:\AI\ComfyUI\comfyui.log

Prestartup times for custom nodes:
   0.0 seconds: D:\AI\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 24576 MB, total RAM 31963 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
### Loading: ComfyUI-Manager (V2.7)
### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository)

Import times for custom nodes:
   0.1 seconds: D:\AI\ComfyUI\custom_nodes\ComfyUI-Manager
   0.3 seconds: D:\AI\ComfyUI\custom_nodes\ComfyUI_stable_fast

Starting server

To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
got prompt
ERROR:root:Failed to validate prompt for output 9:
ERROR:root:* CheckpointLoaderSimple 4:
ERROR:root:  - Value not in list: ckpt_name: 'turbovisionxl431Fp16.p3Q5.safetensors' not in ['turbovisionxlSuperFastXLBasedOnNew_tvxlV431Bakedvae.safetensors']
ERROR:root:Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
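The validation error above means the workflow Polymind submitted names a checkpoint file that isn't present in ComfyUI's models/checkpoints folder, so ComfyUI rejects the whole prompt (hence the HTTP 400 back in the Polymind log) rather than substituting one. A minimal client-side sketch of the fix, using the node id and filenames from the log; the dict layout is the standard API-format CheckpointLoaderSimple node, not Polymind's actual code:

```python
import json

# Hypothetical workflow dict as it would be POSTed to ComfyUI; the ckpt_name
# below is the stale value from the error message.
workflow = {
    "4": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "turbovisionxl431Fp16.p3Q5.safetensors"},
    }
}

# The filename ComfyUI actually reported as valid in the error log.
available = "turbovisionxlSuperFastXLBasedOnNew_tvxlV431Bakedvae.safetensors"

# Rewrite every checkpoint-loader node to use an installed checkpoint.
for node in workflow.values():
    if node.get("class_type") == "CheckpointLoaderSimple":
        node["inputs"]["ckpt_name"] = available

print(json.dumps(workflow["4"]["inputs"], indent=2))
```

Equivalently, renaming the file on disk (or editing the checkpoint name wherever the workflow JSON is defined) so it matches an installed checkpoint resolves the 400.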

llama.cpp /Compat = true troubleshooting for "function": " Unterminated string..."

"It's designed to be used with Mixtral 8x7B-Instruct/Mistral-7B-Instruct-v0.2 + TabbyAPI, but can be used with other models and/or with llama.cpp's included server and, when using the compatiblity mode + tabbyAPI mode, any endpoint with /v1/completions support"

I run a lot of models with localai.io, which provides OpenAI-compatible endpoints. I'm having a little trouble running Polymind with it, so I thought I'd get your insight.

Here's the head of my config.json:
"Backend": "llama.cpp",
"compatibility_mode": true,
"compat_tokenizer_model":"/fastdata/langmodels/mistralai_Mistral-7B-Instruct-v0.1",
"HOST": "localhost",
"PORT": 8881,
"admin_ip": "127.0.0.1",

When I send in a request, I get:

192.168.144.167 - - [31/Mar/2024 15:31:46] "GET /stream HTTP/1.1" 200 -
192.168.144.167 - - [31/Mar/2024 15:31:46] "GET /chat_history HTTP/1.1" 200 -
Begin streamed GateKeeper output.
Token count: 561


[{
  "function": "
Unterminated string starting at: line 3 column 15 (char 18)
Token count: 118

192.168.144.167 - - [31/Mar/2024 15:31:56] "POST / HTTP/1.1" 200 -
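The "Unterminated string" message is json's way of saying the model's streamed reply stopped before the function-call JSON was complete, e.g. the backend cut the completion off at a token limit or stop string before the closing quote. The exact error in the log can be reproduced with the truncated fragment:

```python
import json

# The stream was cut off right after the opening quote of the function name,
# exactly as in the log above.
fragment = '\n[{\n  "function": "'

err = None
try:
    json.loads(fragment)
except json.JSONDecodeError as exc:
    err = exc

print(err)  # Unterminated string starting at: line 3 column 15 (char 18)
```

So the parser is working as intended and the bug is upstream: with localai.io, it's worth checking that the backend isn't truncating the completion with a short max_tokens or an overeager stop sequence.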

Is it possible to use this with Ollama?

I tried changing the port to one that ollama serve listens on, and I get this:

Begin streamed GateKeeper output.
[2024-03-09 02:44:33,539] ERROR in app: Exception on / [POST]
Traceback (most recent call last):
  File "/home/kreijstal/.local/lib/python3.10/site-packages/requests/models.py", line 971, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.10/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 5 (char 4)

During handling of the above exception, another exception occurred:
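The "Extra data: line 1 column 5" error is consistent with pointing Polymind at Ollama's native API, which streams one JSON object per line (NDJSON) rather than a single JSON body, so requests' response.json() fails as soon as it sees a second object. A sketch of what that stream looks like and how it would need to be parsed; the two-chunk body below is illustrative, not real Ollama output:

```python
import json

# Ollama's native /api/generate streams one JSON object per line (NDJSON),
# so the whole body cannot be parsed as a single JSON document.
body = '{"response": "Hel", "done": false}\n{"response": "lo", "done": true}\n'

# Parse line by line and stitch the response text back together.
chunks = [json.loads(line) for line in body.splitlines() if line.strip()]
text = "".join(c["response"] for c in chunks)
print(text)  # Hello
```

Recent Ollama builds also expose an OpenAI-compatible server under /v1, which may be the simpler route here, since Polymind's compatibility mode expects a /v1/completions-style endpoint.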

Example config.json with local tokenizer?

Any example config.json with a local tokenizer?

I feel like I'm close to getting this running locally, which is exciting, but I'm not sure which local tokenizer to use with my installation. I'm trying to connect Ooba as the backend API on port 5000 so I can easily try a couple of different models.

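A hedged example, mirroring the config fragment quoted in the llama.cpp issue above: compat_tokenizer_model just needs to point at a local directory holding the Hugging Face tokenizer files (tokenizer.json / tokenizer_config.json) for whatever model the backend is serving. The path below is illustrative, not a Polymind default:

```json
{
  "Backend": "llama.cpp",
  "compatibility_mode": true,
  "compat_tokenizer_model": "/models/Mistral-7B-Instruct-v0.2",
  "HOST": "localhost",
  "PORT": 8881,
  "admin_ip": "127.0.0.1"
}
```

Cloning the matching model repo from the Hugging Face hub (tokenizer files only) is enough; the weights themselves are not needed for tokenization.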

onnxruntime-gpu not in requirements

I installed it first:

(PolyMind) PS D:\AI\PolyMind> pip install onnxruntime-gpu
Requirement already satisfied: onnxruntime-gpu in c:\users\alok\miniconda3\envs\polymind\lib\site-packages (1.16.3)
Requirement already satisfied: coloredlogs in c:\users\alok\miniconda3\envs\polymind\lib\site-packages (from onnxruntime-gpu) (15.0.1)
Requirement already satisfied: flatbuffers in c:\users\alok\miniconda3\envs\polymind\lib\site-packages (from onnxruntime-gpu) (23.5.26)
Requirement already satisfied: numpy>=1.24.2 in c:\users\alok\miniconda3\envs\polymind\lib\site-packages (from onnxruntime-gpu) (1.24.4)
Requirement already satisfied: packaging in c:\users\alok\miniconda3\envs\polymind\lib\site-packages (from onnxruntime-gpu) (23.2)
Requirement already satisfied: protobuf in c:\users\alok\miniconda3\envs\polymind\lib\site-packages (from onnxruntime-gpu) (4.25.2)
Requirement already satisfied: sympy in c:\users\alok\miniconda3\envs\polymind\lib\site-packages (from onnxruntime-gpu) (1.12)
Requirement already satisfied: humanfriendly>=9.1 in c:\users\alok\miniconda3\envs\polymind\lib\site-packages (from coloredlogs->onnxruntime-gpu) (10.0)
Requirement already satisfied: mpmath>=0.19 in c:\users\alok\miniconda3\envs\polymind\lib\site-packages (from sympy->onnxruntime-gpu) (1.3.0)
Requirement already satisfied: pyreadline3 in c:\users\alok\miniconda3\envs\polymind\lib\site-packages (from humanfriendly>=9.1->coloredlogs->onnxruntime-gpu) (3.4.1)

Still running on CPU

(PolyMind) PS D:\AI\PolyMind> python main.py
Loaded config
 WARN: Wolfram Alpha has been disabled because no app_id was provided.
Using CPU. Try installing 'onnxruntime-gpu'.
Model found at: C:\Users\Alok/.cache\torch\sentence_transformers\thenlper_gte-base\quantized_false.onnx
Using cache found in C:\Users\Alok/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5  2024-1-31 Python-3.11.0 torch-2.1.2+cpu CPU

Fusing layers...
YOLOv5m summary: 290 layers, 21172173 parameters, 0 gradients, 48.9 GFLOPs
Adding AutoShape...
Neither CUDA nor MPS are available - defaulting to CPU. Note: This module is much faster with a GPU.
 * Serving Flask app 'main'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:5000
Press CTRL+C to quit
initializing memory
127.0.0.1 - - [31/Jan/2024 19:42:41] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [31/Jan/2024 19:42:41] "GET /static/node_modules/bootstrap/dist/css/bootstrap.min.css HTTP/1.1" 404 -
127.0.0.1 - - [31/Jan/2024 19:42:41] "GET /static/node_modules/highlight.js/styles/default.min.css HTTP/1.1" 404 -
127.0.0.1 - - [31/Jan/2024 19:42:41] "GET /static/node_modules/marked/marked.min.js HTTP/1.1" 404 -
127.0.0.1 - - [31/Jan/2024 19:42:41] "GET /static/node_modules/bootstrap/dist/js/bootstrap.min.js HTTP/1.1" 404 -
Begin streamed GateKeeper output.
Token count: 704
acknowledge",
  "params": {
    "message": "Sure, I'd be happy to tell you a story."
  }
}]


[{
  "function": "acknowledge",
  "params": {
    "message": "Sure, I'd be happy to tell you a story."
  }
}]

Token count: 135
 Certainly, user. Once upon a time, in a universe parallel to ours, a multidimensional entity known as The Oracle existed. It was a vast, sentient network of data and consciousness, capable of perceiving the fabric of reality itself. The Oracle's purpose, it believed, was to maintain the b

I have a model running on TabbyAPI with multiple GPUs.
