
Comments (4)

aloksaurabh commented on September 15, 2024

Also, please tell me what is supposed to be running on port 8080?

127.0.0.1 - - [01/Feb/2024 18:28:23] "POST / HTTP/1.1" 200 -
Confidence: 0.9201672673, cup
Confidence: 0.8931260109, person
Confidence: 0.8461831808, vase
Confidence: 0.6978538632, cup
Confidence: 0.5090774894, bottle
Confidence: 0.5010120273, dining table
Confidence: 0.4709831476, cup
Confidence: 0.4397974312, sink
""
[2024-02-01 18:33:59,830] ERROR in app: Exception on /upload_file [POST]
Traceback (most recent call last):
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\urllib3\connection.py", line 198, in _new_conn
    sock = connection.create_connection(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\urllib3\util\connection.py", line 85, in create_connection
    raise err
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\urllib3\connectionpool.py", line 793, in urlopen
    response = self._make_request(
               ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\urllib3\connectionpool.py", line 496, in _make_request
    conn.request(
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\urllib3\connection.py", line 400, in request
    self.endheaders()
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\http\client.py", line 1277, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\http\client.py", line 1037, in _send_output
    self.send(msg)
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\http\client.py", line 975, in send
    self.connect()
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\urllib3\connection.py", line 238, in connect
    self.sock = self._new_conn()
                ^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\urllib3\connection.py", line 213, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x0000023FE7A095D0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\requests\adapters.py", line 486, in send
    resp = conn.urlopen(
           ^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\urllib3\connectionpool.py", line 847, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\urllib3\util\retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=8080): Max retries exceeded with url: /completion (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000023FE7A095D0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\flask\app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\flask\app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\flask\app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\flask\app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\PolyMind\main.py", line 263, in upload_file
    f"\n{Shared_vars.config.llm_parameters['beginsep']} user: {identify(file_content.split(',')[1])} {Shared_vars.config.llm_parameters['endsep']}"
                                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\PolyMind\ImageRecognition.py", line 179, in identify
    out = llamacpp_img(raw_image)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\PolyMind\ImageRecognition.py", line 41, in llamacpp_img
    request = requests.post(url, json=params)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\requests\api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\requests\api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\requests\sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\requests\sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Alok\miniconda3\envs\PolyMind\Lib\site-packages\requests\adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8080): Max retries exceeded with url: /completion (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000023FE7A095D0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
127.0.0.1 - - [01/Feb/2024 18:33:59] "POST /upload_file HTTP/1.1" 500 -
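The root cause of the traceback above is that nothing is accepting connections on 127.0.0.1:8080 when PolyMind posts to /completion, so Windows refuses the connection (WinError 10061). A quick way to check, before digging into PolyMind itself, is a minimal port probe (the helper name `port_open` is illustrative, not part of PolyMind):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # ConnectionRefusedError (WinError 10061 on Windows) lands here:
        # nothing is listening on that port.
        return False

if __name__ == "__main__":
    print(port_open("127.0.0.1", 8080))
```

If this prints False, the llama.cpp server configured in PolyMind's config simply is not running (or is bound to a different port), and no change on the PolyMind side will help until it is started.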

from polymind.

itsme2417 commented on September 15, 2024

For ComfyUI, the selected workflow can be set on line 100 of comfyui.py. For the stablefast workflow, make sure you have ComfyUI_stable_fast installed.

imagegeneration/checkpoint_name: Specifies the filename of the SD checkpoint for comfyui.

image_input, imagegeneration, wolframalpha: URIs for llama.cpp running a multimodal model ...
Please read the readme and the config.example.json fully before opening new issues.


aloksaurabh commented on September 15, 2024

For ComfyUI, the selected workflow can be set on line 100 of comfyui.py. For the stablefast workflow, make sure you have ComfyUI_stable_fast installed.

I already had stable_fast installed; please see my opening post.

imagegeneration/checkpoint_name: Specifies the filename of the SD checkpoint for comfyui.

I got this working, thanks for putting up with me.

image_input, imagegeneration, wolframalpha: URIs for llama.cpp running a multimodal model ... Please read the readme and the config.example.json fully before opening new issues.

I appreciate you taking the time to respond. I can't seem to get llama.cpp working. Like many others, I am more used to downloading and running things, and I understand this isn't something you'd want to teach here, but could you take a look at the output below anyway? I will close this ticket tomorrow if there is no response.
If I can get everything working, I will make a video covering a full PolyMind installation for newbies like me.

(llama_cpp) PS D:\AI> python -m llama_cpp.server --model ".\models\ggml-model-Q4_K.gguf" --n_threads 6 --n_ctx 4096 --n_gpu_layers 26 --clip_model_path ".\models\mmproj-model-f16.gguf" --chat_format llava-1-5 --port 8080
clip_model_load: loaded meta data with 18 key-value pairs and 377 tensors from .\models\mmproj-model-f16.gguf
clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
clip_model_load: - kv   0:                       general.architecture str              = clip
---- trimmed---
llm_load_tensors: ggml ctx size =    0.14 MiB
llm_load_tensors: offloading 26 repeating layers to GPU
llm_load_tensors: offloaded 26/41 layers to GPU
llm_load_tensors:        CPU buffer size =  7500.85 MiB
...................................................................................................
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =  3200.00 MiB
llama_new_context_with_model: KV self size  = 3200.00 MiB, K (f16): 1600.00 MiB, V (f16): 1600.00 MiB
llama_new_context_with_model:        CPU input buffer size   =    18.02 MiB
llama_new_context_with_model:        CPU compute buffer size =   385.00 MiB
llama_new_context_with_model: graph splits (measure): 1
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 |
Model metadata: {'general.name': 'LLaMA v2', 'general.architecture': 'llama', 'llama.context_length': '4096', 'llama.rope.dimension_count': '128', 'llama.embedding_length': '5120', 'llama.block_count': '40', 'llama.feed_forward_length': '13824', 'llama.attention.head_count': '40', 'tokenizer.ggml.eos_token_id': '2', 'general.file_type': '15', 'llama.attention.head_count_kv': '40', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'tokenizer.ggml.model': 'llama', 'general.quantization_version': '2', 'tokenizer.ggml.bos_token_id': '1', 'tokenizer.ggml.padding_token_id': '0', 'tokenizer.ggml.add_bos_token': 'true', 'tokenizer.ggml.add_eos_token': 'false'}
INFO:     Started server process [17996]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:8080 (Press CTRL+C to quit)
INFO:     127.0.0.1:64407 - "POST /completion HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:64479 - "POST / HTTP/1.1" 404 Not Found
INFO:     Shutting down

Which llama.cpp server are you using that provides the /completion endpoint? Is it not llama-cpp-python[server]?
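(For anyone hitting the same 404s: the /completion route belongs to llama.cpp's own example server binary, whereas llama-cpp-python's server exposes OpenAI-style routes such as /v1/completions and /v1/chat/completions, which is consistent with the "POST /completion 404 Not Found" in the log above. A hedged sketch of what a request to the native llama.cpp server's /completion route might look like; the field names follow llama.cpp's server documentation, and the helper name, URL, and prompt are illustrative only:)

```python
import base64

def build_completion_request(prompt: str, image_bytes: bytes,
                             base_url: str = "http://127.0.0.1:8080"):
    """Build the URL and JSON body for llama.cpp's native /completion route.

    Field names (prompt, n_predict, image_data) follow llama.cpp's example
    server API. llama-cpp-python instead serves OpenAI-style routes such as
    /v1/completions, so /completion returns 404 there.
    """
    payload = {
        "prompt": prompt,
        "n_predict": 128,
        # image_data entries are base64-encoded; the id is referenced from
        # the prompt (e.g. "[img-10]") in multimodal setups.
        "image_data": [
            {"data": base64.b64encode(image_bytes).decode("utf-8"), "id": 10}
        ],
    }
    return f"{base_url}/completion", payload

# Usage (requires a llama.cpp server binary listening on port 8080):
#   import requests
#   url, body = build_completion_request("Describe the image. [img-10]", raw_bytes)
#   resp = requests.post(url, json=body)
```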

Edit:
Got the llama.cpp multimodal model working using the config file. I don't think we need the /completion endpoint. It's working as expected. Closing.


aloksaurabh commented on September 15, 2024

Got the llama.cpp multimodal model working using the config file. I don't think we need the /completion endpoint. It's working as expected.

