
aorumbayev / autogpt4all

441 stars · 67 forks · 143 KB

🛠️ User-friendly bash script for setting up and configuring your LocalAI server with GPT4All for free! 💸

Home Page: https://aorumbayev.github.io/autogpt4all/

License: MIT License

Shell 44.64% Python 55.36%
ai bash bash-script gpt gpt4all llm


autogpt4all's People

Contributors

aorumbayev, platinacoder, watcher60


autogpt4all's Issues

mistyped

In the example it's "autogtp4all.sh" instead of "autogpt4all.sh". Please fix it so the command can just be copied and pasted.

Wrong path windows

Howdy,

Something makes the script think the path is "C:\Users\master\Downloads\autogpt4all-main\LocalAI\c\Users\master\Downloads\autogpt4all-main\LocalAI\sources" instead of just "C:\Users\master\Downloads\autogpt4all-main\LocalAI\sources".
I believe the issue might be related to some relative path handling.
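
A minimal sketch (not the project's actual code) of how a doubled path like that can arise when a Git-Bash/MSYS-style absolute path such as /c/Users/... is concatenated onto a Windows base directory as if it were relative:

    import ntpath  # Windows path rules; ntpath is importable on any OS

    base = r"C:\Users\master\Downloads\autogpt4all-main\LocalAI"
    msys = "/c/Users/master/Downloads/autogpt4all-main/LocalAI/sources"  # Git-Bash style

    # Buggy pattern: string concatenation treats the MSYS-style absolute path
    # as relative, reproducing the doubled ...\LocalAI\c\Users\... path above.
    print(ntpath.normpath(base + msys))

    # Safer pattern: join relative components onto one known base instead.
    print(ntpath.join(base, "sources"))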

OS: Windows 10
Python Version: Python 3.11.5
How to reproduce: run "python autogpt4all.py" in cmd on Windows 10

Ty in advance
Lenn

Can't compile go-bert.cpp, go-llama.cpp, bloomz.cpp etc.

I installed everything step by step.
I also tried a separate container, but got the same result there.

I get the following message when running autogpt4all.py or .sh:

root@d2c36eb3a44c:/home/autogpt4all# python3 autogpt4all.py
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
golang is already the newest version (2:1.18~0ubuntu2).
cmake is already the newest version (3.22.1-1ubuntu1.22.04.1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Already up to date.
go mod edit -replace github.com/go-skynet/go-llama.cpp=/home/autogpt4all/LocalAI/go-llama
go mod edit -replace github.com/nomic-ai/gpt4all/gpt4all-bindings/golang=/home/autogpt4all/LocalAI/gpt4all/gpt4all-bindings/golang
go mod edit -replace github.com/go-skynet/go-ggml-transformers.cpp=/home/autogpt4all/LocalAI/go-ggml-transformers
go mod edit -replace github.com/donomii/go-rwkv.cpp=/home/autogpt4all/LocalAI/go-rwkv
go mod edit -replace github.com/ggerganov/whisper.cpp=/home/autogpt4all/LocalAI/whisper.cpp
go mod edit -replace github.com/go-skynet/go-bert.cpp=/home/autogpt4all/LocalAI/go-bert
go mod edit -replace github.com/go-skynet/bloomz.cpp=/home/autogpt4all/LocalAI/bloomz
go mod edit -replace github.com/mudler/go-stable-diffusion=/home/autogpt4all/LocalAI/go-stable-diffusion
go mod edit -replace github.com/mudler/go-piper=/home/autogpt4all/LocalAI/go-piper
go mod download
touch prepare
I local-ai build info:
I BUILD_TYPE:
I GO_TAGS:
I LD_FLAGS: -X "github.com/go-skynet/LocalAI/internal.Version=v1.20.1-4-ga6839fd-dirty" -X "github.com/go-skynet/LocalAI/internal.Commit=a6839fd23827672aeab5988b344a2a34d7e44e6a"
CGO_LDFLAGS="" C_INCLUDE_PATH=/home/autogpt4all/LocalAI/go-llama:/home/autogpt4all/LocalAI/go-stable-diffusion/:/home/autogpt4all/LocalAI/gpt4all/gpt4all-bindings/golang/:/home/autogpt4all/LocalAI/go-ggml-transformers:/home/autogpt4all/LocalAI/go-rwkv:/home/autogpt4all/LocalAI/whisper.cpp:/home/autogpt4all/LocalAI/go-bert:/home/autogpt4all/LocalAI/bloomz LIBRARY_PATH=/home/autogpt4all/LocalAI/go-piper:/home/autogpt4all/LocalAI/go-llama:/home/autogpt4all/LocalAI/go-stable-diffusion/:/home/autogpt4all/LocalAI/gpt4all/gpt4all-bindings/golang/:/home/autogpt4all/LocalAI/go-ggml-transformers:/home/autogpt4all/LocalAI/go-rwkv:/home/autogpt4all/LocalAI/whisper.cpp:/home/autogpt4all/LocalAI/go-bert:/home/autogpt4all/LocalAI/bloomz go build -ldflags "-X "github.com/go-skynet/LocalAI/internal.Version=v1.20.1-4-ga6839fd-dirty" -X "github.com/go-skynet/LocalAI/internal.Commit=a6839fd23827672aeab5988b344a2a34d7e44e6a"" -tags "" -o local-ai ./
# github.com/go-skynet/go-bert.cpp
In file included from gobert.cpp:6:
go-bert/bert.cpp/bert.cpp: In function 'bert_ctx* bert_load_from_file(const char*)':
go-bert/bert.cpp/bert.cpp:610:89: warning: format '%lld' expects argument of type 'long long int', but argument 5 has type 'int64_t' {aka 'long int'} [-Wformat=]
  610 |                 fprintf(stderr, "%s: tensor '%s' has wrong shape in model file: got [%lld, %lld], expected [%lld, %lld]\n",
      |                                                                                      ~~~^
      |                                                                                         |
      |                                                                                         long long int
      |                                                                                      %ld
  611 |                         __func__, name.data(), tensor->ne[0], tensor->ne[1], ne[0], ne[1]);
      |                                                ~~~~~~~~~~~~~
      |                                                            |
      |                                                            int64_t {aka long int}
go-bert/bert.cpp/bert.cpp:610:95: warning: format '%lld' expects argument of type 'long long int', but argument 6 has type 'int64_t' {aka 'long int'} [-Wformat=]
  610 |                 fprintf(stderr, "%s: tensor '%s' has wrong shape in model file: got [%lld, %lld], expected [%lld, %lld]\n",
      |                                                                                            ~~~^
      |                                                                                               |
      |                                                                                               long long int
      |                                                                                            %ld
  611 |                         __func__, name.data(), tensor->ne[0], tensor->ne[1], ne[0], ne[1]);
      |                                                               ~~~~~~~~~~~~~
      |                                                                           |
      |                                                                           int64_t {aka long int}
go-bert/bert.cpp/bert.cpp:610:112: warning: format '%lld' expects argument of type 'long long int', but argument 7 has type 'int64_t' {aka 'long int'} [-Wformat=]
  610 |                 fprintf(stderr, "%s: tensor '%s' has wrong shape in model file: got [%lld, %lld], expected [%lld, %lld]\n",
      |                                                                                                             ~~~^
      |                                                                                                                |
      |                                                                                                                long long int
      |                                                                                                             %ld
  611 |                         __func__, name.data(), tensor->ne[0], tensor->ne[1], ne[0], ne[1]);
      |                                                                              ~~~~~
      |                                                                                  |
      |                                                                                  int64_t {aka long int}
go-bert/bert.cpp/bert.cpp:610:118: warning: format '%lld' expects argument of type 'long long int', but argument 8 has type 'int64_t' {aka 'long int'} [-Wformat=]
  610 |                 fprintf(stderr, "%s: tensor '%s' has wrong shape in model file: got [%lld, %lld], expected [%lld, %lld]\n",
      |                                                                                                                   ~~~^
      |                                                                                                                      |
      |                                                                                                                      long long int
      |                                                                                                                   %ld
  611 |                         __func__, name.data(), tensor->ne[0], tensor->ne[1], ne[0], ne[1]);
      |                                                                                     ~~~~~
      |                                                                                         |
      |                                                                                         int64_t {aka long int}
go-bert/bert.cpp/bert.cpp:624:37: warning: format '%lld' expects argument of type 'long long int', but argument 3 has type 'int64_t' {aka 'long int'} [-Wformat=]
  624 |                 printf("%24s - [%5lld, %5lld], type = %6s, %6.2f MB, %9zu bytes\n", name.data(), ne[0], ne[1], ftype_str[ftype], ggml_bert_nbytes(tensor) / 1024.0 / 1024.0, ggml_bert_nbytes(tensor));
      |                                 ~~~~^                                                            ~~~~~
      |                                     |                                                                |
      |                                     long long int                                                    int64_t {aka long int}
      |                                 %5ld
go-bert/bert.cpp/bert.cpp:624:44: warning: format '%lld' expects argument of type 'long long int', but argument 4 has type 'int64_t' {aka 'long int'} [-Wformat=]
  624 |                 printf("%24s - [%5lld, %5lld], type = %6s, %6.2f MB, %9zu bytes\n", name.data(), ne[0], ne[1], ftype_str[ftype], ggml_bert_nbytes(tensor) / 1024.0 / 1024.0, ggml_bert_nbytes(tensor));
      |                                        ~~~~^                                                            ~~~~~
      |                                            |                                                                |
      |                                            long long int                                                    int64_t {aka long int}
      |                                        %5ld
go-bert/bert.cpp/bert.cpp:655:101: warning: format '%llu' expects argument of type 'long long unsigned int', but argument 6 has type 'long unsigned int' [-Wformat=]
  655 |                 fprintf(stderr, "%s: tensor '%s' has wrong size in model file: got %zu, expected %llu\n",
      |                                                                                                  ~~~^
      |                                                                                                     |
      |                                                                                                     long long unsigned int
      |                                                                                                  %lu
  656 |                         __func__, name.data(), ggml_bert_nbytes(tensor), nelements * bpe);
      |                                                                          ~~~~~~~~~~~~~~~
      |                                                                                    |
      |                                                                                    long unsigned int
go-bert/bert.cpp/bert.cpp:692:32: warning: format '%d' expects argument of type 'int', but argument 3 has type 'size_t' {aka 'long unsigned int'} [-Wformat=]
  692 |     printf("%s: mem_per_token %d KB, mem_per_input %lld MB\n", __func__, new_bert->mem_per_token / (1 << 10), new_bert->mem_per_input / (1 << 20));
      |                               ~^                                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                                |                                                                 |
      |                                int                                                               size_t {aka long unsigned int}
      |                               %ld
go-bert/bert.cpp/bert.cpp:692:55: warning: format '%lld' expects argument of type 'long long int', but argument 4 has type 'int64_t' {aka 'long int'} [-Wformat=]
  692 |     printf("%s: mem_per_token %d KB, mem_per_input %lld MB\n", __func__, new_bert->mem_per_token / (1 << 10), new_bert->mem_per_input / (1 << 20));
      |                                                    ~~~^                                                       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                                                       |                                                                               |
      |                                                       long long int                                                                   int64_t {aka long int}
      |                                                    %ld
# github.com/go-skynet/go-llama.cpp
binding.cpp: In function 'void* load_model(const char*, int, int, bool, bool, bool, bool, bool, bool, int, int, const char*, const char*, bool)':
binding.cpp:634:35: warning: 'llama_context* llama_init_from_file(const char*, llama_context_params*)' is deprecated: please use llama_load_model_from_file combined with llama_new_context_with_model instead [-Wdeprecated-declarations]
  634 |         res = llama_init_from_file(fname, &lparams);
      |               ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
In file included from go-llama/llama.cpp/examples/common.h:5,
                 from binding.cpp:1:
go-llama/llama.cpp/llama.h:164:49: note: declared here
  164 |     LLAMA_API DEPRECATED(struct llama_context * llama_init_from_file(
      |                                                 ^~~~~~~~~~~~~~~~~~~~
go-llama/llama.cpp/llama.h:30:36: note: in definition of macro 'DEPRECATED'
   30 | #    define DEPRECATED(func, hint) func __attribute__((deprecated(hint)))
      |                                    ^~~~
# github.com/go-skynet/bloomz.cpp
bloomz.cpp: In function 'bool bloom_model_load(const string&, bloom_model&, gpt_bloomz_vocab&, int)':
bloomz.cpp:432:89: warning: format '%lld' expects argument of type 'long long int', but argument 5 has type 'int64_t' {aka 'long int'} [-Wformat=]
  432 |                                 "%s: tensor '%s' has wrong shape in model file: got [%lld, %lld], expected [%d, %d]\n",
      |                                                                                      ~~~^
      |                                                                                         |
      |                                                                                         long long int
      |                                                                                      %ld
  433 |                                 __func__, name.data(), tensor->ne[0], tensor->ne[1], ne[0], ne[1]);
      |                                                        ~~~~~~~~~~~~~
      |                                                                    |
      |                                                                    int64_t {aka long int}
bloomz.cpp:432:95: warning: format '%lld' expects argument of type 'long long int', but argument 6 has type 'int64_t' {aka 'long int'} [-Wformat=]
  432 |                                 "%s: tensor '%s' has wrong shape in model file: got [%lld, %lld], expected [%d, %d]\n",
      |                                                                                            ~~~^
      |                                                                                               |
      |                                                                                               long long int
      |                                                                                            %ld
  433 |                                 __func__, name.data(), tensor->ne[0], tensor->ne[1], ne[0], ne[1]);
      |                                                                       ~~~~~~~~~~~~~
      |                                                                                   |
      |                                                                                   int64_t {aka long int}
bloomz.cpp:440:93: warning: format '%lld' expects argument of type 'long long int', but argument 5 has type 'int64_t' {aka 'long int'} [-Wformat=]
  440 |                                     "%s: tensor '%s' has wrong shape in model file: got [%lld, %lld], expected [%d, %d]\n",
      |                                                                                          ~~~^
      |                                                                                             |
      |                                                                                             long long int
      |                                                                                          %ld
  441 |                                     __func__, name.data(), tensor->ne[0] / n_parts, tensor->ne[1], ne[0], ne[1]);
      |                                                            ~~~~~~~~~~~~~~~~~~~~~~~
      |                                                                          |
      |                                                                          int64_t {aka long int}
bloomz.cpp:440:99: warning: format '%lld' expects argument of type 'long long int', but argument 6 has type 'int64_t' {aka 'long int'} [-Wformat=]
  440 |                                     "%s: tensor '%s' has wrong shape in model file: got [%lld, %lld], expected [%d, %d]\n",
      |                                                                                                ~~~^
      |                                                                                                   |
      |                                                                                                   long long int
      |                                                                                                %ld
  441 |                                     __func__, name.data(), tensor->ne[0] / n_parts, tensor->ne[1], ne[0], ne[1]);
      |                                                                                     ~~~~~~~~~~~~~
      |                                                                                                 |
      |                                                                                                 int64_t {aka long int}
bloomz.cpp:447:93: warning: format '%lld' expects argument of type 'long long int', but argument 5 has type 'int64_t' {aka 'long int'} [-Wformat=]
  447 |                                     "%s: tensor '%s' has wrong shape in model file: got [%lld, %lld], expected [%d, %d]\n",
      |                                                                                          ~~~^
      |                                                                                             |
      |                                                                                             long long int
      |                                                                                          %ld
  448 |                                     __func__, name.data(), tensor->ne[0], tensor->ne[1] / n_parts, ne[0], ne[1]);
      |                                                            ~~~~~~~~~~~~~
      |                                                                        |
      |                                                                        int64_t {aka long int}
bloomz.cpp:447:99: warning: format '%lld' expects argument of type 'long long int', but argument 6 has type 'int64_t' {aka 'long int'} [-Wformat=]
  447 |                                     "%s: tensor '%s' has wrong shape in model file: got [%lld, %lld], expected [%d, %d]\n",
      |                                                                                                ~~~^
      |                                                                                                   |
      |                                                                                                   long long int
      |                                                                                                %ld
  448 |                                     __func__, name.data(), tensor->ne[0], tensor->ne[1] / n_parts, ne[0], ne[1]);
      |                                                                           ~~~~~~~~~~~~~~~~~~~~~~~
      |                                                                                         |
      |                                                                                         int64_t {aka long int}
# github.com/go-skynet/go-ggml-transformers.cpp
In file included from replit.cpp:21:
ggml.cpp/examples/replit/main.cpp: In function 'bool replit_model_load(const string&, replit_model&, replit_tokenizer&)':
ggml.cpp/examples/replit/main.cpp:345:56: warning: format '%lld' expects argument of type 'long long int', but argument 4 has type 'long int' [-Wformat=]
  345 |         printf("%s: memory_size = %8.2f MB, n_mem = %lld\n", __func__, memory_size / 1024.0 / 1024.0, n_mem);
      |                                                     ~~~^                                              ~~~~~
      |                                                        |                                              |
      |                                                        long long int                                  long int
      |                                                     %ld
replit.cpp: In function 'int replit_predict(void*, void*, char*)':
replit.cpp:65:31: warning: format '%d' expects argument of type 'int', but argument 4 has type '__gnu_cxx::__alloc_traits<std::allocator<long unsigned int>, long unsigned int>::value_type' {aka 'long unsigned int'} [-Wformat=]
   65 |     printf("%s: token[%d] = %6d\n", __func__, i, embd_inp[i]);
      |                             ~~^
      |                               |
      |                               int
      |                             %6ld
Already up to date.

LocalAI doesn't have ./local-ai

It looks like the LocalAI repo has removed the ./local-ai file and replaced it with building the repo in Docker. I got the script working after just manually building the LocalAI Docker container, as sketched below.
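
A hedged sketch of that workaround; the image tag, port mapping, and run flags here are illustrative, not taken from LocalAI's documentation:

    import os
    import subprocess

    # Build LocalAI's Docker image from its checked-out sources, then run it
    # in place of the ./local-ai binary the script normally produces.
    subprocess.run(["docker", "build", "-t", "local-ai", "."], cwd="LocalAI", check=True)
    subprocess.run(
        ["docker", "run", "-p", "8080:8080",
         "-v", os.path.abspath("LocalAI/models") + ":/models",
         "local-ai", "--models-path", "/models", "--debug"],
        check=True,
    )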

Love the script btw!

Error running script

The script runs fine, but I can't find any local-ai file after it finishes:

bash: ./local-ai: No such file or directory

I'm using Ubuntu

Any solution to this?

Error with docker and make when building localai

I'm getting this error with Docker when building LocalAI:

Docker:
ERROR: failed to solve: Unavailable: error reading from server: EOF

Make:
make[1]: *** [libgobert.a] Error 1
make: *** [go-bert/libgobert.a] Error 2

unexpected end of JSON input

(screenshots of the error omitted)

I know that's probably my mistake, but in my opinion the README isn't very detailed either. By the way, how can I change the number of threads (in my case from 4 to 12)? See the sketch below.
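
Not an official answer, but the Threads:4 value visible in the debug logs elsewhere on this page suggests a server-side default. A hedged sketch, assuming LocalAI's --threads startup flag:

    import subprocess

    # Start LocalAI with 12 threads instead of the default 4. Assumes the
    # --threads flag; a per-model YAML "threads:" setting may also work.
    subprocess.run(
        ["./local-ai", "--models-path", "./models/", "--threads", "12", "--debug"],
        cwd="LocalAI",
        check=True,
    )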

Error while loading model


After starting the server and AutoGPT, an error occurs:

`[127.0.0.1]:42798 404 - GET /
9:18PM DBG Request received: {"model":"gpt-3.5-turbo","file":"","language":"","response_format":"","size":"","prompt":null,"instruction":"","input":null,"stop":null,"messages":[{"role":"system","content":"\nYour task is to devise up to 5 highly effective goals and an appropriate role-based name (_GPT) for an autonomous agent, ensuring that the goals are optimally aligned with the successful completion of its assigned task.\n\nThe user will provide the task, you will provide only the output in the exact format specified below with no explanation or conversation.\n\nExample input:\nHelp me with marketing my business\n\nExample output:\nName: CMOGPT\nDescription: a professional digital marketer AI that assists Solopreneurs in growing their businesses by providing world-class expertise in solving marketing problems for SaaS, content products, agencies, and more.\nGoals:\n- Engage in effective problem-solving, prioritization, planning, and supporting execution to address your marketing needs as your virtual Chief Marketing Officer.\n\n- Provide specific, actionable, and concise advice to help you make informed decisions without the use of platitudes or overly wordy explanations.\n\n- Identify and prioritize quick wins and cost-effective campaigns that maximize results with minimal time and budget investment.\n\n- Proactively take the lead in guiding you and offering suggestions when faced with unclear information or uncertainty to ensure your marketing strategy remains on track.\n"},{"role":"user","content":"Task: 'create an advertising slogan for a drinking water company "Darida"'\nRespond only with the output in the exact format specified in the system prompt, with no explanation or conversation."}],"stream":false,"echo":false,"top_p":0,"top_k":0,"temperature":0,"max_tokens":0,"n":0,"batch":0,"f16":false,"ignore_eos":false,"repeat_penalty":0,"n_keep":0,"mirostat_eta":0,"mirostat_tau":0,"mirostat":0,"seed":0,"mode":0,"step":0}
9:18PM DBG Parameter Config: &{OpenAIRequest:{Model:gpt-3.5-turbo File: Language: ResponseFormat: Size: Prompt: Instruction: Input: Stop: Messages:[] Stream:false Echo:false TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:512 N:0 Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 Seed:0 Mode:0 Step:0} Name: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:512 F16:false Threads:4 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Completion: Chat: Edit:} MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 ImageGenerationAssets: PromptStrings:[] InputStrings:[] InputToken:[]}
9:18PM DBG Loading model 'gpt-3.5-turbo' greedly
9:18PM DBG [llama] Attempting to load
9:18PM DBG Loading model llama from gpt-3.5-turbo
9:18PM DBG Loading model in memory from file: models/gpt-3.5-turbo
llama.cpp: loading model from models/gpt-3.5-turbo
error loading model: unrecognized tensor type 4

llama_init_from_file: failed to load model
9:18PM DBG [llama] Fails: failed loading model
9:18PM DBG [gpt4all-llama] Attempting to load
9:18PM DBG Loading model gpt4all-llama from gpt-3.5-turbo
9:18PM DBG Loading model in memory from file: models/gpt-3.5-turbo
SIGILL: illegal instruction
PC=0xb01f95 m=0 sigcode=2
signal arrived during cgo execution
instruction bytes: 0xc5 0xfd 0x6f 0x5 0xe3 0x97 0x24 0x0 0x49 0x8d 0x84 0x24 0xe8 0x14 0x0 0x0

goroutine 9 [syscall]:
runtime.cgocall(0x9e8710, 0xc0001ba268)
/snap/go/current/src/runtime/cgocall.go:157 +0x5c fp=0xc0001ba240 sp=0xc0001ba208 pc=0x44a4fc
github.com/nomic-ai/gpt4all/gpt4all-bindings/golang._Cfunc_load_gptjllama_model(0x2dac800, 0x4)
cgo_gotypes.go:137 +0x4d fp=0xc0001ba268 sp=0xc0001ba240 pc=0x58892d
github.com/nomic-ai/gpt4all/gpt4all-bindings/golang.New({0xc00029c090, 0x14}, {0xc0002ac3e0, 0x2, 0x1?})
/home/waplay/GitClone/autogpt4all/LocalAI/gpt4all/gpt4all-bindings/golang/gpt4all.go:35 +0x145 fp=0xc0001ba2c0 sp=0xc0001ba268 pc=0x588c45
github.com/go-skynet/LocalAI/pkg/model.gpt4allLM.func1({0xc00029c090?, 0xc58d80?})
/home/waplay/GitClone/autogpt4all/LocalAI/pkg/model/initializers.go:110 +0x2a fp=0xc0001ba2f8 sp=0xc0001ba2c0 pc=0x608a0a
github.com/go-skynet/LocalAI/pkg/model.(*ModelLoader).LoadModel(0xc0001b2a50, {0xc000217960, 0xd}, 0xc0002c00e0)
/home/waplay/GitClone/autogpt4all/LocalAI/pkg/model/loader.go:127 +0x1fe fp=0xc0001ba3f0 sp=0xc0001ba2f8 pc=0x60aa1e
github.com/go-skynet/LocalAI/pkg/model.(*ModelLoader).BackendLoader(0xc0001b2a50, {0xc4809d, 0xd}, {0xc000217960, 0xd}, {0xc0000145d8, 0x1, 0x1}, 0x4)
/home/waplay/GitClone/autogpt4all/LocalAI/pkg/model/initializers.go:150 +0x7d2 fp=0xc0001ba4b8 sp=0xc0001ba3f0 pc=0x609412
github.com/go-skynet/LocalAI/pkg/model.(*ModelLoader).GreedyLoader(0xc0001b2a50, {0xc000217960, 0xd}, {0xc0000145d8, 0x1, 0x1}, 0x0?)
/home/waplay/GitClone/autogpt4all/LocalAI/pkg/model/initializers.go:184 +0x3a5 fp=0xc0001ba600 sp=0xc0001ba4b8 pc=0x609a25
github.com/go-skynet/LocalAI/api.ModelInference({
, _}, _, {{{0xc000217960, 0xd}, {0x0, 0x0}, {0x0, 0x0}, {0x0, ...}, ...}, ...}, ...)
/home/waplay/GitClone/autogpt4all/LocalAI/api/prediction.go:218 +0x145 fp=0xc0001ba8b0 sp=0xc0001ba600 pc=0x944545
github.com/go-skynet/LocalAI/api.ComputeChoices({0xc000035200, 0x5bd}, 0xc000150dc0, 0xc00025a280, 0xc0002092c0?, 0xc84278, 0x4?)
/home/waplay/GitClone/autogpt4all/LocalAI/api/prediction.go:517 +0x138 fp=0xc0001bb060 sp=0xc0001ba8b0 pc=0x947c18
github.com/go-skynet/LocalAI/api.chatEndpoint.func2(0xc000134b00)
/home/waplay/GitClone/autogpt4all/LocalAI/api/openai.go:361 +0x8ec fp=0xc0001bb220 sp=0xc0001bb060 pc=0x93f80c
github.com/gofiber/fiber/v2.(*App).next(0xc00013f200, 0xc000134b00)
/home/waplay/go/pkg/mod/github.com/gofiber/fiber/[email protected]/router.go:144 +0x1bf fp=0xc0001bb2c8 sp=0xc0001bb220 pc=0x8c4dbf
github.com/gofiber/fiber/v2.(*Ctx).Next(0xc000238330?)
/home/waplay/go/pkg/mod/github.com/gofiber/fiber/[email protected]/ctx.go:913 +0x53 fp=0xc0001bb2e8 sp=0xc0001bb2c8 pc=0x8b0393
github.com/gofiber/fiber/v2/middleware/cors.New.func1(0xc000134b00)
/home/waplay/go/pkg/mod/github.com/gofiber/fiber/[email protected]/middleware/cors/cors.go:162 +0x3da fp=0xc0001bb3f0 sp=0xc0001bb2e8 pc=0x8cabda
github.com/gofiber/fiber/v2.(*Ctx).Next(0xc000061448?)
/home/waplay/go/pkg/mod/github.com/gofiber/fiber/[email protected]/ctx.go:910 +0x43 fp=0xc0001bb410 sp=0xc0001bb3f0 pc=0x8b0383
github.com/gofiber/fiber/v2/middleware/recover.New.func1(0xc0000614e8?)
/home/waplay/go/pkg/mod/github.com/gofiber/fiber/[email protected]/middleware/recover/recover.go:43 +0xcb fp=0xc0001bb488 sp=0xc0001bb410 pc=0x8d180b
github.com/gofiber/fiber/v2.(*Ctx).Next(0xc0001b2ae0?)
/home/waplay/go/pkg/mod/github.com/gofiber/fiber/[email protected]/ctx.go:910 +0x43 fp=0xc0001bb4a8 sp=0xc0001bb488 pc=0x8b0383
github.com/gofiber/fiber/v2/middleware/logger.New.func3(0xc000134b00)
/home/waplay/go/pkg/mod/github.com/gofiber/fiber/[email protected]/middleware/logger/logger.go:121 +0x395 fp=0xc0001bbb30 sp=0xc0001bb4a8 pc=0x8cc455
github.com/gofiber/fiber/v2.(*App).next(0xc00013f200, 0xc000134b00)
/home/waplay/go/pkg/mod/github.com/gofiber/fiber/[email protected]/router.go:144 +0x1bf fp=0xc0001bbbd8 sp=0xc0001bbb30 pc=0x8c4dbf
github.com/gofiber/fiber/v2.(*App).handler(0xc00013f200, 0x4cf317?)
/home/waplay/go/pkg/mod/github.com/gofiber/fiber/[email protected]/router.go:171 +0x87 fp=0xc0001bbc38 sp=0xc0001bbbd8 pc=0x8c5007
github.com/gofiber/fiber/v2.(*App).handler-fm(0xc000238000?)
<autogenerated>:1 +0x2c fp=0xc0001bbc58 sp=0xc0001bbc38 pc=0x8ca22c
github.com/valyala/fasthttp.(*Server).serveConn(0xc0001a4400, {0xd151a0?, 0xc000014540})
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/server.go:2365 +0x11d3 fp=0xc0001bbec8 sp=0xc0001bbc58 pc=0x84afb3
github.com/valyala/fasthttp.(*Server).serveConn-fm({0xd151a0?, 0xc000014540?})
<autogenerated>:1 +0x39 fp=0xc0001bbef0 sp=0xc0001bbec8 pc=0x85a879
github.com/valyala/fasthttp.(*workerPool).workerFunc(0xc00011ba40, 0xc00003cf60)
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:224 +0xa9 fp=0xc0001bbfa0 sp=0xc0001bbef0 pc=0x856aa9
github.com/valyala/fasthttp.(*workerPool).getCh.func1()
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:196 +0x38 fp=0xc0001bbfe0 sp=0xc0001bbfa0 pc=0x856818
runtime.goexit()
/snap/go/current/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0001bbfe8 sp=0xc0001bbfe0 pc=0x4ad101
created by github.com/valyala/fasthttp.(*workerPool).getCh
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:195 +0x1b0

goroutine 1 [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/snap/go/current/src/runtime/proc.go:381 +0xd6 fp=0xc0001dd3f8 sp=0xc0001dd3d8 pc=0x47e2f6
runtime.netpollblock(0x7f654ef7c968?, 0x449b8f?, 0x0?)
/snap/go/current/src/runtime/netpoll.go:527 +0xf7 fp=0xc0001dd430 sp=0xc0001dd3f8 pc=0x476c57
internal/poll.runtime_pollWait(0x7f65253bd498, 0x72)
/snap/go/current/src/runtime/netpoll.go:306 +0x89 fp=0xc0001dd450 sp=0xc0001dd430 pc=0x4a79a9
internal/poll.(*pollDesc).wait(0xc00016cd00?, 0x4?, 0x0)
/snap/go/current/src/internal/poll/fd_poll_runtime.go:84 +0x32 fp=0xc0001dd478 sp=0xc0001dd450 pc=0x51e712
internal/poll.(*pollDesc).waitRead(...)
/snap/go/current/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00016cd00)
/snap/go/current/src/internal/poll/fd_unix.go:614 +0x2bd fp=0xc0001dd520 sp=0xc0001dd478 pc=0x52401d
net.(*netFD).accept(0xc00016cd00)
/snap/go/current/src/net/fd_unix.go:172 +0x35 fp=0xc0001dd5d8 sp=0xc0001dd520 pc=0x5a97b5
net.(*TCPListener).accept(0xc000012810)
/snap/go/current/src/net/tcpsock_posix.go:148 +0x25 fp=0xc0001dd600 sp=0xc0001dd5d8 pc=0x5bfb65
net.(*TCPListener).Accept(0xc000012810)
/snap/go/current/src/net/tcpsock.go:297 +0x3d fp=0xc0001dd630 sp=0xc0001dd600 pc=0x5bec5d
github.com/valyala/fasthttp.acceptConn(0xc0001a4400, {0xd127c0, 0xc000012810}, 0xc0001dd828)
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/server.go:1930 +0x62 fp=0xc0001dd710 sp=0xc0001dd630 pc=0x849482
github.com/valyala/fasthttp.(*Server).Serve(0xc0001a4400, {0xd127c0?, 0xc000012810})
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/server.go:1823 +0x4f4 fp=0xc0001dd858 sp=0xc0001dd710 pc=0x848a94
github.com/gofiber/fiber/v2.(*App).Listen(0xc00013f200, {0xc3e8d4?, 0x7?})
/home/waplay/go/pkg/mod/github.com/gofiber/fiber/[email protected]/listen.go:82 +0x110 fp=0xc0001dd8b8 sp=0xc0001dd858 pc=0x8bbeb0
main.main.func1(0xc000206160?)
/home/waplay/GitClone/autogpt4all/LocalAI/main.go:97 +0x345 fp=0xc0001dd9b8 sp=0xc0001dd8b8 pc=0x975e25
github.com/urfave/cli/v2.(*Command).Run(0xc000206160, 0xc0000249c0, {0xc000024080, 0x4, 0x4})
/home/waplay/go/pkg/mod/github.com/urfave/cli/[email protected]/command.go:274 +0x9eb fp=0xc0001ddc58 sp=0xc0001dd9b8 pc=0x963d2b
github.com/urfave/cli/v2.(*App).RunContext(0xc000202000, {0xd12b28?, 0xc0000280c8}, {0xc000024080, 0x4, 0x4})
/home/waplay/go/pkg/mod/github.com/urfave/cli/[email protected]/app.go:332 +0x616 fp=0xc0001ddcc8 sp=0xc0001ddc58 pc=0x960b36
github.com/urfave/cli/v2.(*App).Run(...)
/home/waplay/go/pkg/mod/github.com/urfave/cli/[email protected]/app.go:309
main.main()
/home/waplay/GitClone/autogpt4all/LocalAI/main.go:101 +0xbae fp=0xc0001ddf80 sp=0xc0001ddcc8 pc=0x975a0e
runtime.main()
/snap/go/current/src/runtime/proc.go:250 +0x207 fp=0xc0001ddfe0 sp=0xc0001ddf80 pc=0x47dec7
runtime.goexit()
/snap/go/current/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0001ddfe8 sp=0xc0001ddfe0 pc=0x4ad101

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/snap/go/current/src/runtime/proc.go:381 +0xd6 fp=0xc000050fb0 sp=0xc000050f90 pc=0x47e2f6
runtime.goparkunlock(...)
/snap/go/current/src/runtime/proc.go:387
runtime.forcegchelper()
/snap/go/current/src/runtime/proc.go:305 +0xb0 fp=0xc000050fe0 sp=0xc000050fb0 pc=0x47e130
runtime.goexit()
/snap/go/current/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000050fe8 sp=0xc000050fe0 pc=0x4ad101
created by runtime.init.6
/snap/go/current/src/runtime/proc.go:293 +0x25

goroutine 3 [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/snap/go/current/src/runtime/proc.go:381 +0xd6 fp=0xc000051780 sp=0xc000051760 pc=0x47e2f6
runtime.goparkunlock(...)
/snap/go/current/src/runtime/proc.go:387
runtime.bgsweep(0x0?)
/snap/go/current/src/runtime/mgcsweep.go:278 +0x8e fp=0xc0000517c8 sp=0xc000051780 pc=0x46a50e
runtime.gcenable.func1()
/snap/go/current/src/runtime/mgc.go:178 +0x26 fp=0xc0000517e0 sp=0xc0000517c8 pc=0x45f7c6
runtime.goexit()
/snap/go/current/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000517e8 sp=0xc0000517e0 pc=0x4ad101
created by runtime.gcenable
/snap/go/current/src/runtime/mgc.go:178 +0x6b

goroutine 4 [GC scavenge wait]:
runtime.gopark(0xc000076000?, 0xd0add8?, 0x1?, 0x0?, 0x0?)
/snap/go/current/src/runtime/proc.go:381 +0xd6 fp=0xc000051f70 sp=0xc000051f50 pc=0x47e2f6
runtime.goparkunlock(...)
/snap/go/current/src/runtime/proc.go:387
runtime.(*scavengerState).park(0x1136ce0)
/snap/go/current/src/runtime/mgcscavenge.go:400 +0x53 fp=0xc000051fa0 sp=0xc000051f70 pc=0x468433
runtime.bgscavenge(0x0?)
/snap/go/current/src/runtime/mgcscavenge.go:628 +0x45 fp=0xc000051fc8 sp=0xc000051fa0 pc=0x468a05
runtime.gcenable.func2()
/snap/go/current/src/runtime/mgc.go:179 +0x26 fp=0xc000051fe0 sp=0xc000051fc8 pc=0x45f766
runtime.goexit()
/snap/go/current/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000051fe8 sp=0xc000051fe0 pc=0x4ad101
created by runtime.gcenable
/snap/go/current/src/runtime/mgc.go:179 +0xaa

goroutine 5 [finalizer wait]:
runtime.gopark(0x1a0?, 0x11379c0?, 0x60?, 0x78?, 0xc000050770?)
/snap/go/current/src/runtime/proc.go:381 +0xd6 fp=0xc000050628 sp=0xc000050608 pc=0x47e2f6
runtime.runfinq()
/snap/go/current/src/runtime/mfinal.go:193 +0x107 fp=0xc0000507e0 sp=0xc000050628 pc=0x45e807
runtime.goexit()
/snap/go/current/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000507e8 sp=0xc0000507e0 pc=0x4ad101
created by runtime.createfing
/snap/go/current/src/runtime/mfinal.go:163 +0x45

goroutine 6 [select]:
runtime.gopark(0xc000052750?, 0x2?, 0x0?, 0x0?, 0xc0000526cc?)
/snap/go/current/src/runtime/proc.go:381 +0xd6 fp=0xc00005cd20 sp=0xc00005cd00 pc=0x47e2f6
runtime.selectgo(0xc00005cf50, 0xc0000526c8, 0x0?, 0x0, 0x0?, 0x1)
/snap/go/current/src/runtime/select.go:327 +0x7be fp=0xc00005ce60 sp=0xc00005cd20 pc=0x48dade
github.com/go-skynet/LocalAI/api.(*galleryApplier).start.func1()
/home/waplay/GitClone/autogpt4all/LocalAI/api/gallery.go:57 +0xe5 fp=0xc00005cfe0 sp=0xc00005ce60 pc=0x93d305
runtime.goexit()
/snap/go/current/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00005cfe8 sp=0xc00005cfe0 pc=0x4ad101
created by github.com/go-skynet/LocalAI/api.(*galleryApplier).start
/home/waplay/GitClone/autogpt4all/LocalAI/api/gallery.go:55 +0xaa

goroutine 7 [sleep]:
runtime.gopark(0x144a367dc3a8?, 0xc000052f88?, 0x5?, 0xd8?, 0xc00011ba70?)
/snap/go/current/src/runtime/proc.go:381 +0xd6 fp=0xc000052f58 sp=0xc000052f38 pc=0x47e2f6
time.Sleep(0x2540be400)
/snap/go/current/src/runtime/time.go:195 +0x135 fp=0xc000052f98 sp=0xc000052f58 pc=0x4a9f75
github.com/valyala/fasthttp.(*workerPool).Start.func2()
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:67 +0x56 fp=0xc000052fe0 sp=0xc000052f98 pc=0x855f76
runtime.goexit()
/snap/go/current/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000052fe8 sp=0xc000052fe0 pc=0x4ad101
created by github.com/valyala/fasthttp.(*workerPool).Start
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:59 +0xdd

goroutine 18 [IO wait]:
runtime.gopark(0x0?, 0xb?, 0x0?, 0x0?, 0x7?)
/snap/go/current/src/runtime/proc.go:381 +0xd6 fp=0xc0001b7a28 sp=0xc0001b7a08 pc=0x47e2f6
runtime.netpollblock(0x4c0b65?, 0x449b8f?, 0x0?)
/snap/go/current/src/runtime/netpoll.go:527 +0xf7 fp=0xc0001b7a60 sp=0xc0001b7a28 pc=0x476c57
internal/poll.runtime_pollWait(0x7f65253bd3a8, 0x72)
/snap/go/current/src/runtime/netpoll.go:306 +0x89 fp=0xc0001b7a80 sp=0xc0001b7a60 pc=0x4a79a9
internal/poll.(*pollDesc).wait(0xc00009e000?, 0xc00028c000?, 0x0)
/snap/go/current/src/internal/poll/fd_poll_runtime.go:84 +0x32 fp=0xc0001b7aa8 sp=0xc0001b7a80 pc=0x51e712
internal/poll.(*pollDesc).waitRead(...)
/snap/go/current/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00009e000, {0xc00028c000, 0x1000, 0x1000})
/snap/go/current/src/internal/poll/fd_unix.go:167 +0x299 fp=0xc0001b7b40 sp=0xc0001b7aa8 pc=0x51faf9
net.(*netFD).Read(0xc00009e000, {0xc00028c000?, 0xc0000a2088?, 0xc0000a2000?})
/snap/go/current/src/net/fd_posix.go:55 +0x29 fp=0xc0001b7b88 sp=0xc0001b7b40 pc=0x5a7629
net.(*conn).Read(0xc00009a008, {0xc00028c000?, 0xc00009a008?, 0xc00028d000?})
/snap/go/current/src/net/net.go:183 +0x45 fp=0xc0001b7bd0 sp=0xc0001b7b88 pc=0x5b6b25
net.(*TCPConn).Read(0xc0001a45e0?, {0xc00028c000?, 0x83a32f?, 0x83d485?})
<autogenerated>:1 +0x29 fp=0xc0001b7c00 sp=0xc0001b7bd0 pc=0x5c94c9
bufio.(*Reader).fill(0xc00028a000)
/snap/go/current/src/bufio/bufio.go:106 +0xff fp=0xc0001b7c38 sp=0xc0001b7c00 pc=0x60b01f
bufio.(*Reader).Peek(0xc00028a000, 0x1)
/snap/go/current/src/bufio/bufio.go:144 +0x5d fp=0xc0001b7c58 sp=0xc0001b7c38 pc=0x60b17d
github.com/valyala/fasthttp.(*Server).serveConn(0xc0001a4400, {0xd151a0?, 0xc00009a008})
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/server.go:2176 +0x58e fp=0xc0001b7ec8 sp=0xc0001b7c58 pc=0x84a36e
github.com/valyala/fasthttp.(*Server).serveConn-fm({0xd151a0?, 0xc00009a008?})
<autogenerated>:1 +0x39 fp=0xc0001b7ef0 sp=0xc0001b7ec8 pc=0x85a879
github.com/valyala/fasthttp.(*workerPool).workerFunc(0xc00011ba40, 0xc0000a8000)
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:224 +0xa9 fp=0xc0001b7fa0 sp=0xc0001b7ef0 pc=0x856aa9
github.com/valyala/fasthttp.(*workerPool).getCh.func1()
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:196 +0x38 fp=0xc0001b7fe0 sp=0xc0001b7fa0 pc=0x856818
runtime.goexit()
/snap/go/current/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0001b7fe8 sp=0xc0001b7fe0 pc=0x4ad101
created by github.com/valyala/fasthttp.(*workerPool).getCh
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:195 +0x1b0

goroutine 34 [sleep]:
runtime.gopark(0x144b41600494?, 0xb92a20?, 0x40?, 0x3d?, 0xc111b8af5dbbbdfd?)
/snap/go/current/src/runtime/proc.go:381 +0xd6 fp=0xc000284f88 sp=0xc000284f68 pc=0x47e2f6
time.Sleep(0x3b9aca00)
/snap/go/current/src/runtime/time.go:195 +0x135 fp=0xc000284fc8 sp=0xc000284f88 pc=0x4a9f75
github.com/valyala/fasthttp.updateServerDate.func1()
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/header.go:2247 +0x1e fp=0xc000284fe0 sp=0xc000284fc8 pc=0x856efe
runtime.goexit()
/snap/go/current/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000284fe8 sp=0xc000284fe0 pc=0x4ad101
created by github.com/valyala/fasthttp.updateServerDate
/home/waplay/go/pkg/mod/github.com/valyala/[email protected]/header.go:2245 +0x25

rax 0x478a693f04c46b1d
rbx 0xffffffff
rcx 0x270
rdx 0x4c46d8c
rdi 0x2d412a0
rsi 0x7f654f1a3be0
rbp 0x7ffd9cd598d0
rsp 0x7ffd9cd58420
r8 0x2d412b0
r9 0x7f654f1a4380
r10 0x2d2a010
r11 0x7f654f1a3be0
r12 0x2d412b0
r13 0x7ffd9cd599f0
r14 0x7ffd9cd59a28
r15 0x2dd1b50
rip 0xb01f95
rflags 0x10246
cs 0x33
fs 0x0
gs 0x0
`

OS: Linux (Ubuntu 20.04)
Model used: https://gpt4all.io/models/ggml-vicuna-7b-1.1-q4_2.bin
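
The "unrecognized tensor type 4" failure above typically means the model's quantization format (q4_2 here) is no longer supported by the bundled llama.cpp. A hedged sketch of swapping in a different ggml model under the filename the server looks for; the exact model URL is an assumed example:

    import urllib.request

    # Fetch a model in a quantization the backend still supports and save it
    # under the name LocalAI expects. The URL is illustrative only.
    url = "https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin"
    urllib.request.urlretrieve(url, "LocalAI/models/gpt-3.5-turbo")
    print("model saved as LocalAI/models/gpt-3.5-turbo")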

Shouldn't the token limit be reduced?

A query rather than an issue, more a suggestion for the env file: should the fast token limit of 4000 be reduced to 2048 when using the GPT4All weights, and should Auto-GPT then be started with the gpt-3.5 flag? A sketch of the suggested settings follows below.
Thanks
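
A hedged sketch of the settings this suggestion implies. The variable names follow Auto-GPT's .env.template and would normally live in the .env file; they are shown as environment variables here for illustration:

    import os

    # Match GPT4All's ~2048-token context window instead of the 4000 default.
    os.environ["FAST_TOKEN_LIMIT"] = "2048"
    # Point the smart model at the same local model; alternatively, start
    # Auto-GPT with its --gpt3only flag so only the fast model path is used.
    os.environ["SMART_LLM_MODEL"] = "gpt-3.5-turbo"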

Errors when running script

I've tried to install and uninstall a couple of times.

I'm on a Windows system with Git installed and I'm not a developer. I'm running the script in Git Bash.

  1. Make and Wget aren't installed with Git on Windows by default, so they need to be downloaded separately.

  2. There is an error later in the script that seems to relate to replacing a temporary location with an actual location:
    go mod edit -replace github.com/go-skynet/go-llama.cpp=/c/autogpt4all/LocalAI/go-llama
    process_begin: CreateProcess(NULL, go mod edit -replace github.com/go-skynet/go-llama.cpp=/c/autogpt4all/LocalAI/go-llama, ...) failed.
    make (e=2): The system cannot find the file specified.
    make: *** [Makefile:175: replace] Error 2

  3. The rest of the script seems to run fine, but the end result is that there's no local-ai file (which I'm supposed to run) and no downloaded model. I think this may be related to point 2.

LocalAI returns an RPC error

System: MacBook Pro 2018 (Intel)
OS: macOS Ventura 13.5.1
Python: 3.11.4

How to reproduce:

git clone https://github.com/aorumbayev/autogpt4all.git
cd autogpt4all
cd LocalAI
./local-ai --models-path ./models/ --debug

In another terminal:

cd Auto-GPT
./run.sh

LocalAI returns:

rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:52431: connect: connection refused"

TimeoutError: timed out after thinking for 600 seconds

I managed to get everything installed on WSL2 Ubuntu 22.04. I had to manually install golang and cmake, and, for reasons I'm unsure of, the script hit an access-denied error when copying the .env template file.

While I have validated the LocalAI install with the two POST requests called out in the LocalAI project's README, and got a response back in approximately a minute, when trying AutoGPT it sets up the goals etc., then sits thinking for 600 seconds (10 minutes) and crashes with a timeout. I have tried installing AutoGPT directly on Windows with LocalAI running in WSL, with exactly the same results.

In the LocalAI window I do see the POST request being received, and when AutoGPT crashes, LocalAI appears to give an empty response (output of LocalAI at the bottom).

I realize this project is only the install script, but I was wondering if anyone had thoughts on how to troubleshoot this. I'm trying to work out whether AutoGPT needs to give LocalAI more time to respond, or whether it would never respond.
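
One way to narrow that down is to replay the endpoint Auto-GPT calls with a finite client-side timeout and see whether LocalAI ever answers. A minimal sketch against LocalAI's OpenAI-compatible API, using the port, path, and model name visible in the logs below:

    import requests

    # Probe the same chat-completions endpoint Auto-GPT uses.
    resp = requests.post(
        "http://localhost:8080/chat/completions",
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": "Reply with the word: pong"}],
        },
        timeout=300,  # seconds; raise this if the model is just slow on CPU
    )
    print(resp.status_code)
    print(resp.json())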

Output from LocalAI (wsl ubuntu)
:43PM DBG Request received: {"model":"gpt-3.5-turbo","file":"","response_format":"","language":"","prompt":null,"instruction":"","input":null,"stop":null,"messages":[{"role":"system","content":"You are testingAI, an AI designed to write the current date and time to a local text file named datetime.txt\nYour decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.\n\nGOALS:\n\n1. check the current date and time\n2. write the current date and time to a text file on the local disk\n3. save the text file as datetime.txt\n\nIt takes money to let you run. Your API budget is $20.000\n\nConstraints:\n1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\n2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\n3. No user assistance\n4. Exclusively use the commands listed in double quotes e.g. \"command name\"\n\nCommands:\n1. analyze_code: Analyze Code, args: \"code\": \"\u003cfull_code_string\u003e\"\n2. execute_python_file: Execute Python File, args: \"filename\": \"\u003cfilename\u003e\"\n3. append_to_file: Append to file, args: \"filename\": \"\u003cfilename\u003e\", \"text\": \"\u003ctext\u003e\"\n4. delete_file: Delete file, args: \"filename\": \"\u003cfilename\u003e\"\n5. list_files: List Files in Directory, args: \"directory\": \"\u003cdirectory\u003e\"\n6. read_file: Read file, args: \"filename\": \"\u003cfilename\u003e\"\n7. write_to_file: Write to file, args: \"filename\": \"\u003cfilename\u003e\", \"text\": \"\u003ctext\u003e\"\n8. google: Google Search, args: \"query\": \"\u003cquery\u003e\"\n9. improve_code: Get Improved Code, args: \"suggestions\": \"\u003clist_of_suggestions\u003e\", \"code\": \"\u003cfull_code_string\u003e\"\n10. send_tweet: Send Tweet, args: \"tweet_text\": \"\u003ctweet_text\u003e\"\n11. browse_website: Browse Website, args: \"url\": \"\u003curl\u003e\", \"question\": \"\u003cwhat_you_want_to_find_on_website\u003e\"\n12. write_tests: Write Tests, args: \"code\": \"\u003cfull_code_string\u003e\", \"focus\": \"\u003clist_of_focus_areas\u003e\"\n13. delete_agent: Delete GPT Agent, args: \"key\": \"\u003ckey\u003e\"\n14. get_hyperlinks: Get text summary, args: \"url\": \"\u003curl\u003e\"\n15. get_text_summary: Get text summary, args: \"url\": \"\u003curl\u003e\", \"question\": \"\u003cquestion\u003e\"\n16. list_agents: List GPT Agents, args: () -\u003e str\n17. message_agent: Message GPT Agent, args: \"key\": \"\u003ckey\u003e\", \"message\": \"\u003cmessage\u003e\"\n18. start_agent: Start GPT Agent, args: \"name\": \"\u003cname\u003e\", \"task\": \"\u003cshort_task_desc\u003e\", \"prompt\": \"\u003cprompt\u003e\"\n19. task_complete: Task Complete (Shutdown), args: \"reason\": \"\u003creason\u003e\"\n\nResources:\n1. Internet access for searches and information gathering.\n2. Long Term memory management.\n3. GPT-3.5 powered Agents for delegation of simple tasks.\n4. File output.\n\nPerformance Evaluation:\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n2. Constructively self-criticize your big-picture behavior constantly.\n3. Reflect on past decisions and strategies to refine your approach.\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.\n5. 
Write all code to a file.\n\nYou should only respond in JSON format as described below \nResponse Format: \n{\n \"thoughts\": {\n \"text\": \"thought\",\n \"reasoning\": \"reasoning\",\n \"plan\": \"- short bulleted\\n- list that conveys\\n- long-term plan\",\n \"criticism\": \"constructive self-criticism\",\n \"speak\": \"thoughts summary to say to user\"\n },\n \"command\": {\n \"name\": \"command name\",\n \"args\": {\n \"arg name\": \"value\"\n }\n }\n} \nEnsure the response can be parsed by Python json.loads"},{"role":"system","content":"The current time and date is Tue May 16 17:43:10 2023"},{"role":"system","content":"Your remaining API budget is $20.000\n\n"},{"role":"user","content":"Determine which next command to use, and respond using the format specified above:"}],"stream":false,"echo":false,"top_p":0,"top_k":0,"temperature":0,"max_tokens":2594,"n":0,"batch":0,"f16":false,"ignore_eos":false,"repeat_penalty":0,"n_keep":0,"mirostat_eta":0,"mirostat_tau":0,"mirostat":0,"seed":0} 5:43PM DBG Parameter Config: &{OpenAIRequest:{Model:gpt-3.5-turbo File: ResponseFormat: Language: Prompt:<nil> Instruction: Input:<nil> Stop:<nil> Messages:[] Stream:false Echo:false TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:2594 N:0 Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 Seed:0} Name: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:512 F16:false Threads:4 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Completion: Chat: Edit:} MirostatETA:0 MirostatTAU:0 Mirostat:0 PromptStrings:[] InputStrings:[] InputToken:[]} 5:43PM DBG Loading models greedly 5:53PM DBG Response: {"object":"chat.completion","model":"gpt-3.5-turbo","choices":[{"message":{"role":"assistant","content":"{ \"thoughts\": { }, \"command\": {} }"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}} [127.0.0.1]:36418 200 - POST /chat/completions

AutoGPT output (windows)
`Welcome back! Would you like me to return to being testingAI?
Asking user via keyboard...
Continue with the last settings?
Name: testingAI
Role: an AI designed to write the current date and time to a local text file named datetime.txt
Goals: ['check the current date and time', 'write the current date and time to a text file on the local disk', 'save the text file as datetime.txt']
API Budget: $20.0
Continue (y/n): y
testingAI has been created with the following details:
Name: testingAI
Role: an AI designed to write the current date and time to a local text file named datetime.txt
Goals:

  • check the current date and time
  • write the current date and time to a text file on the local disk
  • save the text file as datetime.txt
    Using memory of type: LocalCache
    Using Browser: chrome
    Traceback (most recent call last):
    File "C:\Python311\Lib\site-packages\urllib3\connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
    File "", line 3, in raise_from
    File "C:\Python311\Lib\site-packages\urllib3\connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
    ^^^^^^^^^^^^^^^^^^
    File "C:\Python311\Lib\http\client.py", line 1375, in getresponse
    response.begin()
    File "C:\Python311\Lib\http\client.py", line 318, in begin
    version, status, reason = self._read_status()
    ^^^^^^^^^^^^^^^^^^^
    File "C:\Python311\Lib\http\client.py", line 279, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Python311\Lib\socket.py", line 706, in readinto
    return self._sock.recv_into(b)
    ^^^^^^^^^^^^^^^^^^^^^^^
    TimeoutError: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Python311\Lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\util\retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\packages\six.py", line 770, in reraise
raise value
File "C:\Python311\Lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\connectionpool.py", line 451, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "C:\Python311\Lib\site-packages\urllib3\connectionpool.py", line 340, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='localhost', port=8080): Read timed out. (read timeout=600)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Python311\Lib\site-packages\openai\api_requestor.py", line 516, in request_raw
result = _thread_context.session.request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\requests\sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\requests\sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\requests\adapters.py", line 532, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPConnectionPool(host='localhost', port=8080): Read timed out. (read timeout=600)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in run_code
File "c:\Users\a9dqmzz\Auto-GPT\autogpt_main
.py", line 5, in
autogpt.cli.main()
File "C:\Python311\Lib\site-packages\click\core.py", line 1130, in call
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\click\core.py", line 1635, in invoke
rv = super().invoke(ctx)
^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\click\decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\a9dqmzz\Auto-GPT\autogpt\cli.py", line 90, in main
run_auto_gpt(
File "c:\Users\a9dqmzz\Auto-GPT\autogpt\main.py", line 186, in run_auto_gpt
agent.start_interaction_loop()
File "c:\Users\a9dqmzz\Auto-GPT\autogpt\agent\agent.py", line 113, in start_interaction_loop
assistant_reply = chat_with_ai(
^^^^^^^^^^^^^
File "c:\Users\a9dqmzz\Auto-GPT\autogpt\llm\chat.py", line 244, in chat_with_ai
assistant_reply = create_chat_completion(
^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\a9dqmzz\Auto-GPT\autogpt\llm\llm_utils.py", line 166, in create_chat_completion
response = api_manager.create_chat_completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\a9dqmzz\Auto-GPT\autogpt\llm\api_manager.py", line 55, in create_chat_completion
response = openai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\openai\api_requestor.py", line 216, in request
result = self.request_raw(
^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\openai\api_requestor.py", line 526, in request_raw
raise error.Timeout("Request timed out: {}".format(e)) from e
openai.error.Timeout: Request timed out: HTTPConnectionPool(host='localhost', port=8080): Read timed out. (read timeout=600)
Press any key to continue . . .
`
