eval's Issues

Unable to use GPT-4 model with EVAL

Hello,

I am trying to use the GPT-4 model with EVAL, but I am encountering an issue. Although my OpenAI account has access to GPT-4 and the API key works correctly in other applications, I am unable to get EVAL to work with GPT-4.

Here's a summary of the steps I have taken:

I have cloned the EVAL repository and set up the project with Docker.
I have added my OpenAI API key to the .env file.
I have confirmed that the MODEL_NAME variable in the env.py file is set to gpt-4 by default.
Despite these steps, EVAL keeps giving me an error message stating that the GPT-4 model does not exist. I have tried using a different API key, updating the .env file, and restarting the Docker container, but the issue persists.

Could you please provide some guidance on how to resolve this issue or let me know if there's a problem with the EVAL project itself? I would appreciate any help you can offer.

Thank you!

(Screenshot attached: 2023-04-25 18:51:42)

code type included in code

Simple issue: when it generates an HTML file, for example, rather than starting the file with `<!DOCTYPE html>`, it starts with:

    ```html
    <!DOCTYPE html>

Similarly, a generated Python file starts with the word "python".
(Screenshot attached: 2023-04-05 20:20:27)
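A possible workaround is a small post-processing step on the model's output before writing the file. This is only a sketch; EVAL's file-writing code is not shown in this issue, and `strip_code_fence` is a hypothetical helper:

```python
FENCE = "`" * 3  # a literal triple backtick, built indirectly so this example renders cleanly

def strip_code_fence(text: str) -> str:
    # Drop an opening fence line (e.g. a fence followed by "html" or "python")
    # and a trailing bare fence line, if present; pass everything else through.
    lines = text.splitlines()
    if lines and lines[0].startswith(FENCE):
        lines = lines[1:]
    if lines and lines[-1].strip() == FENCE:
        lines = lines[:-1]
    return "\n".join(lines)

sample = FENCE + "html\n<!DOCTYPE html>\n<html></html>\n" + FENCE
print(strip_code_fence(sample))  # the HTML content without the fence lines
```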

Readme: Playground instructions

For many, this request will seem trivial, but is it possible to have some guidance on how to integrate EVAL with VSCode, as shown in the example movie files? That is: the terminal prompt, code added to the open document, and the Docker log files visible. Even a link to an article would be most appreciated. Does it require an extension?

Risks ⚠️

Hello @adldotori and others!
I appreciate the effort and innovation behind this project. However, I'd like to express my concerns regarding the potential risks associated with allowing AI agents to run arbitrary code. This feature could lead to unintended consequences, such as security vulnerabilities, unauthorized access to sensitive data, or the creation of harmful applications. I encourage the developers to consider the ethical implications and potential dangers of this functionality, and to adopt best practices in AI safety and security to protect users and the wider community. Thank you for your attention to this matter.

How do I remove or shut down an app created by EVAL?

I tried a standard "To Do" prompt and it successfully created that app. However, I can't figure out a way to shut it down. If I kill the Docker environment and restart it, the To Do app appears to restart as well.
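Until there is a built-in way to tear generated apps down, one untested approach is to manage them at the Docker level. This assumes the generated app runs as a process inside the eval container; the container name and the `<PID>` placeholder here are assumptions:

```shell
docker ps                          # list running containers
docker exec -it eval ps aux        # inspect processes inside the "eval" container
docker exec -it eval kill <PID>    # stop the generated app's process by its PID
docker-compose down                # or tear down the whole stack instead of restarting it
```

If the app comes back after a restart, that suggests EVAL relaunches it on startup rather than Docker persisting it.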

Failed both services on MacBook M1 Pro

Description
EVAL fails to execute in both services (eval and eval.gpu) on a MacBook M1 Pro running Ventura 13.3.

Screenshots and Tracebacks
I run: docker-compose up --build eval and get this:


Traceback:

Attaching to eval
eval  | Traceback (most recent call last):
eval  |   File "<string>", line 1, in <module>
eval  |   File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
eval  |     return _bootstrap._gcd_import(name[level:], package, level)
eval  |   File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
eval  |   File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
eval  |   File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
eval  |   File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
eval  |   File "<frozen importlib._bootstrap_external>", line 883, in exec_module
eval  |   File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
eval  |   File "/app/api/main.py", line 12, in <module>
eval  |     from api.container import agent_manager, file_handler, reload_dirs, templates, uploader
eval  |   File "/app/api/container.py", line 31, in <module>
eval  |     import torch
eval  | ModuleNotFoundError: No module named 'torch'
eval exited with code 1

I run docker-compose up --build eval.gpu and get this:


Traceback:

#0 16.26   running build_ext
#0 16.26   building 'psutil._psutil_linux' extension
#0 16.26   creating build/temp.linux-aarch64-cpython-310
#0 16.26   creating build/temp.linux-aarch64-cpython-310/psutil
#0 16.26   aarch64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=594 -DPy_LIMITED_API=0x03060000 -DPSUTIL_LINUX=1 -I/tmp/tmpd9wx01j8/.venv/include -I/usr/include/python3.10 -c psutil/_psutil_common.c -o build/temp.linux-aarch64-cpython-310/psutil/_psutil_common.o
#0 16.26   psutil/_psutil_common.c:9:10: fatal error: Python.h: No such file or directory
#0 16.26       9 | #include <Python.h>
#0 16.26         |          ^~~~~~~~~~
#0 16.26   compilation terminated.
#0 16.26   error: command '/usr/bin/aarch64-linux-gnu-gcc' failed with exit code 1
#0 16.26
#0 16.26
#0 16.26   at ~/.local/share/pypoetry/venv/lib/python3.8/site-packages/poetry/installation/chef.py:152 in _prepare
#0 16.26       148│
#0 16.26       149│                 error = ChefBuildError("\n\n".join(message_parts))
#0 16.26       150│
#0 16.26       151│             if error is not None:
#0 16.26     → 152│                 raise error from None
#0 16.26       153│
#0 16.26       154│             return path
#0 16.26       155│
#0 16.26       156│     def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
#0 16.26
#0 16.26 Note: This error originates from the build backend, and is likely not a problem with poetry but with psutil (5.9.4) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "psutil (==5.9.4)"'.
#0 16.26

Desktop:

  • OS: macOS Ventura 13.3

Can someone help with this issue? Thanks!
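The `fatal error: Python.h: No such file or directory` means the CPython development headers are missing inside the image when psutil compiles its C extension on linux/aarch64. A likely (untested) fix is to install the headers and a compiler toolchain before the `poetry install` step, for example:

```dockerfile
# Hypothetical addition to the Dockerfile, before "poetry install":
RUN apt-get update && \
    apt-get install -y python3.10-dev build-essential
```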

Error "The model: `gpt-4` does not exist"

The following call
http POST http://localhost:8000/command key=1 files:=\[\] query="Hey there!"

produces this:

HTTP/1.1 200 OK
content-length: 59
content-type: application/json
date: Wed, 05 Apr 2023 16:11:59 GMT
server: uvicorn

{
    "files": [],
    "response": "The model: `gpt-4` does not exist"
}

Am I missing some config, maybe?

Parse Error

Describe the bug
I'm constantly getting the following Parse Error:

eval | INFO: 172.21.0.1:43084 - "GET /api/execute/async/acbde5bb-641c-41c9-b704-d73a532af354 HTTP/1.1" 200 OK
eval | INFO: 172.21.0.1:43084 - "GET /api/execute/async/acbde5bb-641c-41c9-b704-d73a532af354 HTTP/1.1" 200 OK
eval | INFO: 172.21.0.1:43084 - "GET /api/execute/async/acbde5bb-641c-41c9-b704-d73a532af354 HTTP/1.1" 200 OK
eval | [2023-04-28 12:15:34,546: ERROR/ForkPoolWorker-3] Chain Error: parse error
eval | [2023-04-28 12:15:34,550: ERROR/ForkPoolWorker-3] Task task_execute[acbde5bb-641c-41c9-b704-d73a532af354] raised unexpected: Exception('parse error')
eval | Traceback (most recent call last):
eval | File "/app/.venv/lib/python3.10/site-packages/celery/app/trace.py", line 451, in trace_task
eval | R = retval = fun(*args, **kwargs)
eval | File "/app/.venv/lib/python3.10/site-packages/celery/app/trace.py", line 734, in __protected_call__
eval | return self.run(*args, **kwargs)
eval | File "/app/api/worker.py", line 22, in task_execute
eval | response = executor({"input": prompt})
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
eval | raise e
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
eval | outputs = self._call(inputs)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 822, in _call
eval | next_step_output = self._take_next_step(
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 702, in _take_next_step
eval | output = self.agent.plan(intermediate_steps, **inputs)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 413, in plan
eval | action = self._get_next_action(full_inputs)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 374, in _get_next_action
eval | full_output = self.llm_chain.predict(**full_inputs)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 151, in predict
eval | return self(kwargs)[self.output_key]
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
eval | raise e
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
eval | outputs = self._call(inputs)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 57, in _call
eval | return self.apply([inputs])[0]
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 118, in apply
eval | response = self.generate(input_list)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 62, in generate
eval | return self.llm.generate_prompt(prompts, stop)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 83, in generate_prompt
eval | self.callback_manager.on_llm_end(output, verbose=self.verbose)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/callbacks/base.py", line 159, in on_llm_end
eval | handler.on_llm_end(response)
eval | File "/app/core/agents/callback.py", line 27, in on_llm_end
eval | parsed = self.parser.parse_all(text)
eval | File "/app/core/agents/parser.py", line 15, in parse_all
eval | raise Exception("parse error")
eval | Exception: parse error

Desktop (please complete the following information):

  • OS: macOS (Intel chipset)
  • Browser: Chrome
  • Version: most recent as of the date of this issue
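Part of what makes this hard to debug is that core/agents/parser.py raises a bare `Exception("parse error")` without saying which part of the LLM output failed to parse. As a sketch only (the real parser is not shown in this traceback, so the regex and names below are assumptions), a more forgiving `parse_all` could extract the Action/Input pair and report the offending text:

```python
import re

# Hypothetical pattern for "Action: <tool> / Input: <args>" blocks in LLM output.
ACTION_RE = re.compile(r"Action:\s*(?P<tool>.+?)\s*\nInput:\s*(?P<input>.*)", re.DOTALL)

def parse_all(text: str):
    match = ACTION_RE.search(text)
    if match is None:
        # Include a slice of the unparsed text so "parse error" becomes debuggable.
        raise ValueError(f"parse error; could not find Action/Input in: {text[:200]!r}")
    return match.group("tool"), match.group("input").strip()

print(parse_all("Thought: list files\nAction: Terminal\nInput: ls -la"))  # ('Terminal', 'ls -la')
```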

openai.error.InvalidRequestError: Invalid URL (POST /v1/chat/completions)

[2023-04-11 21:45:59,565: INFO/Process-1] Task task_execute[1ea47a5e-9bd1-47fc-a845-102dfd26e997] received
eval | [2023-04-11 21:46:00,180: INFO/ForkPoolWorker-7] Entering new chain.
eval | [2023-04-11 21:46:00,180: INFO/ForkPoolWorker-7] Prompted Text: say hello
eval |
eval | [2023-04-11 21:46:00,523: INFO/ForkPoolWorker-7] error_code=None error_message='Invalid URL (POST /v1/chat/completions)' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
eval | [2023-04-11 21:46:00,524: ERROR/ForkPoolWorker-7] Chain Error: Invalid URL (POST /v1/chat/completions)
eval | [2023-04-11 21:46:00,529: ERROR/ForkPoolWorker-7] Task task_execute[1ea47a5e-9bd1-47fc-a845-102dfd26e997] raised unexpected: InvalidRequestError(message='Invalid URL (POST /v1/chat/completions)', param=None, code=None, http_status=404, request_id=None)
eval | Traceback (most recent call last):
eval | File "/app/.venv/lib/python3.10/site-packages/celery/app/trace.py", line 451, in trace_task
eval | R = retval = fun(*args, **kwargs)
eval | File "/app/.venv/lib/python3.10/site-packages/celery/app/trace.py", line 734, in __protected_call__
eval | return self.run(*args, **kwargs)
eval | File "/app/api/worker.py", line 22, in task_execute
eval | response = executor({"input": prompt})
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
eval | raise e
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
eval | outputs = self._call(inputs)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 505, in _call
eval | next_step_output = self._take_next_step(
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 409, in _take_next_step
eval | output = self.agent.plan(intermediate_steps, **inputs)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 105, in plan
eval | action = self._get_next_action(full_inputs)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 66, in _get_next_action
eval | full_output = self.llm_chain.predict(**full_inputs)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 151, in predict
eval | return self(kwargs)[self.output_key]
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
eval | raise e
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
eval | outputs = self._call(inputs)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 57, in _call
eval | return self.apply([inputs])[0]
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 118, in apply
eval | response = self.generate(input_list)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 62, in generate
eval | return self.llm.generate_prompt(prompts, stop)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 72, in generate_prompt
eval | raise e
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 69, in generate_prompt
eval | output = self.generate(prompt_messages, stop=stop)
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 50, in generate
eval | results = [self._generate(m, stop=stop) for m in messages]
eval | File "/app/.venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 50, in <listcomp>
eval | results = [self._generate(m, stop=stop) for m in messages]
eval | File "/app/core/agents/llm.py", line 283, in _generate
eval | response = self.completion_with_retry(messages=message_dicts, **params)
eval | File "/app/core/agents/llm.py", line 252, in completion_with_retry
eval | return _completion_with_retry(**kwargs)
eval | File "/app/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
eval | return self(f, *args, **kw)
eval | File "/app/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
eval | do = self.iter(retry_state=retry_state)
eval | File "/app/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
eval | return fut.result()
eval | File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
eval | return self.__get_result()
eval | File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
eval | raise self._exception
eval | File "/app/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
eval | result = fn(*args, **kwargs)
eval | File "/app/core/agents/llm.py", line 248, in _completion_with_retry
eval | response = self.client.create(**kwargs)
eval | File "/app/.venv/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
eval | return super().create(*args, **kwargs)
eval | File "/app/.venv/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
eval | response, _, api_key = requestor.request(
eval | File "/app/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
eval | resp, got_stream = self._interpret_response(result, stream)
eval | File "/app/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
eval | self._interpret_response_line(
eval | File "/app/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
eval | raise self.handle_error_response(
eval | openai.error.InvalidRequestError: Invalid URL (POST /v1/chat/completions)
eval | INFO: 172.18.0.1:56778 - "GET /api/execute/async/1ea47a5e-9bd1-47fc-a845-102dfd26e997 HTTP/1.1" 200 OK
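A 404 phrased as `Invalid URL (POST /v1/chat/completions)` means the server did not recognize the path the client built, which in the 0.x openai library is the API base plus the endpoint path. One thing worth checking (an assumption, not a confirmed diagnosis of this report) is whether an `OPENAI_API_BASE` or proxy override lost the `/v1` suffix:

```python
# chat_url is a hypothetical helper mirroring how the 0.x openai client joins its
# base URL and endpoint path; it is not part of the openai package itself.
def chat_url(api_base: str) -> str:
    return api_base.rstrip("/") + "/chat/completions"

print(chat_url("https://api.openai.com/v1"))  # the default base ends in /v1
print(chat_url("https://api.openai.com"))     # a base missing /v1 yields a path the API rejects
```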

Terminal unterminated quoted string

I've been receiving this error repeatedly with my recent queries:

eval | Action: Terminal
eval | Input: find Auto-GPT-master -type f -name "*.py"
eval | Realtime Terminal Output:
eval | stderr /bin/sh: 1: Syntax error: Unterminated quoted string
eval | Observation: /bin/sh: 1: Syntax error: Unterminated quoted string

The agent then simply repeats this step over and over, because it is the correct step: that is a valid bash command. Yet it triggers a syntax error every time. My guess is that the closing quote is being stripped somewhere before the command reaches the shell.
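One way to sidestep shell quoting entirely is to split the command into an argument list and run it without a shell. This is a sketch of the idea, not EVAL's actual Terminal tool:

```python
import shlex
import subprocess  # used only in the commented-out run() variant below

cmd = 'find Auto-GPT-master -type f -name "*.py"'

# shlex.split honors the quotes, so the glob survives as a single argument:
args = shlex.split(cmd)
print(args)  # ['find', 'Auto-GPT-master', '-type', 'f', '-name', '*.py']

# Running the list without shell=True means /bin/sh never re-parses the string,
# so there is no closing quote to lose:
# subprocess.run(args, capture_output=True, text=True)
```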

Running on M1 ARM Macs

Great project! 💥

Do you think this should run on an M1 MBP?
I am running into this error in docker compose up -d:

 => ERROR [eval.gpu 11/12] RUN poetry install --with tools,gpu                                                                                                                      18.9s
------
 > [eval.gpu 11/12] RUN poetry install --with tools,gpu:
#0 0.627 Installing dependencies from lock file
#0 0.907 
#0 0.907 Package operations: 75 installs, 0 updates, 0 removals
#0 0.907 
#0 0.908   • Installing packaging (23.0)
#0 1.148   • Installing frozenlist (1.3.3)
#0 1.148   • Installing idna (3.4)
#0 1.150   • Installing marshmallow (3.19.0)
#0 1.150   • Installing multidict (6.0.4)
#0 1.151   • Installing mypy-extensions (1.0.0)
#0 1.153   • Installing six (1.16.0)
#0 1.154   • Installing typing-extensions (4.5.0)
#0 1.316   • Installing aiosignal (1.3.1)
#0 1.316   • Installing async-timeout (4.0.2)
#0 1.317   • Installing attrs (22.2.0)
#0 1.317   • Installing certifi (2022.12.7)
#0 1.318   • Installing charset-normalizer (3.1.0)
#0 1.321   • Installing greenlet (2.0.2)
#0 1.322   • Installing jmespath (1.0.1)
#0 1.323   • Installing marshmallow-enum (1.5.1)
#0 1.324   • Installing markupsafe (2.1.2)
#0 1.414   • Installing mpmath (1.3.0)
#0 1.426   • Installing python-dateutil (2.8.2)
#0 1.442   • Installing sniffio (1.3.0)
#0 1.482   • Installing typing-inspect (0.8.0)
#0 1.506   • Installing urllib3 (1.26.15)
#0 1.512   • Installing yarl (1.8.2)
#0 1.688   • Installing aiohttp (3.8.4)
#0 1.689   • Installing anyio (3.6.2)
#0 1.690   • Installing dataclasses-json (0.5.7)
#0 1.691   • Installing botocore (1.29.104)
#0 1.692   • Installing networkx (3.0)
#0 1.692   • Installing filelock (3.10.7)
#0 1.693   • Installing jinja2 (3.1.2)
#0 1.694   • Installing numpy (1.24.2)
#0 1.695   • Installing pydantic (1.10.7)
#0 1.771   • Installing pytz (2023.3)
#0 1.783   • Installing pyyaml (6.0)
#0 1.789   • Installing regex (2023.3.23)
#0 2.143   • Installing requests (2.28.2)
#0 2.170   • Installing soupsieve (2.4)
#0 2.281   • Installing sqlalchemy (1.4.47)
#0 2.282   • Installing sympy (1.11.1)
#0 2.301   • Installing tenacity (8.2.2)
#0 2.328   • Installing tqdm (4.65.0)
#0 2.385   • Installing zipp (3.15.0)
#0 5.950   • Installing beautifulsoup4 (4.12.0)
#0 5.950   • Installing click (8.1.3)
#0 5.951   • Installing h11 (0.14.0)
#0 5.952   • Installing huggingface-hub (0.13.3)
#0 5.953   • Installing langchain (0.0.115)
#0 5.954   • Installing openai (0.27.2)
#0 5.954   • Installing importlib-metadata (6.1.0)
#0 5.956   • Installing pathspec (0.11.1)
#0 5.956   • Installing pandas (1.5.3)
#0 6.047   • Installing pillow (9.5.0)
#0 6.058   • Installing platformdirs (3.2.0)
#0 6.093   • Installing psutil (5.9.4)
#0 6.115   • Installing s3transfer (0.6.0)
#0 6.169   • Installing starlette (0.26.1)
#0 6.173   • Installing tiktoken (0.3.3)
#0 6.234   • Installing tokenizers (0.13.2)
#0 6.335   • Installing tomli (2.0.1)
#0 6.376   • Installing torch (2.0.0)
#0 8.677 
#0 8.677   ChefBuildError
#0 8.677 
#0 8.677   Backend subprocess exited when trying to invoke build_wheel
#0 8.677   
#0 8.677   running bdist_wheel
#0 8.677   running build
#0 8.677   running build_py
#0 8.677   creating build
#0 8.677   creating build/lib.linux-aarch64-cpython-310
#0 8.677   creating build/lib.linux-aarch64-cpython-310/psutil
#0 8.677   copying psutil/_pswindows.py -> build/lib.linux-aarch64-cpython-310/psutil
#0 8.677   copying psutil/_psaix.py -> build/lib.linux-aarch64-cpython-310/psutil
#0 8.677   copying psutil/_psbsd.py -> build/lib.linux-aarch64-cpython-310/psutil
#0 8.677   copying psutil/_common.py -> build/lib.linux-aarch64-cpython-310/psutil
#0 8.677   copying psutil/_compat.py -> build/lib.linux-aarch64-cpython-310/psutil
#0 8.677   copying psutil/_psosx.py -> build/lib.linux-aarch64-cpython-310/psutil
#0 8.677   copying psutil/_psposix.py -> build/lib.linux-aarch64-cpython-310/psutil
#0 8.677   copying psutil/_pssunos.py -> build/lib.linux-aarch64-cpython-310/psutil
#0 8.677   copying psutil/__init__.py -> build/lib.linux-aarch64-cpython-310/psutil
#0 8.677   copying psutil/_pslinux.py -> build/lib.linux-aarch64-cpython-310/psutil
#0 8.677   creating build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_connections.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_bsd.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/__main__.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/runner.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_memleaks.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_testutils.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_process.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_posix.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_windows.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_unicode.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_osx.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_aix.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_misc.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_contracts.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_sunos.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_linux.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/__init__.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   copying psutil/tests/test_system.py -> build/lib.linux-aarch64-cpython-310/psutil/tests
#0 8.677   running build_ext
#0 8.677   building 'psutil._psutil_linux' extension
#0 8.677   creating build/temp.linux-aarch64-cpython-310
#0 8.677   creating build/temp.linux-aarch64-cpython-310/psutil
#0 8.677   aarch64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=594 -DPy_LIMITED_API=0x03060000 -DPSUTIL_LINUX=1 -I/tmp/tmpz6bfv_zl/.venv/include -I/usr/include/python3.10 -c psutil/_psutil_common.c -o build/temp.linux-aarch64-cpython-310/psutil/_psutil_common.o
#0 8.677   psutil/_psutil_common.c:9:10: fatal error: Python.h: No such file or directory
#0 8.677       9 | #include <Python.h>
#0 8.677         |          ^~~~~~~~~~
#0 8.677   compilation terminated.
#0 8.677   error: command '/usr/bin/aarch64-linux-gnu-gcc' failed with exit code 1
#0 8.677   
#0 8.677 
#0 8.677   at ~/.local/share/pypoetry/venv/lib/python3.8/site-packages/poetry/installation/chef.py:152 in _prepare
#0 8.684       148│ 
#0 8.685       149│                 error = ChefBuildError("\n\n".join(message_parts))
#0 8.685       150│ 
#0 8.685       151│             if error is not None:
#0 8.685     → 152│                 raise error from None
#0 8.685       153│ 
#0 8.685       154│             return path
#0 8.685       155│ 
#0 8.685       156│     def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
#0 8.686 
#0 8.686 Note: This error originates from the build backend, and is likely not a problem with poetry but with psutil (5.9.4) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "psutil (==5.9.4)"'.
#0 8.686 
------
failed to solve: executor failed running [/bin/sh -c poetry install --with tools,gpu]: exit code: 1

torch module

Attaching to eval
eval | Traceback (most recent call last):
eval | File "<string>", line 1, in <module>
eval | File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
eval | return _bootstrap._gcd_import(name[level:], package, level)
eval | File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
eval | File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
eval | File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
eval | File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
eval | File "<frozen importlib._bootstrap_external>", line 883, in exec_module
eval | File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
eval | File "/app/api/main.py", line 12, in <module>
eval | from api.container import agent_manager, file_handler, reload_dirs, templates, uploader
eval | File "/app/api/container.py", line 31, in <module>
eval | import torch
eval | ModuleNotFoundError: No module named 'torch'
eval exited with code 1

FileNotFoundError: [Errno 2] No such file or directory: 'file://d3_app/index.html'

I've been receiving errors like this quite frequently. Sometimes the structure of the error is
No such file or directory: 'file/d3_app/index.html'

I am also seeing EVAL frequently hallucinate creating files: I can see the code for the file in the server console, but the file is never actually created.

In general I think there may be something wrong with the file creation and reference process.

eval | File "/app/core/upload/static.py", line 27, in upload
eval | shutil.copy(filepath, file_path)
eval | File "/usr/local/lib/python3.10/shutil.py", line 417, in copy
eval | copyfile(src, dst, follow_symlinks=follow_symlinks)
eval | File "/usr/local/lib/python3.10/shutil.py", line 254, in copyfile
eval | with open(src, 'rb') as fsrc:
eval | FileNotFoundError: [Errno 2] No such file or directory: 'file://d3_app/index.html'

And

eval | File "/app/core/upload/static.py", line 27, in upload
eval | shutil.copy(filepath, file_path)
eval | File "/usr/local/lib/python3.10/shutil.py", line 417, in copy
eval | copyfile(src, dst, follow_symlinks=follow_symlinks)
eval | File "/usr/local/lib/python3.10/shutil.py", line 254, in copyfile
eval | with open(src, 'rb') as fsrc:
eval | FileNotFoundError: [Errno 2] No such file or directory: 'file/webapp.html'
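A defensive normalization before the `shutil.copy` call might avoid the crash when the agent emits a URL-style path. This is a sketch with assumed names, not the actual core/upload/static.py code:

```python
from urllib.parse import urlparse

def to_local_path(filepath: str) -> str:
    # The agent sometimes emits file URLs ("file://d3_app/index.html").
    # Strip the scheme so shutil.copy receives a plain relative path.
    parsed = urlparse(filepath)
    if parsed.scheme == "file":
        # "file://d3_app/index.html" parses with netloc="d3_app", path="/index.html"
        return (parsed.netloc + parsed.path).lstrip("/")
    return filepath

print(to_local_path("file://d3_app/index.html"))  # prints d3_app/index.html
```

This would not fix the scheme-less `file/webapp.html` variant, which looks like a different hallucination of the path itself.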

The model 'gpt-4' does not exist

Using eval without gpu and submitting any command from the terminal:

curl -X POST -H "Content-Type: application/json" -d '{"key": "sessionid", "files": ["https://example.com/image.png"], "query": "Hi there!"}' http://localhost:8000/command

Response:

{"response":"The model: gpt-4 does not exist","files":[]}

For those without access to gpt-4, can this be modified to use another model in the interim?
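In principle yes: an earlier issue on this page notes that MODEL_NAME in env.py defaults to gpt-4, so overriding it in .env (and rebuilding the container) should switch models. A sketch of a safer default, assuming env.py reads the variable via os.environ (gpt-3.5-turbo is an example choice, not EVAL's documented fallback):

```python
import os

# Fall back to a model every API key can reach when no override is set.
MODEL_NAME = os.environ.get("MODEL_NAME", "gpt-3.5-turbo")
print(MODEL_NAME)
```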

can't install EVAL

command: docker-compose up --build eval
error

13.95 Setting up polkitd (122-3) ...
13.96 Creating group 'polkitd' with GID 996.
13.96 Creating user 'polkitd' (polkit) with UID 996 and GID 996.
13.99 invoke-rc.d: could not determine current runlevel
13.99 invoke-rc.d: policy-rc.d denied execution of reload.
14.17 start-stop-daemon: unable to stat /usr/libexec/polkitd (No such file or directory)
14.17 Setting up packagekit (1.2.6-5) ...
14.18 invoke-rc.d: could not determine current runlevel
14.19 invoke-rc.d: policy-rc.d denied execution of force-reload.
14.19 Failed to open connection to "system" message bus: Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory
14.40 Created symlink /etc/systemd/user/sockets.target.wants/pk-debconf-helper.socket → /usr/lib/systemd/user/pk-debconf-helper.socket.
14.40 Setting up packagekit-tools (1.2.6-5) ...
14.40 Setting up software-properties-common (0.99.30-4) ...
14.57 Processing triggers for dbus (1.14.8-2~deb12u1) ...
14.88 Traceback (most recent call last):
14.88   File "/usr/bin/add-apt-repository", line 362, in <module>
14.88     sys.exit(0 if addaptrepo.main() else 1)
14.88                   ^^^^^^^^^^^^^^^^^
14.88   File "/usr/bin/add-apt-repository", line 345, in main
14.88     shortcut = handler(source, **shortcut_params)
14.88                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
14.88   File "/usr/lib/python3/dist-packages/softwareproperties/shortcuts.py", line 40, in shortcut_handler
14.88     return handler(shortcut, **kwargs)
14.88            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
14.88   File "/usr/lib/python3/dist-packages/softwareproperties/ppa.py", line 86, in __init__
14.88     if self.lpppa.publish_debug_symbols:
14.88        ^^^^^^^^^^
14.88   File "/usr/lib/python3/dist-packages/softwareproperties/ppa.py", line 126, in lpppa
14.88     self._lpppa = self.lpteam.getPPAByName(name=self.ppaname)
14.88                   ^^^^^^^^^^^
14.88   File "/usr/lib/python3/dist-packages/softwareproperties/ppa.py", line 113, in lpteam
14.88     self._lpteam = self.lp.people(self.teamname)
14.88                    ^^^^^^^^^^^^^^
14.88 AttributeError: 'NoneType' object has no attribute 'people'
------
failed to solve: process "/bin/sh -c apt-get update &&   apt-get install -y software-properties-common &&   add-apt-repository ppa:deadsnakes/ppa &&   apt-get install -y python3.10 python3-pip curl &&   curl -sSL https://install.python-poetry.org | python3 - &&   apt-get install -y nodejs npm" did not complete successfully: exit code: 1

show SSLError in docker

Describe the bug

When I run docker-compose up --build eval, write a prompt, and send the request, the terminal shows this error:

eval        | urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/engines/gpt-3.5-turbo (Caused by SSLError(SSLError(1, '[SSL] unknown error (_ssl.c:1007)')))

I changed the Dockerfile to:

RUN \
  apt-get update && \
  apt-get install -y software-properties-common && \
  # add-apt-repository ppa:deadsnakes/ppa && \
  apt-get install -y python3.10 python3-pip curl && \
  curl -sSL https://install.python-poetry.org | python3 - && \
  apt-get install -y nodejs npm

Could this change be the reason?

To Reproduce
Steps to reproduce the behavior:

  1. run docker-compose up --build eval
  2. send message
  3. show error

Expected behavior
The message should be received successfully.

Desktop (please complete the following information):

  • OS: MacOS
  • Browser: Chrome

404 Not Found

With the latest version, I've started to receive the following error with each request:

eval | INFO: Waiting for application startup.
eval | INFO: Application startup complete.
eval | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
eval | INFO: 172.18.0.1:46550 - "POST /command HTTP/1.1" 404 Not Found
eval | INFO: 172.18.0.1:56596 - "POST /command HTTP/1.1" 404 Not Found
eval | INFO: 172.18.0.1:54216 - "POST /command HTTP/1.1" 404 Not Found
eval | INFO: 172.18.0.1:42932 - "POST /command HTTP/1.1" 404 Not Found
eval | INFO: 172.18.0.1:42136 - "POST /command HTTP/1.1" 404 Not Found

On the client side, I'm simply getting the message:
{"detail":"Not Found"}

Sample prompt:
curl -X POST -H "Content-Type: application/json" -d '{"key": "sessionid", "files": [], "query": "Please develop and serve a simple web TODO app. The user can list all TODO items and add, delete each TODO item. I want it to have neuromorphism-style. The ports you can use are 4000 and 7000. Cannot use PORT variable in environment."}' http://localhost:8000/command
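Elsewhere in this tracker the server answers 200 on POST /api/execute, so the /command route may simply have been renamed in the latest version. A probe sketch that tries both paths (the endpoint names, port, and the minimal "ping" payload are assumptions taken from the logs, not documented API):

```shell
# Probe both the old path and the one seen answering 200 in other logs.
# Prints "<path> -> 000" when the server is unreachable.
BASE=http://localhost:8000
for path in /command /api/execute; do
  code=$(curl -s -o /dev/null -w '%{http_code}' -X POST \
    -H 'Content-Type: application/json' \
    -d '{"key": "sessionid", "files": [], "query": "ping"}' \
    "$BASE$path")
  echo "$path -> ${code:-000}"
done
```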

Security Issue Report

  1. In api/main.py, stack trace information flows to this location and may be exposed to an external user.
  2. static/execute.js uses a cryptographically insecure random number generated by Math.random() in a security context.

Running without a GPU fails to start: it somehow requires dependencies that are GPU-only

  1. Running docker-compose up --build eval
  2. Getting the following error in the terminal:
    [+] Running 1/1 ✔ Container eval Recreated 0.1s Attaching to eval
    eval | Traceback (most recent call last):
    eval | File "<string>", line 1, in <module>
    eval | File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    eval | return _bootstrap._gcd_import(name[level:], package, level)
    eval | File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
    eval | File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
    eval | File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
    eval | File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
    eval | File "<frozen importlib._bootstrap_external>", line 883, in exec_module
    eval | File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
    eval | File "/app/api/main.py", line 46, in <module>
    eval | import torch
    eval | ModuleNotFoundError: No module named 'torch'
    eval exited with code 1

So torch is missing. Going into pyproject.toml file and seeing that torch is under optional GPU-only dependencies:
[tool.poetry.dependencies]
python = "^3.10"
fastapi = "^0.94.1"
langchain = "^0.0.115"
diffusers = "^0.14.0"
pydantic = "^1.10.6"
tenacity = "^8.2.2"
llama-index = "0.4.29"
python-dotenv = "^1.0.0"
pillow = "^9.4.0"
boto3 = "^1.26.94"
uvicorn = "^0.21.1"
python-ptrace = "^0.9.8"
jinja2 = "^3.1.2"
python-multipart = "^0.0.6"
[tool.poetry.group.gpu]
optional = true

[tool.poetry.group.gpu.dependencies]
torch = "^2.0.0"
transformers = {git = "https://github.com/huggingface/transformers.git", rev = "main"}
accelerate = "^0.17.1"
sentencepiece = "^0.1.97"
bitsandbytes = "^0.37.2"

Moving torch under [tool.poetry.dependencies] gets past that step, but the build then fails with transformers missing, which is also under the GPU dependencies.
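Rather than moving packages out of the optional group one by one, there are two ways around the crash that keep pyproject.toml intact. Both are sketches, not the project's documented fix: the first assumes Poetry 1.2+ with dependency-group support, and the import guard is a hypothetical patch for the import torch line in api/main.py:

```shell
# Option 1 (assumes poetry >= 1.2 is installed): install the optional group.
#   poetry install --with gpu
# Option 2: guard the import so CPU-only deployments start without torch.
python3 - <<'EOF'
try:
    import torch  # heavy, GPU-oriented; optional on CPU-only machines
except ModuleNotFoundError:
    torch = None
print("torch available:", torch is not None)
EOF
```

With the guard in place, any code path that actually needs torch would still have to check for None before using it.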

Chain Error: parse error

Getting occasional but regular parse errors. These errors terminate the entire chain, often in the first step. The prompt here asked it to read in an HTML file and find and fix any syntax errors.

eval | ... </script><script>
eval | ... // Handle user interactions (clicking on nodes)
eval | ... node.on("click", function(d) {
eval | ... if (d.type === "question") {
eval | ... // Fetch the answer and suggested questions from ChatGPT API
eval | ... // Replace the following with the actual API call
eval | ... const answer = "Sample answer";
eval | ... const suggestedQuestions = [
eval | ... "Suggested question 1",
eval | ... "Suggested question 2",
eval | ... "Suggested question 3"
eval | ... ];
eval | ...
eval | ...
eval | Thinking...
eval | Chain Error: parse error
eval | INFO: 127.0.0.1:59934 - "POST /api/execute HTTP/1.1" 200 OK

Bind for 0.0.0.0:8000 failed: port is already allocated

I keep getting the following error:

Error response from daemon: driver failed programming external connectivity on endpoint eval.gpu (260097999e8823c48153d3037a64c28d39973bc219dd9af055eb56dbdc280e24): Bind for 0.0.0.0:8000 failed: port is already allocated

However, I don't have anything running on port 8000; lsof -i :8000 does not return a result.

Also, I have set USE_GPU=False in the .env file, yet based on the above it still seems to be trying to use the GPU.
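lsof can miss the binding because Docker's userland proxy, or a stopped container, still owns it; and the eval.gpu endpoint name in the error suggests a leftover compose service from an earlier GPU run. A sketch that asks Docker directly and falls back gracefully when the CLI is absent:

```shell
# Ask Docker which container (running or stopped) publishes port 8000.
if command -v docker >/dev/null 2>&1; then
  docker ps -a --filter "publish=8000" --format '{{.Names}}\t{{.Status}}'
  # If a stale eval.gpu container shows up, clear leftover services with
  # something like:  docker compose down --remove-orphans
else
  echo "docker not installed"
fi
```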
