Comments (15)

ATheorell commented on June 3, 2024

Ok, the overwrite sounds quite serious. Can you look into this, @Styren?

ATheorell commented on June 3, 2024

Thanks for pushing this. Yes, this indeed sounds very problematic.

captivus commented on June 3, 2024

That is a relevant detail, as the issue isn't reproducible with the details you've provided, and the files that are created are generated by the model directly. We will keep this issue open for a time to see if others report a similar problem. But being unable to reproduce the issue, and with no direct cause evident in the codebase (as previously observed by @Styren), we have no immediate next actions.

captivus commented on June 3, 2024

Calm down. We can't reproduce your issue. We're keeping this open to collect additional, and hopefully more detailed, information. If you can provide more detail that helps us reproduce the issue, we're keen to see it. We also welcome details from anyone else who experiences the same problem.

oldmanjk commented on June 3, 2024

Isn't this a breaking bug? Is there any progress being made? Is there a way for me to work around this until it's fixed?
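
(Editor's note: a minimal workaround sketch, not an official fix. Until the overwrite is resolved, one option is to keep a timestamped backup of file_selection.toml before each gpte run so a hand-crafted selection can be restored afterwards. The .gpteng path follows the console output quoted later in this thread; the [project] path is a placeholder.)

    import shutil
    import time
    from pathlib import Path

    # Path to the selection file gpte reportedly overwrites; adjust to your project.
    selection = Path.home() / "[project]" / ".gpteng" / "file_selection.toml"

    if selection.exists():
        # Timestamped copy next to the original, e.g. file_selection.toml.1717400000.bak
        backup = selection.with_name(f"{selection.name}.{int(time.time())}.bak")
        shutil.copy2(selection, backup)
        print(f"Backed up {selection} -> {backup}")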

Styren commented on June 3, 2024

Strange, there's no code path that should generate a .gitignore file. @oldmanjk, what is the content of the .gitignore that overwrites the existing file? And what is the content of the generated file_selection.toml?

captivus commented on June 3, 2024

I'm unable to reproduce this, @oldmanjk. Can you please provide more details on how you are able to reproduce it? Does this happen consistently, or was it a one-off issue?

oldmanjk commented on June 3, 2024

The behavior I described happened consistently on WSL. I've since switched to native Linux and things are a little different, but still broken. My .gitignore is left untouched now, but file_selection.toml is still overwritten without asking.

$ gpte [project] -i
Running gpt-engineer in /home/j/[project] 

File list detected at /home/j/[project]/.gpteng/file_selection.toml. Edit or delete it if you want to select new files.
Please select and deselect (add # in front) files, save it, and close it to continue...
write: /home/j/[project]/.gpteng/file_selection.toml is not logged in

You have selected the following files:
[project]
[list of the first 17 files of the project]


INFO:httpx:HTTP Request: POST http://192.168.1.79:5000/v1/chat/completions "HTTP/1.1 200 OK"
Traceback (most recent call last):

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 113, in __iter__
    for part in self._httpcore_stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 367, in __iter__
    raise exc from None

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 363, in __iter__
    for part in self._stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 349, in __iter__
    raise exc

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 341, in __iter__
    for chunk in self._connection._receive_response_body(**kwargs):

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 210, in _receive_response_body
    event = self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 220, in _receive_event
    with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):

  File "/home/j/miniconda3/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc

httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)


The above exception was the direct cause of the following exception:


Traceback (most recent call last):

  File "/home/j/miniconda3/bin/gpte", line 8, in <module>
    sys.exit(app())
             ^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/applications/cli/main.py", line 191, in main
    files_dict = agent.improve(files_dict, prompt)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/applications/cli/cli_agent.py", line 132, in improve
    files_dict = self.improve_fn(
                 ^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/core/default/steps.py", line 172, in improve
    messages = ai.next(messages, step_name=curr_fn())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/core/ai.py", line 118, in next
    response = self.backoff_inference(messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/backoff/_sync.py", line 105, in retry
    ret = target(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/core/ai.py", line 162, in backoff_inference
    return self.llm.invoke(messages)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 173, in invoke
    self.generate_prompt(

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 571, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 434, in generate
    raise e

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 424, in generate
    self._generate_with_cache(

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 608, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 455, in _generate
    return generate_from_stream(stream_iter)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 62, in generate_from_stream
    for chunk in stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 419, in _stream
    for chunk in self.client.create(messages=message_dicts, **params):

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 46, in __iter__
    for item in self._iterator:

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 61, in __stream__
    for sse in iterator:

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 53, in _iter_events
    yield from self._decoder.iter(self.response.iter_lines())

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 287, in iter
    for line in iterator:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 861, in iter_lines
    for text in self.iter_text():

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 848, in iter_text
    for byte_content in self.iter_bytes():

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 829, in iter_bytes
    for raw_bytes in self.iter_raw():

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 883, in iter_raw
    for raw_stream_bytes in self.stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_client.py", line 126, in __iter__
    for chunk in self._stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 112, in __iter__
    with map_httpcore_exceptions():

  File "/home/j/miniconda3/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc

httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

I don't know why it crashes there either. It never asks me for input. I have a painstakingly crafted file_selection.toml which is pretty long (it addresses over 1,000 files). It gets overwritten, keeping those first 17 lines intact but commenting out everything after them. To be clear, gpte is unusable in this state (at least for me).
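
(Editor's note: a companion to the backup sketch earlier in the thread, again a hedged sketch rather than a gpt-engineer feature. After a run that comments out the selection, the newest backup can be restored mechanically; the [project] path is the same placeholder as before.)

    import shutil
    from pathlib import Path

    # Same placeholder project path as in the backup sketch above.
    gpteng = Path.home() / "[project]" / ".gpteng"
    selection = gpteng / "file_selection.toml"

    # Backups were named file_selection.toml.<unix-timestamp>.bak, so a
    # lexicographic sort puts the newest last.
    backups = sorted(gpteng.glob("file_selection.toml.*.bak"))
    if backups and selection.read_text() != backups[-1].read_text():
        shutil.copy2(backups[-1], selection)
        print(f"Restored {selection} from {backups[-1]}")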

oldmanjk commented on June 3, 2024

@Styren @captivus comment updated above

captivus commented on June 3, 2024

Are you hosting a local model? If not, can you tell us more about the local-network chat server you appear to be interacting with?

INFO:httpx:HTTP Request: POST http://192.168.1.79:5000/v1/chat/completions "HTTP/1.1 200 OK"
Traceback (most recent call last):

oldmanjk commented on June 3, 2024

I'm hosting a local model, yes, on a separate machine running oobabooga, which appears to be the cause of the crash, so you can ignore the crash part. I'm pretty sure the original issue has nothing to do with that, though.
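
(Editor's note: one way to isolate the RemoteProtocolError from gpte is to stream a completion directly from the local endpoint with the openai client. This is a sketch; the base_url comes from the INFO log line in the traceback above, the API key is a dummy that oobabooga's OpenAI-compatible API ignores, and the model name is a placeholder since the server serves whatever model it has loaded. If this also dies with an incomplete chunked read, the server, not gpt-engineer, is closing the connection mid-stream.)

    from openai import OpenAI

    # Endpoint taken from the INFO log line above; key is unused by oobabooga
    # but required by the client.
    client = OpenAI(base_url="http://192.168.1.79:5000/v1", api_key="sk-placeholder")

    stream = client.chat.completions.create(
        model="local-model",  # placeholder; the server uses its loaded model
        messages=[{"role": "user", "content": "Say hello."}],
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()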

oldmanjk commented on June 3, 2024

The model creates file_selection.toml? Why would you have the model handle that?

oldmanjk commented on June 3, 2024

I'm pretty sure file_selection.toml is not generated by the model; I see no calls to the model in the relevant code. Do you mind pointing me to the location in the code where the model generates file_selection.toml?

oldmanjk commented on June 3, 2024

I'm perfectly calm, thanks. You didn't answer my question.

oldmanjk commented on June 3, 2024

What's your problem?
