pythagora-io / gpt-pilot

The first real AI developer

License: MIT License

Python 97.64% JavaScript 1.42% Dockerfile 0.15% CSS 0.08% EJS 0.66% HTML 0.06%
ai codegen developer-tools gpt-4 coding-assistant research-project

gpt-pilot's Introduction

๐Ÿง‘โ€โœˆ๏ธ GPT PILOT ๐Ÿง‘โ€โœˆ๏ธ






GPT Pilot doesn't just generate code, it builds apps!


See it in action

(click to open the video on YouTube) (1:40 min)



GPT Pilot is the core technology for the Pythagora VS Code extension, which aims to provide the first real AI developer companion: not just an autocomplete or a helper for PR messages, but a real AI developer that can write full features, debug them, talk to you about issues, ask for review, and more.


📫 If you would like to get updates on future releases or just get in touch, join our Discord server or add your email here. 📬



GPT Pilot aims to research how much LLMs can be utilized to generate fully working, production-ready apps while the developer oversees the implementation.

The main idea is that AI can write most of the code for an app (maybe 95%), but for the remaining 5%, a developer is and will be needed until we reach full AGI.

If you are interested in our learnings during this project, you can check our latest blog posts.





🔌 Requirements

  • Python 3.9+

🚦 How to start using gpt-pilot?

👉 If you are using VS Code as your IDE, the easiest way to start is by downloading the GPT Pilot VS Code extension. 👈

Otherwise, you can use the CLI tool.

After you have Python and (optionally) PostgreSQL installed, follow these steps:

  1. git clone https://github.com/Pythagora-io/gpt-pilot.git (clone the repo)
  2. cd gpt-pilot
  3. python -m venv pilot-env (create a virtual environment)
  4. source pilot-env/bin/activate (or on Windows pilot-env\Scripts\activate) (activate the virtual environment)
  5. pip install -r requirements.txt (install the dependencies)
  6. cd pilot
  7. mv .env.example .env (or on Windows copy .env.example .env) (create the .env file)
  8. Add your environment to the .env file:
    • LLM Provider (OpenAI/Azure/Openrouter)
    • Your API key
    • database settings: SQLite/PostgreSQL (to change from SQLite to PostgreSQL, just set DATABASE_TYPE=postgres)
    • optionally set IGNORE_PATHS for folders that shouldn't be tracked by GPT Pilot in the workspace, useful for ignoring folders created by compilers (e.g. IGNORE_PATHS=folder1,folder2,folder3)
  9. python main.py (start GPT Pilot)
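For reference, the resulting .env file is a set of plain KEY=VALUE lines. The sketch below shows how such a file is shaped and how the keys above might be read; it is illustrative only, and GPT Pilot's actual loader may differ:

```python
# Minimal illustrative .env parser. The key names match the list above;
# the parser itself is a sketch, not GPT Pilot's actual loader.
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
    # split only on the first '=', so values may contain '='
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
OPENAI_API_KEY=your-api-key
DATABASE_TYPE=postgres
IGNORE_PATHS=node_modules,dist,build
"""

config = parse_env(sample)
ignore = config["IGNORE_PATHS"].split(",")
```

Note that IGNORE_PATHS is a single comma-separated value, so it needs splitting before use.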

After this, just follow the instructions in the terminal.

All generated code will be stored in the workspace folder, inside a subfolder named after the app name you enter when starting the pilot.

🔎 Examples

Click here to see all example apps created with GPT Pilot.

๐Ÿณ How to start gpt-pilot in docker?

  1. git clone https://github.com/Pythagora-io/gpt-pilot.git (clone the repo)
  2. Update the docker-compose.yml environment variables, which can be done via docker compose config. If you wish to use a local model, please go to https://localai.io/basics/getting_started/.
  3. By default, GPT Pilot will read and write to ~/gpt-pilot-workspace on your machine; you can also edit this in docker-compose.yml.
  4. Run docker compose build; this will build a gpt-pilot container for you.
  5. Run docker compose up.
  6. Access the web terminal on port 7681.
  7. python main.py (start GPT Pilot)

This will start two containers: a new image built from the Dockerfile, and a Postgres database. The new image also has ttyd installed so that you can easily interact with gpt-pilot. Node is also installed in the image, and port 3000 is exposed.

๐Ÿง‘โ€๐Ÿ’ป๏ธ CLI arguments

--get-created-apps-with-steps

Lists all existing apps.

python main.py --get-created-apps-with-steps

app_id

Continue working on an existing app using app_id

python main.py app_id=<ID_OF_THE_APP>

step

Continue working on an existing app from a specific step (e.g. development_planning)

python main.py app_id=<ID_OF_THE_APP> step=<STEP_FROM_CONST_COMMON>

skip_until_dev_step

Continue working on an existing app from a specific development step. If you want to play around with GPT Pilot, this is likely the flag you will use most often.

python main.py app_id=<ID_OF_THE_APP> skip_until_dev_step=<DEV_STEP>

Passing 0 erases all previously completed development steps and continues working on the app from the start of development:

python main.py app_id=<ID_OF_THE_APP> skip_until_dev_step=0

theme

python main.py theme=light
python main.py theme=dark
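The key=value argument style shown above can be handled with a few lines of Python. This is an illustrative sketch, not GPT Pilot's actual argument parser:

```python
def parse_args(argv: list[str]) -> dict:
    """Parse gpt-pilot style CLI arguments: `key=value` pairs and `--flag` switches.
    Illustrative only; the real parser may behave differently."""
    args = {}
    for token in argv:
        if token.startswith("--"):
            args[token[2:]] = True          # bare flag, e.g. --get-created-apps-with-steps
        elif "=" in token:
            key, _, value = token.partition("=")
            args[key] = value               # key=value pair, e.g. app_id=abc123
    return args

args = parse_args(["app_id=abc123", "step=development_planning",
                   "--get-created-apps-with-steps"])
```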

๐Ÿ— How GPT Pilot works?

Here are the steps GPT Pilot takes to create an app:

  1. You enter the app name and the description.
  2. Product Owner agent, like in real life, does nothing. :)
  3. Specification Writer agent asks a couple of questions to better understand the requirements if the project description is not detailed enough.
  4. Architect agent writes up the technologies that will be used for the app, checks whether they are all installed on the machine, and installs any that are missing.
  5. Tech Lead agent writes up the development tasks that the Developer must implement.
  6. Developer agent takes each task and writes up, in human-readable form, what needs to be done to implement it.
  7. Code Monkey agent takes the Developer's description and the existing file and implements the changes.
  8. Reviewer agent reviews every step of the task; if something is done wrong, the Reviewer sends it back to Code Monkey.
  9. Troubleshooter agent helps you give good feedback to GPT Pilot when something is wrong.
  10. Debugger agent: you hate to see him, but he is your best friend when things go south.
  11. Technical Writer agent writes documentation for the project.
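The hand-off between these agents can be pictured as a simple pipeline in which each agent transforms a shared project state. This is a toy sketch only; the agent names mirror the list above, but the real implementations (e.g. under pilot/helpers/agents/) are far more involved:

```python
from typing import Callable

# Each "agent" is just a function that takes the project state and returns
# an updated state; the pipeline runs them in order, as in the list above.
def run_pipeline(state: dict, agents: list[Callable[[dict], dict]]) -> dict:
    for agent in agents:
        state = agent(state)
    return state

def specification_writer(state: dict) -> dict:
    state["spec"] = f"Spec for {state['name']}"
    return state

def architect(state: dict) -> dict:
    state["technologies"] = ["node", "express"]
    return state

def tech_lead(state: dict) -> dict:
    state["tasks"] = [f"Implement {state['spec']}"]
    return state

state = run_pipeline({"name": "todo-app"},
                     [specification_writer, architect, tech_lead])
```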

🕴 How is GPT Pilot different from Smol developer and GPT engineer?

  • GPT Pilot works with the developer to create a fully working production-ready app - I don't think AI can (at least in the near future) create apps without a developer being involved. So, GPT Pilot codes the app step by step just like a developer would in real life. This way, it can debug issues as they arise throughout the development process. If it gets stuck, you, the developer in charge, can review the code and fix the issue. Other similar tools give you the entire codebase at once - this way, bugs are much harder to fix for AI and for you as a developer.

  • Works at scale - GPT Pilot isn't meant only for simple apps; it's designed to work at any scale. It has mechanisms that filter the code so that in each LLM conversation it doesn't need to keep the entire codebase in context: it shows the LLM only the code relevant to the task it's currently working on. Once an app is finished, you can continue working on it by writing instructions for the feature you want to add.
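The context-filtering idea can be sketched as scoring files by their overlap with the current task and passing only the top matches to the LLM. This is a toy illustration of the principle, not GPT Pilot's actual mechanism:

```python
# Toy relevance filter: rank files by keyword overlap with the task and keep
# the top matches, so the LLM context stays small on large codebases.
def relevant_files(task: str, files: dict[str, str], top_n: int = 2) -> list[str]:
    task_words = set(task.lower().split())

    def score(item):
        _, content = item
        return len(task_words & set(content.lower().split()))

    ranked = sorted(files.items(), key=score, reverse=True)
    return [name for name, _ in ranked[:top_n]]

files = {
    "routes/login.js": "function login user password session",
    "models/Trade.js": "trade model schema price",
    "public/index.html": "html page layout",
}
picked = relevant_files("fix the user login session bug", files)
```

A real implementation would use embeddings or the project's dependency graph rather than raw word overlap, but the shape of the problem is the same.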

๐Ÿป Contributing

If you are interested in contributing to GPT Pilot, join our Discord server, check out open GitHub issues, and see if anything interests you. We would be happy to get help in resolving any of those. The best place to start is by reviewing blog posts mentioned above to understand how the architecture works before diving into the codebase.

🖥 Development

Other than the research, GPT Pilot needs to be debugged to work in different scenarios. For example, we realized that the quality of the generated code is very sensitive to the size of the development task. When the task is too broad, the code has too many bugs that are hard to fix, but when the task is too narrow, GPT also seems to struggle to implement it into the existing code.

📊 Telemetry

To improve GPT Pilot, we are tracking some events from which you can opt out at any time. You can read more about it here.

🔗 Connect with us

🌟 As an open-source tool, it would mean the world to us if you starred the GPT Pilot repo 🌟

💬 Join the Discord server to get in touch.

gpt-pilot's People

Contributors

bigvo, crd716, deng-xian-sheng, dhanushkumar-s-g, edos10, eltociear, eukub, githubemploy, igeni, isthatpratik, kerollosy, leonostrez, liweiyi88, maxanfilofyev, mrgoonie, nalbion, p4rti-s, patakk, pavel-pythagora, piotrwalkusz1, prashantgarbuja, prathamdby, rajveer43, ramkrishna757575, scoobie-bot, sebbeflebbe, senko, umpire2018, zafiro12, zvone187


gpt-pilot's Issues

Unable to resume development - KeyError: 'start_debugging'

I've not completed a project yet as it always crashes into a token limit.

When I restart the app I get the following:

Restoring development step with id 28
Updated file /Users//SDP/gpt-pilot/workspace/My_App/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/package.json
Updated file /Users//SDP/gpt-pilot/workspace/My_App/models/Trade.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.html
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/routes/index.js
Dev step 28
DONE

Restoring development step with id 29
Updated file /Users//SDP/gpt-pilot/workspace/My_App/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/package.json
Updated file /Users//SDP/gpt-pilot/workspace/My_App/models/Trade.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.html
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/routes/index.js
Dev step 29
NO

Restoring development step with id 30
Updated file /Users//SDP/gpt-pilot/workspace/My_App/package.json
Updated file /Users//SDP/gpt-pilot/workspace/My_App/models/Trade.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.html
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/routes/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/index.js
Dev step 30
npm run start

Can you check if the app works?
If you want to run the app, just type "r" and press ENTER
Restoring user input id 23: I want it to figure that out for itself
Restoring development step with id 31
Updated file /Users//SDP/gpt-pilot/workspace/My_App/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/package.json
Updated file /Users//SDP/gpt-pilot/workspace/My_App/models/Trade.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.html
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/routes/index.js
Dev step 31
NEEDS_DEBUGGING

Restoring development step with id 32
Updated file /Users//SDP/gpt-pilot/workspace/My_App/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/package.json
Updated file /Users//SDP/gpt-pilot/workspace/My_App/models/Trade.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.html
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/routes/index.js
Traceback (most recent call last):
File "/Users//SDP/gpt-pilot/pilot/main.py", line 35, in
project.start()
File "/Users//SDP/gpt-pilot/pilot/helpers/Project.py", line 78, in start
self.developer.start_coding()
File "/Users//SDP/gpt-pilot/pilot/helpers/agents/Developer.py", line 32, in start_coding
self.implement_task()
File "/Users//SDP/gpt-pilot/pilot/helpers/agents/Developer.py", line 53, in implement_task
self.execute_task(convo_dev_task, task_steps, continue_development=True)
File "/Users//SDP/gpt-pilot/pilot/helpers/agents/Developer.py", line 112, in execute_task
self.continue_development(convo)
File "/Users//SDP/gpt-pilot/pilot/helpers/agents/Developer.py", line 140, in continue_development
task_steps = iteration_convo.send_message('development/parse_task.prompt', {}, IMPLEMENT_TASK)
File "/Users//SDP/gpt-pilot/pilot/helpers/AgentConvo.py", line 60, in send_message
response = self.postprocess_response(response, function_calls)
File "/Users//SDP/gpt-pilot/pilot/helpers/AgentConvo.py", line 117, in postprocess_response
response = function_calls['functions'][response['function_calls']['name']]
KeyError: 'start_debugging'

Not sure where to go from here.
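One defensive way to avoid a raw KeyError like this would be to look up the function the model requested with a safe lookup and fail with a clear message. This is a hypothetical sketch mirroring the dict shapes in the traceback, not the actual fix:

```python
# Hypothetical defensive dispatch: if the LLM names a function that isn't
# registered (like 'start_debugging' here), raise a descriptive error
# instead of a bare KeyError.
def dispatch(function_calls: dict, response: dict):
    name = response["function_calls"]["name"]
    func = function_calls["functions"].get(name)
    if func is None:
        raise ValueError(f"LLM requested unknown function: {name!r}")
    return func(**response["function_calls"].get("arguments", {}))
```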

FWIW a great 'upgrade' would be a text file with the app ids logged alongside their project names. When the terminal loses text, it's a massive pain to dig through the debug output and hope you've got the right one!

Absolutely loving what I'm seeing so far - really want to get to a point where I can see what it outputs because this is slicker than an oilfield!

Getting "Too many tokens in messages" value error

Full error for context when trying to create a GPT4 connection: -

  File "/Users/vijayashok/code/gpt-pilot/pilot/utils/llm_connection.py", line 94, in create_gpt_chat_completion
    raise ValueError(f'Too many tokens in messages: {tokens_in_messages}. Please try a different test.')

I believe this error happens fairly deep into development. For me, it happened after 90 dev tasks.

Question: each time we make a GPT-4 request, are we sending all previous conversations to GPT-4? If that's the case, each subsequent request will have more tokens than the previous one, which will exhaust our GPT quota pretty fast.

Folks, Let me know if any specific details are needed

Thanks!!!
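The worry raised in this issue is easy to quantify: if every request resends the full history, the total tokens sent grow roughly quadratically with the number of steps, and a common mitigation is truncating the history. A back-of-the-envelope sketch (illustrative only, not how gpt-pilot manages context):

```python
# If request k resends messages 1..k, total tokens grow quadratically.
def total_tokens_sent(tokens_per_message: int, num_requests: int) -> int:
    return sum(tokens_per_message * k for k in range(1, num_requests + 1))

# Crude mitigation: keep the first message (system prompt) plus the most
# recent ones, dropping the middle of the conversation.
def truncate_history(messages: list[str], max_messages: int) -> list[str]:
    if len(messages) <= max_messages:
        return messages
    return [messages[0]] + messages[-(max_messages - 1):]
```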

Documentation for postgres is incorrect

tl;dr: either update database.py to accept "postgres", or change the readme to say DATABASE_TYPE=postgresql

The readme states:

PostgreSQL database info to the .env file

  • to change from SQLite to PostgreSQL in your .env just set DATABASE_TYPE=postgres

However, in database.py

                sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}" CASCADE'
            elif DATABASE_TYPE == "sqlite":
                sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}"'
            else:
                raise ValueError(f"Unsupported DATABASE_TYPE: {DATABASE_TYPE}")

            database.execute_sql(sql)
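One way to reconcile the mismatch would be to normalize DATABASE_TYPE before branching, so both spellings work. This is a hypothetical sketch of that approach, not the actual patch:

```python
# Hypothetical normalizer: accept both the readme's "postgres" and the
# code's "postgresql" spelling, and fail loudly on anything else.
ALIASES = {"postgres": "postgresql", "postgresql": "postgresql", "sqlite": "sqlite"}

def normalize_database_type(raw: str) -> str:
    try:
        return ALIASES[raw.strip().lower()]
    except KeyError:
        raise ValueError(f"Unsupported DATABASE_TYPE: {raw}")
```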

API responded with status code: 429. 10KTPM-200RPM

When creating a project, it asks about 4 or 5 questions, then says everything is clear...
Then immediately I get the following error message:

There was a problem with request to openai API:
API responded with status code: 429.

Response text: {
    "error": {
        "message": "Rate limit reached for 10KTPM-200RPM in organization org-XXXXXXXXXXXX on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.",
        "type": "tokens",
        "param": null,
        "code": "rate_limit_exceeded"
    }
}

I have access to GPT-4, is this just OpenAI currently having issues?

[Feature Request] Azure OpenAI endpoint

It would be great to be able to leverage Azure OpenAI services to get access to the gpt4-32k model.
Implementation should be relatively easy, if a developer can build this, I can supply testing credentials to help with the integration.

Right now I've tried several projects and all have errored out on too many tokens requested.

2000ms timeout insufficient for dependency installs

Awesome work on gpt-pilot!

I am trying to have it create a node/JS project. A few things I've observed:

  1. some dependencies it generated are not valid NPM dependencies, so it errors out while trying to install. Is there a way for me to override this?
  2. when doing npm install, the timeout of 2000ms is too short for dependencies to install, causing the step to error out and debugging to start

example of 1:

--------- EXECUTE COMMAND ----------
Can i execute the command: `npm install --save-dev okta-oidc-js @okta/jwt-verifier @okta/okta-react` with 30000ms timeout?
Restoring user input id 15:
t: 10695ms : CLI ERROR:npm ERR! code E404
t: 10697ms : CLI ERROR:npm ERR! 404 Not Found - GET https://registry.npmjs.org/okta-oidc-js - Not found
t: 10697ms : CLI ERROR:npm ERR! 404
t: 10697ms : CLI ERROR:npm ERR! 404  'okta-oidc-js@*' is not in this registry.
t: 10697ms : CLI ERROR:npm ERR! 404
t: 10697ms : CLI ERROR:npm ERR! 404 Note that you can also install from a
t: 10697ms : CLI ERROR:npm ERR! 404 tarball, folder, http url, or git url.
t: 10697ms : CLI ERROR:
t: 10697ms : CLI ERROR:npm ERR! A complete log of this run can be found in: /Users/tom/.npm/_logs/2023-
Saving file /package.json

Dev step 18

NEEDS_DEBUGGING

Example of 2

Can i execute the command: `npm install --save-dev okta-oidc-js @okta/jwt-verifier @okta/okta-react` with 2000ms timeout?
Restoring user input id 11:
t: 2000ms :
Saving file /package.json

Dev step 11

NEEDS_DEBUGGING

Got incorrect CLI response:
stdout:

It might be good to allow the timeout to be user-configurable?
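A user-configurable timeout could be as simple as reading an override from an environment variable and falling back to the hard-coded default. The variable name COMMAND_TIMEOUT_MS is hypothetical; this is a sketch of the suggestion, not gpt-pilot's implementation:

```python
import os
import subprocess

DEFAULT_TIMEOUT_MS = 2000  # the current hard-coded value from the issue

def command_timeout_ms() -> int:
    """Read a timeout override from the (hypothetical) COMMAND_TIMEOUT_MS
    env var; ignore unset or non-numeric values and never go below the default."""
    raw = os.environ.get("COMMAND_TIMEOUT_MS", "")
    try:
        return max(int(raw), DEFAULT_TIMEOUT_MS)
    except ValueError:
        return DEFAULT_TIMEOUT_MS

def run_command(cmd: list[str]) -> subprocess.CompletedProcess:
    # subprocess.run takes its timeout in seconds, hence the /1000
    return subprocess.run(cmd, capture_output=True,
                          timeout=command_timeout_ms() / 1000)
```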

Developer.implement_task tries to do too much

I think that Developer.implement_task() is trying to do too much, too quickly.

I've got so many TODOs here because I don't understand what/why it is this way.

    def implement_task(self):
        convo_dev_task = AgentConvo(self)
        # TODO: why "This should be a simple version of the app so you don't need to aim to provide a production ready code"?
        # TODO: why `no_microservices`? Is that even applicable?
        task_description = convo_dev_task.send_message('development/task/breakdown.prompt', {
            "name": self.project.args['name'],
            "app_type": self.project.args['app_type'],
            "app_summary": self.project.project_description,
            "clarification": [],
            # TODO: why all stories at once?
            "user_stories": self.project.user_stories,
            # "user_tasks": self.project.user_tasks,
            # TODO: "I'm currently in an empty folder" may not always be true?
            "technologies": self.project.architecture,
            # TODO: `array_of_objects_to_string` does not seem to be used by the prompt template?
            "array_of_objects_to_string": array_of_objects_to_string,
            # TODO: prompt lists `files` if `current_task_index` != 0
            "directory_tree": self.project.get_directory_tree(True),
        })

        task_steps = convo_dev_task.send_message('development/parse_task.prompt', {}, IMPLEMENT_TASK)
        convo_dev_task.remove_last_x_messages(2)
        self.execute_task(convo_dev_task, task_steps, continue_development=True)

(I'm also getting errors about "maximum context length is 8192 tokens" when sending dev_ops/ran_command.prompt - there are a lot of them)

Changes that I'd like to see:

  • PO prioritises user_stories, considering pre-requisites/dependencies and possibly adding extra stories if required.
  • TechLead takes each title from user_stories and fleshes out the body with BDD scenarios with Given/When/Then steps. This would probably be done one at a time.
  • For each story:
    • start a new conversation (Including all story titles even if focusing on just one would probably help with context)
    • Developer writes unit tests for each scenario ("Then" assertions skipped/ignored)
    • Developer implements code for story
    • Developer runs tests for story
    • Developer/CodeMonkey fixes code until tests pass
    • "Can I execute the command? If yes, just press ENTER" should bail on "no" and also have other options to edit/give up on the current task.

No response after TASK_CLEAR

I wrote out my project requirements and answered the questions step by step, but after GPT Pilot confirmed the User Tasks, it stopped responding.

? What is the project name? file_manage_sys


? Describe your app in as many details as possible. "Please write a file upload and download management system with a front-end interface. It should also support automatic 
file cleaning when the system's storage capacity exceeds 80% usage. The system should automatically delete the oldest files. Additionally, this system needs to support user
? Understood. Let's start with the task of getting additional answers for the Web App "file_manage_sys".

1. Do you have any specific requirements for the front-end interface of the file management system?
 
EVERYTHING_CLEAR EVERYTHING_CLEAR


Great! Now that everything is clear, let's move on to the next task: breaking down user stories.
? Great! Now that everything is clear, let's move on to the next task: breaking down user stories.

Based on the description of the "file_manage_sys" Web App, here are a few user stories:

1. As a user, I want to be able to upload files to the system.
2. As a user, I want to be able to download files from the system.
3. As a user, I want the system to automatically clean up files when storage capacity exceeds 80%.
4. As a user, I want to be able to login to the system.
5. As a user, I want to be able to manage user accounts.

Do you have any additional user stories or any modifications to the existing ones?

Please provide your response in the format: "USER_STORIES <your user stories>" or "USER_STORIES_CLEAR" if everything is clear.

**IMPORTANT**
Remember to break down user stories based on the description of the "file_manage_sys" Web App. USER_STORIES_CLEAR


Great! Now let's move on to the next task: breaking down user tasks.

Based on the description of the "file_manage_sys" Web App and the user stories we have identified, here are a few user tasks:

1. User Task: Upload Files
   - User needs to select files from their local system to upload to the file management system.
   - User should be able to provide a name or description for the uploaded files.
   - The system should validate the file format, size, and other relevant criteria.
   - The uploaded files should be stored in the system's storage.

2. User Task: Download Files
   - User needs to search for files in the system and select the desired files to download.
   - User should have the option to download individual files or multiple files simultaneously.
   - The system should ensure the security and integrity of the downloaded files.

3. User Task: Automatic File Cleaning
   - The system should monitor the storage capacity and check if it exceeds 80% usage.
   - If the storage capacity exceeds 80%, the system should automatically identify and delete the oldest files to free up space.
   - The system should have a mechanism to track file upload and modification timestamps for accurate deletion.

4. User Task: User Login
? Great! Now let's move on to the next task: breaking down user tasks.

Based on the description of the "file_manage_sys" Web App and the user stories we have identified, here are a few user tasks:

1. User Task: Upload Files
   - User needs to select files from their local system to upload to the file management system.
   - User should be able to provide a name or description for the uploaded files.
   - The system should validate the file format, size, and other relevant criteria.
   - The uploaded files should be stored in the system's storage.

2. User Task: Download Files
   - User needs to search for files in the system and select the desired files to download.
   - User should have the option to download individual files or multiple files simultaneously.
   - The system should ensure the security and integrity of the downloaded files.

3. User Task: Automatic File Cleaning
Remember to break down user tasks based on the description of the "file_manage_sys" Web App and the user stories we have identified. TASK_CLEAR
? Fantastic! Now that we have a clear understanding of the requirements for the "file_manage_sys" Web App, we can proceed with the development process. I will take the user

Throughout the development process, I will regularly communicate with you to provide updates, gather feedback, and address any questions or concerns. We will work closely t

Is there anything else you would like to discuss before we begin the development process for the "file_manage_sys" Web App?
 
? Great! Since there is nothing else to discuss at the moment, we can move forward with the development process for the "file_manage_sys" Web App. I will update you regular

Thank you for your cooperation, and I look forward to working with you to bring your vision to life!
 
? Thank you for your confirmation. I will now proceed with the development process for the "file_manage_sys" Web App. I will keep you updated on the progress and reach out 

If you have any additional questions or concerns during the development process, please don't hesitate to let me know. I'm here to assist you.
? Great! I'm excited to start working on the development of the "file_manage_sys" Web App. I will keep you updated on the progress and reach out if any questions or clarifi

Once the development is complete, I will provide you with a demo of the Web App and gather your feedback for any necessary adjustments or modifications.

Thank you for entrusting us with your project. We will do our best to deliver a high-quality and user-friendly Web App that meets your expectations.
 
Let's get started, and I'll be in touch soon with updates on the development progress of the "file_manage_sys" Web App. 

API responded with status code: 429. Rate limit reached for 10KTPM-200RPM in organization org-WyXXXXXXXXX

Getting immediate:

API responded with status code: 429. Response text: {
    "error": {
        "message": "Rate limit reached for 10KTPM-200RPM in organization org-WyXXXXXXXXX on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.",
        "type": "tokens",
        "param": null,
        "code": "rate_limit_exceeded"
    }
}

This is my first attempt to access OpenAPI for today and I am already getting this error. I am running other applications to generate python code and I am not getting this error.

Error saving `PNG` files due to unsupported characters

Description

While building a Flutter app using gpt-pilot, an error occurs when GPT-Pilot attempts to save generated PNG files.

Environment

  • GPT-Pilot Version: commit 38d5627
  • Flutter Version: 3.13.2
  • OS: Windows 11

Steps to reproduce

  1. Launch gpt-pilot.
  2. Ask it to create a Flutter application.
  3. Execute the first command: flutter create <project_name>.

Expected behavior

The command executes successfully and the generated files are saved in the database.

Actual behavior

An error occurs when trying to save a PNG file.

Error logs

Saving file \test_flutter\android\app\src\main\res\mipmap-hdpi\ic_launcher.png/ic_launcher.png
--- Logging error ---
...
  File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 3246, in execute_sql
    cursor.execute(sql, params or ())
ValueError: A string literal cannot contain NUL (0x00) characters.

(Note: Truncated for brevity.)

Full error logs

Saving file \test_flutter\android\app\src\main\res\mipmap-hdpi\ic_launcher.png/ic_launcher.png
--- Logging error ---
Traceback (most recent call last):
  File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 7117, in get
    return clone.execute(database)[0]
           ~~~~~~~~~~~~~~~~~~~~~~~^^^
  File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 4481, in __getitem__
    return self.row_cache[item]
           ~~~~~~~~~~~~~~^^^^^^
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6702, in get_or_create
return query.get(), False
^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 7120, in get
raise self.model.DoesNotExist('%s instance matching query does '
database.models.file_snapshot.FileSnapshotDoesNotExist: <Model: FileSnapshot> instance matching query does not exist:
SQL: SELECT "t1"."id", "t1"."created_at", "t1"."updated_at", "t1"."app_id", "t1"."development_step_id", "t1"."file_id", "t1"."content" FROM "file_snapshot" AS "t1" WHERE ((("t1"."app_id" = %s) AND ("t1"."development_step_id" = %s)) AND ("t1"."file_id" = %s)) LIMIT %s OFFSET %s
Params: ['72dda6189f5a4272992f7a9f465369f0', 8, 56, 1, 0]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Python311\Lib\logging_init_.py", line 1113, in emit
stream.write(msg + self.terminator)
File "C:\Python311\Lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode character '\u03ae' in position 956: character maps to
Call stack:
File "C:\Users\nenup\tools\gpt-pilot\pilot\main.py", line 35, in
project.start()
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\Project.py", line 97, in start
self.developer.start_coding()
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 34, in start_coding
self.implement_task(i, dev_task)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 60, in implement_task
self.execute_task(convo_dev_task, task_steps, development_task=development_task, continue_development=True)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 78, in execute_task
run_command_until_success(data['command'], data['timeout'], convo, additional_message=additional_message)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\cli.py", line 266, in run_command_until_success
response = convo.send_message('dev_ops/ran_command.prompt',
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\AgentConvo.py", line 70, in send_message
development_step = save_development_step(self.agent.project, prompt_path, prompt_data, self.messages, response)
File "C:\Users\nenup\tools\gpt-pilot\pilot\database\database.py", line 222, in save_development_step
project.save_files_snapshot(development_step.id)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\Project.py", line 214, in save_files_snapshot
file_snapshot, created = FileSnapshot.get_or_create(
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6708, in get_or_create
return cls.create(**kwargs), True
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6577, in create
inst.save(force_insert=True)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6787, in save
pk = self.insert(**field_dict).execute()
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 1966, in inner
return method(self, database, *args, **kwargs)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2037, in execute
return self._execute(database)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2842, in _execute
return super(Insert, self)._execute(database)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2553, in _execute
cursor = self.execute_returning(database)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2560, in execute_returning
cursor = database.execute(self)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 3254, in execute
return self.execute_sql(sql, params)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 3243, in execute_sql
logger.debug((sql, params))
Message: ('INSERT INTO "file_snapshot" ("id", "created_at", "updated_at", "app_id", "development_step_id", "file_id", "content") VALUES (%s, %s, %s, %s, %s, %s, %s) RETURNING "file_snapshot"."id"', ['0c054be44f8948b49d0ab5dcec7add7a', datetime.datetime(2023, 9, 5, 11, 10, 50, 724756), datetime.datetime(2023, 9, 5, 11, 10, 50, 724756), '72dda6189f5a4272992f7a9f465369f0', 8, 56, 'PNG\n\x1a\n\x00\x00\x00\nIHDR\x00\x00\x00H\x00\x00\x00H\x08\x03\x00\x00\x00b3Cu\x00\x00\x00\x19tEXtSoftware\x00Adobe ImageReadyqe<\x00\x00\x00PLTE\x00\x00\x00\x01N\x01W)T)T\x01V\x01W)FTT\x01V\x01W\x18v)T=\x002Y\x008d\x01+K\x010S\x01>n\x01>o\x01G\x7f\x01I\x01L\x01N\x01N\x01N\x01Q\x01R\x01R\x01S\x01U\x01U\x01V\x01V\x01W\x01W\x02;g\x02@p\x02Cv\x02I\x03M\x03O\x03P\x16h\x17o\x19x\x1a~\x1b\x1b\x1c)DT\x1a=\x00\x00\x00\x13tRNS\x00\x10\x10\x10\x10PP`````\x19\x10\x00\x00\x00IDATX\x0e@\x10@QT\uf28c\x1d+?f\x08\x0bK"ฮฎ\x0fฦน79BILC\x0e9C\x0e9T/p!~\t0Nsx\t\x04%\\\'Jn;8\x16\'Ep^\tJ\x1cG\n8~)8";LIK\x12w\x1cI0N!\x1c%Q>ศ.;\x1d2\x12vf\x04\x19\x0b\x10ิ†\x04รฒ3lHH2N\x7f\x03\x08JVo\x06dj\x1cpRy5()V\x04#K$=\x00$#\x04;n\x00\x00\x00\x00IENDB`'])
Arguments: ()
Traceback (most recent call last):
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 7117, in get
return clone.execute(database)[0]
~~~~~~~~~~~~~~~~~~~~~~~^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 4481, in __getitem__
return self.row_cache[item]
~~~~~~~~~~~~~~^^^^^^
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6702, in get_or_create
return query.get(), False
^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 7120, in get
raise self.model.DoesNotExist('%s instance matching query does '
database.models.file_snapshot.FileSnapshotDoesNotExist: <Model: FileSnapshot> instance matching query does not exist:
SQL: SELECT "t1"."id", "t1"."created_at", "t1"."updated_at", "t1"."app_id", "t1"."development_step_id", "t1"."file_id", "t1"."content" FROM "file_snapshot" AS "t1" WHERE ((("t1"."app_id" = %s) AND ("t1"."development_step_id" = %s)) AND ("t1"."file_id" = %s)) LIMIT %s OFFSET %s
Params: ['72dda6189f5a4272992f7a9f465369f0', 8, 56, 1, 0]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\nenup\tools\gpt-pilot\pilot\main.py", line 35, in <module>
project.start()
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\Project.py", line 97, in start
self.developer.start_coding()
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 34, in start_coding
self.implement_task(i, dev_task)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 60, in implement_task
self.execute_task(convo_dev_task, task_steps, development_task=development_task, continue_development=True)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 78, in execute_task
run_command_until_success(data['command'], data['timeout'], convo, additional_message=additional_message)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\cli.py", line 266, in run_command_until_success
response = convo.send_message('dev_ops/ran_command.prompt',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\AgentConvo.py", line 70, in send_message
development_step = save_development_step(self.agent.project, prompt_path, prompt_data, self.messages, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot\database\database.py", line 222, in save_development_step
project.save_files_snapshot(development_step.id)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\Project.py", line 214, in save_files_snapshot
file_snapshot, created = FileSnapshot.get_or_create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6708, in get_or_create
return cls.create(**kwargs), True
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6577, in create
inst.save(force_insert=True)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6787, in save
pk = self.insert(**field_dict).execute()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 1966, in inner
return method(self, database, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2037, in execute
return self._execute(database)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2842, in _execute
return super(Insert, self)._execute(database)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2553, in _execute
cursor = self.execute_returning(database)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2560, in execute_returning
cursor = database.execute(self)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 3254, in execute
return self.execute_sql(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 3246, in execute_sql
cursor.execute(sql, params or ())
ValueError: A string literal cannot contain NUL (0x00) characters.
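The final ValueError comes from trying to store raw PNG bytes (which contain NUL bytes) in a PostgreSQL TEXT column. A minimal sketch of a guard that the snapshot code could apply before inserting; the function name and the skip-binary policy are assumptions, not the project's actual fix:

```python
from typing import Optional

def safe_snapshot_content(raw: bytes) -> Optional[str]:
    """Return decodable text for DB storage, or None for binary files.

    PostgreSQL TEXT columns reject NUL (0x00) bytes, so a PNG read as
    "text" crashes the INSERT; skipping such files avoids the error.
    """
    if b"\x00" in raw:  # a NUL byte almost certainly means binary data
        return None
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return None
```

save_files_snapshot() could then store only files for which this returns a string, and record binary files by path alone.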

Code for True or False is not written in the correct case

While GPT Pilot was creating code and attempting to run app.py, an error was displayed indicating that the boolean True was written in lower case, and the same with False. The code writer would try to debug the error but kept writing true and false in lower case. It attempted to correct this several times until I killed the process. Here is a sample of one of the lines where it occurred: app.run(debug=true). It also occurred in DB queries. This was a simple Python/Flask app.

It would be helpful if the program asked whether to continue debugging rather than looping; at that prompt we could give input on how to correct the error.
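For reference, lowercase `true`/`false` are not Python keywords at all; they are undefined names that raise NameError at runtime, which is why the generated app kept crashing in the same way:

```python
# Python booleans are capitalized: True / False.
debug_flag = True          # correct
# app.run(debug=true)      # what the generated code wrote

try:
    eval("true")           # simulate the lowercase literal
except NameError as exc:
    print(exc)             # name 'true' is not defined
```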

Unable to use GPT-3.5-turbo from .env file

Getting this error when running python main.py in the terminal:

------------------ STARTING NEW PROJECT ----------------------
If you wish to continue with this project in future run:
python main.py app_id=8337ff83-bde9-413b-af89-d141dae36b4f
--------------------------------------------------------------

What is the project name? New Project


 Describe your app in as many details as possible. New Sample Project


There was a problem with request to openai API:
API responded with status code: 404. Response text: {
    "error": {
        "message": "The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.",
        "type": "invalid_request_error",
        "param": null,
        "code": "model_not_found"
    }
}

 Do you want to try make the same request again? If yes, just press ENTER. Otherwise, type 'no'. 

.env file

#OPENAI or AZURE
ENDPOINT=OPENAI
OPENAI_API_KEY=xyz
AZURE_API_KEY=
AZURE_ENDPOINT=
MODEL_NAME=gpt-3.5-turbo
MAX_TOKENS=8192
DB_NAME=gpt-pilot
DB_HOST=localhost
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=postgres

Operating system - Windows
python --version - Python 3.11.4
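A likely cause is that the model name is hard-coded to gpt-4 somewhere rather than read from the environment. A sketch of honoring MODEL_NAME from the .env above; the helper name and default value are assumptions:

```python
import os

def get_model_name(default: str = "gpt-4") -> str:
    """Read the model from the environment instead of hard-coding it."""
    return os.getenv("MODEL_NAME", default)
```

With MODEL_NAME=gpt-3.5-turbo set, every completion request would then target the configured model instead of gpt-4.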

bigger app and visualisation model

I see an area that needs to be addressed.
Larger applications are characterized by a complex structure. The designer's problem is that this structure must be maintained either in one's head (which is difficult) or in some external tool. Without some visualization mode, it will be difficult for the person issuing the commands to write such an application, unless an analyst builds an application model and the programmer then commissions tasks according to it.

Project is broken atm.

Running docker-compose results in:

> [ 7/10] RUN python -m venv pilot-env:
#0 0.289 Error: [Errno 2] No such file or directory: '/usr/src/app/pilot-env/bin/python'
------
failed to solve: executor failed running [/bin/sh -c python -m venv pilot-env]: exit code: 1

Documentation for setting it up manually also doesn't work, as there is no env example to copy

ModuleNotFoundError: No module named 'playhouse'

When I try run python db_init.py
terminal throw the error
Traceback (most recent call last): File "/home/ces-user/Desktop/gpt-pilot/pilot/db_init.py", line 3, in <module> from database.database import create_tables, drop_tables File "/home/ces-user/Desktop/gpt-pilot/pilot/database/database.py", line 1, in <module> from playhouse.shortcuts import model_to_dict ModuleNotFoundError: No module named 'playhouse'
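`playhouse` is not a separate PyPI package; it ships inside `peewee`, so this error usually means the requirements were installed into a different environment (or not at all). A small preflight check db_init.py could use to print a better hint; the helper name is hypothetical:

```python
import importlib.util

def peewee_hint() -> str:
    """Return 'ok' if playhouse (bundled with peewee) is importable."""
    if importlib.util.find_spec("playhouse") is None:
        return ("playhouse ships with peewee: activate the virtualenv "
                "and run `pip install -r requirements.txt`")
    return "ok"
```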

Support for Jira & GitHub Issues/Projects

Currently the GPT Pilot Workflow is linear and does not resemble how a real-world project is run:

  • PO: Description -> Clarifications -> Requirements
  • Architect: Tech Requirements
  • Developer: Implement Tasks (all at once)
  • Code Monkey: Update code

I'd like to see integration with issue and project management tools such as Jira and GitHub Issues/Projects.

I'd also like to be able to make simple edits, or add new features. I think the flow would look something like this:

(edit: see updated architectural plan in #91)

"advanced" mode has an odd UX

I found "advanced" mode, which is not mentioned in arguments.py, but is checked for in Architect.py.

I asked it to build a simple "Hello World" script in javascript and it proposed the architecture:

  • Node.js
  • MongoDB
  • Mongoose
  • Bootstrap
  • Vanilla Javascript

As expected, get_additional_info_from_user() prompted:

Please check this message and say what needs to be changed. If everything is ok just press ENTER

for Node.js, I accepted that, and then again for MongoDB and I said "no database is required" and it started treating me as the LLM:

Please check this message and say what needs to be changed. If everything is ok just press ENTER
? You are an experienced software architect. Your expertise is in creating an architecture for an MVP (minimum viable products) that can be developed as fast as possible by using as many ready-made technologies as possible. The technologies that you prefer using when other technologies are not explicitly sp

**Scripts**: You prefer using Node.js for writing scripts that are meant to be ran just with the CLI.

**Backend**: You prefer using Node.js. As no database is required for the specific project, you won't be using any ORM like Mongoose or PeeWee.

**Testing**: To create unit and integration tests, you prefer using Jest for Node.js projects and pytest for Python projects. To create end-to-end tests, you prefer using Cypress.

**Frontend**: You prefer using Bootstrap for creating HTML and CSS while you use plain (vanilla) Javascript.

**Other**: From other technologies, if they are needed for the project, you prefer using cronjob (for making automated tasks), Socket.io for web sockets. 

...actually, that was the LLM generating the response.

create_gpt_chat_completion() returns { "text": llmResponse } but get_additional_info_from_user() adds this object to the updated_messages list, which usually includes strings when the user just presses ENTER to accept.

updated_messages then looks like:

[
  'Node.js',
  'You are an experienced software architect... **Backend**: You prefer using Node.js. As no database is required for the specific project, you won't be using any ORM like Mongoose or PeeWee.',
  'You are an experienced software architect...',
  'Bootstrap',
  '**Frontend**: You prefer using Bootstrap for creating HTML and CSS while you use TypeScript instead of plain (vanilla) Javascript.'
]  

...okay, so now I can see that it is actually updating the original prompt from system_messages/architect.prompt but the UX is a bit off. Ideally, the user should just see something like:

Okay, as no database is required for this project, I won't be using any ORM like Mongoose or PeeWee.

[feature] LocalAI endpoint

I don't have any experience with it, but LocalAI might be more attractive for people working in environments where sending source code out to the interwebs is frowned upon.

LocalAI is a drop-in replacement REST API that's compatible with OpenAI API specifications for local inferencing. It allows you to run LLMs (and not only) locally or on-prem with consumer-grade hardware, supporting multiple model families that are compatible with the ggml format. Does not require GPU.

  • Text generation with llama.cpp, gpt4all.cpp and more
  • OpenAI functions
  • Embeddings generation for vector databases
  • Download models directly from Huggingface

See also #69 by @mrgoonie

Exception: OpenAI API error happened.

Context:

  • gpt-pilot fails to execute a command requiring super-user grants.
  • user executes the command manually
  • The user says "no" when asked if the command should be run again.
  • App development gets stuck in the following error:

Traceback (most recent call last):
File ".../gpt-pilot/pilot/main.py", line 35, in <module>
project.start()
File ".../gpt-pilot/pilot/helpers/Project.py", line 81, in start
self.developer.start_coding()
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 32, in start_coding
self.implement_task()
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 53, in implement_task
self.execute_task(convo_dev_task, task_steps, continue_development=True)
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 71, in execute_task
run_command_until_success(data['command'], data['timeout'], convo, additional_message=additional_message)
File ".../gpt-pilot/pilot/helpers/cli.py", line 222, in run_command_until_success
debug(convo, {'command': command, 'timeout': timeout})
File ".../gpt-pilot/pilot/helpers/cli.py", line 242, in debug
success = convo.agent.project.developer.execute_task(
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 71, in execute_task
run_command_until_success(data['command'], data['timeout'], convo, additional_message=additional_message)
File ".../gpt-pilot/pilot/helpers/cli.py", line 222, in run_command_until_success
debug(convo, {'command': command, 'timeout': timeout})
File ".../gpt-pilot/pilot/helpers/cli.py", line 237, in debug
debugging_plan = convo.send_message('dev_ops/debug.prompt',
File ".../gpt-pilot/pilot/helpers/AgentConvo.py", line 59, in send_message
raise Exception("OpenAI API error happened.")
Exception: OpenAI API error happened.

Trying to resume gpt-pilot project leads to the same error and quitting.

Ability to use GPT 3.5 Turbo

I keep hitting the limits of my GPT-4 API access, and I'd love to have the ability to switch over to using GPT-3.5.

Has any testing been done with that? Is it not delivering what is needed, or would it be possible to allow swapping models via a config item perhaps? Some things might need the power and complexity of GPT-4, but I suspect we could likely make do with GPT-3.5 in some situations.

Implement "Agent Protocol"

GPT Pilot may be eligible for the AutoGPT Arena Hacks if it implements the Agent Protocol

There's a Python SDK and client SDKs (coming), for now Python seems to be more supported.

There are other reasons why it would be good to adopt a common interface.

A Task denotes one specific goal for the agent, it can be specific like:

Create a file named hello.txt and write World to it.

or very broad as:

Book a flight from Berlin to New York next week, optimize for price and duration.

Main Endpoints

POST /agent/tasks - for creating tasks

{
   "input": "As a user I want to see 'Hello World' so that I know the app is working",
   "additional_input": { "app_id": "my-app" }
}

additional_input can be any object, for GPT Pilot it might look like:

{
  "app_id": "my-app",
  "user_id": "user",
}

Response, a new task with generated task_id and empty artifacts:

{
  "task_id": "my-app-1", 
  "input": "As a user I want to see 'Hello World' so that I know the app is working",
  "artifacts": []
}  

The AgentProtocol task_id would need to be prefixed by the GPT Pilot app_id (and user_id?).

POST /agent/tasks/{id}/steps - for triggering next step for the task

{
   "input": "step input prompt",
   "additional_input": {  }
}

response:

{
    "task_id": "task_id",
    "step_id": "1",
    "input": "step input prompt",
    "additional_input": str,
    "name": str,
    "status": 'created' | 'completed',
    "output": '',
    "additional_output": {},
    "artifacts": [{ as below }, ...],
    "is_last": boolean,
}

Other Endpoints (Optional?)

  • GET /agent/tasks - current_page, page_size

  • GET /agent/tasks/{task_id}

  • GET /agent/tasks/{task_id}/steps

  • POST /agent/tasks/{task_id}/steps

  • GET /agent/tasks/{task_id}/steps/{step_id}

  • GET /agent/tasks/{task_id}/artifacts

  • POST /agent/tasks/{task_id}/artifacts

  • GET /agent/tasks/{task_id}/artifacts/{artifact_id}

Full Task object

{
  task_id: str, 
  input: str,
  additional_input: {},
  steps: [Step, ...],
  artifacts: [{
    artifact_id: str,
    file_name: str,
    relative_path: str,
  }, ...]
}  

Full Step object

{
    task_id: str,
    step_id: str,
    input: str,
    additional_input: str,
    name: str,
    status: 'created' | 'completed',
    output: '',
    additional_output: {},
    artifacts: [{ as below }, ...],
    is_last: boolean,
}
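To make the lifecycle concrete, here is a minimal in-memory sketch of what the two main endpoints would do with the objects above. The dict store and the app_id-prefixed id scheme are assumptions; a real implementation would build on the Agent Protocol Python SDK:

```python
import uuid

TASKS = {}  # task_id -> task object, standing in for a real store

def create_task(input_text, additional_input=None):
    """POST /agent/tasks: create a task with empty steps/artifacts."""
    app_id = (additional_input or {}).get("app_id")
    task_id = f"{app_id}-{len(TASKS) + 1}" if app_id else uuid.uuid4().hex
    task = {"task_id": task_id, "input": input_text,
            "additional_input": additional_input or {},
            "steps": [], "artifacts": []}
    TASKS[task_id] = task
    return task

def create_step(task_id, input_text):
    """POST /agent/tasks/{id}/steps: append the next step."""
    task = TASKS[task_id]
    step = {"task_id": task_id, "step_id": str(len(task["steps"]) + 1),
            "input": input_text, "status": "created",
            "output": "", "artifacts": [], "is_last": False}
    task["steps"].append(step)
    return step
```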

psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: Connection refused (0x0000274D/10061)

Environment:

Windows

Problem:

(gpt-pilot) D:\AI\gpt-pilot\pilot>python main.py
Traceback (most recent call last):
  File "D:\AI\gpt-pilot\pilot\main.py", line 31, in <module>
    args = init()
           ^^^^^^
  File "D:\AI\gpt-pilot\pilot\main.py", line 17, in init
    create_database()
  File "D:\AI\gpt-pilot\pilot\database\database.py", line 396, in create_database
    conn = psycopg2.connect(
           ^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\psycopg2\__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: Connection refused (0x0000274D/10061)
        Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused (0x0000274D/10061)
        Is the server running on that host and accepting TCP/IP connections?


(gpt-pilot) D:\AI\gpt-pilot\pilot>
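The traceback just means nothing is listening on port 5432: PostgreSQL is not running, or gpt-pilot is configured for Postgres when SQLite was intended. A preflight check that create_database() could run to print a friendlier message; this is a sketch, not the project's code:

```python
import socket

def postgres_reachable(host: str = "localhost", port: int = 5432,
                       timeout: float = 1.0) -> bool:
    """True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```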

psycopg2==2.9.6 - pg_config not found

Collecting psycopg2==2.9.6 (from -r requirements.txt (line 9))
Using cached psycopg2-2.9.6.tar.gz (383 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
running egg_info
creating /private/var/folders/8_/91fb2xx1191gcq8jtvctckk00000gn/T/pip-pip-egg-info-jp1es49t/psycopg2.egg-info
writing /private/var/folders/8_/91fb2xx1191gcq8jtvctckk00000gn/T/pip-pip-egg-info-jp1es49t/psycopg2.egg-info/PKG-INFO
writing dependency_links to /private/var/folders/8_/91fb2xx1191gcq8jtvctckk00000gn/T/pip-pip-egg-info-jp1es49t/psycopg2.egg-info/dependency_links.txt
writing top-level names to /private/var/folders/8_/91fb2xx1191gcq8jtvctckk00000gn/T/pip-pip-egg-info-jp1es49t/psycopg2.egg-info/top_level.txt
writing manifest file '/private/var/folders/8_/91fb2xx1191gcq8jtvctckk00000gn/T/pip-pip-egg-info-jp1es49t/psycopg2.egg-info/SOURCES.txt'

  Error: pg_config executable not found.
  
  pg_config is required to build psycopg2 from source.  Please add the directory
  containing pg_config to the $PATH or specify the full executable path with the
  option:
  
      python setup.py build_ext --pg-config /path/to/pg_config build ...
  
  or with the pg_config option in 'setup.cfg'.
  
  If you prefer to avoid building psycopg2 from source, please install the PyPI
  'psycopg2-binary' package instead.
  
  For further information please check the 'doc/src/install.rst' file (also at
  <https://www.psycopg.org/docs/install.html>).
  
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

I'm running this in a new Conda environment.

MacBook Pro, Apple Silicon M2 Max

Developer.py - "if step['type'] == 'command':"

Hi,

I can tell from the code that this is something to be worked on, but I just thought I'd mention that it is the thing that keeps on causing all the apps I've tried to create to fail.

if step['type'] == 'command': TypeError: string indices must be integers

I'm not entirely sure what the issue is, otherwise I'd have a bash at it - I'm not clever, just stubborn! I've tried to print the step variable to the console, but got nothing, so I'm not much use I'm afraid.

execute_step(matching_step, current_step) is confusing

I think that execute_step(matching_step, current_step) should be renamed to should_execute_step(arg_step, current_step).

Also, this new test_no_step_arg() that I've written fails - am I misunderstanding the intention?

class TestExecuteStep:
    def test_no_step_arg(self):
        assert execute_step(None, 'project_description') is True
        assert execute_step(None, 'architecture') is True
        assert execute_step(None, 'coding') is True

    def test_skip_step(self):
        assert execute_step('architecture', 'project_description') is False
        assert execute_step('architecture', 'architecture') is True
        assert execute_step('architecture', 'coding') is True

    def test_unknown_step(self):
        assert execute_step('architecture', 'unknown') is False
        assert execute_step('unknown', 'project_description') is False
        assert execute_step('unknown', None) is False
        assert execute_step(None, None) is False
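For reference, here is a sketch that satisfies all three test classes above, assuming None means "run everything" and unknown step names are rejected; the step list is inferred from the tests, not taken from the source:

```python
STEPS = ["project_description", "architecture", "coding"]

def should_execute_step(arg_step, current_step):
    """True when current_step is at or after arg_step in the pipeline."""
    if current_step not in STEPS:
        return False              # unknown or missing current step
    if arg_step is None:
        return True               # no --step argument: run every step
    if arg_step not in STEPS:
        return False              # unknown --step argument
    return STEPS.index(current_step) >= STEPS.index(arg_step)
```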

Project save_file fails when target file has no extension

When gpt-pilot tries to save a file without extension (e.g. Dockerfile), the following error occurs:

Traceback (most recent call last):
File ".../gpt-pilot/pilot/main.py", line 35, in <module>
project.start()
File ".../gpt-pilot/pilot/helpers/Project.py", line 81, in start
self.developer.start_coding()
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 32, in start_coding
self.implement_task()
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 53, in implement_task
self.execute_task(convo_dev_task, task_steps, continue_development=True)
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 87, in execute_task
self.project.save_file(data)
File ".../gpt-pilot/pilot/helpers/Project.py", line 119, in save_file
data['name'] = data['path'].rsplit('/', 1)[1]
IndexError: list index out of range

This should be an easy fix, depending on why the second condition was added to the if statement in the line above 👽
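A sketch of one possible fix: os.path.basename handles paths with no directory separator (Dockerfile, Makefile), unlike rsplit('/', 1)[1], which raises IndexError for them.

```python
import os

def file_name_from_path(path: str) -> str:
    """Extract the file name even when '/' is absent from the path."""
    return os.path.basename(path)
```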

Can't run main.py - Windows installing instructions not clear

PS C:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot> & "c:/Users/yensi/Documents/CODING AND DEV/VISUAL STUDIO CODE/gpt-pilot/pilot-env/Scripts/Activate.ps1"
(pilot-env) PS C:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot> & "c:/Users/yensi/Documents/CODING AND DEV/VISUAL STUDIO CODE/gpt-pilot/pilot-env/Scripts/python.exe" "c:/Users/yensi/Documents/CODING AND DEV/VISUAL STUDIO CODE/gpt-pilot/pilot/main.py"
Traceback (most recent call last):
File "c:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot\pilot\main.py", line 31, in <module>
args = init()
^^^^^^
File "c:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot\pilot\main.py", line 17, in init
create_database()
File "c:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot\pilot\database\database.py", line 396, in create_database
conn = psycopg2.connect(
^^^^^^^^^^^^^^^^^
File "C:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot\pilot-env\Lib\site-packages\psycopg2\__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: Connection refused (0x0000274D/10061)
Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused (0x0000274D/10061)
Is the server running on that host and accepting TCP/IP connections?

(pilot-env) PS C:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot>

[epic] Agent Routing

As a user
I want to start GPT Pilot with various types of initial prompts
So that I can build a new app, modify an existing app or debug an issue.

See also #73

Start a non-trivial project

A simple chat app with real time communication

Compatibility with Auto-GPT benchmarks - #73

Write the word 'Washington' to a .txt file

Interact with issue management - #83

Issue 89 is done

Add support for Vertex AI

If you are looking for a powerful and affordable platform for text generation, I highly recommend Vertex AI.

Vertex AI offers a variety of generative models, such as Text Bison, Chat Bison, Code Generation, Code Chat, and Code Completion. These models are fine-tuned for code generation, code chat, and code completion.

The pricing for Vertex AI generative models is very reasonable compared to OpenAI. You only pay for the input and output characters that you use, and the price per 1,000 characters is $0.0005 for most models. See Vertex AI Pricing.

Vertex AI Code Models

[Feature Request] Support InternLM

Dear gpt-pilot developer,

Greetings! I am vansinhu, a community developer and volunteer at InternLM. Your work has been immensely beneficial to me, and I believe it can be effectively utilized in InternLM as well. You are welcome to join our Discord: https://discord.gg/gF9ezcmtM3 . I hope to get in touch with you.

Best regards,
vansinhu

There was a problem with request to openai API

I created a new API key

and added it to my .env file.

There was a problem with request to openai API:
API responded with status code: 404. Response text: {
    "error": {
        "message": "The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.",
        "type": "invalid_request_error",
        "param": null,
        "code": "model_not_found"
    }
}

[epic] Need to interact with GPT after all steps are "DONE"

When re-running gpt-pilot on a project with all steps "DONE" I'm prompted with:

How did GPT Pilot do? Were you able to create any app that works? Please write any feedback you have or just press ENTER to exit:

I would prefer to be able to interact further with the AI:

  • Dump the final summary to {workspace}/.gpt-pilot.yml or .md. #238
  • Edit the final summary directly or by conversation
  • #91 Choose to speak with Architect/ProductOwner/TechLead
  • Change, remove or add extra requirements/stories - see #83
  • Ask questions about the code - failing tests, integration issues between front/back-ends.
  • #83 List open issues from Jira/Github etc & work on them

Error while installation

this command:

pip install -r requirements.txt

throws an error:

Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
/data/gpt-pilot/pilot-env/lib/python3.11/site-packages/setuptools/config/setupcfg.py:515: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
running egg_info
creating /tmp/pip-pip-egg-info-x2xgedwt/psycopg2.egg-info
writing /tmp/pip-pip-egg-info-x2xgedwt/psycopg2.egg-info/PKG-INFO
writing dependency_links to /tmp/pip-pip-egg-info-x2xgedwt/psycopg2.egg-info/dependency_links.txt
writing top-level names to /tmp/pip-pip-egg-info-x2xgedwt/psycopg2.egg-info/top_level.txt
writing manifest file '/tmp/pip-pip-egg-info-x2xgedwt/psycopg2.egg-info/SOURCES.txt'

  Error: pg_config executable not found.

  pg_config is required to build psycopg2 from source.  Please add the directory
  containing pg_config to the $PATH or specify the full executable path with the
  option:

      python setup.py build_ext --pg-config /path/to/pg_config build ...

  or with the pg_config option in 'setup.cfg'.

  If you prefer to avoid building psycopg2 from source, please install the PyPI
  'psycopg2-binary' package instead.

  For further information please check the 'doc/src/install.rst' file (also at
  <https://www.psycopg.org/docs/install.html>).

  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

ร— Encountered error while generating package metadata.
โ•ฐโ”€> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

List/Select/Clone/Extend projects

Wondering if I can make a feature request? Since there is a database in play, will it be possible to:

  1. List current projects
  2. Select a project
  3. Clone a project
  4. Add a feature to requested project

This way, a project could be developed from simple to complex in phases.

Also, for the existing main.py: for "Do you want to try make the same request again? If yes, just press ENTER. Otherwise, type 'no'.", perhaps add a flag to continuously retry until success, instead of prompting the user to hit Enter every time.

AttributeError: module 'os' has no attribute 'setsid' on Windows 10

When it's trying to run the command following error occurs:
Traceback (most recent call last):
File "main.py", line 35, in <module>
project.start()
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\Project.py", line 81, in start
self.developer.start_coding()
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\agents\Developer.py", line 32, in start_coding
self.implement_task()
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\agents\Developer.py", line 53, in implement_task
self.execute_task(convo_dev_task, task_steps, continue_development=True)
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\agents\Developer.py", line 71, in execute_task
run_command_until_success(data['command'], data['timeout'], convo, additional_message=additional_message)
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\cli.py", line 188, in run_command_until_success
cli_response = execute_command(convo.agent.project, command, timeout, force)
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\cli.py", line 73, in execute_command
process = run_command(command, project.root_path, q, q_stderr, pid_container)
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\cli.py", line 32, in run_command
preexec_fn=os.setsid,
AttributeError: module 'os' has no attribute 'setsid'
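os.setsid exists only on POSIX, which is why the preexec_fn=os.setsid call fails on Windows 10. On Windows, the closest equivalent of starting the child in its own session is the CREATE_NEW_PROCESS_GROUP creation flag. A hedged sketch of a cross-platform fix (`process_group_kwargs` is a hypothetical helper, not from the gpt-pilot codebase):

```python
import os
import platform
import subprocess

def process_group_kwargs():
    """Popen keyword arguments that start the child in its own
    process group, so it can later be terminated as a unit."""
    if platform.system() == "Windows":
        # os.setsid does not exist on Windows; use a creation flag instead.
        return {"creationflags": subprocess.CREATE_NEW_PROCESS_GROUP}
    return {"preexec_fn": os.setsid}

# e.g. subprocess.Popen(command, cwd=root_path, **process_group_kwargs())
```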

ValueError: Unsupported DATABASE_TYPE: postgres

When setting this project up and following the documentation under the "How to start using gpt-pilot?" section, step 8 mentions setting up the environment variables in the .env file.

Here it mentions the following:

to change from SQLite to PostgreSQL in your .env just set DATABASE_TYPE=postgres

But this results in an error:

Traceback (most recent call last):
  File "/home/ramkrishna/Documents/experiments/gpt-pilot/pilot/db_init.py", line 5, in <module>
    drop_tables()
  File "/home/ramkrishna/Documents/experiments/gpt-pilot/pilot/database/database.py", line 385, in drop_tables
    raise ValueError(f"Unsupported DATABASE_TYPE: {DATABASE_TYPE}")
ValueError: Unsupported DATABASE_TYPE: postgres

The error seems to be due to this check in pilot/database/database.py:380

            if DATABASE_TYPE == "postgresql":
                sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}"'
            elif DATABASE_TYPE == "sqlite":
                sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}"'
            else:
                raise ValueError(f"Unsupported DATABASE_TYPE: {DATABASE_TYPE}")

The correct check should be:

            if DATABASE_TYPE == "postgres":
                sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}"'
            elif DATABASE_TYPE == "sqlite":
                sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}"'
            else:
                raise ValueError(f"Unsupported DATABASE_TYPE: {DATABASE_TYPE}")
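Since both branches build the same SQL anyway, another option (a hypothetical simplification, not the project's actual code) is to validate the value once and accept both spellings:

```python
# Accept both the .env spelling ("postgres") and the longer form.
SUPPORTED_DB_TYPES = {"postgres", "postgresql", "sqlite"}

def drop_table_sql(table_name, database_type):
    """Return the DROP TABLE statement, rejecting unknown backends."""
    if database_type.lower() not in SUPPORTED_DB_TYPES:
        raise ValueError(f"Unsupported DATABASE_TYPE: {database_type}")
    return f'DROP TABLE IF EXISTS "{table_name}"'
```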

Does not respond to no prompt

In instances where GPT Pilot asks whether the user would like to install a module or other item, typing 'no' is ignored and the program continues with the installation.

Unterminated string starting at: line 75 column 20 (char 11355)

Basically, every time gpt-pilot starts to generate code, it runs into an "Unterminated string" error.

Often this is after it has been building some of the files.

If you rerun the prompt you run into the same issue.

It has done this with gpt-4 and gpt-3.5-16k across multiple different project attempts.
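For context, the message itself comes from Python's json module: when an LLM response is cut off mid-string (for example at the model's token limit), json.loads fails with exactly this error. A standalone reproduction (not gpt-pilot code):

```python
import json

# A response truncated in the middle of a string value, as happens
# when generation stops at the token limit.
truncated = '{"files": [{"name": "app.js", "content": "const x = 1'

try:
    json.loads(truncated)
except json.JSONDecodeError as exc:
    # exc.msg is "Unterminated string starting at"; a caller could
    # catch this and re-request the response instead of crashing.
    print(f"{exc.msg}: line {exc.lineno} column {exc.colno} (char {exc.pos})")
```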
