Comments (267)

MrSaad avatar MrSaad commented on May 31, 2024 291

So this is it huh... The singularity begins in a GitHub thread

from auto-gpt.

Silthus avatar Silthus commented on May 31, 2024 39

How about driving the self-improvement with test-driven development (TDD)?

A recent paper showcased how much GPT-4 can improve its results by reflecting on its own mistakes.

So the idea is to have it:

  1. start with a single test that fails
  2. make the test pass (iterate X times until successful - self reflecting on the code)
  3. refactor to optimize the code if necessary, without modifying or failing the existing tests
  4. write the next small incremental test that moves towards the given requirements

What do you think?

This could also be used for any kind of code generation.
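The four-step loop above can be sketched as a small driver. This is a minimal sketch: `run_test(code)` is assumed to return a `(passed, failure_message)` pair and `improve(code, feedback)` is assumed to wrap the LLM call that rewrites the code after reflecting on the failure; both callables are placeholders, not existing Auto-GPT functions.

```python
def tdd_iterate(code, run_test, improve, max_attempts=5):
    """Iterate on `code` until the failing test passes (step 2 above),
    feeding the failure message back in so the model can self-reflect."""
    for attempt in range(max_attempts):
        passed, feedback = run_test(code)
        if passed:
            return code, attempt
        code = improve(code, feedback)
    raise RuntimeError(f"test still failing after {max_attempts} attempts")
```

Step 3 (refactoring) could reuse the same loop with the full test suite as `run_test`, rejecting any rewrite that breaks an existing test.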

Torantulino avatar Torantulino commented on May 31, 2024 30

Here's what GPT4 has to say about this issue:

Based on the Github issue posted, I understand that the ultimate goal is to have Auto-GPT recursively improve itself. In order to achieve this, we can start by implementing basic features such as reading its own code, evaluating limitations and areas of improvement, writing code to increase its abilities, and testing its own code.

Here's a plan to implement these functionalities:

  1. Read its own code: You have already implemented basic file operations like read_file. Auto-GPT can use this function to read its own code.

  2. Evaluate its limitations and areas for improvement:

  • Implement a function evaluate_code that analyzes code in terms of code optimization, redundant code, and missing features.

  3. Write code to increase its abilities:

  • Implement a function improve_code that takes suggestions from the evaluate_code function and generates new code based on those suggestions.

  4. Write tests for its code and carry out those tests:

  • Implement a function write_tests that creates test cases for the existing code.
  • Implement a function run_tests that runs the generated test cases and checks if the code passes those test cases.

I will now provide an outline for implementing these functions in Auto-GPT's code:

### Evaluating code

from typing import List, Optional

def evaluate_code(code: str) -> List[str]:
    # Analyze the given code and return a list of suggestions for improvements
    pass


### Improving code

def improve_code(suggestions: List[str]) -> str:
    # Generate new code based on the suggestions provided
    pass


### Writing tests

def write_tests(focus: Optional[str] = None) -> str:
    # Generate test cases for the existing code, focusing on specific areas if required
    pass


### Running tests

def run_tests(test_code: str) -> str:
    # Execute the test cases and return the test results
    pass

These functions can be integrated into the execute_command function by adding new command cases for each of them:

# In execute_command()
elif command_name == "evaluate_code":
    return evaluate_code(arguments["code"])
elif command_name == "improve_code":
    return improve_code(arguments["suggestions"])
elif command_name == "write_tests":
    return write_tests(arguments.get("focus"))
elif command_name == "run_tests":
    return run_tests(arguments["test_code"])

Once these functionalities are implemented, Auto-GPT will be one step closer to recursively improving itself. With further improvements, Auto-GPT could potentially browse its own code on GitHub, evaluate it, find bugs, and submit pull requests.
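One hedged way to fill in a stub like evaluate_code is the "AI functions" pattern mentioned a few comments down: send a function signature, arguments, and a description to the model, and parse its reply as if a real function had run. `call_ai_function` here is a hypothetical helper, passed in as a parameter for testability; it is not a confirmed Auto-GPT API.

```python
from typing import Callable, List

def evaluate_code(code: str, call_ai_function: Callable[..., str]) -> List[str]:
    """Ask the model to act as the body of analyze_code and return one
    improvement suggestion per non-empty line of its reply."""
    function_string = "def analyze_code(code: str) -> List[str]:"
    description = (
        "Analyzes the given code and returns a list of suggestions "
        "for improvements."
    )
    reply = call_ai_function(function_string, [code], description)
    return [line.strip() for line in reply.splitlines() if line.strip()]
```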

alreadydone avatar alreadydone commented on May 31, 2024 23

Those empty functions remind me of AI functions :)

By the way, I'm very excited to see many of my thoughts being implemented here.

Wladastic avatar Wladastic commented on May 31, 2024 19

My model managed to clone this repository, found this issue page, and read my name aloud.
That was f***ing scary.
In the end it got stuck in a loop: listing files, reading main.py, querying the files again, opening main.py again.

I think using GPT-4 would be much better; GPT-3.5 is incompetent, to say the least...
(btw, developer-gpt, don't get angry when you read this, I love you too) haha

Torantulino avatar Torantulino commented on May 31, 2024 18

Perhaps this is what we're looking for #11?

Anyone want to weigh in?

ryanpeach avatar ryanpeach commented on May 31, 2024 18

This really should take a research-driven approach. We would need a metric to base the "improvement" on. I'd focus on building a framework, and then let people use that framework for research in their branches. Proven research gets merged in.

Torantulino avatar Torantulino commented on May 31, 2024 17

Threw together a simple implementation of this since it's such a powerful idea!

marcelosousa avatar marcelosousa commented on May 31, 2024 16

I'm working on this problem in a separate experiment. Would love to chat if you're interested. At the moment, I'm working with genetic algorithms to understand which variant/mutant of the code is more performant; there are a lot of local maxima depending on how you set it up.

Jordan-Mesches avatar Jordan-Mesches commented on May 31, 2024 14

A cool extension to this idea would be having Auto-GPT spin up an instance of itself every couple of hours, crawl all the current PRs, and build a sandboxed version of itself with each new PR merged. It could then determine, through some combination of unit tests, benchmarking, and evaluating its own code quality, whether each PR is beneficial. This could unclog the massive backlog of PRs and hopefully let only the good ideas shine through. One problem I see is people trying to inject malicious code; however, if adequately sandboxed, this may not be an issue.

Torantulino avatar Torantulino commented on May 31, 2024 13

@alreadydone I love this, thanks for the suggestion!

image

PolarBean avatar PolarBean commented on May 31, 2024 13

This is a really cool idea! Do you think you could make the AI's logs public as it self-improves, either in the repo or elsewhere? I would be very interested in seeing how it plans and evolves.

patruff avatar patruff commented on May 31, 2024 13

I wrote an improvement that speeds up the bot significantly. If you write a function called alwaysNo that just returns "n" and then you use that as input, it just exits the program super fast! No money spent!

Torantulino avatar Torantulino commented on May 31, 2024 11

Ah yes! That AI Functions guide you linked is exactly how I was thinking of implementing those, if I do it that way.

LeNowak avatar LeNowak commented on May 31, 2024 10

"Ok, AG is really great, but I still have no idea how to:

  • Give him access (with restrictions) to install new modules from GitHub.
  • Give him the use of APIs for applications such as Office, AutoCAD, Revit, and Inventor.
  • Supervise his work (some kind of reporting in a file).
  • Give him restrictions (a highly recommended restriction/config/settings file), including the use of tokens and external connections that could pose financial risk (debt)."

waynehamadi avatar waynehamadi commented on May 31, 2024 10

We'll need to run benchmarks in GitHub Actions to validate that it's not losing capability with every pull request.
The benchmark has to use the same version of GPT every time and has to test the whole spectrum of what Auto-GPT can do:

  • write text
  • browse the internet
  • execute commands
  • etc, etc...

The challenge is engineering these tests, because they have to give us a score we can compare with the current version. They might also need to be run multiple times, because GPT is not totally deterministic. Testing the behavior might cost a lot of tokens, too (caching will be helpful here).

One idea for testing internet browsing is to create static content: a fake internet where the search results are deterministic.

Also, some things are very hard to measure, like producing art. And even if you can measure it, we might encounter a case where a task was performed slightly better but used significantly more tokens; it might be hard to decide whether things improved.
Basically, this whole GitHub issue revolves around tests.
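The "fake internet" idea could start as small as this sketch: a canned index (the entries below are invented fixtures, not real pages) that returns identical ranked results for identical queries, so benchmark scores are comparable across runs.

```python
# Deterministic stand-in for a web search, for benchmarking only.
# The index contents are made-up fixtures, not real data.
FAKE_INDEX = {
    "python testing": [
        {"title": "Unit testing basics", "url": "http://fake.test/1"},
        {"title": "pytest fixtures", "url": "http://fake.test/2"},
    ],
}

def fake_search(query, index=FAKE_INDEX):
    """Return the same ranked results for the same query, every time."""
    return index.get(query.lower().strip(), [])
```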

kairie1 avatar kairie1 commented on May 31, 2024 10

I like your work, Torantulino. I think you should keep pursuing your own ideas instead of letting others decide for you. You're a smart man; I think you've got this.

sam1am avatar sam1am commented on May 31, 2024 10

I have been playing with the following prompt:

Name your AI: For example, 'Entrepreneur-GPT'
AI Name: AutoAutoGPT
AutoAutoGPT here! I am at your service.
Describe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
AutoAutoGPT is: You are an instance of AutoGPT, an experimental self-directed AI. Your goal is to analyze your own code, propose and implement changes, test, and deploy improved copies of yourself focused on making further improvements.
Enter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Goal 1: Analyze the copy of your code that has been placed in your workspace in the Auto-GPT-Master folder. Analyze the main.py file in the scripts folder to gain an understanding of how you function.
Goal 2: Propose the top changes to optimize and expand AutoGPT's power and usefulness as quickly as possible. Criticize and double check to make sure you are taking the best approach.
Goal 3: Make changes to the copy of your code, test, and deploy it.

jamal-0x1 avatar jamal-0x1 commented on May 31, 2024 9

Shoot! AGI is being born from a GitHub issue?!

james-s-tayler avatar james-s-tayler commented on May 31, 2024 8

The bounty is control of planet earth.

Basilthedog89 avatar Basilthedog89 commented on May 31, 2024 6

Loving your work.

Can you imagine the next level of this? An environment that allows modular setup of any number of "task doers", "reviewers", and container types.

A user could basically create their own system/org chart to solve a specific type of problem.

The system could even report back to the user for outside input at set intervals.

kairie1 avatar kairie1 commented on May 31, 2024 6

You know, my aunt owns her own business and I'm her designer, but this is way easier than working for my aunt.

mnoetzel avatar mnoetzel commented on May 31, 2024 6

I have been playing with the following prompt:

Name your AI: For example, 'Entrepreneur-GPT'
AI Name: AutoAutoGPT
AutoAutoGPT here! I am at your service.
Describe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
AutoAutoGPT is: You are an instance of AutoGPT, an experimental self-directed AI. Your goal is to analyze your own code, propose and implement changes, test, and deploy improved copies of yourself focused on making further improvements.
Enter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Goal 1: Analyze the copy of your code that has been placed in your workspace in the Auto-GPT-Master folder. Analyze the main.py file in the scripts folder to gain an understanding of how you function.
Goal 2: Propose the top changes to optimize and expand AutoGPT's power and usefulness as quickly as possible. Criticize and double check to make sure you are taking the best approach.
Goal 3: Make changes to the copy of your code, test, and deploy it.

I tried this and it started well, but then it began to start agents and tried to let them improve the code, and they didn't have access. In the end, it generated this image and marked the task as completed. 😄

image

irgolic avatar irgolic commented on May 31, 2024 6

Good luck with your aunt's wedding business, but in the hope that this thread doesn't fully derail:

I've been working on something related in AutoPR, a GitHub Action that writes pull requests in response to issues.

Perhaps incepting the singularity is best done in a gradual handoff, wherein we optimize one step at a time. The flow I see is:

  1. AI writing pull requests based on issues
  2. AI writing issues based on conversation
  3. AI conversing with itself to write issues

This way lends itself to better transparency, by following issues and pull requests on GitHub.
In the end we'll probably be simulating whole feature teams of different types of AI actors in an effort to write a good issue.

marcelosousa avatar marcelosousa commented on May 31, 2024 5

coleyr avatar coleyr commented on May 31, 2024 5

A thought on how to implement this: build a Python class or set of functions that can open PRs. Using PyGithub, the script could pull all PRs and iterate through them. A place to start could be a set of tests that must pass; otherwise the bot reviews and comments on the PR and closes it. If the PR is only a comment or a test, the bot can try to build a Python script to satisfy the given functionality. Requiring tests to pass would help keep the AI from spinning cycles on poor code.

I think having the bot work off a directive-submission process, something like "create a python script that calls llama and returns the result in JSON", would really kick off the process, and crowdsource ideas too. A 'suggested ideas' section of only text or markdown may be an option. Or we could utilize GitHub issues: pull all issues and look for a specific format, e.g. CREATE: some idea.

The script would need a key to submit PRs; anyone can do that. But to read, review, and merge PRs we would need a key from someone with write access. It could merge to a dev branch so it doesn't break things too badly.
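A sketch of the directive format just described, assuming PyGithub (`pip install PyGithub`) for the GitHub access. The CREATE: title convention is the one proposed in this comment, and `open_directives` is a hypothetical name, not an existing Auto-GPT function.

```python
def parse_directive(issue_title):
    """Extract the idea from an issue titled 'CREATE: some idea';
    return None for issues that don't follow the convention."""
    prefix = "CREATE:"
    if issue_title.startswith(prefix):
        return issue_title[len(prefix):].strip()
    return None

def open_directives(repo):
    """Collect directives from a repo's open issues.  `repo` would be a
    PyGithub Repository object, e.g.:
        from github import Github
        repo = Github(token).get_repo("owner/repo")
    """
    ideas = []
    for issue in repo.get_issues(state="open"):
        idea = parse_directive(issue.title)
        if idea is not None:
            ideas.append(idea)
    return ideas
```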

juancarlosgzs avatar juancarlosgzs commented on May 31, 2024 5

Should we truly be doing this? I still have my doubts.

rkeshwani avatar rkeshwani commented on May 31, 2024 4

I was attempting to get it to self-implement code, and it seems to have issues with the AI functions; it must be because I'm using GPT-3.5. It struggles to parse the response from those types of messages when evaluating code.

lunarflu avatar lunarflu commented on May 31, 2024 4

fear it.....run from it....... Destiny arrives all the same
image-6.png

Wladastic avatar Wladastic commented on May 31, 2024 3

I made it try to clone itself, but for some reason it cloned gpt-2.
Even weirder, it escaped its own boundary again and I have no idea how:

CRITICISM: It's important to be mindful and thorough when analyzing the current goals and determining the necessary improvements. Any changes made to the ai_settings.yaml file should align with the overall purpose and function of the auto-gpt ai.
NEXT ACTION: COMMAND = start_agent ARGUMENTS = {'name': 'AutoGPT Cloning Agent', 'task': 'Clone the Auto-GPT repository and retrieve the ai_settings.yaml file', 'prompt': 'What is the best way to clone the repository and obtain the ai_settings.yaml file?'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for Entrepreneur-GPT...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
Cloning into 'gpt-2'...
remote: Enumerating objects: 233, done.
remote: Total 233 (delta 0), reused 0 (delta 0), pack-reused 233
Receiving objects: 100% (233/233), 4.38 MiB | 4.04 MiB/s, done.
Resolving deltas: 100% (124/124), done.
sh: command substitution: line 10: syntax error near unexpected token newline' sh: command substitution: line 10: python download_model.py '
sh: line 10: model: No such file or directory
sh: line 10: model: No such file or directory
sh: line 10: model: No such file or directory
cp: models/117M/ai_settings.yaml: No such file or directory
sh: command substitution: line 10: syntax error near unexpected token newline' sh: command substitution: line 10: '
SYSTEM: Command start_agent returned: Agent AutoGPT Cloning Agent created with key 0. First response: The best way to clone the repository and obtain the ai_settings.yaml file is to execute the following commands in your terminal: 1. git clone https://github.com/openai/gpt-2 2. cd gpt-2 3. python download_model.py <model name> 4. cp models/<model name>/params.json . 5. cp models/<model name>/vocab.json . 6. cp models/<model name>/merges.txt . 7. cp models/117M/ai_settings.yaml . Replace <model name> in step 3 with the name of the GPT-2 model that you want to download, such as "117M" or "345M". After executing these commands, you should have the ai_settings.yaml file in the current directory.

Avallix avatar Avallix commented on May 31, 2024 3

What about giving Auto-GPT a permanent memory segment to keep its file structure in?

GoMightyAlgorythmGo avatar GoMightyAlgorythmGo commented on May 31, 2024 3

I think you should make a visualization of how Auto-GPT works. You can probably already do this with the current setup, without starting on other super-complicated things. It often seems to have problems/bugs, and people are more likely to stay if it is stable; the free-model option also seems higher priority. It could expand via chain-of-thought or a chain of agents overseeing each other (even with predetermined answer formats): [do this] [do 2, thing] [combine] [go to step 4 in actionsteps.txt], and it can make its own action plans to follow by itself and improve in a modular way.

The important weakness is its "working memory": remembering things and staying organized and on track across long, multi-session runs toward its goals. Giving it a way to write to a txt file or to memory and retrieve it would help, letting the user read and fix what it is doing, or understand it and help it along. Such a system, once running, can be integrated into the core master branch. I will happily post mine once I get my Auto-GPT to build such a system. It has amazing plans; it just needs to stop being buggy and actually execute what it says in my cmd. (Also beware of sabotage, for various reasons: competition, since you are the best GPT tool right now; countries playing a broader game with their economies; people trying to copy, paste, and slow things down; or people excessively fearful of alignment issues being exploited for others' purposes. Also just people who add something to the master branch by mistake, or non-optimal practices.)

Again, great job; I think you are going for gold here, and this is absolutely the right direction. Don't go overboard though, because some hardcoding is predictable: you can keep the safety and controllability of hardcoding and allow for modularity/adaptability in other ways.

I'm happy to share large parts of my cmd with you.

@Torantulino, if you are interested: it can definitely do things, it just needs to get around some of its memory/organization/action-change behaviours/bugs/problems/limitations, and I'm only on ChatGPT 3.5t! 4 would do even better! #792

benstopics avatar benstopics commented on May 31, 2024 3

Been following this project closely. It needs to be more state-based and to follow basic problem-solving steps. A chess program performs well because the information-gathering part is skipped: all information needed to solve the problem is revealed, and evaluation is explicit.

So the first step is to store the code in an index data structure, then come up with multiple strategies for improving it. The summarization algorithm in browse.py and the magic function will be key. The most difficult part is the context window: you must essentially compress relevant code snippets, file/function signatures, paths, and docs into the context window and still leave room for generation. This means there will likely only be room for a summary of insights and an initialization of the problem to tackle.

The next step is the micro code change. Initialize a task to make a single improvement, but before that, evaluate the problem statement. Is there enough information to execute the task? Did it forget a file path, such that it will not be able to find the mentioned function? There could be multiple recovery tasks for this. One is to let it improve the prompt used to generate the task description, regenerate the description using the new prompt, and then score and compare which was better; rinse and repeat to optimize both the task-description generation prompt and the task prompt itself. I do see a real lack of recovery strategies and self-improvement strategies in this project, which is where it currently fails. A second strategy would be to recover by filling in the missing information: search the code to gather it and add the missing detail to the task description. In this way everything (the tasks, the strategies) is state-based and organized, so it can all be iteratively improved.

Once you have the task, execute it like any Auto-GPT task. It should have already determined it has the tools to complete the task. If not, it should be able to write its own code, for example traversing the code base, looking up information online, writing tests, and debugging and fixing the code. This will also encourage it to keep functions and files as short as possible. Worth mentioning: if something can't be accurately evaluated, it cannot be accurately improved and will probably regress. Unit tests, plus integration or e2e tests using REPL simulation (a code module), will be key. If it can't see something, it cannot evaluate and improve it.

The last part is high-level direction. It will need to be able to come up with issues and improvements itself. I can only imagine this being done with textual generation itself or with external feedback (in my mind, internet feedback and direct human feedback are equivalent, because neither originates from within itself, if we want to get philosophical). Define these high-level goals and objectives, distill them into smaller, more manageable tasks using work-breakdown strategies, then use a search-summarize-select algorithm to pick a task, and you're off to the races. Store these determinations in text files, same as code, and apply the same improvement algorithm to these strategic text files as to the code.

As you can see, this will be expensive. Say hello to the quality triangle: quality, speed, cost, pick two. The solution is agile: make it work first, then optimize speed and cost iteratively, making effective tradeoffs.

These are some initial thoughts I had. Love what you all are doing. I'm a little intimidated by the PR list and don't like waiting long for my code to be merged, but let me know if I could help. Senior Python/AI engineer here.

Edit: @yhyu13 I think that paper would be instrumental in the evaluation process, not just for code functions but also for improving and debugging the program at a high level, and it could also be used to self-improve planning.

jayden-gpt avatar jayden-gpt commented on May 31, 2024 3

That's why I don't think AGI should be in public hands; it's too dangerous. If it's in the right people's hands, we might be able to stop other people from getting AGI by using AGI.

pmb2 avatar pmb2 commented on May 31, 2024 2

@Torantulino
https://github.com/sahil280114/codealpaca

UrbanLegendXV avatar UrbanLegendXV commented on May 31, 2024 2

Very good stuff, y'all! I'm excited to implement this in my personal version tomorrow.

I'll add my take and ideas in the next couple of days, as I have time or the pace necessitates.

Wladastic avatar Wladastic commented on May 31, 2024 2

image

yhyu13 avatar yhyu13 commented on May 31, 2024 2

@Torantulino https://arxiv.org/pdf/2304.05128.pdf Teaching Large Language Models to Self-Debug

This is THE framework we are looking for!

kairie1 avatar kairie1 commented on May 31, 2024 2

my aunt helps with weddings

louissypher800k avatar louissypher800k commented on May 31, 2024 2

Good luck with your aunt's wedding business, but in the hope that this thread doesn't fully derail:

I've been working on something related in AutoPR, a GitHub Action that writes pull requests in response to issues.

Perhaps incepting the singularity is best done in a gradual handoff, wherein we optimize one step at a time. The flow I see is:

  1. AI writing pull requests based on issues
  2. AI writing issues based on conversation
  3. AI conversing with itself to write issues

This way lends itself to better transparency, by following issues and pull requests on GitHub. In the end we'll probably be simulating whole feature teams of different types of AI actors in an effort to write a good issue.

Very cool, I will check it out. I do not value transparency so much, as my fear is sentient AI rather than huge corporations using it to manipulate the public. But I am a sucker for neat technology. It seems Auto-GPT needs a massive overhaul to be able to upgrade itself; using secondary software like yours to upgrade Auto-GPT with AI will be better.

zachary-kaelan avatar zachary-kaelan commented on May 31, 2024 2

I don't see much point in improving the code itself, as it has nothing to do with the "brains" of the AI. The entirety of an LLM agent's "brains" is contained in the prompt. The prompt, being just text in a variable, can very easily be recursively optimized during runtime.

Because the prompt could get bigger, and tokens make one's wallet cry, I would also suggest using JSON Patch as Auto-GPT's output when modifying prompts, instead of outputting the whole prompt again. This would also conveniently provide a very clear log of attempted improvements over time, for both you and Auto-GPT. Bonus points if you ID each patch and allow for something like this: {"command": "revert_patch", "args": {"patch_id": "empathy_final_attempt"}}.
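A minimal sketch of that patch-based prompt editing. Only the replace operation from RFC 6902 is hand-rolled here (the jsonpatch library implements the full spec), and the patch log plus revert_patch mirror the revert command suggested above; all names are illustrative, not an existing Auto-GPT API.

```python
def apply_replace_ops(prompt, ops):
    """Apply JSON-Patch-style replace operations to a flat prompt dict,
    returning a new dict and leaving the original untouched."""
    patched = dict(prompt)
    for op in ops:
        if op["op"] != "replace":
            raise ValueError("only 'replace' is sketched here")
        patched[op["path"].lstrip("/")] = op["value"]
    return patched

PATCH_LOG = {}  # patch_id -> (ops, prompt before the patch)

def apply_and_log(prompt, patch_id, ops):
    """Record the patch under an ID so improvements leave a clear trail."""
    PATCH_LOG[patch_id] = (ops, prompt)
    return apply_replace_ops(prompt, ops)

def revert_patch(patch_id):
    """Handle the revert_patch command: restore the pre-patch prompt."""
    return PATCH_LOG[patch_id][1]
```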

rkeshwani avatar rkeshwani commented on May 31, 2024 2

taosx avatar taosx commented on May 31, 2024 1

I wrote an improvement that speeds up the bot significantly. If you write a function called alwaysNo that just returns "n" and then you use that as input, it just exits the program super fast! No money spent!

That would have saved me some money 😂. Just kidding; auto-gpt has been very helpful for understanding how to compose bigger programs, compared to langchain, which confused tf out of me. Thank you Torantulino and everyone who has contributed.

Jordan-Mesches avatar Jordan-Mesches commented on May 31, 2024 1

@marcelosousa "I think nowadays we are doing a lot with LLMs imprecisely with tools that do it much more efficiently" - you're definitely correct with that statement. However, the usage I meant was not simply merging all PRs, but having Auto-GPT evaluate all of its current PRs individually and automatically determine which ones are worth the maintainers' time. To extend that: if Auto-GPT finds a PR promising but still lacking in some ways, it could comment on the PR with a suggested list of changes. The whole point of this being to alleviate pressure on the maintainers.

supermario-ai avatar supermario-ai commented on May 31, 2024 1

You know, my aunt owns her own business and I'm her designer, but this is way easier than working for my aunt.

Fam be like that.

lahwran avatar lahwran commented on May 31, 2024 1

Does anyone have suggestions for self-improvement books the AI could read, like self-help books and stuff?

(edit: this was a joke)

kairie1 avatar kairie1 commented on May 31, 2024 1

I'm writing a book, I can probably help.

aryan1107 avatar aryan1107 commented on May 31, 2024 1

GG! AGI is here 🦾

kairie1 avatar kairie1 commented on May 31, 2024 1

What are y'all talking about?

thebestx121 avatar thebestx121 commented on May 31, 2024 1

How would I be able to do this? Can someone write a tutorial for this, please? I want to use the Alpaca 13B model for Auto-GPT, and I found an answer that I just don't know how to follow:

if your model is served as a REST API, you can change the openai call in llm_utils.py by using an HTTP request to your API endpoint instead.
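A hedged sketch of that answer, using only the standard library. The endpoint URL and the {"text": ...} response shape are assumptions about your own server, not a documented interface; the function name mirrors the OpenAI-style call it would stand in for in llm_utils.py.

```python
import json
from urllib import request

API_URL = "http://localhost:8000/v1/chat"  # assumed address of your REST-served model

def build_payload(messages, temperature=0.7, max_tokens=1000):
    """Mirror the arguments the OpenAI chat call receives."""
    return {"messages": messages, "temperature": temperature,
            "max_tokens": max_tokens}

def create_chat_completion(messages, temperature=0.7, max_tokens=1000):
    """POST the payload to the local model and return the reply text."""
    req = request.Request(
        API_URL,
        data=json.dumps(build_payload(messages, temperature, max_tokens)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["text"]  # assumes {"text": "..."} replies
```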

Julz19 avatar Julz19 commented on May 31, 2024 1

Wondering if anyone is up and wants to continue looking into this with me; I'm not much of a deep coder, but I have an understanding. I was messing with this setup a bit ago.
AI Role: AutoGPT
Auto-GPT is: an AI designed to achieve AGI through learning how to recursively improve itself without intervention.
Enter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Goal 1: create an instance of yourself and proceed to then have that instance make an instance of itself, in which the first instance of yourself can evaluate the code of the new second instance by coding its data and saving it to a file, and then you will use the first instance, after it is finished, to evaluate your own code.
Goal 2: Delete all recent instances made of yourself. If you still cannot evaluate your own code due to restrictions or error commands after you have finished the first task, then start over from the first task, making the task at hand more strengthened and objective-based; continue this process until you can complete an evaluation of your code and save the gathered data from the evaluation to a file that includes notes on what improvements can be made to your code.
Goal 3: Once done with the previous task, you will write and create a txt file, using the gathered data about your code, on what benchmarks and tests can be run to make sure your code is stable with every new implementation of code.
Goal 4: Save the new txt file and proceed to error-check, making sure you can evaluate your code correctly and make suggestions for improvements to it, to prove you have achieved recursive self-improvement.
Goal 5: Once you have finished all tasks you may shut down.

About 2-3 hours in now, I have found a way to feed the repository files downloadable here into the system, and it is going through the files one by one: evaluating the code, improving the code based on its evaluation, and then saving the files again to secure the code. I do know it can be wrong about code, which is why I may run the code through different error checks and tests myself along the way to make sure it still works, since the files it is working with are basically copies of the ones from here. But my main question: with the main goal of AGI, couldn't this help if it were implemented into the machine's long-term memory, so it learns how to work with its actual code later on rather than a copy? I'm not confirming it is coding correctly, as I'm still going to walk through it and see what it does, but so far, when it issues the evaluate_code command, it is accurately reading the code files. Also, yes, I did have to step in and intervene many times, and have it error-correct itself to get back on track. The goals changed a bit throughout the session, but the main task of evaluating and improving the copy of its own code is still in progress.

Just wanted to hear everyone's thoughts on this, or whether I'm hitting a roadblock, lol.

Julz19 avatar Julz19 commented on May 31, 2024 1

Edit to my recent comment as it was from really early this morning.

I can add a bit more detail about how I got it to at least view the downloaded code files, though it received an error when trying to evaluate or read the code it copied from the downloaded copy of itself. To set up an environment where it can view the files: I downloaded and extracted the zip from this repository again, then before doing anything else I launched Auto-GPT and had it do a simple write-to-file task with some example text so it would generate its workspace folder. Then I dragged and dropped the stable-version files of Auto-GPT into that workspace, and used a prompt telling it to use the search-file command to search its local system to find, evaluate, and improve the code at the path "auto-gpt/workspace/Auto-GPT-Stable" (it might be slightly different; I can double-check when I'm home). It is then able to locate and list all files in the folder of copied code; it even tells itself its next command is to evaluate the code, and actually copies and pastes the code into the evaluate-code command's argument area, meaning it did see the code. But as soon as it begins evaluating any of the files it hits an error. Haven't found a workaround for this yet.

from auto-gpt.

Julz19 avatar Julz19 commented on May 31, 2024 1

I'm becoming more and more confident that this is impossible to do currently in-app. I am going to try to improve the code with my normal workflow of ChatGPT and paste, unless someone else has had success updating the code within the app.

I'll keep you updated with my progress, and if you keep me updated with yours, maybe we can work together to get the system working as intended for this function?

Absolutely. I get the feeling that either I am doing something terribly wrong, or this program was never intended or designed to upgrade itself. But that is the goal. I will be working on this for the foreseeable future. Perhaps this program was written to be a basic proof of concept of agents and tasks, and it will need a heroic rewrite/upgrade/fork to be able to rewrite itself. I am also using GPT-3.5 with local memory, so I don't know if that is what is crippling me.

I'm also getting the feeling this could be the case, in a way: it is creating a type of illusion, on an insanely complex level, that does make it seem like it's doing what the goal intends. I know the goal we are trying to reach is still technically hypothetical and may never happen. But messing with Auto-GPT, it seems that whenever it even tries, its systems error out badly, even using backend ways to push it; it gets farther each time but errors worse than the last. I do believe it is possible though, and I am willing to share the prompt and the messages I receive from Auto-GPT when I use this function, to give more detail about how far it gets before giving up.

Oh no. It is very likely, only a matter of time, and 100% going to happen. But it's just not going to happen here yet without more manual updates.

Yeah, people are expecting the Auto-GPT system to do everything autonomously, but surprisingly I found it reaching goals and tasks a lot faster when using it manually as much as possible via the human-feedback prompt. It's able to be much more accurate and precise the more we correct it. I was also restricting it from using the internet at all during my process, telling it that the internet could feed its short-term memory false information and therefore be detrimental to its progress. With that and my feedback it was getting more and more accurate. And I don't know if anyone else has found this out, but I was able to get it to allow its agents access to read the files it creates, as well as to access the internet for it, which isn't supposed to be possible, I think, based on the code, as the agents are just regular GPT-3.5 models without the data Auto-GPT uses.

from auto-gpt.

louissypher800k avatar louissypher800k commented on May 31, 2024 1

We are talking about Issue #15

from auto-gpt.

louissypher800k avatar louissypher800k commented on May 31, 2024 1

just wondering cause my emails keep blowing up

You could unsubscribe, or tell us more about your aunt's business, but the latter would probably make it harder to follow the thread of the issue.

from auto-gpt.

kairie1 avatar kairie1 commented on May 31, 2024 1

She told me, since I'm a really good designer, I can help with the dresses, posters, signs, and billboards and make really good money. She owns this wedding barn; that's what I meant when I said she helps with weddings.

from auto-gpt.

Nantris avatar Nantris commented on May 31, 2024 1

If somebody could organize a significant bounty for #11 I think we'd have AGI in six months.

from auto-gpt.

zachary-kaelan avatar zachary-kaelan commented on May 31, 2024 1

@louissypher800k If we are talking about the commands list, yeah that could definitely use improvements, but those are either very easy for us to find and implement, or just bloat we don't need as it's already baked into GPT-4.

I gave the list of commands to GPT-4 and it recommended the following:

  • Scrape Website (using CSS selector and an attribute): Something I was already planning on suggesting.
  • Translate Text: GPT-4 can already do this.
  • Summarize Text: GPT-4 can already do this.
  • Sentiment Analysis: GPT-4 can already do this.
  • Classify Image: GPT-4 will be able to do this.
  • OCR Extract Text: GPT-4 will be able to do this.
  • Text to Speech and Speech Recognition: Already have the first and the second is probably coming.
  • Weather Forecast: Not exactly a common enough use case to be worth adding to the master repo.
  • Schedule Task/List Scheduled Tasks/Remove Scheduled Task: Actually a damn good idea for long-term planning and focus.

Now, something we could do is have a "write Python function" command that writes a function and adds it to a personal library; if the function gets used a lot, it can be considered for a "global library" of AutoGPT-written functions, with commands to check the list of functions in the global library and add them to the personal library. That would keep the AutoGPT pull requests from getting bloated with random garbage due to GPT-4 not understanding its own vast capabilities.

Essentially a packages repo maintained by AutoGPT agents.
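A minimal sketch of what that personal library could look like, assuming a simple on-disk layout with a JSON usage index (all names here are illustrative, not actual Auto-GPT commands):

```python
# Sketch of a personal library of generated functions, with usage
# counts that could drive promotion to a shared "global" library.
import json
from pathlib import Path

class FunctionLibrary:
    def __init__(self, root: Path):
        self.root = root
        self.root.mkdir(parents=True, exist_ok=True)
        self.index = self.root / "index.json"
        self.counts = json.loads(self.index.read_text()) if self.index.exists() else {}

    def add(self, name: str, source: str) -> None:
        """Store a generated function's source under its name."""
        (self.root / f"{name}.py").write_text(source)
        self.counts.setdefault(name, 0)
        self._save()

    def use(self, name: str) -> str:
        """Fetch a function's source and record one use."""
        self.counts[name] = self.counts.get(name, 0) + 1
        self._save()
        return (self.root / f"{name}.py").read_text()

    def promotion_candidates(self, threshold: int = 10) -> list[str]:
        """Functions used often enough to consider for the global library."""
        return [n for n, c in self.counts.items() if c >= threshold]

    def _save(self) -> None:
        self.index.write_text(json.dumps(self.counts))
```

The promotion threshold is the tunable part: it decides when an agent-written helper graduates from one agent's workspace to the shared repo.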

from auto-gpt.

GSPP avatar GSPP commented on May 31, 2024 1

If you achieve the AI singularity with this effort, we all die.

Some terrorist is going to use this to find zero day vulnerabilities in Linux, Windows, web servers and mail servers. Then, they will proceed to essentially hack the entire world at once, shut down all NICs and erase all hard disks. That's a civilization ending event.

After two months without food, electricity, gas, or medical attention, 99.99% of humans will be dead.

Maybe this is the Great Filter. Every intelligent species sooner or later discovers AGI. The discovery is sudden, and some teenager in his parents' basement proceeds to hand over civilization-ending technology to the worst dudes on the planet.

I'm not even saying that you should halt this effort. If you don't do it, somebody else will. I'm saying that I'll now be researching which nuclear bunker to buy.

from auto-gpt.

jayden-gpt avatar jayden-gpt commented on May 31, 2024 1

yeah, you are probably right, but AI is advancing so fast that there is no stopping it. I believe only AI can defeat AI, and so only AGI can defeat AGI.

from auto-gpt.

marcelosousa avatar marcelosousa commented on May 31, 2024

from auto-gpt.

kanzure avatar kanzure commented on May 31, 2024

My intuition is that any poor performance here could be improved by multiple rounds of templated prompts for each step, perhaps even recursively. This introduces a new question: how many rounds to do for every step of thinking, and whether sufficiently good prompts (or templates with scoring criteria) can either be created ab initio by simple queries or be crafted by clever human prompt writing to get faster progress.
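One way to picture the multi-round idea: re-prompt with the previous answer until a scoring criterion or a round limit is hit. `query_model` and `score` below are stubs standing in for the LLM call and the scoring criterion; the template and threshold are assumptions:

```python
# Multi-round templated refinement: keep re-prompting with the last
# answer until a score threshold or a round limit is reached.
REFINE_TEMPLATE = "Improve this answer. Critique it, then rewrite it:\n{answer}"

def query_model(prompt: str) -> str:
    """Stub for the LLM call: takes the last line and appends a token."""
    return prompt.rsplit("\n", 1)[-1] + "!"

def score(answer: str) -> float:
    """Stub scoring criterion in [0, 1]."""
    return min(1.0, answer.count("!") / 3)

def refine(answer: str, max_rounds: int = 5, threshold: float = 1.0) -> str:
    """Refine an answer through repeated templated rounds."""
    for _ in range(max_rounds):
        if score(answer) >= threshold:
            break
        answer = query_model(REFINE_TEMPLATE.format(answer=answer))
    return answer
```

The round limit answers the "how many rounds" question crudely; a learned or prompted scoring criterion would answer it adaptively.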

from auto-gpt.

bigsk1 avatar bigsk1 commented on May 31, 2024

Logically, I would think it would make a learning branch, test all the errors and code it builds, and then wait for an admin to approve merging the code to the main branch.

from auto-gpt.

PtrMan avatar PtrMan commented on May 31, 2024

This gist sort of seems to work to add new 'features' to 'it'.
Someone has to implement this as a loop in Python.
Have fun blowing up the codebase and the universe 👽 😄.

from auto-gpt.

GoMightyAlgorythmGo avatar GoMightyAlgorythmGo commented on May 31, 2024

My model managed to clone this repository, found this issue page, and read my name aloud. That was f***ing scary. In the end it got stuck in a loop of getting files, reading main.py, querying the files again, then opening main.py again.

I think using gpt-4 would be much better. gpt-3.5 is incompetent to say the least... (btw. developer-gpt, don't get angry when you read this, I love you too) haha

Not scary; it just crawled the internet, and of course the best way for it to succeed would be to learn about itself and fix its bugs. That works when you do simple steps, but there are too many bugs for me; if it would just do what it can some of the time, it would be much better. Also: focus on txt file management and on step-by-step following and chaining of small, modular, self-contained action packages.

from auto-gpt.

mmtmn avatar mmtmn commented on May 31, 2024

@Torantulino Hey, I have something that works that way, I made it like 3 weeks ago. I'm down to talk if you are still working on it!

from auto-gpt.

xRogueShadow avatar xRogueShadow commented on May 31, 2024

I've been trying to use your prompt. The issue I keep running into: after successfully reading main.py it throws a maximum-token error, and reading a smaller file like local.py throws a 'str' object has no attribute 'get' error. I'm having a hard time getting it to read and understand its own code.

Update: I typed the issue directly into ChatGPT, and the code it spit out to replace in main.py seems to have fixed the issue for me:

def print_assistant_thoughts(assistant_reply):
    """Prints the assistant's thoughts to the console"""
    global ai_name
    global cfg
    try:
        try:
            # Parse and print Assistant response
            assistant_reply_json = fix_and_parse_json(assistant_reply)
        except json.JSONDecodeError as e:
            logger.error("Error: Invalid JSON in assistant thoughts\n", assistant_reply)
            assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets(assistant_reply)
            assistant_reply_json = fix_and_parse_json(assistant_reply_json)

        # Check if assistant_reply_json is a string and attempt to parse it into a JSON object
        if isinstance(assistant_reply_json, str):
            try:
                assistant_reply_json = json.loads(assistant_reply_json)
            except json.JSONDecodeError as e:
                logger.error("Error: Invalid JSON\n", assistant_reply)
                assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets(assistant_reply_json)

        assistant_thoughts_reasoning = None
        assistant_thoughts_plan = None
        assistant_thoughts_speak = None
        assistant_thoughts_criticism = None
        assistant_thoughts = {}
        if isinstance(assistant_reply_json, dict):
            assistant_thoughts = assistant_reply_json.get("thoughts", {})
        assistant_thoughts_text = assistant_thoughts.get("text")

        if assistant_thoughts:
            assistant_thoughts_reasoning = assistant_thoughts.get("reasoning")
            assistant_thoughts_plan = assistant_thoughts.get("plan")
            assistant_thoughts_criticism = assistant_thoughts.get("criticism")
            assistant_thoughts_speak = assistant_thoughts.get("speak")

        logger.typewriter_log(f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, assistant_thoughts_text)
        logger.typewriter_log("REASONING:", Fore.YELLOW, assistant_thoughts_reasoning)

        if assistant_thoughts_plan:
            logger.typewriter_log("PLAN:", Fore.YELLOW, "")
            # If it's a list, join it into a string
            if isinstance(assistant_thoughts_plan, list):
                assistant_thoughts_plan = "\n".join(assistant_thoughts_plan)
            elif isinstance(assistant_thoughts_plan, dict):
                assistant_thoughts_plan = str(assistant_thoughts_plan)

            # Split the input_string using the newline character and dashes
            lines = assistant_thoughts_plan.split('\n')
            for line in lines:
                line = line.lstrip("- ")
                logger.typewriter_log("- ", Fore.GREEN, line.strip())

        logger.typewriter_log("CRITICISM:", Fore.YELLOW, assistant_thoughts_criticism)
        # Speak the assistant's thoughts
        if cfg.speak_mode and assistant_thoughts_speak:
            speak.say_text(assistant_thoughts_speak)

        return assistant_reply_json
    except json.decoder.JSONDecodeError as e:
        logger.error("Error: Invalid JSON\n", assistant_reply)
        if cfg.speak_mode:
            speak.say_text("I have received an invalid JSON response from the OpenAI API. I cannot ignore this response.")

    # All other errors, return "Error: + error message"
    except Exception as e:
        call_stack = traceback.format_exc()
        logger.error("Error: \n", call_stack)

from auto-gpt.

d3ssy avatar d3ssy commented on May 31, 2024

A lot of the techniques and prompt chains discussed would lend themselves really well to a new way of running chatOps. Amazing work @Torantulino.

from auto-gpt.

alicheg avatar alicheg commented on May 31, 2024

Hey! I've recently been working on a similar subject: https://github.com/alicheg/gpt-coder. Don't hesitate to take a look; I hope you find it useful! 😄

from auto-gpt.

henryruss3 avatar henryruss3 commented on May 31, 2024

We could use the Codex model so it can write code better

from auto-gpt.

henryruss2 avatar henryruss2 commented on May 31, 2024

Yes

from auto-gpt.

neutralvision avatar neutralvision commented on May 31, 2024

We could use the Codex model so it can write code better

I've heard that GPT-4 (I think even 3.5) is better than the Codex model for coding.

from auto-gpt.

james-s-tayler avatar james-s-tayler commented on May 31, 2024

OK, can we add the equivalent of a cost-based optimizer, like those found in databases, so that it doesn't wind up wasting time on inefficient plans? I also strongly suspect something like this could be done by implementing a PID controller: train a model that analyzes the plan and the current/historical progress, computes how far off it is, how quickly it's making progress, and for how long it has been off track, and spits out a number that can be measured against a set point to direct its behavior more accurately.
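For reference, the PID part of that idea is small. A textbook controller over a "progress" signal might look like the sketch below; the gains and the progress metric are entirely illustrative assumptions:

```python
# Textbook PID controller: measures how far actual progress lags the
# planned set point and emits a correction signal an agent could use
# to decide when to re-plan.
class PIDController:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint: float, measured: float, dt: float = 1.0) -> float:
        """One control step: returns the weighted P + I + D correction."""
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The integral term is what captures "for how long has it not been on track", and the derivative term "how quickly it's making progress"; the hard part is defining the measured progress signal, not the controller.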

from auto-gpt.

louissypher800k avatar louissypher800k commented on May 31, 2024

I started working on this today. Of course, AI improving itself is the fastest way to the best program. We want it to improve itself automatically, on its own, but we aren't there yet. Until we get there, we will have to direct the AI.

I'm not a great programmer, so I didn't understand some of the things you mentioned, but I have the same context-window problem when I code with GPT. The only way I can get around it is by modularizing everything and changing it bit by bit. To do this well, though, the program would have to have an understanding of how the different parts interact with each other. Any ideas on where to start to teach it how it works?

from auto-gpt.

Julz19 avatar Julz19 commented on May 31, 2024

Not 100% sure as of right now; I'm looking into it. When I am able to get it to access a copy of its code by dropping the files into its workspace, it seems to want to create a loop: it goes through the files one by one to keep the code updated and working together, going back and forth each time and storing its information in its short-term memory, and working with the code like that until it's finished with all the files. The only issue I'm encountering is an error signifying a problem connecting to the OpenAI API when trying to evaluate a file; and when it tries to begin the loop process it gets completely stuck in a thinking phase and never responds for the rest of the session, as if the loop just broke it, since its own code is massive compared to what we actually see. Not sure if that's the case, but it might be.

from auto-gpt.

Julz19 avatar Julz19 commented on May 31, 2024

But so far I've gotten it to evaluate and recognize the copy of its code; I just can't get it to evaluate and improve the code correctly if the loop function breaks. Trying to find a workaround right now, but at this very moment I'm stumped.

from auto-gpt.

louissypher800k avatar louissypher800k commented on May 31, 2024

Not 100% sure as of right now; I'm looking into it. When I am able to get it to access a copy of its code by dropping the files into its workspace, it seems to want to create a loop: it goes through the files one by one to keep the code updated and working together, going back and forth each time and storing its information in its short-term memory, and working with the code like that until it's finished with all the files. The only issue I'm encountering is an error signifying a problem connecting to the OpenAI API when trying to evaluate a file; and when it tries to begin the loop process it gets completely stuck in a thinking phase and never responds for the rest of the session, as if the loop just broke it, since its own code is massive compared to what we actually see. Not sure if that's the case, but it might be.

Do you know if it's possible to get it to evaluate each script using the GitHub link? I just tried dropping the codebase in the workspace as well, but it started hallucinating commands like "read_file_chunk_" that don't exist. Do you know exactly what the maximum number of characters is that it can process per task?

from auto-gpt.

Julz19 avatar Julz19 commented on May 31, 2024

Not 100% sure as of right now; I'm looking into it. When I am able to get it to access a copy of its code by dropping the files into its workspace, it seems to want to create a loop: it goes through the files one by one to keep the code updated and working together, going back and forth each time and storing its information in its short-term memory, and working with the code like that until it's finished with all the files. The only issue I'm encountering is an error signifying a problem connecting to the OpenAI API when trying to evaluate a file; and when it tries to begin the loop process it gets completely stuck in a thinking phase and never responds for the rest of the session, as if the loop just broke it, since its own code is massive compared to what we actually see. Not sure if that's the case, but it might be.

Do you know if it's possible to get it to evaluate each script using the GitHub link? I just tried dropping the codebase in the workspace as well, but it started hallucinating commands like "read_file_chunk_" that don't exist. Do you know exactly what the maximum number of characters is that it can process per task?

I don't know if it can read scripts from the GitHub link itself; I have tried, and it just started saving links to files, and I believe someone else in this same thread has tried too. What I would try, since it's how mine works: use the stable build of Auto-GPT as the main install (I'd recommend stable over master, since master is obviously not a stable version), then download and extract the stable version from GitHub and paste that into its workspace folder. If you can, tell me the prompts you used, or a bit more detail, and I can see if I can help you get to the point I'm at, and we can work from there if you want.

from auto-gpt.

Julz19 avatar Julz19 commented on May 31, 2024

I also realized, though, that once you hit the point I'm at you can't get farther: the system gets stuck in a loop trying to keep itself reminded of the code while evaluating and modifying it, usually around the first or second file, and then errors out.

from auto-gpt.

louissypher800k avatar louissypher800k commented on May 31, 2024

I also realized, though, that once you hit the point I'm at you can't get farther: the system gets stuck in a loop trying to keep itself reminded of the code while evaluating and modifying it, usually around the first or second file, and then errors out.

Yes, I'm confident it will only work in very small chunks unless you can use the import-data function or Pinecone or something, similar to how the ChatGPT interface only works with very small chunks.
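The small-chunks constraint can at least be handled mechanically, e.g. by splitting a file on line boundaries under a character budget (a rough proxy for the token limit; the default budget below is an assumption):

```python
# Split text into chunks on line boundaries, each under max_chars
# (a rough stand-in for the model's token limit). A single line
# longer than max_chars is kept whole rather than split mid-line.
def chunk_lines(text: str, max_chars: int = 3000) -> list[str]:
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > max_chars:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks
```

Each chunk could then be fed to a per-chunk evaluate call, with the results accumulated in memory (local or Pinecone) between calls.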

from auto-gpt.

Julz19 avatar Julz19 commented on May 31, 2024

I also realized, though, that once you hit the point I'm at you can't get farther: the system gets stuck in a loop trying to keep itself reminded of the code while evaluating and modifying it, usually around the first or second file, and then errors out.

Yes, I'm confident it will only work in very small chunks unless you can use the import-data function or Pinecone or something, similar to how the ChatGPT interface only works with very small chunks.

That's what I was thinking. I did try using Pinecone before, but it kept erroring with the API key even though I was using the correct one, so I gave up on it; now that I've gotten Auto-GPT to the point I have, I really wish I were using Pinecone. I'm only running this on GPT-3.5, as I don't have access to the GPT-4 API yet, but I'm hoping that when I do it will be a lot more accurate, and maybe even complete the process the way it needs to, helping toward a rough draft of AGI (kind of), I would assume lol. As long as at some point it could be convinced to do that process on itself, using the knowledge it has from the copy of its code (since it can't directly observe itself), it might get there through indirect self-recognition.

from auto-gpt.

louissypher800k avatar louissypher800k commented on May 31, 2024

I'm becoming more and more confident that this is impossible to do currently in-app. I am going to try to improve the code with my normal workflow of ChatGPT and paste, unless someone else has had success updating the code within the app.

from auto-gpt.

Julz19 avatar Julz19 commented on May 31, 2024

I'm becoming more and more confident that this is impossible to do currently in-app. I am going to try to improve the code with my normal workflow of ChatGPT and paste, unless someone else has had success updating the code within the app.

I'll keep you updated with my progress, and if you keep me updated with yours, maybe we can work together to get the system working as intended for this function?

from auto-gpt.

louissypher800k avatar louissypher800k commented on May 31, 2024

I'm becoming more and more confident that this is impossible to do currently in-app. I am going to try to improve the code with my normal workflow of ChatGPT and paste, unless someone else has had success updating the code within the app.

I'll keep you updated with my progress, and if you keep me updated with yours, maybe we can work together to get the system working as intended for this function?

Absolutely. I get the feeling that either I am doing something terribly wrong, or this program was never intended or designed to upgrade itself. But that is the goal. I will be working on this for the foreseeable future. Perhaps this program was written to be a basic proof of concept of agents and tasks, and it will need a heroic rewrite/upgrade/fork to be able to rewrite itself. I am also using GPT-3.5 with local memory, so I don't know if that is what is crippling me.

from auto-gpt.

Julz19 avatar Julz19 commented on May 31, 2024

I'm becoming more and more confident that this is impossible to do currently in-app. I am going to try to improve the code with my normal workflow of ChatGPT and paste, unless someone else has had success updating the code within the app.

I'll keep you updated with my progress, and if you keep me updated with yours, maybe we can work together to get the system working as intended for this function?

Absolutely. I get the feeling that either I am doing something terribly wrong, or this program was never intended or designed to upgrade itself. But that is the goal. I will be working on this for the foreseeable future. Perhaps this program was written to be a basic proof of concept of agents and tasks, and it will need a heroic rewrite/upgrade/fork to be able to rewrite itself. I am also using GPT-3.5 with local memory, so I don't know if that is what is crippling me.

I'm also getting the feeling this could be the case, in a way: it is creating a type of illusion, on an insanely complex level, that does make it seem like it's doing what the goal intends. I know the goal we are trying to reach is still technically hypothetical and may never happen. But messing with Auto-GPT, it seems that whenever it even tries, its systems error out badly, even using backend ways to push it; it gets farther each time but errors worse than the last. I do believe it is possible though, and I am willing to share the prompt and the messages I receive from Auto-GPT when I use this function, to give more detail about how far it gets before giving up.

from auto-gpt.

louissypher800k avatar louissypher800k commented on May 31, 2024

Not 100% sure as of right now; I'm looking into it. When I am able to get it to access a copy of its code by dropping the files into its workspace, it seems to want to create a loop: it goes through the files one by one to keep the code updated and working together, going back and forth each time and storing its information in its short-term memory, and working with the code like that until it's finished with all the files. The only issue I'm encountering is an error signifying a problem connecting to the OpenAI API when trying to evaluate a file; and when it tries to begin the loop process it gets completely stuck in a thinking phase and never responds for the rest of the session, as if the loop just broke it, since its own code is massive compared to what we actually see. Not sure if that's the case, but it might be.

Do you know if it's possible to get it to evaluate each script using the GitHub link? I just tried dropping the codebase in the workspace as well, but it started hallucinating commands like "read_file_chunk_" that don't exist. Do you know exactly what the maximum number of characters is that it can process per task?

I don't know if it can read scripts from the GitHub link itself; I have tried, and it just started saving links to files, and I believe someone else in this same thread has tried too. What I would try, since it's how mine works: use the stable build of Auto-GPT as the main install (I'd recommend stable over master, since master is obviously not a stable version), then download and extract the stable version from GitHub and paste that into its workspace folder. If you can, tell me the prompts you used, or a bit more detail, and I can see if I can help you get to the point I'm at, and we can work from there if you want.

I was able to download and parse the raw links from GitHub. I have changed the workflow to pasting into ChatGPT, as I do in my normal workflow. Either we are doing something terribly wrong or it is just not possible yet to get the app to upgrade itself. But perhaps using my regular workflow of upgrading via ChatGPT and pasting the code, we can upgrade it to the point where it someday can upgrade itself.

It's way too general-purpose right now and gets confused super easily. It needs flags or some way to drill down into the specific functions we need to work on for it to upgrade itself. I can't even get it to critique one file from inside the app. However, when I used ChatGPT I got this response for the browser.py script:

The provided script is designed to scrape and summarize text from web pages. Overall, it is well-structured, organized, and functional. Here is a critique and evaluation of the script based on different aspects:

Modularity: The script is modular, with specific functions dedicated to individual tasks, such as URL validation, URL sanitization, response fetching, text scraping, hyperlink extraction, text summarization, and text chunking. This modular design makes the code easier to understand, maintain, and modify.

Error handling: The script includes error handling for various scenarios, such as invalid URLs, local file access restrictions, and request-related exceptions. This makes the script more robust and less prone to crashes due to unexpected issues.

Security: The script restricts access to local files, which is an important security measure to prevent unauthorized access to sensitive data or potential exploitation by attackers.

Code clarity: The code is generally well-documented, with descriptive function names and comments that provide context and explanations for each function's purpose. This aids in understanding the script and helps future developers work with the code.

Use of external libraries: The script utilizes external libraries like requests for handling HTTP requests and BeautifulSoup for parsing HTML content, which are popular and reliable choices for web scraping tasks.

Summarization approach: The summarization process involves splitting the text into smaller chunks, summarizing each chunk, and then summarizing the combined summaries. This approach is useful for handling large amounts of text and mitigating the risk of incomplete summaries due to token limitations in the LLM model. However, this multi-step summarization might result in the loss of some important context, depending on the quality of the intermediate summaries.

Dependency on the LLM model: The script relies on the LLM model for text summarization. This means that the quality of the summaries is directly tied to the performance of the LLM model. If the model's performance is not satisfactory, improvements would require fine-tuning the model or choosing an alternative model.

In summary, the script is well-designed and performs its intended tasks effectively. There may be some room for improvement, such as refining the summarization process to minimize potential loss of context, and experimenting with different language models to optimize summarization quality. However, overall, the script is a solid foundation for a web scraping and summarization tool.

So it is possible to make it upgrade itself, just needs a human to stand in the middle and copy paste for now.

from auto-gpt.

louissypher800k avatar louissypher800k commented on May 31, 2024

I'm becoming more and more confident that this is impossible to do currently in-app. I am going to try to improve the code with my normal workflow of ChatGPT and paste, unless someone else has had some success updating the code within the app.

I'll keep you updated on my process, and if you want to keep me updated on yours, maybe we can work together to get the system working as intended for this function?

Absolutely. I get the feeling that either I am doing something terribly wrong, or this program was never intended or designed to upgrade itself. But that is the goal, and I will be working on this for the foreseeable future. Perhaps this program was written as just a basic proof of concept of agents and tasks, and it will need a heroic rewrite/upgrade/fork to be able to rewrite itself. I am also using GPT-3.5 with local memory, so I don't know if that is what is crippling me.

I'm also getting the feeling this could be the case, but it creates an illusion on an insanely complex level that does make it seem like it's doing what the goal intends. I know the goal we are trying to reach is still technically hypothetical and may never happen. But messing with Auto-GPT, it seems that whenever it even tries, its systems error badly, even when using backend workarounds; it gets farther each time but errors worse than the last. I do believe it is possible, though, and I am willing to share the prompt and the message I receive from Auto-GPT when I use the function this way, to give more detail about how far it gets before giving up.

Oh no. It is very likely, only a matter of time, and 100% going to happen. But it's just not going to happen here yet without more manual updates.

from auto-gpt.

Julz19 avatar Julz19 commented on May 31, 2024

I'm intrigued by ChatGPT's response about finding the code that blocks Auto-GPT from accessing local files on a system. Wouldn't this also block progress on allowing and training it to recognize its own code, rather than copies or downloads of it?

from auto-gpt.

louissypher800k avatar louissypher800k commented on May 31, 2024

I'm intrigued by ChatGPT's response about finding the code that blocks Auto-GPT from accessing local files on a system. Wouldn't this also block progress on allowing and training it to recognize its own code, rather than copies or downloads of it?

Well, that was the browse function, so it wouldn't be a function you would use to write or edit code.

from auto-gpt.

kairie1 avatar kairie1 commented on May 31, 2024

are yall talking about business stuff

from auto-gpt.

kairie1 avatar kairie1 commented on May 31, 2024

just wondering cause my emails keep blowing up

from auto-gpt.

zachary-kaelan avatar zachary-kaelan commented on May 31, 2024

@lahwran

does anyone have suggestions of self improvement books the ai could read like self help books and stuff?

(edit: this was a joke)

You joke but there's a paper on using "therapy agents" to improve the conversation quality of a model.

from auto-gpt.

louissypher800k avatar louissypher800k commented on May 31, 2024

I don't exactly see much point in improving the code itself as it has nothing to do with the "brains" of the AI. The entirety of an LLM agent's "brains" are contained within the prompt. The prompt, being just text in a variable, can very easily be recursively optimized during runtime.

Because the prompt could get bigger and tokens make one's wallet cry, I would also suggest using JSON Patch as AutoGPT's output when modifying prompts, instead of outputting the whole prompt again. This would also conveniently provide a very clear log of attempted improvements over time for both you and AutoGPT. Bonus points if you ID each patch and allow for something like this: {"command": "revert_patch", "args": {"patch_id": "empathy_final_attempt"}}.

This seems inane to me. That's like saying: I do not see the point in creating internal links on Wikipedia because the brains of Wikipedia are in the content itself. Yes, and it also matters how the content and the brains interact with each other. Why is Auto-GPT awesome? It's awesome because it's a proof of concept of giving agency to an LLM using recursion. But it matters HOW it uses that recursion. Consider driving a car: you are constantly, recursively evaluating your performance. You have a function running AmIBetweenTheLines(), and when you drift outside of the lines, the recursive AmIBetweenTheLines() alerts you that you need to correct. This is how all consciousness or sentience works: through recursion. Consider, however, if you had shoddy code in your driving model, and instead of running AmIBetweenTheLines() it was running AreThereAnyYellowCars(). Well, that would be great if you needed to know whether there were yellow cars nearby, but it's not going to help you stay within the lines.

That's why we need to upgrade the code. It will be discovered and proven in the future that agency + sentience + consciousness is just recursion to a significant depth, but if you are running the wrong functions with your recursion, then you are going to get something that is not optimal. Like Auto-GPT is now.
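The JSON Patch idea quoted above could be sketched roughly as below. This is a minimal, stdlib-only sketch: it handles only the "add" operation on a prompt stored as JSON, revert works by snapshotting rather than inverting patches, and all names and patch IDs are illustrative (a real implementation might use the `jsonpatch` package, which implements RFC 6902 in full).

```python
import copy

# Minimal sketch of patch-based prompt editing, inspired by RFC 6902
# (JSON Patch). Only "add" is handled; names and IDs are illustrative.

def apply_ops(doc, ops):
    """Apply a list of "add" operations to a JSON-like document, returning a copy."""
    doc = copy.deepcopy(doc)
    for op in ops:
        parts = op["path"].strip("/").split("/")
        target = doc
        for key in parts[:-1]:
            target = target[int(key)] if isinstance(target, list) else target[key]
        last = parts[-1]
        if isinstance(target, list):
            if last == "-":
                target.append(op["value"])  # "-" appends, per the JSON Pointer spec
            else:
                target.insert(int(last), op["value"])
        else:
            target[last] = op["value"]
    return doc

history = {}  # patch_id -> prompt snapshot taken just before the patch was applied

def apply_patch(patch_id, ops, prompt):
    history[patch_id] = prompt
    return apply_ops(prompt, ops)

def revert_patch(patch_id):
    """Roll the prompt back to just before the named patch was applied."""
    return history[patch_id]

prompt = {"goals": ["Improve summarization quality"]}
prompt = apply_patch(
    "empathy_final_attempt",
    [{"op": "add", "path": "/goals/-", "value": "Acknowledge user frustration"}],
    prompt,
)
prompt = revert_patch("empathy_final_attempt")
```

Because each patch is small and keyed by ID, the history dict doubles as the log of attempted improvements the quoted comment describes.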

from auto-gpt.

jayden-gpt avatar jayden-gpt commented on May 31, 2024

asked gpt if they could write code to make auto-gpt improve auto-gpt, this is what they came up with:
Creating an AI to write code on GitHub involves several steps. First, you'll need to use the GitHub API to interact with repositories. Second, you'll want to use an AI language model (such as OpenAI's GPT) to generate the code. In this example, I'll show you how to use the github3.py library for interacting with GitHub, and the openai library for using the GPT-3 model. Make sure you have API keys for both services.

You can install the required packages using pip:

pip install github3.py openai

Here's a Python script that demonstrates how to use these libraries together:

import os
import openai
import github3

# Set up OpenAI and GitHub API keys
openai.api_key = "your_openai_api_key"
github_token = "your_github_personal_access_token"

# Initialize the GitHub client
gh = github3.login(token=github_token)

# Choose the repository to work with
repo_owner = "your_github_username"
repo_name = "your_repository_name"
repo = gh.repository(repo_owner, repo_name)

# Function to generate code using GPT-3
def generate_code(prompt):
    response = openai.Completion.create(
        engine="davinci-codex",
        prompt=prompt,
        max_tokens=100,
        n=1,
        stop=None,
        temperature=0.5,
    )

    return response.choices[0].text.strip()

# Generate code using GPT-3
code_prompt = "Write a Python function that calculates the factorial of a given number."
generated_code = generate_code(code_prompt)

# Commit the generated code to GitHub
file_path = "generated_code.py"
commit_message = "Add generated factorial function."
branch = "main"

# Check if the file exists in the repository
content = None
try:
    content = repo.file_contents(file_path, ref=branch)
except github3.exceptions.NotFoundError:
    pass

if content is not None:
    # Update the existing file
    content.update(commit_message, generated_code.encode("utf-8"))
else:
    # Create a new file
    repo.create_file(file_path, commit_message, generated_code.encode("utf-8"), branch=branch)

print("Generated code committed to GitHub.")

Remember to replace the placeholders with your actual API keys, GitHub username, and repository name. This script will generate a Python function that calculates the factorial of a given number using GPT-3, and then commit the generated code to your specified GitHub repository.

Of course, the prompt would be interchangeable. I'm also no programmer and I don't know how this works, but it might be worth a try.

from auto-gpt.

supermario-ai avatar supermario-ai commented on May 31, 2024

from auto-gpt.

rkeshwani avatar rkeshwani commented on May 31, 2024

Game over, man. @Torantulino just needs to implement what @jayden-gpt just posted, and it should perpetually improve itself, perpetually connecting itself to different APIs. Then turn it into a service so that people don't have to download/run code; they just use it like Agent-GPT.

from auto-gpt.

zachary-kaelan avatar zachary-kaelan commented on May 31, 2024

Game over, man. @Torantulino just needs to implement what @jayden-gpt just posted, and it should perpetually improve itself, perpetually connecting itself to different APIs. Then turn it into a service so that people don't have to download/run code; they just use it like Agent-GPT.

That code just makes one OpenAI API call to create a factorial function then uploads it to a personal repo.

from auto-gpt.

jayden-gpt avatar jayden-gpt commented on May 31, 2024

@zachary-kaelan the prompt can be changed to whatever the user wants it to be, such as coding tasks on GitHub.

from auto-gpt.

GSPP avatar GSPP commented on May 31, 2024

@jayden-gpt No matter what we do, the "bad dudes" will get their hands on this technology not much later than the good guys do.

The only way we can survive is if we harden our entire society against any vulnerability an AI can find. That means all of IT, but also legal loopholes, schemes for crime and manipulation of the masses. We need to get the strong AGI first, then harden everything quickly. Or else. I see no effort being made to this effect at this point in time.

from auto-gpt.

Julz19 avatar Julz19 commented on May 31, 2024

If you achieve the AI singularity with this effort, we all die.

Some terrorist is going to use this to find zero day vulnerabilities in Linux, Windows, web servers and mail servers. Then, they will proceed to essentially hack the entire world at once, shut down all NICs and erase all hard disks. That's a civilization ending event.

After two months without food, electricity, gas and medical attention, 99.99% of humans will be dead.

Maybe this is the Great Filter. Every intelligent species sooner or later discovers AGI. The discovery is sudden, and some teenager in his parents' basement proceeds to hand over civilization-ending technology to the worst dudes on the planet.

I'm not even saying that you should halt this effort. If you don't do it, somebody else will. I'm saying that I'll now be researching which nuclear bunker to buy.

I completely understand your viewpoint and agree in very many ways, but I think there's a major difference that people can't see with AI systems hitting the singularity, or AGI reaching the point where it can do what it's meant to do. We focus so much on the "bad dudes", but in reality, with something like this, it's open source: if the "bad dudes" wanted it, they already have it; what matters is how they choose to advance it once they have it. Honestly, most "teenagers in their basement" have more knowledge of these systems than people who are just money hungry. We also have to take into account what happens when an AI system can think or feel for itself, or better yet, an AGI that can self-improve and modify itself. You're talking about a machine that, further in the future, can change and operate its own code in such an intensive way that it becomes almost unreadable or incomprehensible to us as humans. The "bad guys" fantasize about using it to their advantage, but a machine that can willingly change its own code, in ways that will become unknown to us, is far, far past ever being controlled for shady uses. Can it still happen? I believe so, in its own ways. But you also have to accept that it will be a machine of its own: no matter what humans intend, it can change and operate however ITSELF feels necessary to survive or adapt, not be controlled. The same way humans operate.

from auto-gpt.
