databassgit / agentforge

Extensible AGI Framework
License: GNU General Public License v3.0
If I want to use my own pretrained LLM with this agent, how should I rewrite the code in src/agentforge/llm/oobabooga.py? I have found that simply changing .agentforge/settings/models.yaml is not enough to point it at my own LLM model.
So now this has been demonstrated:
This looks like a good scheme to use for the task executor
help pls:
"Selecting collection: tasks
No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction
Language Model Not Found!
Traceback (most recent call last):
File "C:\Users\p\BigBoogaAGI\main.py", line 34, in <module>
task_list = taskCreationAgent.run_task_creation_agent(objective, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\p\BigBoogaAGI\Agents\task_creation_agent.py", line 37, in run_task_creation_agent
raise ValueError('Language model not found. Please check the language_model_api variable.')
ValueError: Language model not found. Please check the language_model_api variable.
"
The documentation is not clear on a lot of things, and some of the documentation links don't exist.
In addition to basic installation, we should include documentation for the following:
Config Information:
editing config.ini
.env variables
Using Local Models
Using different DBs
Adding different DBs
Hi all, I work on a bunch of professional public-facing open source projects and wanted to give my perspective on fostering a project that is easy to contribute to. The focus is on creating an environment where people can make meaningful contributions without ever speaking to the maintainers or leaving GitHub, except to code. Here are some tips:
These are some key ideas, and they form the basis of a contribution guideline. If you have any ideas or questions, please comment and we can update the list. I look forward to hearing from you.
You should probably change it, since it was pushed to GitHub in the last commit.
Hello!
Just wanted to tell you that the API got completely revamped in oobabooga, and I think some parts may break when using the newest version of oobabooga.
When too many requests are sent to an embedding model, the server can sometimes respond with a rate-limit error. We should add error handling for these situations so that the work can continue.
Traceback (most recent call last):
File "C:\GitKraken\BigBoogaAGI\Utilities\chroma_utils.py", line 192, in save_status
self.collection.update(
File "C:\Users\josmith\Anaconda3\lib\site-packages\chromadb\api\models\Collection.py", line 271, in update
embeddings = self._embedding_function(documents)
File "C:\Users\josmith\Anaconda3\lib\site-packages\chromadb\utils\embedding_functions.py", line 39, in __call__
for result in self._client.create(
File "C:\Users\josmith\Anaconda3\lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
File "C:\Users\josmith\Anaconda3\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "C:\Users\josmith\Anaconda3\lib\site-packages\openai\api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "C:\Users\josmith\Anaconda3\lib\site-packages\openai\api_requestor.py", line 620, in _interpret_response
self._interpret_response_line(
File "C:\Users\josmith\Anaconda3\lib\site-packages\openai\api_requestor.py", line 683, in _interpret_response_line
raise self.handle_error_response(
openai.error.RateLimitError: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\GitKraken\BigBoogaAGI\salience.py", line 39, in <module>
statusAgent.run_status_agent(data)
File "C:\GitKraken\BigBoogaAGI\Agents\status_agent.py", line 49, in run_status_agent
self.save_status(status, task_id, task_desc, task_order)
File "C:\GitKraken\BigBoogaAGI\Agents\status_agent.py", line 119, in save_status
self.storage.save_status(status, task_id, text, task_order)
File "C:\GitKraken\BigBoogaAGI\Utilities\chroma_utils.py", line 203, in save_status
raise ValueError(f"\n\nError saving status. Error: {e}")
ValueError:
Error saving status. Error: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.
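A generic way to handle this is to retry the failing call with exponential backoff. The helper below is a sketch rather than agentforge code; in practice the `retriable` argument would be set to the concrete exception class (`openai.error.RateLimitError` in the openai SDK of that era), and the wrapped call would be the `self.collection.update(...)` in chroma_utils.py.

```python
import time
import random

def retry_with_backoff(fn, max_retries=5, base_delay=1.0, retriable=(Exception,)):
    """Call fn(), retrying on the given exceptions with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except retriable:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the original error
            # exponential backoff with jitter: base, 2*base, 4*base, ...
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

Usage would look like `retry_with_backoff(lambda: collection.update(...), retriable=(RateLimitError,))`, which keeps the rest of the agent loop untouched.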
When you run agentforge init, the __init__.py file in customagents has text in it.
Anthropic changed their python sdk - making this code line outdated.
Would love to know if this might help - https://github.com/BerriAI/litellm
A simple I/O library that standardizes all the LLM API calls to the OpenAI call format:

from litellm import completion
import os

## set ENV variables
# ENV variables can be set in .env file, too. Example in .env.example
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["ANTHROPIC_API_KEY"] = "anthropic key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# anthropic call
response = completion("claude-v-2", messages)
https://arxiv.org/pdf/2305.05176.pdf
Abstract:
"There is a rapidly growing number of large language models (LLMs) that users can query for a fee. We review the cost associated with querying popular LLM APIs—e.g. GPT-4, ChatGPT, J1-Jumbo—and find that these models have heterogeneous pricing structures, with fees that can differ by two orders of magnitude. In particular, using LLMs on large collections of queries and text can be expensive. Motivated by this, we outline and discuss three types of strategies that users can exploit to reduce the inference cost associated with using LLMs: 1) prompt adaptation, 2) LLM approximation, and 3) LLM cascade. As an example, we propose FrugalGPT, a simple yet flexible instantiation of LLM cascade which learns which combinations of LLMs to use for different queries in order to reduce cost and improve accuracy. Our experiments show that FrugalGPT can match the performance of the best individual LLM (e.g. GPT-4) with up to 98% cost reduction or improve the accuracy over GPT-4 by 4% with the same cost. The ideas and findings presented here lay a foundation for using LLMs sustainably and efficiently."
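The LLM-cascade idea from the paper can be sketched in a few lines of Python. The two "models" and the scorer below are stand-ins for real API calls and a real quality scorer; they exist only to illustrate the control flow of trying cheap models first and escalating when the answer doesn't clear a quality threshold.

```python
def cascade(prompt, models, scorer, threshold=0.8):
    """Try models from cheapest to most expensive; accept the first
    answer whose score clears the threshold (the LLM-cascade idea)."""
    answer = None
    for model in models:  # ordered cheap -> expensive
        answer = model(prompt)
        if scorer(prompt, answer) >= threshold:
            return answer
    return answer  # fall back to the most expensive model's answer

# Stand-in models and scorer, for illustration only
cheap = lambda p: "short guess"
expensive = lambda p: "careful answer"
score = lambda p, a: 0.9 if a == "careful answer" else 0.3

print(cascade("What is 2+2?", [cheap, expensive], score))  # prints "careful answer"
```

In a real setup, the scorer is the hard part: FrugalGPT trains a small model to predict answer quality, but even a heuristic (length, self-consistency) can cut costs on easy queries.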
I have got this somewhat working locally now, using the oobabooga API with a local model and Chroma for storage. I am getting an error when it restarts the while loop of main.py, because it tries to create a collection called tasks again, and Chroma throws an error saying a collection with the name tasks already exists. Have you run into this issue? I can delete the index and start the process over from the beginning just fine, and it will create the collection, but then I run into the error again. I'm still looking into the solution. It looks like it might be on the chromadb side.
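One way around the duplicate-collection error is an idempotent get-or-create: recent chromadb versions expose client.get_or_create_collection(name="tasks") for exactly this, and the same pattern can be written by hand for older versions. The FakeClient below is an in-memory stand-in used only to illustrate the pattern; it is not chromadb code.

```python
def get_or_create(client, name):
    """Create the collection, or fetch it if it already exists."""
    try:
        return client.create_collection(name)
    except ValueError:  # e.g. chromadb's "collection already exists" error
        return client.get_collection(name)

# Minimal in-memory stand-in for a chromadb client, for illustration only
class FakeClient:
    def __init__(self):
        self._collections = {}

    def create_collection(self, name):
        if name in self._collections:
            raise ValueError(f"Collection {name} already exists")
        self._collections[name] = object()
        return self._collections[name]

    def get_collection(self, name):
        return self._collections[name]
```

Calling get_or_create twice with the same name returns the same collection, so the restart of the main.py loop no longer crashes.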
>agentforge init
All files have been successfully copied!
An error occurred while executing the 'init' command: [Errno 21] Is a directory: '/home/example/dev/aivenv/lib/python3.10/site-packages/agentforge/utils/installer/agents/PredefinedAgents'
Honest question: the AutoGPT repo stopped and this one came from BabyBooga, so is this the one to watch now?
ModuleNotFoundError: No module named 'Logs.logger_config'
It seems the required logger_config.py file is missing from the Logs directory.
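As a stopgap, a minimal Logs/logger_config.py can be dropped in. The function name setup_logger here is an assumption, since the real module's interface isn't documented; adjust it to whatever name the importing code actually calls.

```python
# Hypothetical minimal Logs/logger_config.py placeholder
import logging

def setup_logger(name="agentforge", level=logging.INFO):
    """Return a console logger, adding its handler only once."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid duplicate handlers on repeated calls
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
    logger.setLevel(level)
    return logger
```

Note the Logs directory also needs an __init__.py for `Logs.logger_config` to be importable as a package module.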
Hi all, I recently saw the video about this project on Dave Shapiero's YouTube channel and am keen to get involved.
I have been thinking about efficient task subdivision.
What you want is to break larger tasks into subtasks recursively until you reach a set of base tasks that the model thinks it can solve in a single shot or a few steps, branching on different strategic options. You then count the number of base tasks in a subtree, from the leaves up to the root, to determine the most efficient strategy.
This can later include elements like frustration, causing you to re-evaluate and re-subdivide tasks of underestimated difficulty, up to and including giving up and choosing another strategy.
Does this sound interesting? Is there anywhere I can jump into a chat with you guys to discuss some more ideas?