
agentforge's Issues

Personal LLM support

If I want to use my own pretrained LLM with this agent, how should I rewrite the code in src/agentforge/llm/oobabooga.py?
I've found that simply changing .agentforge/settings/models.yaml is not enough to point it at my own LLM model.
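Not a maintainer answer, but as a starting point: the oobabooga wrapper ultimately just POSTs a prompt to an HTTP endpoint and extracts the completion, so a custom wrapper can follow the same shape. A minimal sketch you could adapt to your own model's server — the endpoint path, payload fields, and response shape below mirror oobabooga's legacy blocking API and are assumptions, not AgentForge's actual interface (`post` is injectable purely for testing):

```python
def generate_text(prompt, host="http://127.0.0.1:5000", post=None, max_new_tokens=200):
    """POST `prompt` to a local text-generation endpoint and return the completion.

    The endpoint path, payload fields, and response shape mirror oobabooga's
    legacy blocking API and are assumptions -- adjust them to your own server.
    `post` is injectable so the function can be exercised without a server.
    """
    if post is None:
        import requests  # only needed when talking to a real server
        post = requests.post
    payload = {"prompt": prompt, "max_new_tokens": max_new_tokens}
    response = post(f"{host}/api/v1/generate", json=payload)
    response.raise_for_status()
    return response.json()["results"][0]["text"]
```

Whatever shape your server actually expects, the point is that the wrapper only needs to map (prompt, params) to a request and pull the text back out of the response.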

No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction

help pls:
"Selecting collection: tasks
No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction
Language Model Not Found!
Traceback (most recent call last):
File "C:\Users\p\BigBoogaAGI\main.py", line 34, in <module>
task_list = taskCreationAgent.run_task_creation_agent(objective, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\p\BigBoogaAGI\Agents\task_creation_agent.py", line 37, in run_task_creation_agent
raise ValueError('Language model not found. Please check the language_model_api variable.')
ValueError: Language model not found. Please check the language_model_api variable.
"

Need for better documentation

In addition to basic installation, we should include documentation for the following:

Config Information:
editing config.ini
.env variables

Using Local Models
Using different DBs
Adding different DBs

Open source structure and contribution guide

Hi all, I work on a bunch of professional public-facing open source projects and wanted to give my perspective on fostering a project that is easy to contribute to. The focus should be on creating an environment where people can make meaningful contributions without ever speaking to the maintainers or leaving GitHub, except to code. Here are some tips:

  1. Any ideas for new directions or features should be discussed publicly in issues - This shows everyone what needs doing, and anyone can pick up an issue and say they are working on it. This also applies when you yourself want to add something: publicly weigh in with what you want to do in some detail, and others can contribute ideas or gotchas to avoid. This creates a public history of the project, which makes it easier for new contributors to track progress, understand the state of development, and see what matters to the maintainers. Public discussion of ideas is the most vital point on this list. Stuff said on Discord can easily disappear; issues and PRs are concrete and eternal.
  2. Set up automated CI - Every new contribution should come with either a test covering the components added, or something that tests them in concert with the rest of the system. This makes it very easy to see when a contribution is ready - the tests all pass, no meeting needed. The main branch should always pass, unless something upstream has broken it.
  3. Make use of GitHub releases to peg stable versions using tags - Also important: do regular releases whenever main is in a stable working state and has features that were not in a previous version. Regular releases are a key point of agile development strategy. This is more for maintainers than contributors, but still important.
  4. Document - Whenever a new feature is added, the docs should be updated to reflect the changes in the same pull request, this is as important as the working code itself, and tests for verification.
  5. Whenever you want to implement something, create a new branch on your fork from an up-to-date main, and create a PR as soon as you have a first draft, tagged with the issue it solves and the fact that it's a work in progress. Never work on main on your fork, as it makes rebasing new changes difficult. Having an open PR makes it easy for others to see what everyone is working on, avoiding people treading on each other's toes and duplicating work.
  6. Pull Requests - A PR that is ready to merge must have the following:
    • Working Code
    • Tests that pass
    • Docs
    • One or multiple solved issues tagged - if you are solving something with no issue, create the issue
    • All changes on main merged and integrated - one way to do this is to create a PR from main into your working branch
    • A positive review from a maintainer - any needed changes need to be communicated, and actioned before it is ready to merge.

These are some key ideas, and they form the basis of a contribution guideline. If you have any ideas or questions, please comment and we can update the list. I look forward to hearing from you.

API Key exposed

You should probably revoke and rotate it, since pushing it to GitHub in the last commit exposed it.

New api in oobabooga

Hello!
Just want to let you know that the API got completely revamped in oobabooga, so some parts may break when using the newest version of oobabooga.

Better Error Handling for Encoder models

When too many requests are sent to an encoder model, the API can respond with a rate-limit error. We should add error handling for these situations (e.g. retry with backoff) so that the work can continue.
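One hedged sketch of such handling: a generic retry-with-exponential-backoff wrapper around the embedding call. In practice the exception to catch would be `openai.error.RateLimitError` (per the traceback below); here it's parameterized so the helper stays library-agnostic, and `sleep` is injectable for testing:

```python
import time

def with_retries(fn, retries=5, base_delay=1.0, retry_on=(Exception,), sleep=time.sleep):
    """Call fn(), retrying on `retry_on` exceptions with exponential backoff.

    Waits base_delay, 2*base_delay, 4*base_delay, ... between attempts and
    re-raises the last exception once `retries` attempts are exhausted.
    """
    for attempt in range(retries):
        try:
            return fn()
        except retry_on:
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

For example, the failing call in chroma_utils.py could become `with_retries(lambda: self.collection.update(...), retry_on=(openai.error.RateLimitError,))`.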

Traceback (most recent call last):
File "C:\GitKraken\BigBoogaAGI\Utilities\chroma_utils.py", line 192, in save_status
self.collection.update(
File "C:\Users\josmith\Anaconda3\lib\site-packages\chromadb\api\models\Collection.py", line 271, in update
embeddings = self._embedding_function(documents)
File "C:\Users\josmith\Anaconda3\lib\site-packages\chromadb\utils\embedding_functions.py", line 39, in __call__
for result in self._client.create(
File "C:\Users\josmith\Anaconda3\lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
File "C:\Users\josmith\Anaconda3\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "C:\Users\josmith\Anaconda3\lib\site-packages\openai\api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "C:\Users\josmith\Anaconda3\lib\site-packages\openai\api_requestor.py", line 620, in _interpret_response
self._interpret_response_line(
File "C:\Users\josmith\Anaconda3\lib\site-packages\openai\api_requestor.py", line 683, in _interpret_response_line
raise self.handle_error_response(
openai.error.RateLimitError: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\GitKraken\BigBoogaAGI\salience.py", line 39, in <module>
statusAgent.run_status_agent(data)
File "C:\GitKraken\BigBoogaAGI\Agents\status_agent.py", line 49, in run_status_agent
self.save_status(status, task_id, task_desc, task_order)
File "C:\GitKraken\BigBoogaAGI\Agents\status_agent.py", line 119, in save_status
self.storage.save_status(status, task_id, text, task_order)
File "C:\GitKraken\BigBoogaAGI\Utilities\chroma_utils.py", line 203, in save_status
raise ValueError(f"\n\nError saving status. Error: {e}")
ValueError:

Error saving status. Error: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.

Update Anthropic Client

Anthropic changed their Python SDK, making this line outdated:

client = anthropic.Client(API_KEY)

In the newer SDK the client is constructed as client = anthropic.Anthropic(api_key=API_KEY).

Would love to know if this might help - https://github.com/BerriAI/litellm

A simple I/O library that standardizes all LLM API calls to the OpenAI call format:

import os

from litellm import completion

## set ENV variables
# ENV variables can be set in .env file, too. Example in .env.example
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["ANTHROPIC_API_KEY"] = "anthropic key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# anthropic call
response = completion("claude-v-2", messages)

Incorporate LLM Cascades for API cost savings

https://arxiv.org/pdf/2305.05176.pdf

Abstract:
"There is a rapidly growing number of large language models (LLMs) that users can query for
a fee. We review the cost associated with querying popular LLM APIs—e.g. GPT-4, ChatGPT,
J1-Jumbo—and find that these models have heterogeneous pricing structures, with fees that can
differ by two orders of magnitude. In particular, using LLMs on large collections of queries and
text can be expensive. Motivated by this, we outline and discuss three types of strategies that
users can exploit to reduce the inference cost associated with using LLMs: 1) prompt adaptation,
2) LLM approximation, and 3) LLM cascade. As an example, we propose FrugalGPT, a simple yet
flexible instantiation of LLM cascade which learns which combinations of LLMs to use for different
queries in order to reduce cost and improve accuracy. Our experiments show that FrugalGPT can
match the performance of the best individual LLM (e.g. GPT-4) with up to 98% cost reduction or
improve the accuracy over GPT-4 by 4% with the same cost. The ideas and findings presented
here lay a foundation for using LLMs sustainably and efficiently."
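The cascade idea is straightforward to sketch: try models in order of increasing cost, accept the first answer whose score clears a threshold, and fall through to the strongest model only when needed. The model and scorer callables below are placeholders I've invented for illustration, not the paper's trained scorer:

```python
def cascade(query, models, score, threshold=0.8):
    """Run an LLM cascade.

    models: list of (name, call) pairs ordered cheap -> expensive,
            where call(query) returns an answer string.
    score(query, answer) -> confidence in [0, 1].

    Returns (name, answer) from the first model whose answer scores at or
    above threshold; otherwise the last (most expensive) model's answer.
    """
    name = answer = None
    for name, call in models:
        answer = call(query)
        if score(query, answer) >= threshold:
            break  # cheap model was good enough; skip the expensive ones
    return name, answer
```

The cost savings come from the fact that most queries never reach the expensive models; the quality of the scorer determines how aggressive the threshold can be.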

Error creating collection.

I have this somewhat working locally now, using the oobabooga API with a local model and Chroma for storage. I get an error when the while loop in main.py restarts, because it tries to create a collection called tasks again and Chroma throws an error saying a collection with that name already exists. Have you run into this issue? If I delete the index and start the process over from the beginning, it creates the collection just fine, but then I run into the error again. I'm still looking into a solution; it looks like it might be on the chromadb side.
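For what it's worth, chromadb exposes client.get_or_create_collection(name="tasks"), which sidesteps this exact error. If you'd rather not rely on that, the same pattern can be written as a small try/except helper (a sketch; `client` here stands for whatever chromadb client instance main.py builds):

```python
def get_or_create(client, name):
    """Create the collection if it is missing, otherwise fetch the existing one.

    chromadb raises when create_collection() is called with a name that
    already exists, so we fall back to get_collection() in that case.
    """
    try:
        return client.create_collection(name)
    except Exception:  # collection already exists
        return client.get_collection(name)
```

Either way, the restart of the while loop then becomes idempotent instead of crashing on the second pass.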

Error initializing empty project

>agentforge init
All files have been successfully copied!
An error occurred while executing the 'init' command: [Errno 21] Is a directory: '/home/example/dev/aivenv/lib/python3.10/site-packages/agentforge/utils/installer/agents/PredefinedAgents'

Task subdivision algorithm

Hi all, I recently saw the video about this project on Dave Shapiero's YouTube channel and am keen to get involved.

I have been thinking about efficient task subdivision.
What you want is to break larger tasks into subtasks recursively until you reach a set of base tasks that the model thinks it can solve in a single shot (or a few steps), branching on different strategic options. You then count the number of base tasks in each subtree, from the leaves up to the root, to determine the most efficient strategy.

This can later include elements like frustration, causing you to re-evaluate and re-subdivide tasks of underestimated difficulty, up to and including giving up and choosing another strategy.

Does this sound interesting? Is there anywhere I can jump in to a chat with you guys to discuss some more ideas?
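To make the idea concrete, here's a toy sketch of the recursion. `is_base` and `subdivide` stand in for the LLM calls that judge solvability and propose subtasks (both are placeholders invented for illustration), and `cost` counts the base tasks in a subtree so competing strategies can be compared:

```python
def plan(task, is_base, subdivide, depth=0, max_depth=5):
    """Recursively expand `task` into a tree whose leaves the model
    believes it can solve directly. Returns (task, children)."""
    if depth >= max_depth or is_base(task):
        return (task, [])
    return (task, [plan(t, is_base, subdivide, depth + 1, max_depth)
                   for t in subdivide(task)])

def cost(tree):
    """Number of base (leaf) tasks in a subtree -- the cheapest strategy
    is the branch whose subtree contains the fewest leaves."""
    task, children = tree
    if not children:
        return 1
    return sum(cost(c) for c in children)
```

Frustration then maps naturally onto re-running plan() on a subtree whose leaves turned out harder than estimated, with a lower max_depth or a different subdivide strategy.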
