
forethought-technologies / autochain


AutoChain: Build lightweight, extensible, and testable LLM Agents

Home Page: https://autochain.forethought.ai

License: MIT License

Python 99.89% Makefile 0.11%

autochain's People

Contributors

antoinenasr, ayushexel, danofsteel32, jadcham, samighoche, tao12345666333, tiangolo, xingweitian, yyiilluu


autochain's Issues

Support for Azure deployments

The deployment_id and model parameters are both required for the OpenAI SDK to talk to OpenAI models deployed on Azure.

The error I have is:

exception=InvalidRequestError(message="Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>", param='engine', code=None, http_status=None, request_id=None)>

Integration examples:

openai.Embedding.create(
  deployment_id="xxx",
  input=prompt,
  model="xxx",
)

completion = openai.ChatCompletion.create(
  deployment_id="xxx",
  messages=messages,
  model="xxx",
)

Proposed integration:

ChatOpenAI(
  model_name="xxx",
  deployment_id="xxx",
  model="xxx",
)
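One way the proposed integration could work internally is to add deployment_id to the request kwargs only when it is set, so non-Azure calls are unaffected. A minimal sketch (build_completion_kwargs is a hypothetical name, not an existing AutoChain API):

```python
def build_completion_kwargs(model, messages, deployment_id=None, **extra):
    """Assemble kwargs for openai.ChatCompletion.create.

    Azure deployments require a deployment_id; the public OpenAI API
    does not, so the key is only included when one is configured.
    """
    kwargs = {"model": model, "messages": messages, **extra}
    if deployment_id is not None:
        kwargs["deployment_id"] = deployment_id
    return kwargs
```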

Can AutoChain be used in production?

I read the announcement of AutoChain and noticed it emphasizes experimentation and flexibility.
I am curious: can AutoChain be used in production?

Clarification from user

Hey :)
When I use ConversationalAgent with PLANNING_PROMPT_TEMPLATE, the agent stops because it needs clarification from the user, but by then it has already exited, so the user can't provide any input. Could you please suggest a simple way to let the user enter an answer so the agent can continue in the same run?

Thank you very much
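One pattern that may help here is to drive chain.run in a loop and feed each user reply back in; since the Chain shares one BufferMemory across calls, each call should continue the same conversation rather than start over (a sketch, assuming the Chain/BufferMemory setup from the examples above — converse is a hypothetical helper, not an AutoChain API):

```python
def converse(chain, get_user_input, max_turns=10):
    """Drive a multi-turn conversation, returning the agent's replies.

    `chain` is assumed to be an AutoChain Chain built with a shared
    BufferMemory, so state persists between chain.run() calls.
    `get_user_input` is any callable that returns the next user message.
    """
    replies = []
    for _ in range(max_turns):
        user_message = get_user_input("You: ")
        if user_message is None or user_message.strip().lower() in {"exit", "quit"}:
            break
        replies.append(chain.run(user_message)["message"])
    return replies
```

In an interactive session you would pass the built-in input as get_user_input; the loop ends when the user types "exit" or "quit".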

HuggingFaceTextGenerationModel

Situation

I'm trying to use AutoChain with Llama 2 instead of the public version of ChatGPT.
autochain.models.huggingface_text_generation_model seems to let me use Llama 2:

from autochain.chain.chain import Chain
from autochain.memory.buffer_memory import BufferMemory
from autochain.models.huggingface_text_generation_model import (
    HuggingFaceTextGenerationModel,
)
from huggingface_hub import login
from autochain.agent.conversational_agent.conversational_agent import ConversationalAgent
import os

os.environ['OPENAI_API_KEY'] = "my-key"
os.environ['PYTHONPATH'] = os.getcwd()

# your token here
login(token="hf_token")
llm = HuggingFaceTextGenerationModel(
    model_name="meta-llama/Llama-2-13b-chat-hf",
    temperature=2048,  # note: 2048 is far outside the usual 0-2 range; possibly intended as a max-length setting
    model_kwargs={'top_p': None, 'num_return_sequences': 1},
)
agent = ConversationalAgent.from_llm_and_tools(llm=llm)
memory = BufferMemory()
chain = Chain(agent=agent, memory=memory)

print(chain.run("Write me a poem about AI")['message'])

Problem

The model doesn't return anything after printing Planning.
However, the same model downloaded directly from the Meta repo works, so it is not a hardware problem.
I'll download the 7b-chat-hf version or the 7b-chat version. Maybe the model I have was corrupted.

Troubleshooting

Running transformers directly was able to get some text back:

from transformers import AutoTokenizer
import transformers
import torch

model = "meta-llama/Llama-2-13b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    device_map="auto",
)

sequences = pipeline(
    'I liked "Breaking Bad" and "Band of Brothers". Do you have any recommendations of other shows I might like?\n',
    do_sample=False,
    top_k=None,
    top_p=None,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

Returns

python using_transformer.py 
Loading checkpoint shards: 100%|██████████| 3/3 [00:04<00:00,  1.39s/it]
/home/choy3/mambaforge/envs/MEEAgent/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
  warnings.warn(
/home/choy3/mambaforge/envs/MEEAgent/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `None` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
  warnings.warn(
/home/choy3/mambaforge/envs/MEEAgent/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:377: UserWarning: `do_sample` is set to `False`. However, `top_k` is set to `None` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_k`.
  warnings.warn(
Result: I liked "Breaking Bad" and "Band of Brothers". Do you have any recommendations of other shows I might like?

I'm looking for something with a similar tone and style to "Breaking Bad" and "Band of Brothers". Here are a few shows that you might enjoy:

1. "The Sopranos" - This HBO series is a crime drama that explores the life of a New Jersey mob boss, Tony Soprano, as he navigates the criminal underworld and deals with personal and family issues.
2. "The Wire" - This HBO series is a gritty and intense drama that explores the drug trade in Baltimore from multiple perspectives, including law enforcement, drug dealers, and politicians.
3. "Narcos" - This Netflix series tells the true story of Pablo Escobar, the infamous Colombian drug lord,

Inconsistent versions with Pip and Github source

I noticed that the AutoChain version available on PyPI is not in sync with the source on GitHub.

For example, the PyPI version doesn't have openai_functions_agent.

Is there a way I can help? Also, could you add versions/releases/tags here on GitHub?

HuggingFaceTextGenerationModel as LLM asks for OPENAI_API_KEY

Hi,

I am planning to make a contribution to AutoChain at some point, and have started experimenting to get familiar with the code.

I wanted to use an open-source LLM from Hugging Face, so I found the HuggingFaceTextGenerationModel module and created a simple script:

from autochain.chain.chain import Chain
from autochain.memory.buffer_memory import BufferMemory
from autochain.models.huggingface_text_generation_model import HuggingFaceTextGenerationModel
from autochain.models.chat_openai import ChatOpenAI
from autochain.agent.conversational_agent.conversational_agent import ConversationalAgent

llm = HuggingFaceTextGenerationModel(model_name="distilgpt2")
memory = BufferMemory()
agent = ConversationalAgent.from_llm_and_tools(llm=llm)
chain = Chain(agent=agent, memory=memory)
print(chain.run("Write me a poem AI")['message'])

However, this returns:

Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Planning

Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Invalid or incomplete response due to 'OPENAI_API_KEY'
Let me hand you off to an agent now

I couldn't find where it is trying to grab OPENAI_API_KEY. I believe it is the ConversationalAgent, but I still couldn't figure out where to look or where it goes wrong; maybe I am missing something or using some of these modules incorrectly.
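To locate where the key is read, one option is to scan the installed package's source for the string. Here is a small generic helper (not part of AutoChain) that could be pointed at it, e.g. find_references("autochain", "OPENAI_API_KEY"):

```python
import importlib.util
import pathlib

def find_references(package_name, needle):
    """Return (relative_file, line_number) pairs where `needle` appears
    in an installed package's Python source files."""
    spec = importlib.util.find_spec(package_name)
    root = pathlib.Path(spec.origin).parent
    hits = []
    for path in sorted(root.rglob("*.py")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for line_no, line in enumerate(text.splitlines(), 1):
            if needle in line:
                hits.append((str(path.relative_to(root)), line_no))
    return hits
```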
