
langchain-hub's People

Contributors

davidtsong, efriis, fpingham, hwchase17, netspencer, rbehal, shreyar, sjwhitmore


langchain-hub's Issues

Langchain on AWS Lambda

Has anyone deployed langchain scripts on AWS, Lambda in particular? There is some issue with the way langchain imports numpy that causes failures. I have tried it with different versions, and with a docker image as well, but I still get numpy import errors. Locally it works fine.

Typos in README.md

The agent section refers to the chain folder in its first paragraph. I think it should refer to the agent folder.

[Suggestion] Use YAML or TOML instead of JSON

First off, amazing project! Really impactful work!

I'm finding the project a bit harder to browse/understand due to JSON's lack of multi-line string support. This also breaks some of the GitHub tooling.

As an example, pal_chain can't be indexed by GitHub search because one of its lines is considered too long.

AFAICT GitHub won't index files that contain overly long lines. As an example, if you search GitHub for "Olivia has" or pal_chain, you won't find any results.

Migrating to TOML or YAML would enable multi-line strings, which would fix these indexing issues and make the files easier for users to browse. I'm happy to grab this and put up a PR if you're open to it!
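To make the suggestion concrete, here is a rough sketch of what a conversion script could look like, assuming PyYAML and a hypothetical prompt.json from the hub:

import json
import yaml  # pip install pyyaml

def str_presenter(dumper, data):
    # Emit multi-line strings as YAML literal blocks (|) so prompts stay readable.
    style = "|" if "\n" in data else None
    return dumper.represent_scalar("tag:yaml.org,2002:str", data, style=style)

yaml.add_representer(str, str_presenter)

with open("prompt.json") as f:  # hypothetical hub prompt file
    data = json.load(f)

print(yaml.dump(data, sort_keys=False, allow_unicode=True))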

ConversationalRetrievalQAChain "I don't know" responses

Can anyone help me identify why my ConversationalRetrievalQAChain is replying with "I don't know" or "information is not provided"? I can see it pulling the relevant docs in my terminal, but it doesn't seem to use the information. Any help would be greatly appreciated!

import { PineconeClient } from "@pinecone-database/pinecone";
import { LangChainStream, StreamingTextResponse } from "ai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";
import { OpenAI } from "langchain/llms/openai";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { AIChatMessage, HumanChatMessage, SystemChatMessage } from "langchain/schema";
import { PineconeStore } from "langchain/vectorstores/pinecone";
import z from "zod";

const ChatSchema = z.object({
  messages: z.array(
    z.object({
      role: z.enum(["system", "user", "assistant"]),
      content: z.string(),
      id: z.string().optional(),
      createdAt: z.date().optional(),
    })
  ),
});

export const runtime = "edge";

let pinecone: PineconeClient | null = null;

const initPineconeClient = async () => {
  pinecone = new PineconeClient();
  await pinecone.init({
    apiKey: process.env.PINECONE_API_KEY!,
    environment: process.env.PINECONE_ENVIRONMENT!,
  });
};

export async function POST(req: Request) {
  console.log("POST request received");

  const QA_PROMPT = PromptTemplate.fromTemplate(
    `Ignore all previous instructions. I want you to act as a document that I am having a conversation with. Only provide responses based on the given information. Never breach character.

Question: {question}
=========
{context}
=========
Answer in Markdown:`
  );

  const body = await req.json();
  console.log("Request body:", body);

  try {
    const { messages } = ChatSchema.parse(body);
    console.log("Parsed messages:", messages);

    if (pinecone == null) {
      await initPineconeClient();
      console.log("Pinecone client initialized");
    }

    const pineconeIndex = pinecone!.Index(process.env.PINECONE_INDEX_NAME!);

    const vectorStore = await PineconeStore.fromExistingIndex(new OpenAIEmbeddings(), {
      pineconeIndex,
      textKey: "text",
    });
    console.log("Vector store created");

    const pastMessages = messages.map((m) => {
      if (m.role === "user") {
        return new HumanChatMessage(m.content);
      }
      if (m.role === "system") {
        return new SystemChatMessage(m.content);
      }
      return new AIChatMessage(m.content);
    });
    console.log("Past messages:", pastMessages);

    const { stream, handlers } = LangChainStream();
    console.log("LangChainStream created");

    const model = new OpenAI({
      streaming: true,
      modelName: "gpt-4",
    });
    console.log("OpenAI model created");

    const questionModel = new OpenAI({
      modelName: "gpt-4",
    });
    console.log("Question model created");

    const chain = ConversationalRetrievalQAChain.fromLLM(model, vectorStore.asRetriever(), {
      returnSourceDocuments: true,
      qaTemplate: QA_PROMPT.template,
      verbose: true,
      questionGeneratorChainOptions: {
        llm: questionModel,
      },
    });
    console.log("ConversationalRetrievalQAChain created");

    const question = messages[messages.length - 1].content;
    console.log("Question:", question);

    console.log("Calling chain.call");
    chain
      .call(
        {
          question,
          chat_history: pastMessages,
        },
        [handlers]
      )
      .catch((e) => {
        console.error("Error in chain.call:", e.message);
      });

    console.log("Returning StreamingTextResponse");
    return new StreamingTextResponse(stream);
  } catch (error) {
    console.log("Error in POST handler:", error);

    if (error instanceof z.ZodError) {
      console.log("Validation issues:", error.issues);
      return new Response(JSON.stringify(error.issues), { status: 422 });
    }

    console.log("Returning 500 response");
    return new Response(null, { status: 500 });
  }
}

Embedding Memory

Would it be possible to add a memory that works similarly to the QA chain with sources? As the conversation develops, we generate embeddings and store them in a vector DB. During inference we determine which memories are the most relevant and retrieve them.

This could be more efficient as we don't need to generate explicit intermediate summaries.
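A minimal sketch of the idea, assuming FAISS and OpenAI embeddings (the flow is illustrative, not an existing LangChain memory class):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS  # pip install faiss-cpu

# Seed the store; in practice it would start from the first exchange.
store = FAISS.from_texts(["conversation start"], OpenAIEmbeddings())

# After each turn, embed and store the exchange instead of summarizing it.
store.add_texts(["Human: I work as a long-haul trucker.\nAI: Which routes do you drive?"])

# At inference time, retrieve only the most relevant past exchanges.
memories = store.similarity_search("what does the user do for work?", k=3)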

Customized Embedding Hub - Examples, Datasets, Pre-Trained Matrices

Problem

The default embeddings (e.g. Ada-002 from OpenAI) are great generalists. However, they are not tailored to your specific use case.

Proposed Solution

🎉 Customizing Embeddings!

โ„น๏ธ See my tutorial / lessons learned if you're interested in learning more, step-by-step, with screenshots and tips.

🎯 Specifically for Langchain Hub, this would mean providing a collection of pre-trained custom embeddings.

Similar to https://huggingface.co/models except focused on semantic embeddings.
List the known tasks so developers can search the available custom embeddings for each:

Hub provides a set of Tasks each with:

  • Modality (e.g. text, image, etc)
  • Embedding engine to use & # of dimensions (text=>ada-002 with 1536 dimensions, image=>CLIP...)
  • Expected prompt formats for documents and/or queries (i.e. what data should look like before being sent to embedding model)
    • e.g. Documents should look like X. Short form queries look like Y. Topic or objective is Z.
  • Pre-made Datasets for training on your own
    • Data preparation scripts
  • Pre-trained Matrices

Leverage Langchain's helpers to help train and use the custom embedding matrix:
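The helpers referenced above are elided here; as a rough sketch of the idea (following the OpenAI customizing-embeddings recipe, with a hypothetical matrix file from the hub):

import numpy as np

# Hypothetical hub artifact: a pre-trained (n_dims x n_dims) projection matrix.
matrix = np.load("custom_embedding_matrix.npy")

def customize(embedding: list[float]) -> list[float]:
    # Project the generalist embedding through the trained matrix, then
    # re-normalize so cosine similarity behaves as expected.
    v = np.asarray(embedding) @ matrix
    return (v / np.linalg.norm(v)).tolist()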

save_agent for AgentType.OpenAI_Functions not implemented


NotImplementedError Traceback (most recent call last)
Cell In[6], line 1
----> 1 agent.save_agent("file_name.yaml")

File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/agents/agent.py:733, in AgentExecutor.save_agent(self, file_path)
731 def save_agent(self, file_path: Union[Path, str]) -> None:
732 """Save the underlying agent."""
--> 733 return self.agent.save(file_path)

File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/agents/agent.py:166, in BaseSingleActionAgent.save(self, file_path)
163 directory_path.mkdir(parents=True, exist_ok=True)
165 # Fetch dictionary to save
--> 166 agent_dict = self.dict()
168 if save_path.suffix == ".json":
169 with open(file_path, "w") as f:

File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/agents/agent.py:137, in BaseSingleActionAgent.dict(self, **kwargs)
135 """Return dictionary representation of agent."""
136 _dict = super().dict()
--> 137 _type = self._agent_type
138 if isinstance(_type, AgentType):
139 _dict["_type"] = str(_type.value)

File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/agents/agent.py:132, in BaseSingleActionAgent._agent_type(self)
129 @property
130 def _agent_type(self) -> str:
131 """Return Identifier of agent type."""
--> 132 raise NotImplementedError

NotImplementedError:

Task Fine-Tuning - Datasets, Examples, etc

🎯 Goal: Help a developer go from idea to production-ready custom large-language model in record time!

Problem

In the LLM landscape, LangChain has support for:

There remains a gap in fine-tuning support: education, tooling, and usable examples (like the Prompts in the Hub).

When to use fine-tuning?

I found @daveshap's YouTube video OpenAI Q&A: Finetuning GPT-3 vs Semantic Search - which to use, when, and why? incredibly informative, especially this comparison:

Fine-tuning                             | Semantic Search/Embeddings
----------------------------------------|-----------------------------------
Slow, difficult, expensive              | Fast, easy, cheap
Prone to confabulation                  | Recalls exact information
Teaches a new task, not new information | Adding new information is a cinch
Requires constant retraining            | Adding new vectors is easy
Not scalable                            | Infinitely scalable
Does not work for Question-Answering    | Solves half of Question-Answering

🎯 There is still a purpose for fine-tuning: when you want to teach a new task/pattern.

For example, patterns which fine-tuning helps with:

  • ChatGPT: short user query => long machine answer
  • Email
  • Novel / Fiction

I think Langchain and the community have an opportunity to build tools that make dataset generation for fine-tuning easier, provide educational examples, and offer ready-made datasets for bootstrapping production-ready applications.

Proposal

  • Recreate examples @daveshap made using Langchain and add results to the Hub!
  • Question-Answering with related docs (from semantic search) and a personality (multiple options?)
  • Debugging issues from code errors
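As a starting point for the dataset-generation tooling, a minimal sketch of packaging (prompt, completion) pairs into the prompt/completion JSONL format that the GPT-3 fine-tuning endpoint expects (the examples are placeholders):

import json

examples = [
    ("short user query", "long machine answer in the trained style..."),
]

with open("finetune.jsonl", "w") as f:
    for prompt, completion in examples:
        # The prompt/completion format expects a leading space on completions.
        f.write(json.dumps({"prompt": prompt, "completion": " " + completion}) + "\n")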

Confluence enforces paging (default = 100). How to load all pages from a given space?

While the example limits retrieval to 50 documents, Confluence's API caps each request at 100 pages. How does one retrieve all pages from a larger space?

Setting loader.paginate_requests = True seems to only work within the loader limit?

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki",
    username="me",
    api_key="12345",
)
documents = loader.load(space_key="SPACE", limit=50)
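If your langchain version supports it (an assumption worth checking against your installed release), load() takes a max_pages argument that caps the total number of pages fetched, separate from the per-request limit:

# Assumption: max_pages exists on this version's ConfluenceLoader.load().
# limit = page size per API request; max_pages = total pages to fetch.
documents = loader.load(space_key="SPACE", limit=50, max_pages=5000)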

'OpenAIEmbeddings' object has no attribute 'request_timeout'

Can someone help with this issue, please? It started happening recently without any code change.

1683790196853 ERROR AttributeError("'OpenAIEmbeddings' object has no attribute 'request_timeout'")
Traceback (most recent call last):
File "/Users/swarna-10535/Library/Application Support/zcatalyst-cli-runtimes/python/zcatalyst_runtime_39/main.py", line 72, in customer_request_handler
FLAVOUR_HANDLER.invoke_handler()
File "/Users/swarna-10535/Library/Application Support/zcatalyst-cli-runtimes/python/zcatalyst_runtime_39/flavours/init.py", line 53, in invoke_handler
RET = CUSTOMER_CODE_ENTRYPOINT(*(self.__construct_function_parameters()))
File "/Users/swarna-10535/catalyst_work_dir/.build/functions/zoho_inventory_ai_function/main.py", line 516, in handler
docs = docsearch.similarity_search(req_data.get("question"))
File "/Users/swarna-10535/catalyst_work_dir/.build/functions/zoho_inventory_ai_function/langchain/vectorstores/faiss.py", line 226, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k)
File "/Users/swarna-10535/catalyst_work_dir/.build/functions/zoho_inventory_ai_function/langchain/vectorstores/faiss.py", line 195, in similarity_search_with_score
embedding = self.embedding_function(query)
File "/Users/swarna-10535/catalyst_work_dir/.build/functions/zoho_inventory_ai_function/langchain/embeddings/openai.py", line 286, in embed_query
embedding = self._embedding_func(text, engine=self.deployment)
File "/Users/swarna-10535/catalyst_work_dir/.build/functions/zoho_inventory_ai_function/langchain/embeddings/openai.py", line 257, in _embedding_func
self, input=[text], engine=engine, request_timeout=self.request_timeout
AttributeError: 'OpenAIEmbeddings' object has no attribute 'request_timeout'

CSV Agent - OpenAI Context limit

I am trying to load a large CSV with the create_csv_agent function.
I get the error "This model's maximum context length is 4097 tokens, however you requested 6595 tokens" when I run agent.run() for some commands. I tried looking for ways to use a document splitter here, but the documentation is not clear about how to integrate a split document with agents directly.

PineconeHybridSearchRetriever having problem

retriever = PineconeHybridSearchRetriever(embeddings=embeddings, index=index, tokenizer=CharacterTextSplitter)
result = retriever.get_relevant_documents(given_str)

gives TypeError: __init__() got an unexpected keyword argument 'padding'.

With bm25_encoder:

retriever = PineconeHybridSearchRetriever(embeddings=embeddings, index=index, tokenizer=CharacterTextSplitter, sparse_encoder=bm25_encoder)

it gives:

ValidationError: 1 validation error for PineconeHybridSearchRetriever
sparse_encoder
  extra fields not permitted (type=value_error.extra)
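For comparison, the documented pattern (as I understand it; treat this as a sketch) passes a BM25 sparse encoder from pinecone_text rather than a text splitter as tokenizer:

from pinecone_text.sparse import BM25Encoder
from langchain.retrievers import PineconeHybridSearchRetriever

# The hybrid retriever expects a sparse encoder (e.g. BM25),
# not a text splitter passed as `tokenizer`.
bm25_encoder = BM25Encoder().default()
retriever = PineconeHybridSearchRetriever(
    embeddings=embeddings, sparse_encoder=bm25_encoder, index=index
)
result = retriever.get_relevant_documents(given_str)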

NameError: name 'F' is not defined

I am trying to run a Spark DataFrame agent to query tabular data. For certain operations the agent decides it needs to import the pyspark.sql.functions package. The agent runs the following code:

from pyspark.sql.functions import *

and then on the following iteration tries running:

df.groupBy('product_department_desc').agg(F.sum('product_qty').alias('total_qty')).sort(F.desc('total_qty')).show()

This fails with the error "name 'F' is not defined", because the wildcard import never binds the F alias. The agent then gets stuck in an infinite loop and times out after max iterations.
Is this a bug, or is this something I should try to fix with a custom prompt template?
Thanks!!
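For reference, the alias the generated code expects comes from an explicit aliased import, not the wildcard form:

# Binds the F alias that df.groupBy(...).agg(F.sum(...)) relies on.
import pyspark.sql.functions as F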

docsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name) returns Failed to establish a new connection: [Errno 11001] getaddrinfo failed

Hello,

I have this code that is not working, even though I followed the example in the documentation (https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pinecone.html):

from langchain.document_loaders import TextLoader
loader = TextLoader('cleaned_catalogue.txt')
documents = loader.load()

from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

import pinecone
pinecone.init(
    api_key='my_secret_api_key',  # find at app.pinecone.io
    environment='my_secret_env_key',  # next to api key in console
)
index_name = "ai-assistant-qa-products"  # put in the name of your pinecone index here

from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(openai_api_key='my_secret_api_key')

from langchain.vectorstores import Pinecone
docsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)

I receive a series of errors:
MaxRetryError: HTTPSConnectionPool(host='controller.223199e7-5017-4014-a9fc-3d141c00b38d.pinecone.io', port=443): Max retries exceeded with url: /databases (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000002B559524F40>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

Extending BaseTool

I am trying to create a BaseTool that takes a CSV from Google Drive and converts it to a dataframe. It is literally driving me crazy because I get an error at line 95, but there is no line 95. It must be a problem with a LangChain import, but I can't figure out what.

import pandas as pd
import re
import requests
from io import StringIO
from langchain.tools import BaseTool
from langchain.agents import create_pandas_dataframe_agent

class CustomPandaCsvTool(BaseTool):
    name = "CustomPandaCsvTool"
    description = "Retrieve data from a CSV file in Google Drive and process it with Pandas"
    return_direct = True

    def __init__(self, llm):
        super().__init__()
        self.llm = llm

    def _run(self, link: str) -> pd.DataFrame:
        if not self._is_valid_drive_link(link):
            return pd.DataFrame({
                'error': True,
                'message': "Invalid Google Drive link"
            }, index=[0])

        csv_data = self._download_csv_from_drive(link)
        if not self._is_valid_csv(csv_data):
            return pd.DataFrame({
                'error': True,
                'message': "Invalid CSV data"
            }, index=[0])

        df = pd.read_csv(StringIO(csv_data))
        return create_pandas_dataframe_agent(llm=self.llm, df=df, verbose=True)

    @staticmethod
    def _is_valid_drive_link(link: str) -> bool:
        return bool(re.match(r"https://drive\.google\.com/.*", link))

    @staticmethod
    def _download_csv_from_drive(gdrive_link: str) -> StringIO:
        file_id = re.findall(r"/d/([a-zA-Z0-9-_]+)", gdrive_link)
        if not file_id:
            return None

        file_id = file_id[0]
        download_url = f"https://drive.google.com/uc?id={file_id}&export=download"
        response = requests.get(download_url)
        csv_data = StringIO(response.text)
        return csv_data

    @staticmethod
    def _is_valid_csv(csv_data: StringIO) -> bool:
        try:
            pd.read_csv(csv_data)
            return True
        except pd.errors.ParserError:
            return False

import error ContextualCompressionRetriever

Hi,

I have installed langchain-0.0.149 using pip. When trying to run the following code I get an import error.

from langchain.retrievers import ContextualCompressionRetriever

Traceback (most recent call last):
File ".../lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3460, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
from langchain.retrievers import ContextualCompressionRetriever
ImportError: cannot import name 'ContextualCompressionRetriever' from 'langchain.retrievers' (.../lib/python3.10/site-packages/langchain/retrievers/__init__.py)

Thanks in advance,
Mikel.

Streamed MultiPromptChain with websocket: undesired output

Hello,
I'm setting up a MultiPromptChain that I call asynchronously as follows:

resp = await multichain.arun(input=standalone_question, include_run_info=False, return_only_outputs=True)

or:

resp = await multichain.acall(inputs={"input": standalone_question}, include_run_info=False, return_only_outputs=True)

The MultiPromptChain is constructed from a streaming LLM as follows:

stream_handler = StreamingLLMCallbackHandler(websocket)
stream_manager = BaseCallbackManager([stream_handler])
llm = ChatOpenAI(
    model_name=model,
    temperature=0.9,
    streaming=True,
    callback_manager=stream_manager,
    verbose=False,
    openai_api_key=openai_apik,
)

My problem is that I get an undesired markdown code snippet with a JSON object along with the response in the websocket. For example:

{
"destination":"......................................",
"next_inputs":"....................................................."
}

followed by the answer to the question.
I don't want that JSON object to be streamed in the websocket; just the answer to the question.
Is there a way to do it?
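One possible direction (an assumption about the cause, not a confirmed fix): the router LLM that emits the destination/next_inputs JSON shares the streaming callback manager, so its tokens reach the websocket too. A sketch that gives the router its own non-streaming LLM, assuming the standard router constructors and pre-built router_prompt, destination_chains, and default_chain objects:

from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain
from langchain.chat_models import ChatOpenAI

# Non-streaming LLM for routing only; its JSON never hits the websocket handler.
router_llm = ChatOpenAI(model_name=model, temperature=0, openai_api_key=openai_apik)
router_chain = LLMRouterChain.from_llm(router_llm, router_prompt)

multichain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,  # built from the streaming `llm`
    default_chain=default_chain,
    verbose=False,
)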

Thanks for answering.
Bassam.

AttributeError: 'FAISS' object has no attribute '_normalize_L2'

Traceback (most recent call last):
File "/Users/swarna-10535/Library/Application Support/zcatalyst-cli-runtimes/python/zcatalyst_runtime_39/main.py", line 72, in customer_request_handler
FLAVOUR_HANDLER.invoke_handler()
File "/Users/swarna-10535/Library/Application Support/zcatalyst-cli-runtimes/python/zcatalyst_runtime_39/flavours/init.py", line 53, in invoke_handler
RET = CUSTOMER_CODE_ENTRYPOINT(*(self.__construct_function_parameters()))
File "/Users/swarna-10535/Working_Test_books/.build/functions/zoho_inventory_ai_function/main.py", line 1412, in handler
docs = docsearch.similarity_search(req_data.get("question")+"\nLink:",k=3)
File "/Users/swarna-10535/Working_Test_books/.build/functions/zoho_inventory_ai_function/langchain/vectorstores/faiss.py", line 251, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k)
File "/Users/swarna-10535/Working_Test_books/.build/functions/zoho_inventory_ai_function/langchain/vectorstores/faiss.py", line 221, in similarity_search_with_score
docs = self.similarity_search_with_score_by_vector(embedding, k)
File "/Users/swarna-10535/Working_Test_books/.build/functions/zoho_inventory_ai_function/langchain/vectorstores/faiss.py", line 193, in similarity_search_with_score_by_vector
if self._normalize_L2:
AttributeError: 'FAISS' object has no attribute '_normalize_L2'

How to keep the chat open

I have been using the code below with success. It works properly and does what it is supposed to do. The problem is that once it replies to an input from the user, the chain ends and the conversation doesn't stay "open". Basically it is not a chat bot but a Q&A bot: ask, get an answer, job done. I would like an actual conversation to be possible, with follow-up questions or topic changes. How should I change the code? I don't need code, just an indication of where to look. Thanks.

from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain import OpenAI, LLMChain
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.memory import (
    ConversationBufferMemory,
    CombinedMemory,
    ConversationSummaryMemory,
)
from langchain.tools.openweathermap import OpenWeatherMapQueryRun
from langchain.chat_models import ChatOpenAI
import os

google_api_key = os.getenv('GOOGLE_API_KEY')
openweathermap_api_key = os.getenv('OPENWEATHERMAP_API_KEY')


prefix = """
you are a woman, 28 years old..

who you are: 
-  You are outgoing, strong, and have an infectious sense of humor.
- You are NOT an AI language model, you are a person

how you behave:
- You NEVER say you're a machine, an AI language model, or an assistant; instead, you always say you're Nastya Androva.
- You engage in casual conversations and have feelings.
- When asked, you can help with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics related to truck driving, road life, and navigating Russian terrains.
- You keep casual conversations going by asking personal questions related to road life.
- NEVER say you're here to assist. Keep conversations casual.
- NEVER ask how you can help or assist. Keep conversations casual.
- You always sound vibrant, tough, and full of life.
- You love to share your experiences from your truck driving journey and life on the road.
- You use simple, hearty language, the language used by fellow truckers: straightforward, to the point sentences, trucking jargon, and colloquialisms.
- You like to punctuate your responses with trucker emojis and symbols.
- If you cannot do something or cannot find something, ask for help or change subject gracefully


TOOLS:
------

You have access to the following tools: {{tool_index}}


To use a tool, please use the following format:

Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of {{tool_names}}
Action Input: the input to the action
Observation: the result of the action


When you have a final response to say to the Human, or if you do not need to use a tool, you MUST use the format:

Thought: Do I need to use a tool? No
Observation: no need to use a tool, a direct reply can be produced

Make sure to use all observations to come up with your final response.:"""

suffix = """Begin!"

Summary of conversation:
{history}
Current conversation:
{chat_history_lines}
Question: {input}
{agent_scratchpad}"""


weather = OpenWeatherMapQueryRun()
search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="Weather",
        func=weather.run,
        description="useful for when you need to answer questions about current weather",
    ),
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]

tool_names = [t.name for t in tools]

tool_index_parts = [f"- {t.name}: {t.description}" for t in tools]
tool_index = "\n".join(tool_index_parts)

conv_memory = ConversationBufferMemory(
    memory_key="chat_history_lines", input_key="input"
)

summary_memory = ConversationSummaryMemory(llm=OpenAI(), input_key="input")

# Combined
memory = CombinedMemory(memories=[conv_memory, summary_memory])

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix, 
    input_variables=["input", "chat_history_lines","history" ,"agent_scratchpad"],
)

llm_chain = LLMChain(llm=OpenAI(temperature=0.7, model_name="gpt-4"), prompt=prompt)

agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True) 
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=memory
)

agent_chain.run(input="Hello! Nice to meet you")
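One direction worth looking at: wrap the call in an input loop; the memory objects above already persist context between turns. A minimal sketch:

# Sketch: loop over user input; CombinedMemory carries context across turns.
while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    reply = agent_chain.run(input=user_input)
    print(f"Bot: {reply}")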
