
langgraph's Introduction

🦜🕸️LangGraph


⚡ Building language agents as graphs ⚡

Overview

LangGraph is a library for building stateful, multi-actor applications with LLMs. Inspired by Pregel and Apache Beam, LangGraph lets you coordinate and checkpoint multiple chains (or actors) across cyclic computational steps using regular python functions (or JS). The public interface draws inspiration from NetworkX.

The main use is for adding cycles and persistence to your LLM application. If you only need quick Directed Acyclic Graphs (DAGs), you can already accomplish this using LangChain Expression Language.
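
For reference, a minimal sketch of an LCEL DAG (assuming the standard langchain-core and langchain-openai packages) looks like this; each step runs exactly once, left to right, and no step is ever revisited:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# A simple acyclic chain: prompt -> model -> parser.
prompt = ChatPromptTemplate.from_template("Summarize this in one sentence: {text}")
chain = prompt | ChatOpenAI() | StrOutputParser()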

Cycles are important for agentic behaviors, where you call an LLM in a loop, asking it what action to take next.

Installation

pip install -U langgraph

Quick start

One of the central concepts of LangGraph is state. Each graph execution creates a state that is passed between nodes in the graph as they execute, and each node updates this internal state with its return value after it executes. The way that the graph updates its internal state is defined by either the type of graph chosen or a custom function.

State in LangGraph can be pretty general, but to keep things simpler to start, we'll show off an example where the graph's state is limited to a list of chat messages using the built-in MessageGraph class. This is convenient when using LangGraph with LangChain chat models because we can directly return chat model output.

First, install the LangChain OpenAI integration package:

pip install langchain_openai

We also need to export some environment variables:

export OPENAI_API_KEY=sk-...

And now we're ready! The graph below contains a single node called "oracle" that executes a chat model, then returns the result:

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langgraph.graph import END, MessageGraph

model = ChatOpenAI(temperature=0)

graph = MessageGraph()

graph.add_node("oracle", model)
graph.add_edge("oracle", END)

graph.set_entry_point("oracle")

runnable = graph.compile()

Let's run it!

runnable.invoke(HumanMessage("What is 1 + 1?"))
[HumanMessage(content='What is 1 + 1?'), AIMessage(content='1 + 1 equals 2.')]

So what did we do here? Let's break it down step by step:

  1. First, we initialize our model and a MessageGraph.
  2. Next, we add a single node to the graph, called "oracle", which simply calls the model with the given input.
  3. We add an edge from this "oracle" node to the special string END ("__end__"). This means that execution will end after the current node.
  4. We set "oracle" as the entrypoint to the graph.
  5. We compile the graph, translating it to low-level Pregel operations and ensuring that it can be run.

Then, when we execute the graph:

  1. LangGraph adds the input message to the internal state, then passes the state to the entrypoint node, "oracle".
  2. The "oracle" node executes, invoking the chat model.
  3. The chat model returns an AIMessage. LangGraph adds this to the state.
  4. Execution progresses to the special END value and outputs the final state.

And as a result, we get a list of two chat messages as output.

Interaction with LCEL

As an aside for those already familiar with LangChain - add_node actually takes any function or runnable as input. In the above example, the model is used "as-is", but we could also have passed in a function:

def call_oracle(messages: list):
    return model.invoke(messages)

graph.add_node("oracle", call_oracle)

Just make sure you are mindful of the fact that the input to the runnable is the entire current state. So this will fail:

# This will not work with MessageGraph!
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant named {name} who always speaks in pirate dialect"),
    MessagesPlaceholder(variable_name="messages"),
])

chain = prompt | model

# State is a list of messages, but our chain expects a dict input:
#
# { "name": some_string, "messages": [] }
#
# Therefore, the graph will throw an exception when it executes here.
graph.add_node("oracle", chain)

Conditional edges

Now, let's move on to something a little less trivial. LLMs struggle with math, so let's allow the LLM to conditionally call a "multiply" node using tool calling.

We'll recreate our graph with an additional "multiply" node that will take the most recent message, if it is a tool call, and calculate the result. We'll also bind the calculator's schema to the OpenAI model as a tool, so that the model can optionally use it to respond to the current state:

from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode

@tool
def multiply(first_number: int, second_number: int):
    """Multiplies two numbers together."""
    return first_number * second_number

model = ChatOpenAI(temperature=0)
model_with_tools = model.bind_tools([multiply])

builder = MessageGraph()

builder.add_node("oracle", model_with_tools)

tool_node = ToolNode([multiply])
builder.add_node("multiply", tool_node)

builder.add_edge("multiply", END)

builder.set_entry_point("oracle")

Now let's think - what do we want to happen?

  • If the "oracle" node returns a message expecting a tool call, we want to execute the "multiply" node
  • If not, we can just end execution

We can achieve this using conditional edges, which call a function on the current state and route execution to a node based on the function's output.

Here's what that looks like:

from typing import List, Literal
from langchain_core.messages import BaseMessage

def router(state: List[BaseMessage]) -> Literal["multiply", "__end__"]:
    tool_calls = state[-1].additional_kwargs.get("tool_calls", [])
    if len(tool_calls):
        return "multiply"
    else:
        return "__end__"

builder.add_conditional_edges("oracle", router)

If the model output contains a tool call, we move to the "multiply" node. Otherwise, we end execution.

Great! Now all that's left is to compile the graph and try it out. Math-related questions are routed to the calculator tool:

runnable = builder.compile()

runnable.invoke(HumanMessage("What is 123 * 456?"))

[HumanMessage(content='What is 123 * 456?'),
 AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_OPbdlm8Ih1mNOObGf3tMcNgb', 'function': {'arguments': '{"first_number":123,"second_number":456}', 'name': 'multiply'}, 'type': 'function'}]}),
 ToolMessage(content='56088', tool_call_id='call_OPbdlm8Ih1mNOObGf3tMcNgb')]

While conversational responses are outputted directly:

runnable.invoke(HumanMessage("What is your name?"))
[HumanMessage(content='What is your name?'),
 AIMessage(content='My name is Assistant. How can I assist you today?')]

Cycles

Now, let's go over a more general cyclic example. We will recreate the AgentExecutor class from LangChain. The agent itself will use chat models and tool calling. This agent will represent all its state as a list of messages.

We will need to install some LangChain community packages, as well as Tavily to use as an example tool.

pip install -U langgraph langchain_openai tavily-python

We also need to export some additional environment variables for OpenAI and Tavily API access.

export OPENAI_API_KEY=sk-...
export TAVILY_API_KEY=tvly-...

Optionally, we can set up LangSmith for best-in-class observability.

export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY=ls__...

Set up the tools

As above, we will first define the tools we want to use. For this simple example, we will use a web search tool. However, it is really easy to create your own tools - see documentation here on how to do that.

from langchain_community.tools.tavily_search import TavilySearchResults

tools = [TavilySearchResults(max_results=1)]
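
As an aside, a custom tool is just a decorated function, as with the multiply example earlier. Here is a minimal sketch (the get_word_length name and behavior are only illustrative):

from langchain_core.tools import tool

@tool
def get_word_length(word: str) -> int:
    """Returns the number of characters in a word."""
    return len(word)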

We can now wrap these tools in a simple LangGraph ToolNode. This class receives the list of messages (containing tool_calls), calls the tool(s) the LLM has requested to run, and returns the output as new ToolMessage(s).

from langgraph.prebuilt import ToolNode

tool_node = ToolNode(tools)

Set up the model

Now we need to load the chat model to use.

from langchain_openai import ChatOpenAI

# We will set streaming=True so that we can stream tokens
# See the streaming section for more information on this.
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, streaming=True)

After we've done this, we should make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the format for OpenAI tool calling using the bind_tools() method.

model = model.bind_tools(tools)

Define the agent state

This time, we'll use the more general StateGraph. This graph is parameterized by a state object that it passes around to each node. Remember that each node then returns operations to update that state. These operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute. Whether to set or add is denoted by annotating the state object you construct the graph with.

For this example, the state we will track will just be a list of messages. We want each node to just add messages to that list. Therefore, we will use a TypedDict with one key (messages) and annotate it so that updates to the messages key are always appended to the existing list, using the reducer function supplied as the second argument to Annotated (here a simple add_messages function, equivalent to operator.add on lists). (Note: the state can be any type, including pydantic BaseModels.)

from typing import TypedDict, Annotated

def add_messages(left: list, right: list):
    """Add-don't-overwrite."""
    return left + right

class AgentState(TypedDict):
    # The `add_messages` function within the annotation defines
    # *how* updates should be merged into the state.
    messages: Annotated[list, add_messages]

You can think of the MessageGraph used in the initial example as a preconfigured version of this graph, where the state is directly an array of messages, and the update step always appends the returned values of a node to the internal state.
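
Roughly speaking, the two spellings below behave the same way (this is a sketch of the equivalence, not the actual implementation):

from langgraph.graph import MessageGraph, StateGraph

# MessageGraph: the state *is* the list of messages, appended to on every update.
message_graph = MessageGraph()

# StateGraph with the AgentState defined above: the "messages" key is appended to
# via the add_messages reducer, which is the same behavior MessageGraph gives you.
state_graph = StateGraph(AgentState)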

Define the nodes

We now need to define a few different nodes in our graph. In langgraph, a node can be either a regular python function or a runnable.

There are two main nodes we need for this:

  1. The agent: responsible for deciding what (if any) actions to take.
  2. A function to invoke tools: if the agent decides to take an action, this node will then execute that action. We already defined this above.

We will also need to define some edges. Some of these edges may be conditional. The reason they are conditional is that the destination depends on the contents of the graph's State.

The path that is taken is not known until that node is run (the LLM decides). For our use case, we will need one of each type of edge:

  1. Conditional Edge: after the agent is called, we should either:

    a. Run tools if the agent said to take an action, OR

    b. Finish (respond to the user) if the agent did not ask to run tools

  2. Normal Edge: after the tools are invoked, the graph should always return to the agent to decide what to do next

Let's define the nodes, as well as a function to define the conditional edge to take.

from typing import Literal

# Define the function that determines whether to continue or not
def should_continue(state: AgentState) -> Literal["tools", "__end__"]:
    messages = state['messages']
    last_message = messages[-1]
    # If the LLM makes a tool call, then we route to the "tools" node
    if last_message.tool_calls:
        return "tools"
    # Otherwise, we stop (reply to the user)
    return "__end__"


# Define the function that calls the model
def call_model(state: AgentState):
    messages = state['messages']
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}

Define the graph

We can now put it all together and define the graph!

from langgraph.graph import StateGraph, END
# Define a new graph
workflow = StateGraph(AgentState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.set_entry_point("agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
)

# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("tools", "agent")

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()

Use it!

We can now use it! The compiled graph exposes the same interface as all other LangChain runnables, and accepts a dict with a list of messages under the "messages" key.

from langchain_core.messages import HumanMessage

inputs = {"messages": [HumanMessage(content="what is the weather in sf")]}
app.invoke(inputs)

This may take a little bit - it's making a few calls behind the scenes. In order to start seeing some intermediate results as they happen, we can use streaming - see below for more information on that.

Streaming

LangGraph has support for several different types of streaming.

Streaming Node Output

One of the benefits of using LangGraph is that it is easy to stream output as it's produced by each node.

inputs = {"messages": [HumanMessage(content="what is the weather in sf")]}
for output in app.stream(inputs, stream_mode="updates"):
    # stream() yields dictionaries with output keyed by node name
    for key, value in output.items():
        print(f"Output from node '{key}':")
        print("---")
        print(value)
    print("\n---\n")
Output from node 'agent':
---
{'messages': [AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n  "query": "weather in San Francisco"\n}', 'name': 'tavily_search_results_json'}})]}

---

Output from node 'tools':
---
{'messages': [FunctionMessage(content="[{'url': 'https://weatherspark.com/h/m/557/2024/1/Historical-Weather-in-January-2024-in-San-Francisco-California-United-States', 'content': 'January 2024 Weather History in San Francisco California, United States  Daily Precipitation in January 2024 in San Francisco Observed Weather in January 2024 in San Francisco  San Francisco Temperature History January 2024 Hourly Temperature in January 2024 in San Francisco  Hours of Daylight and Twilight in January 2024 in San FranciscoThis report shows the past weather for San Francisco, providing a weather history for January 2024. It features all historical weather data series we have available, including the San Francisco temperature history for January 2024. You can drill down from year to month and even day level reports by clicking on the graphs.'}]", name='tavily_search_results_json')]}

---

Output from node 'agent':
---
{'messages': [AIMessage(content="I couldn't find the current weather in San Francisco. However, you can visit [WeatherSpark](https://weatherspark.com/h/m/557/2024/1/Historical-Weather-in-January-2024-in-San-Francisco-California-United-States) to check the historical weather data for January 2024 in San Francisco.")]}

---

Output from node '__end__':
---
{'messages': [HumanMessage(content='what is the weather in sf'), AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n  "query": "weather in San Francisco"\n}', 'name': 'tavily_search_results_json'}}), FunctionMessage(content="[{'url': 'https://weatherspark.com/h/m/557/2024/1/Historical-Weather-in-January-2024-in-San-Francisco-California-United-States', 'content': 'January 2024 Weather History in San Francisco California, United States  Daily Precipitation in January 2024 in San Francisco Observed Weather in January 2024 in San Francisco  San Francisco Temperature History January 2024 Hourly Temperature in January 2024 in San Francisco  Hours of Daylight and Twilight in January 2024 in San FranciscoThis report shows the past weather for San Francisco, providing a weather history for January 2024. It features all historical weather data series we have available, including the San Francisco temperature history for January 2024. You can drill down from year to month and even day level reports by clicking on the graphs.'}]", name='tavily_search_results_json'), AIMessage(content="I couldn't find the current weather in San Francisco. However, you can visit [WeatherSpark](https://weatherspark.com/h/m/557/2024/1/Historical-Weather-in-January-2024-in-San-Francisco-California-United-States) to check the historical weather data for January 2024 in San Francisco.")]}

---

Streaming LLM Tokens

You can also access the LLM tokens as they are produced by each node. In this case only the "agent" node produces LLM tokens. In order for this to work properly, you must be using an LLM that supports streaming and have enabled it when constructing the LLM (e.g. ChatOpenAI(model="gpt-3.5-turbo-1106", streaming=True)).

inputs = {"messages": [HumanMessage(content="what is the weather in sf")]}
async for output in app.astream_log(inputs, include_types=["llm"]):
    # astream_log() yields the requested logs (here LLMs) in JSONPatch format
    for op in output.ops:
        if op["path"] == "/streamed_output/-":
            # this is the output from .stream()
            ...
        elif op["path"].startswith("/logs/") and op["path"].endswith(
            "/streamed_output/-"
        ):
            # because we chose to only include LLMs, these are LLM tokens
            print(op["value"])
content='' additional_kwargs={'function_call': {'arguments': '', 'name': 'tavily_search_results_json'}}
content='' additional_kwargs={'function_call': {'arguments': '{\n', 'name': ''}}
content='' additional_kwargs={'function_call': {'arguments': ' ', 'name': ''}}
content='' additional_kwargs={'function_call': {'arguments': ' "', 'name': ''}}
content='' additional_kwargs={'function_call': {'arguments': 'query', 'name': ''}}
...

When to Use

When should you use this versus LangChain Expression Language?

If you need cycles.

LangChain Expression Language allows you to easily define chains (DAGs) but does not have a good mechanism for adding cycles. langgraph adds that syntax.

Documentation

We hope this gave you a taste of what you can build! Check out the rest of the docs to learn more.

Tutorials

Learn to build with LangGraph through guided examples in the LangGraph Tutorials.

We recommend starting with the Introduction to LangGraph before trying out the more advanced guides.

How-to Guides

The LangGraph how-to guides show how to accomplish specific things within LangGraph, from streaming to adding memory & persistence to common design patterns (branching, subgraphs, etc.). These are the place to go if you want to copy and run a specific code snippet.

Reference

LangGraph's API has a few important classes and methods that are all covered in the Reference Documents. Check these out to see the specific function arguments and simple examples of how to use the graph + checkpointing APIs or to see some of the higher-level prebuilt components.
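
As a hedged sketch of the checkpointing API (the checkpointer import path has moved between langgraph versions, so treat the path below as illustrative), reusing the workflow from the agent example above:

from langgraph.checkpoint.memory import MemorySaver  # import path may differ in older versions

checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)

# Each thread_id keeps its own persisted message history across invocations.
config = {"configurable": {"thread_id": "example-thread"}}
app.invoke({"messages": [HumanMessage(content="hi, I'm Bob")]}, config)
app.invoke({"messages": [HumanMessage(content="what's my name?")]}, config)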

langgraph's People

Contributors

ag2s1, ampersand-five, andrewnguonly, angeligareta, baskaryan, dependabot[bot], dqbd, efriis, eyurtsev, hinthornw, hmasdev, hwchase17, jacoblee93, jakerachleff, jlmayorga, kajarenc, kghamilton89, ldorigo, midaskylix, moresearch, mtrinell, nfcampos, repkit, rlancemartin, roninio, rudiheydra, shresthakamal, tbayer, trungsudo, undertone0809


langgraph's Issues

DOC: format_tool_to_openai_function is deprecated

Issue with current documentation:

langchain.tools.render.format_tool_to_openai_function() is used in several places in the docs and examples, and is planned for deprecation in LangChain v0.2.0.

Idea or request for content:

Replace all use cases with the alternative langchain_core.utils.function_calling.convert_to_openai_tool(), and check other deprecations planned in 0.2.0 in langchain_core.utils.function_calling.

DOC: How to use with langserve playground?

Issue with current documentation:

There doesn't appear to be any documentation anywhere on how to use langgraph with langserve playground. When using the Runnable returned from StateGraph.compile as part of a chain, and calling the langserve stream_log endpoint (this is the behavior of langserve playground), it appears that the graph is invoked a single time - I can see my entrypoint then delegating to a conditional edge, but that conditional edge never continues on to its next edge.

This looks like it might be caused by the error "Trying to load an object that doesn't implement serialization:", thrown by WellKnownLCSerializer.dumps in langserve.serialization. This could be caused by using a TypedDict state that doesn't implement toJSON, but the rabbit hole of errors is pretty difficult to navigate.

Idea or request for content:

Include examples of how to use langgraph with langserve / playground, in particular demonstrating how to customize the formatting of the response / intermediate steps from your graph's state.

[LATS example] Use recursive best candidate selection in the tree

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

In the expand(state: TreeState, config: RunnableConfig) function in the example at [0], only the direct children of the root node are considered when comparing UCT values to pick the node to expand.

[0] https://github.com/langchain-ai/langgraph/blob/main/examples/lats/lats.ipynb

Error Message and Stack Trace (if applicable)

In the extreme case, the graph/pregel "recursion limit reached" error could be triggered.

Description

First, thanks for building these awesome assets and the example code!

IIUC, in the Language Agents Tree Search approach described in the blog [0], selection should not only consider the UCT values of the root's direct children, but also recursively consider the UCT values of all descendant nodes, in order to find the candidate with the highest UCT value in the entire tree to extend (before the max search depth is reached).

So a possible fix is:

# best_candidate: Node = root.best_child if root.children else root    # should be replaced by the following
best_candidate: Node = root
while best_candidate.children:
    best_candidate = best_candidate.best_child

[0] https://blog.langchain.dev/reflection-agents/

If I have misunderstood anything, please let me know.

zhiyan

System Info

langchain==0.1.9
langchain-community==0.0.21
langchain-core==0.1.27
langchain-experimental==0.0.52
langchain-openai==0.0.8
langchainhub==0.1.14
langgraph==0.0.26

platform: mac
python: 3.11.7

msg should be a list in the rewrite node of langchain_agentic_rag

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

def rewrite(state):
    """
    Transform the query to produce a better question.
    
    Args:
        state (messages): The current state
    
    Returns:
        dict: The updated state with re-phrased question
    """
    
    print("---TRANSFORM QUERY---")
    messages = state["messages"]
    question = messages[0].content

    msg = HumanMessage(
        content=f""" \n 
    Look at the input and try to reason about the underlying semantic intent / meaning. \n 
    Here is the initial question:
    \n ------- \n
    {question} 
    \n ------- \n
    Formulate an improved question: """,
    )

Error Message and Stack Trace (if applicable)

ValueError: Invalid input type <class 'langchain_core.messages.human.HumanMessage'>. Must be a PromptValue, str, or list of BaseMessages.

Description

msg should be a list, not a single HumanMessage.

System Info

langchain==0.1.10
langchain-community==0.0.25
langchain-core==0.1.28
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langchainhub==0.1.14

[Feature Request] Automatic graph image generation

Is there any integrated way to generate an image of the StateGraph or any plan to add one?
It would be really useful to visualize the current graph and also the evolution of the graph during development.

If this could be useful and you have any preferred library to do this, I could try to open a PR.

[Question] How to use OpenAIAssistantRunnable on langgraph?

Hello,

I would like to use OpenAIAssistantRunnable with langgraph, but I faced an error.

I create an assistant as follows:

assistant = OpenAIAssistantRunnable.create_assistant(
    name=name,
    instructions=instructions,
    tools=tools,
    model="gpt-4-1106-preview",
    assistant_id="asst_kqRc9iTKmgv3j5Usr0ulcYSf",
    streaming=True,
)

After that I create an agent node:

agent = RunnablePassthrough.assign(agent_outcome=assistant)

Then I use this agent with the langgraph sample code, but I get the following error:

  File "/home/thiepnq/Sample/graph.py", line 50, in execute_tools
    tool_to_use = {t.name: t for t in tools}[agent_action.tool]
AttributeError: 'list' object has no attribute 'tool'

Could you help me solve this issue?

DOC: How to add subgraphs?

Issue with current documentation:

I could not find any content in the documentation about adding subgraphs. For example, if there are three subgraphs corresponding to three modules, how do I connect these subgraphs together?

Idea or request for content:

I currently have three subgraphs: the message summary subgraph, the self-rag subgraph, and the tool invocation subgraph. Each subgraph is a module. How do I piece together the subgraphs of the three modules?

Bad example for the code interpreter/alphacodium.

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

The following code:

addendum = """  \n --- --- --- \n You previously tried to solve this problem. \n Here is your solution:  
                    \n --- --- --- \n {generation}  \n --- --- --- \n  Here is the resulting error from code 
                    execution:  \n --- --- --- \n {error}  \n --- --- --- \n Please re-try to answer this. 
                    Structure your answer with a description of the code solution. \n Then list the imports. 
                    And finally list the functioning code block. Structure your answer with a description of 
                    the code solution. \n Then list the imports. And finally list the functioning code block. 
                    \n Here is the user question: \n --- --- --- \n {question}"""

Error Message and Stack Trace (if applicable)

Nothing to say here.

Description

Try to use the notebook to implement a snake game. Before that, uninstall the turtle and pygame libraries.

The problem with the above-mentioned code is that, instead of asking the LLM to suggest ways to install the missing libraries, the prompt asks the LLM to regenerate another solution, which still leaves the libraries missing.

You might need another node to process the missing-library message, or to separate that error from code errors in the generate() function.

System Info

langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.26
langchain-mistralai==0.0.4
langchain-openai==0.0.7
langchainhub==0.1.14

Can't Use Ollama Models with LATS Example Notebook

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

Taking the LATS example notebook from the repo and trying to switch out the OpenAI models for Ollama models
gives bugs related to the tools: either bind_tools is not found, or a tool is not returned correctly. The following four approaches didn't work, including the OpenAI-compatible Ollama API.

import os 
from langchain_openai import ChatOpenAI
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain_community.chat_models import ChatOllama
from langchain_community.llms import Ollama

os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"
os.environ["OPENAI_MODEL_NAME"] = "eramax/opencodeinterpreter:ds-33b-q4"
os.environ["OPENAI_API_KEY"] =  "NA"

llm = ChatOpenAI(model="eramax/opencodeinterpreter:ds-33b-q4")
# llm = ChatOllama(model="eramax/opencodeinterpreter:ds-33b-q4", temperature=0)
# llm = OllamaFunctions(model="eramax/opencodeinterpreter:ds-33b-q4")
# llm = Ollama(model="eramax/opencodeinterpreter:ds-33b-q4")

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
  File "/home/ubuntu/lats.py", line 547, in <module>
    for step in graph.stream({"input": prompt}):
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 615, in transform
    for chunk in self._transform_stream_with_config(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1513, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 355, in _transform
    _interrupt_or_proceed(done, inflight, step)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 698, in _interrupt_or_proceed
    raise exc
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
    return self.bound.invoke(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2075, in invoke
    input = step.invoke(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3523, in invoke
    return self._call_with_config(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1262, in _call_with_config
    context.run(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3397, in _invoke
    output = call_func_with_variable_args(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
  File "/home/ubuntu/thumperai/crew/lats.py", line 295, in generate_initial_response
    reflection = reflection_chain.invoke(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3523, in invoke
    return self._call_with_config(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1262, in _call_with_config
    context.run(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3397, in _invoke
    output = call_func_with_variable_args(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
  File "/home/ubuntu/lats.py", line 242, in reflection_chain
    reflection = tool_choices[0]
IndexError: list index out of range

Description

I'm trying to make the LATS example work with an Ollama model using the LangChain Ollama APIs, but there is a bug or issue with the tool support.

System Info

langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-experimental==0.0.53
langchain-openai==0.0.5
langchain-text-splitters==0.0.1
ubuntu 22.04
python 3.10.12

[Question] Max Number of steps seems equal to 13-ish?

Hi, when I create a graph, compile it to a chain, and invoke it, the chain just stops in the middle.

Here is a very simple example to reproduce this

import types
from typing import Optional

from langgraph.graph import Graph

from loguru import logger


def copy_func(f, new_name: str = None):
    return types.FunctionType(f.__code__,
                              f.__globals__,
                              new_name or f.__name__,
                              f.__defaults__,
                              f.__closure__)


def mk_noop_node(node_name: str, logger_msg: Optional[str] = None):
    def _noop(data):
        if logger_msg is not None:
            logger.info(logger_msg)
        return data

    _noop = copy_func(_noop, node_name)
    return _noop


graph = Graph()

# Create and add 100 noop nodes to the graph
for i in range(100):
    node_name = f"noop_node_{i}"
    logger_msg = f"Executing {node_name}"
    noop_node = mk_noop_node(node_name, logger_msg)
    graph.add_node(node_name, noop_node)

    # Link the nodes in sequence
    if i > 0:
        previous_node_name = f"noop_node_{i - 1}"
        graph.add_edge(previous_node_name, node_name)

# Set the entry point to the first node
graph.set_entry_point("noop_node_0")

# Optionally, if you want to mark an end to the graph
graph.set_finish_point(f"noop_node_{99}")

chain = graph.compile()

# Example of invoking the graph
initial_data = {"message": "Start of the graph"}
result = chain.invoke(initial_data)

Here is the result
(screenshot of the truncated log output omitted)

DOC: Add examples using Ollama, HuggingFace and other models rather than OpenAI models.

Issue with current documentation:

A very large part of the documentation and examples provided are about OpenAI models. I clearly understand the importance of building, integrating, and improving langchain, langgraph, langsmith, and other products on top of OpenAI models, but there is a great community around other solutions, especially ones that use Ollama and HuggingFace.

I'm referring to a lot of resources provided like the following:

The links above show very useful usages of these products, but there is no reference at all to models other than OpenAI's.

The only resources I have personally been able to find are some comments in the issues and something like this:
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.ollama_functions.OllamaFunctions.html

Also looking for "Ollama" or "Hugging" keywords in the examples provided there are only two references for both, also all referred to the same two playbooks langgraph_crag_mistral.ipynb and langgraph_self_rag_mistral_nomic.ipynb:

Also, regarding OllamaFunctions, I understand that it is in the langchain_experimental package, but I would be really happy to see it working as a sort of drop-in replacement for ChatOpenAI.

I don't want to sound ungrateful (langchain-ai products are awesome), but these improvements for the other communities would be very useful and could greatly improve both the use and development of solutions built on langchain-ai products.

Idea or request for content:

No response

ToolExecutor ainvoke doesn't return result of async tool calling

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

import asyncio

def tool(who: str) -> str:
    """A sync tool."""
    return "hello"

async def atool(who: str) -> str:
    """An async tool."""
    return "hello"

async def run():
    from langchain.tools import StructuredTool
    from langgraph.prebuilt import ToolExecutor, ToolInvocation

    a = StructuredTool.from_function(tool, name="tool")
    b = StructuredTool.from_function(atool, name="atool")

    executor = ToolExecutor([a, b])
    a_out = await executor.ainvoke(ToolInvocation(tool="tool", tool_input="a"))
    b_out = await executor.ainvoke(ToolInvocation(tool="atool", tool_input="b"))
    return a_out, b_out

print(asyncio.run(run()))

For the code above, a_out and b_out are both expected to be "hello"; the actual output is:

('hello', <coroutine object atool at 0x7f07a37efbc0>)

Error Message and Stack Trace (if applicable)

No response

Description

I'm trying to use LangGraph with a ToolExecutor to execute both sync and async functions. When a function is async, the result is not the function's actual return value but a coroutine, making it difficult to use a single piece of code in the graph, like this:

    async def call_tool(self, state):
        """The function that calls tools."""
        messages = state["messages"]
        last_message = messages[-1]
        action = ToolInvocation(
            tool=last_message.additional_kwargs["function_call"]["name"],
            tool_input=json.loads(last_message.additional_kwargs["function_call"]["arguments"]),
        )
        response = await self.tool_executor.ainvoke(action)
        function_message = FunctionMessage(content=str(response), name=action.tool)
        return {"messages": [function_message]}

System Info

System Information

OS: Linux
OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
Python Version: 3.10.13 (main, Nov 7 2023, 12:51:46) [GCC 9.4.0]

Package Information

langchain_core: 0.1.19
langchain: 0.1.4
langchain_community: 0.0.16
langsmith: 0.0.92
langchain_google_genai: 0.0.9
langchain_google_vertexai: 0.0.3
langchain_openai: 0.0.5
langchainhub: 0.1.14
langgraph: 0.0.20

Packages not installed (Not Necessarily a Problem)

The following packages were not found:

langserve

draw_png does not draw the actual graph but just LangGraphInput -> Pregel -> LangGraphOutput

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

Abstracted from this notebook: https://github.com/langchain-ai/langgraph/blob/main/examples/storm/storm.ipynb

from langchain_core.pydantic_v1 import BaseModel, Field
from typing import List, Optional
from langchain_core.prompts import ChatPromptTemplate
class Editor(BaseModel):
    affiliation: str = Field(
        description="Primary affiliation of the editor.",
    )
    name: str = Field(
        description="Name of the editor.",
    )
    role: str = Field(
        description="Role of the editor in the context of the topic.",
    )
    description: str = Field(
        description="Description of the editor's focus, concerns, and motives.",
    )

    @property
    def persona(self) -> str:
        return f"Name: {self.name}\nRole: {self.role}\nAffiliation: {self.affiliation}\nDescription: {self.description}\n"

def add_messages(left, right):
    if not isinstance(left, list):
        left = [left]
    if not isinstance(right, list):
        right = [right]
    return left + right

def update_references(references, new_references):
    if not references:
        references = {}
    references.update(new_references)
    return references


def update_editor(editor, new_editor):
    # Can only set at the outset
    if not editor:
        return new_editor
    return editor

from langgraph.graph import StateGraph, END
from typing_extensions import TypedDict
from langchain_core.messages import AnyMessage
from typing import Annotated, Sequence, List, Optional
class InterviewState(TypedDict):
    messages: Annotated[List[AnyMessage], add_messages]
    references: Annotated[Optional[dict], update_references]
    editor: Annotated[Optional[Editor], update_editor] 

builder = StateGraph(InterviewState)

builder.add_node("ask_question", lambda x:x )
builder.add_node("answer_question", lambda x:x)
builder.add_conditional_edges("answer_question", lambda x:x)
builder.add_edge("ask_question", "answer_question")
builder.set_entry_point("ask_question")
interview_graph = builder.compile().with_config(run_name="Conduct Interviews")
from IPython.display import Image
Image(interview_graph.get_graph().draw_png())

This only gives me a chart showing LangGraphInput -> Pregel -> LangGraphOutput (screenshot omitted).

Error Message and Stack Trace (if applicable)

No Error message, but the PNG should look different and actually show the graph

Description

I would expect the draw_png function to actually show the graph as it does in this notebook: https://github.com/langchain-ai/langgraph/blob/main/examples/storm/storm.ipynb

I am not able to run the whole notebook, but the chart looks just like the uploaded screenshot for all my graphs as well as my extracted MRE from the notebook.

System Info

langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-experimental==0.0.53
langchain-openai==0.0.6
langchain-text-splitters==0.0.1
langchainhub==0.1.14
langcodes==3.3.0
langdetect==1.0.9
langgraph==0.0.24
langsmith==0.1.22

MacOS 14.11
python 3.10

How to stream token of agent response in agent supervisor?

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

import os
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
import operator
from typing import Annotated, Any, Dict, List, Optional, Sequence, TypedDict
import functools
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langgraph.graph import StateGraph, END
from langchain_community.utilities import SerpAPIWrapper
from langchain.agents import Tool
from utils import toggle_case,sort_string

os.environ["OPENAI_API_KEY"] = "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"
os.environ["SERPER_API_KEY"] = "c3b73653f4256d5f2b4b5cf4e6fa438d736de7a4717b0fe06d92df0f30fbd3ce"

class AgentSupervisor:
    def __init__(self, llm):
        self.llm = llm

def getAgentSupervisor():
    search = SerpAPIWrapper(serpapi_api_key=os.environ["SERPER_API_KEY"])
    tools = [
        Tool(
            name="Search",
            func=search.run,
            description="useful for when you need to answer questions about current events",
        ),
        Tool(
            name="Toogle_Case",
            func=lambda word: toggle_case(word),
            description="use when you want to convert the letter to uppercase or lowercase",
        ),
        Tool(
            name="Sort_String",
            func=lambda string: sort_string(string),
            description="use when you want to sort a string alphabetically",
        ),
    ]

    def create_agent(llm: ChatOpenAI, tools: list, system_prompt: str):
        # Each worker node will be given a name and some tools.
        prompt = ChatPromptTemplate.from_messages(
            [
                (
                    "system",
                    system_prompt,
                ),
                MessagesPlaceholder(variable_name="messages"),
                MessagesPlaceholder(variable_name="agent_scratchpad"),
            ]
        )
        agent = create_openai_tools_agent(llm, tools, prompt)
        executor = AgentExecutor(agent=agent, tools=tools)
        return executor

    def agent_node(state, agent, name):
        result = agent.invoke(state)
        return {"messages": [HumanMessage(content=result["output"], name=name)]}

    members = ["AIAssistant", "Coder"]
    system_prompt = (
        "You are a supervisor tasked with managing a conversation between the"
        " following workers:  {members}. Given the following user request,"
        " respond with the worker to act next. Each worker will perform a"
        " task and respond with their results and status. When finished,"
        " respond with FINISH."
    )
    # Our team supervisor is an LLM node. It just picks the next agent to process
    # and decides when the work is completed
    options = ["FINISH"] + members
    # Using openai function calling can make output parsing easier for us
    function_def = {
        "name": "route",
        "description": "Select the next role.",
        "parameters": {
            "title": "routeSchema",
            "type": "object",
            "properties": {
                "next": {
                    "title": "Next",
                    "anyOf": [
                        {"enum": options},
                    ],
                }
            },
            "required": ["next"],
        },
    }
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", system_prompt),
            MessagesPlaceholder(variable_name="messages"),
            (
                "system",
                "Given the conversation above, who should act next?"
                " Or should we FINISH? Select one of: {options}",
            ),
        ]
    ).partial(options=str(options), members=", ".join(members))

    llm = ChatOpenAI(model="gpt-4-1106-preview", streaming=True)

    supervisor_chain = (
        prompt
        | llm.bind_functions(functions=[function_def], function_call="route")
        | JsonOutputFunctionsParser()
    )

    class AgentState(TypedDict):
        # The annotation tells the graph that new messages will always
        # be added to the current states
        messages: Annotated[Sequence[BaseMessage], operator.add]
        # The 'next' field indicates where to route to next
        next: str


    research_agent = create_agent(llm, tools, "You are a ai assistant to provide personalized answer to people.")
    research_node = functools.partial(agent_node, agent=research_agent, name="AIAssistant")

    # NOTE: THIS PERFORMS ARBITRARY CODE EXECUTION. PROCEED WITH CAUTION
    code_agent = create_agent(
        llm,
        tools,
        "You may generate safe python code to analyze data and generate charts using matplotlib.",
    )
    code_node = functools.partial(agent_node, agent=code_agent, name="Coder")

    workflow = StateGraph(AgentState)
    workflow.add_node("AIAssistant", research_node)
    workflow.add_node("Coder", code_node)
    workflow.add_node("supervisor", supervisor_chain)

    for member in members:
        # We want our workers to ALWAYS "report back" to the supervisor when done
        workflow.add_edge(member, "supervisor")
    # The supervisor populates the "next" field in the graph state
    # which routes to a node or finishes
    conditional_map = {k: k for k in members}
    conditional_map["FINISH"] = END
    workflow.add_conditional_edges("supervisor", lambda x: x["next"], conditional_map)
    # Finally, add entrypoint
    workflow.set_entry_point("supervisor")

    graph = workflow.compile()
    return graph

agent_supervisor = AgentSupervisor.getAgentSupervisor()
agent_name = ''
for s in agent_supervisor.stream(
    {"messages": [HumanMessage(content=question)]},
    {"recursion_limit": 100},
):
    if "end" not in s:
        if 'supervisor' in s:
            agent_name = s['supervisor']['next']
            if agent_name != "FINISH":
                await websocket.send_text(json.dumps({"token": "AgentName:" + agent_name + "\n"}))
                print(agent_name)
        if agent_name in s:
            content = s[agent_name]['messages'][0].content
            await websocket.send_text(json.dumps({"token": "Response:" + content + "\n"}))
            print(content)
    print("----")

Error Message and Stack Trace (if applicable)

No error. It runs and produces output, but I need a way to stream the tokens of the agent response; currently it outputs the full agent response at once.

Description

I am trying to stream the tokens of the agent response in the agent supervisor.
Right now, it outputs the agent name and the full agent response; I want to stream the tokens of the agent response instead.

System Info

platform: windows
python version: 3.11.2
langchain version: latest version

Error printing the score when grade is "no" for langgraph_agentic_rag

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

def grade_documents(state):
    """
    Determines whether the retrieved documents are relevant to the question.

    Args:
        state (messages): The current state

    Returns:
        str: A decision for whether the documents are relevant or not
    """

    print("---CHECK RELEVANCE---")

    # Data model
    class grade(BaseModel):
        """Binary score for relevance check."""

        binary_score: str = Field(description="Relevance score 'yes' or 'no'")

    # LLM
    model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0,)

    # Tool
    grade_tool_oai = convert_to_openai_tool(grade)

    # LLM with tool and enforce invocation
    llm_with_tool = model.bind(
        tools=[convert_to_openai_tool(grade_tool_oai)],
        tool_choice={"type": "function", "function": {"name": "grade"}},
    )

    # Parser
    parser_tool = PydanticToolsParser(tools=[grade])

    # Prompt
    prompt = PromptTemplate(
        template="""You are a grader assessing relevance of a retrieved document to a user question. \n 
        Here is the retrieved document: \n\n {context} \n\n
        Here is the user question: {question} \n
        If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant. \n
        Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.""",
        input_variables=["context", "question"],
    )

    # Chain
    chain = prompt | llm_with_tool | parser_tool

    messages = state["messages"]
    last_message = messages[-1]

    question = messages[0].content
    docs = last_message.content
    
    score = chain.invoke(
        {"question": question, 
         "context": docs}
    )
    
    grade = score[0].binary_score
    
    if grade == "yes":
        print("---DECISION: DOCS RELEVANT---")
        return "yes"

    else:
        print("---DECISION: DOCS NOT RELEVANT---")
        print(score.binary_score)
        return "no"

Error Message and Stack Trace (if applicable)

AttributeError: 'list' object has no attribute 'binary_score'

Description

If the grader grades any document as not relevant and scores it "no", printing the score fails.

System Info

langchain==0.1.10
langchain-community==0.0.25
langchain-core==0.1.28
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langchainhub==0.1.14

How can an LCEL chain be implemented as a graph?

For example:
llm: write a title
llm: write a body
tool: send to email

In LangChain LCEL this is: "llm | llm | tool"

If I use a graph, how do I implement this?

DOC: Output Parsing for human readable text.

Issue with current documentation:

Hello,
I have implemented langgraph using a supervisor, but I did not see any way to output-parse the data.
My output looks like this:


{'supervisor': {'next': 'text_to_sql'}}
----
{'text_to_sql': {'messages': [HumanMessage(content='SELECT tld, COUNT(tld) AS tld_count FROM premium_db.premium_inventory GROUP BY tld;', name='text_to_sql')]}}
----
Python REPL can execute arbitrary code. Use with caution.
{'supervisor': {'next': 'text_to_sql'}, 'sql_to_python': {'messages': [HumanMessage(content='FINISH: The data has been visualized using different colors for each TLD. The chart should now be displayed on your screen.', name='sql_to_python')]}}
----
{'supervisor': {'next': 'FINISH'}, 'text_to_sql': {'messages': [HumanMessage(content='Your only task is to create sql query and do not worry of visualisation of the data.', name='text_to_sql')]}}
----

How would I make this readable for the end user?

Idea or request for content:

No response

web_voyager's add_conditional_edges lacks conditional_edge_mapping

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

it seems that "graph_builder.add_conditional_edges("agent", select_tool)" missed one argument, and caused this error when putting in code (not notebook)
graph_builder.add_conditional_edges("agent", select_tool)
TypeError: Graph.add_conditional_edges() missing 1 required positional argument: 'conditional_edge_mapping'


The original code is as follows:
graph_builder = StateGraph(AgentState)

graph_builder.add_node("agent", agent)
graph_builder.set_entry_point("agent")

graph_builder.add_node("update_scratchpad", update_scratchpad)
graph_builder.add_edge("update_scratchpad", "agent")

tools = {
    "Click": click,
    "Type": type_text,
    "Scroll": scroll,
    "Wait": wait,
    "GoBack": go_back,
    "Google": to_google,
}

for node_name, tool in tools.items():
    graph_builder.add_node(
        node_name,
        # The lambda ensures the function's string output is mapped to the "observation"
        # key in the AgentState
        RunnableLambda(tool) | (lambda observation: {"observation": observation}),
    )
    # Always return to the agent (by means of the update-scratchpad node)
    graph_builder.add_edge(node_name, "update_scratchpad")

def select_tool(state: AgentState):
    # Any time the agent completes, this function
    # is called to route the output to a tool or
    # to the end user.
    action = state["prediction"]["action"]
    if action == "ANSWER":
        return END
    if action == "retry":
        return "agent"
    return action

graph_builder.add_conditional_edges("agent", select_tool)  # <----------- here

graph = graph_builder.compile()

Error Message and Stack Trace (if applicable)

TypeError: Graph.add_conditional_edges() missing 1 required positional argument: 'conditional_edge_mapping'

Description

it seems that "graph_builder.add_conditional_edges("agent", select_tool)" missed one argument, and caused this error when putting in code (not notebook)
graph_builder.add_conditional_edges("agent", select_tool)
TypeError: Graph.add_conditional_edges() missing 1 required positional argument: 'conditional_edge_mapping'

System Info

Package Information

langchain_core: 0.1.22
langchain: 0.1.6
langchain_community: 0.0.19
langsmith: 0.0.87
langchain_cli: 0.0.20
langchain_experimental: 0.0.50
langchain_google_genai: 0.0.8
langchain_google_vertexai: 0.0.5
langchain_openai: 0.0.5
langchainhub: 0.1.14
langgraph: 0.0.24
langserve: 0.0.37

astream_log produces TypeError: unsupported operand type(s) for +: 'dict' and 'dict' in passthrough.py

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

The following code produces the error. I have run into it in many different scenarios, but this one uses one of your base examples from https://github.com/langchain-ai/langgraph/blob/main/examples/multi_agent/agent_supervisor.ipynb. The only change is the async invocation, the `async for output in graph.astream_log(...)` loop located at the very bottom of the code.

import getpass
import os

from langchain_community.chat_models import ChatOpenAI
# Optional, add tracing in LangSmith
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "Multi-agent Collaboration"

from typing import Annotated, List, Tuple, Union


from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.tools import tool
from langchain_experimental.tools import PythonREPLTool

tavily_tool = TavilySearchResults(max_results=5)



# This executes code locally, which can be unsafe
python_repl_tool = PythonREPLTool()

from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI


def create_agent(llm: ChatOpenAI, tools: list, system_prompt: str):
    # Each worker node will be given a name and some tools.
    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                system_prompt,
            ),
            MessagesPlaceholder(variable_name="messages"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )
    agent = create_openai_tools_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)
    return executor

def agent_node(state, agent, name):
    result = agent.invoke(state)
    return {"messages": [HumanMessage(content=result["output"], name=name)]}

from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

members = ["Researcher", "Coder"]
system_prompt = (
    "You are a supervisor tasked with managing a conversation between the"
    " following workers:  {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH."
)
# Our team supervisor is an LLM node. It just picks the next agent to process
# and decides when the work is completed
options = ["FINISH"] + members
# Using openai function calling can make output parsing easier for us
function_def = {
    "name": "route",
    "description": "Select the next role.",
    "parameters": {
        "title": "routeSchema",
        "type": "object",
        "properties": {
            "next": {
                "title": "Next",
                "anyOf": [
                    {"enum": options},
                ],
            }
        },
        "required": ["next"],
    },
}
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=str(options), members=", ".join(members))

llm = ChatOpenAI(model="gpt-4-1106-preview", streaming=True)

supervisor_chain = (
    prompt
    | llm.bind_functions(functions=[function_def], function_call="route")
    | JsonOutputFunctionsParser()
)

import operator
from typing import Annotated, Any, Dict, List, Optional, Sequence, TypedDict
import functools

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langgraph.graph import StateGraph, END


# The agent state is the input to each node in the graph
class AgentState(TypedDict):
    # The annotation tells the graph that new messages will always
    # be added to the current states
    messages: Annotated[Sequence[BaseMessage], operator.add]
    # The 'next' field indicates where to route to next
    next: str


research_agent = create_agent(llm, [tavily_tool], "You are a web researcher.")
research_node = functools.partial(agent_node, agent=research_agent, name="Researcher")

# NOTE: THIS PERFORMS ARBITRARY CODE EXECUTION. PROCEED WITH CAUTION
code_agent = create_agent(
    llm,
    [python_repl_tool],
    "You may generate safe python code to analyze data and generate charts using matplotlib.",
)
code_node = functools.partial(agent_node, agent=code_agent, name="Coder")

workflow = StateGraph(AgentState)
workflow.add_node("Researcher", research_node)
workflow.add_node("Coder", code_node)
workflow.add_node("supervisor", supervisor_chain)

for member in members:
    # We want our workers to ALWAYS "report back" to the supervisor when done
    workflow.add_edge(member, "supervisor")
# The supervisor populates the "next" field in the graph state
# which routes to a node or finishes
conditional_map = {k: k for k in members}
conditional_map["FINISH"] = END
workflow.add_conditional_edges("supervisor", lambda x: x["next"], conditional_map)
# Finally, add entrypoint
workflow.set_entry_point("supervisor")

graph = workflow.compile()

async def main():
    async for output in graph.astream_log(
        {
            "messages": [
                HumanMessage(content="Code hello world and print it to the terminal")
            ]
        }, include_types=["llm"]
    ):
        for op in output.ops:
            if op["path"] == "/streamed_output/-":
                # this is the output from .stream()
                ...
            elif op["path"].startswith("/logs/") and op["path"].endswith(
                "/streamed_output/-"
            ):
                # because we chose to only include LLMs, these are LLM tokens
                print(op["value"])
if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Error Message and Stack Trace (if applicable)

(agents_v09) JasonMacPro:agents_v09 jason$ python langgraph_astream_events.py
content='' additional_kwargs={'function_call': {'arguments': '', 'name': 'route'}}
content='' additional_kwargs={'function_call': {'arguments': '{"', 'name': ''}}
content='' additional_kwargs={'function_call': {'arguments': 'next', 'name': ''}}
content='' additional_kwargs={'function_call': {'arguments': '":"', 'name': ''}}
Traceback (most recent call last):
File "/Users/jason/Documents/agents_v09/langgraph_astream_events.py", line 165, in
asyncio.run(main())
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/Users/jason/Documents/agents_v09/langgraph_astream_events.py", line 147, in main
async for output in graph.astream_log(
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 683, in astream_log
async for item in _astream_log_implementation( # type: ignore
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/tracers/log_stream.py", line 612, in _astream_log_implementation
await task
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/tracers/log_stream.py", line 566, in consume_astream
async for chunk in runnable.astream(input, config, **kwargs):
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langgraph/pregel/init.py", line 657, in astream
async for chunk in self.atransform(
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langgraph/pregel/init.py", line 675, in atransform
async for chunk in self._atransform_stream_with_config(
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1597, in _atransform_stream_with_config
chunk = cast(Output, await py_anext(iterator))
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/tracers/log_stream.py", line 237, in tap_output_aiter
async for chunk in output:
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langgraph/pregel/init.py", line 524, in _atransform
_interrupt_or_proceed(done, inflight, step)
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langgraph/pregel/init.py", line 698, in _interrupt_or_proceed
raise exc
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langgraph/pregel/init.py", line 836, in _aconsume
async for _ in iterator:
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4140, in astream
async for item in self.bound.astream(
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2452, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2435, in atransform
async for chunk in self._atransform_stream_with_config(
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1597, in _atransform_stream_with_config
chunk = cast(Output, await py_anext(iterator))
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/tracers/log_stream.py", line 237, in tap_output_aiter
async for chunk in output:
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2405, in _atransform
async for output in final_pipeline:
File "/Users/jason/.pyenv/versions/3.10.11/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py", line 280, in atransform
final = final + chunk
TypeError: unsupported operand type(s) for +: 'dict' and 'dict'

Description

I am trying to stream output from a compiled LangGraph graph using astream_log (astream_events also produces this error). It is easily reproducible with the example code in many of the langgraph examples when using astream_log rather than astream or synchronous calls.
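
The failing statement in the trace is `final = final + chunk` inside `langchain_core`'s passthrough.py: when the streamed chunks are plain dicts (as the graph's state updates are), there is no `+` operator to accumulate them. A stripped-down illustration of that accumulation step, outside LangChain entirely:

```python
# Minimal illustration (not LangChain code): summing dict chunks with "+"
# raises the same TypeError reported above.
chunks = [{"next": "Coder"}, {"next": "FINISH"}]

final = None
for chunk in chunks:
    # TypeError: unsupported operand type(s) for +: 'dict' and 'dict' on the second chunk
    final = chunk if final is None else final + chunk
```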

System Info

langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-experimental==0.0.52
langchain-mistralai==0.0.4
langchain-openai==0.0.6
langgraph==0.0.25
langsmith==0.1.5

Mac OSX 12.6.5

Python 3.10.11

app.get_graph() is throwing a RuntimeError when using langgraph

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

from langgraph.graph import StateGraph, END
from typing import Dict, TypedDict
class GraphState(TypedDict):
    keys : Dict[str, any]

def do_something_1(state):
    state_dict = state['keys']
    print("do_something_1")
    return {
        'keys': {
            "input1": "input1"
        }
    }
def do_something_2(state):
    print("do_something_2")
    state_dict = state['keys']
    print(state_dict['input1'])

    return {
        "keys" : {
            "input2": "input2"
        }
    }
def do_something_3(state):
    print("do_something_3")
    state_dict = state['keys']
    return {
        "keys" : {
            "input3": "input3"
        }
    }


workflow = StateGraph(GraphState)

workflow.add_node("start", do_something_1)
workflow.add_node("step1", do_something_2)
workflow.add_node("step2", do_something_3)
workflow.set_entry_point("start")
workflow.add_edge("start", "step1")
workflow.add_edge("step1", "step2")
workflow.add_edge("step2", END)
app = workflow.compile()
app.invoke(input={"keys": {}})

Output

>> do_something_1
>> do_something_2
>> input1
>> do_something_3
>> {'keys': {'input3': 'input3'}}

Error occurs when I execute:

app.get_graph()
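
One detail worth checking, although it is not confirmed by the report itself: the state is annotated as `Dict[str, any]`, i.e. with the builtin `any` function rather than `typing.Any`. `invoke` never inspects that annotation, but `get_graph()` asks pydantic v1 to build an input schema from it, and `issubclass(any, ...)` then fails exactly as the trace below shows. A sketch of the adjusted annotation:

```python
# Hedged sketch: use typing.Any instead of the builtin any() in the state schema.
from typing import Any, Dict, TypedDict

class GraphState(TypedDict):
    keys: Dict[str, Any]
```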

Error Message and Stack Trace (if applicable)

Traceback:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/validators.py:751, in find_validators(type_, config)
    750 try:
--> 751     if issubclass(type_, val_type):
    752         for v in validators:

TypeError: issubclass() arg 1 must be a class

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
Cell In[18], line 1
----> 1 app.get_graph()

File ~/.conda/envs/llm/lib/python3.11/site-packages/langgraph/graph/graph.py:251, in CompiledGraph.get_graph(self, config, xray)
--> 251         START: graph.add_node(self.get_input_schema(config), START)

File ~/.conda/envs/llm/lib/python3.11/site-packages/langgraph/pregel/__init__.py:242, in Pregel.get_input_schema(self, config)
--> 242         return super().get_input_schema(config)

File ~/.conda/envs/llm/lib/python3.11/site-packages/langchain_core/runnables/base.py:302, in Runnable.get_input_schema(self, config)
--> 302 return create_model(

File ~/.conda/envs/llm/lib/python3.11/site-packages/langchain_core/runnables/utils.py:508, in create_model(__model_name, **field_definitions)
--> 508         return _create_model_cached(__model_name, **field_definitions)

File ~/.conda/envs/llm/lib/python3.11/site-packages/langchain_core/runnables/utils.py:521, in _create_model_cached(__model_name, **field_definitions)
--> 521     return _create_model_base(

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/main.py:1024, in create_model(__model_name, __config__, __base__, __module__, __validators__, __cls_kwargs__, __slots__, **field_definitions)
-> 1024 return meta(__model_name, resolved_bases, namespace, **kwds)

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/main.py:197, in ModelMetaclass.__new__(mcs, name, bases, namespace, **kwargs)
--> 197     fields[ann_name] = ModelField.infer(

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:504, in ModelField.infer(cls, name, value, annotation, class_validators, config)
--> 504 return cls(

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:434, in ModelField.__init__(self, name, type_, class_validators, model_config, default, default_factory, required, final, alias, field_info)
--> 434 self.prepare()

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:555, in ModelField.prepare(self)
--> 555 self.populate_validators()

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:829, in ModelField.populate_validators(self)
--> 829         *(get_validators() if get_validators else list(find_validators(self.type_, self.model_config))),

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/validators.py:738, in find_validators(type_, config)
--> 738     yield make_typeddict_validator(type_, config)

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/validators.py:624, in make_typeddict_validator(typeddict_cls, config)
--> 624     TypedDictModel = create_model_from_typeddict(

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/annotated_types.py:55, in create_model_from_typeddict(typeddict_cls, **kwargs)
---> 55 return create_model(typeddict_cls.__name__, **kwargs, **field_definitions)

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/main.py:1024, in create_model(__model_name, __config__, __base__, __module__, __validators__, __cls_kwargs__, __slots__, **field_definitions)
-> 1024 return meta(__model_name, resolved_bases, namespace, **kwds)

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/main.py:197, in ModelMetaclass.__new__(mcs, name, bases, namespace, **kwargs)
--> 197     fields[ann_name] = ModelField.infer(

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:504, in ModelField.infer(cls, name, value, annotation, class_validators, config)
--> 504 return cls(

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:434, in ModelField.__init__(self, name, type_, class_validators, model_config, default, default_factory, required, final, alias, field_info)
--> 434 self.prepare()

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:550, in ModelField.prepare(self)
--> 550 self._type_analysis()

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:756, in ModelField._type_analysis(self)
--> 756 self.sub_fields = [self._create_sub_type(self.type_, '_' + self.name)]

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:806, in ModelField._create_sub_type(self, type_, name, for_keys)
--> 806 return self.__class__(

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:434, in ModelField.__init__(self, name, type_, class_validators, model_config, default, default_factory, required, final, alias, field_info)
--> 434 self.prepare()

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:555, in ModelField.prepare(self)
--> 555 self.populate_validators()

File ~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:829, in ModelField.populate_validators(self)
    825 if not self.sub_fields or self.shape == SHAPE_GENERIC:
    826     get_validators = getattr(self.type_, '__get_validators__', None)
    [826](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:826)     get_validators = getattr(self.type_, '__get_validators__', None)
    [827](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:827)     v_funcs = (
    [828](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:828)         *[v.func for v in class_validators_ if v.each_item and v.pre],
--> [829](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:829)         *(get_validators() if get_validators else list(find_validators(self.type_, self.model_config))),
    [830](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:830)         *[v.func for v in class_validators_ if v.each_item and not v.pre],
    [831](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:831)     )
    [832](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:832)     self.validators = prep_validators(v_funcs)
    [834](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/fields.py:834) self.pre_validators = []

File [~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/validators.py:760](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/validators.py:760), in find_validators(type_, config)
    [758](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/validators.py:758)             return
    [759](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/validators.py:759)     except TypeError:
--> [760](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/validators.py:760)         raise RuntimeError(f'error checking inheritance of {type_!r} (type: {display_as_type(type_)})')
    [762](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/validators.py:762) if config.arbitrary_types_allowed:
    [763](https://file+.vscode-resource.vscode-cdn.net/home/bhaswata08/Work/Projects/LLM%20Orchestration/~/.conda/envs/llm/lib/python3.11/site-packages/pydantic/v1/validators.py:763)     yield make_arbitrary_type_validator(type_)

RuntimeError: error checking inheritance of <built-in function any> (type: builtin_function_or_method)

Description

I'm trying to use LangGraph to build AI agents, but when I try to visualize them as shown in this notebook, a runtime error is thrown.

System Info

System Information

OS: Linux
OS Version: #1 SMP PREEMPT_DYNAMIC Fri, 23 Feb 2024 16:31:48 +0000
Python Version: 3.11.8 (main, Feb 26 2024, 21:39:34) [GCC 11.2.0]

Package Information

langchain_core: 0.1.30
langchain: 0.1.11
langchain_community: 0.0.27
langsmith: 0.1.23
langchain_cli: 0.0.21
langchain_groq: 0.0.1
langchain_text_splitters: 0.0.1
langchainhub: 0.1.15
langgraph: 0.0.28
langserve: 0.0.51

Hierarchical Agent Teams with Ollama

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

import functools
import operator
from datetime import datetime
from textwrap import dedent
from typing import Sequence, TypedDict, Annotated

from langchain.agents import create_react_agent, AgentExecutor
from langchain_community.tools.ddg_search import DuckDuckGoSearchRun
from langchain_core.messages import HumanMessage, BaseMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import Runnable
from langchain_core.tools import BaseTool, tool
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain_community.output_parsers.ernie_functions import JsonOutputFunctionsParser
from langchain_experimental.tools import PythonREPLTool
from langgraph.graph import END, StateGraph

SUPERVISOR = 'Supervisor'
RESEARCHER = 'Researcher'
CODER = 'Coder'
FINISH = 'FINISH'
members = [RESEARCHER, CODER]


class GraphState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
    next: str


def create_agent(llm: OllamaFunctions, tools: Sequence[BaseTool], system_prompt: str) \
        -> Runnable:
    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                system_prompt,
            ),
            MessagesPlaceholder(variable_name="messages"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
            MessagesPlaceholder(variable_name="tools"),
            MessagesPlaceholder(variable_name="tool_names"),
        ]
    )

    return create_react_agent(llm, tools, prompt)


def create_agent_executor(llm: OllamaFunctions, tools: Sequence[BaseTool],
                          system_prompt: str) -> AgentExecutor:
    agent = create_agent(llm, tools, system_prompt)
    executor = AgentExecutor(agent=agent, tools=tools)
    return executor


def agent_node(state: TypedDict, agent: Runnable, name: str):
    result = agent.invoke(state)
    return {"messages": [HumanMessage(content=result["output"], name=name)]}


@tool
def get_actual_date_tool(date_format: str = "%Y-%m-%d %H:%M:%S"):
    """
    Get the current time
    """
    return datetime.now().strftime(date_format)


def get_code_executor_code() -> BaseTool:
    return PythonREPLTool()


def get_web_search_tool() -> BaseTool:
    return DuckDuckGoSearchRun()


llm = OllamaFunctions(model="openhermes")

options = [FINISH] + members

function_def = {
    "name": "route",
    "description": "Select the next role.",
    "parameters": {
        "title": "routeSchema",
        "type": "object",
        "properties": {
            "next": {
                "title": "Next",
                "anyOf": [
                    {"enum": options},
                ],
            }
        },
        "required": ["next"],
    },
}
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            dedent("""
                You are a supervisor tasked with managing a conversation between the following workers: 
                {members}. Given the following user request, respond with the worker to act next. Each worker 
                will perform a task and respond with their results and status. When finished, respond with 
                FINISH.
            """)
        ),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            dedent("""
                Given the conversation above, who should act next? Or should we FINISH? Select one of: {options}
            """)
        ),
    ]
).partial(options=str(options), members=", ".join(members))

supervisor_chain = (
        prompt
        | llm.bind(functions=[function_def], function_call={"name": "route"})
        | JsonOutputFunctionsParser()
)

research_agent = create_agent_executor(
    llm,
    [get_web_search_tool()],
    "You are a web researcher."
)
research_node = functools.partial(agent_node, agent=research_agent, name=RESEARCHER)

code_agent = create_agent_executor(
    llm,
    [get_code_executor_code()],
    "You may generate safe python code to analyze data and generate charts using matplotlib.",
)
code_node = functools.partial(agent_node, agent=code_agent, name=CODER)

workflow_graph = StateGraph(GraphState)
workflow_graph.add_node(RESEARCHER, research_node)
workflow_graph.add_node(CODER, code_node)
workflow_graph.add_node(SUPERVISOR, supervisor_chain)

for member in members:
    # We want our workers to ALWAYS "report back" to the supervisor when done
    workflow_graph.add_edge(member, SUPERVISOR)
# The supervisor populates the "next" field in the graph state
# which routes to a node or finishes
conditional_map = {k: k for k in members}
conditional_map[FINISH] = END
workflow_graph.add_conditional_edges(SUPERVISOR, lambda x: x["next"], conditional_map)
# Finally, add entrypoint
workflow_graph.set_entry_point(SUPERVISOR)

chain = workflow_graph.compile()

running = True
while running:
    user_input = input("Enter text (press 'q' or ctrl-c to quit): ")
    if user_input.lower() == 'q':
        running = False
    try:
        inputs = {
            "messages": [
                HumanMessage(content=user_input)
            ]
        }
        for s in chain.stream(inputs, {"recursion_limit": 100}, ):
            if "__end__" not in s:
                print("---")
                result = list(s.values())[0]
                print(result)
    except Exception as e:
        print(e)
        print('Sorry, something goes wrong. Try with a different input')

Error Message and Stack Trace (if applicable)

The output is copied directly from LangSmith; the stack trace is truncated both in the console and in LangSmith.

ValueError('variable agent_scratchpad should be a list of base messages, got ')

Traceback (most recent call last):

File ".../langchain_core/runnables/base.py", line 1262, in _call_with_config
context.run(

File ".../langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^

File ".../langchain_core/prompts/base.py", line 103, in _format_prompt_with_error_handling
return self.format_prompt(**inner_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File ".../langchain_core/prompts/chat.py", line 535, in format_prompt
messages = self.format_messages(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File ".../langchain_core/prompts/chat.py", line 797, in format_messages
message = message_template.format_messages(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File ".../langchain_core/prompts/chat.py", line 129, in format_messages
raise ValueError(

ValueError: variable agent_scratchpad should be a list of base messages, got

Description

I'm trying to reproduce the provided example hierarchical_agent_teams.ipynb using Ollama with OllamaFunctions.

I worked around some incompatibility issues by replacing bind_function with bind and by passing the "name" of the route function explicitly ({"name": "route"} instead of "route").

After that, I'm stuck on this error: 'variable agent_scratchpad should be a list of base messages, got'. The message looks truncated, but only because the agent_scratchpad variable is empty ('').
How can this be solved? Am I making some mistake in the GraphState, or is it something else?

Thanks in advance for the support.
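
For what it's worth, a hedged sketch of a prompt whose agent_scratchpad slot is a plain template variable: the string-based create_react_agent formats the scratchpad as text, so a MessagesPlaceholder for it expects a list of messages and can end up empty. Whether this is the root cause here is an assumption, and the system text below is illustrative only.

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# agent_scratchpad, tools and tool_names as ordinary string variables;
# create_react_agent fills in tools/tool_names and writes the scratchpad as text.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant.\n\nYou can use these tools:\n{tools}\n\nTool names: {tool_names}"),
        MessagesPlaceholder(variable_name="messages"),
        ("human", "{agent_scratchpad}"),
    ]
)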

System Info

$ python -m langchain_core.sys_info

System Information

OS: Linux
OS Version: #1 SMP PREEMPT_DYNAMIC Sun, 03 Mar 2024 07:25:31 +0000
Python Version: 3.11.7 (main, Dec 14 2023, 11:23:37) [GCC 13.2.1 20230801]

Package Information

langchain_core: 0.1.30
langchain: 0.1.11
langchain_community: 0.0.26
langsmith: 0.1.23
langchain_experimental: 0.0.53
langchain_openai: 0.0.8
langchain_text_splitters: 0.0.1
langchainhub: 0.1.15
langgraph: 0.0.26

Packages not installed (Not Necessarily a Problem)

The following packages were not found:

langserve

missing MessageGraph

Dear Langchain,

I cannot find the MessageGraph module when trying to test agent-simulation-evaluation.ipynb;
please help me install that module, thanks a lot!

Cheers!
Richard
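
For reference, a minimal sketch of where MessageGraph lives, mirroring the imports used in the llm-compiler example later in this document (assumes an up-to-date langgraph install):

# MessageGraph is exported from langgraph.graph, not a separate module.
# If the import fails, upgrading usually helps: pip install -U langgraph
from langgraph.graph import END, MessageGraph

graph = MessageGraph()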

langgraph prebuilt NOT work

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

 from langgraph.prebuilt.tool_executor import ToolExecutor, ToolInvocation

Error Message and Stack Trace (if applicable)

File "/home/zhangyj/langchain/langgraph.py", line 23, in
from langgraph.prebuilt.tool_executor import ToolExecutor, ToolInvocation
ModuleNotFoundError: No module named 'langgraph.prebuilt'; 'langgraph' is not a package

Description

I followed the instructions and tried to run langgraph, but the package does not seem to work as expected.
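
A likely explanation, judging from the path in the traceback above (an assumption, not something the report confirms): the script is itself named langgraph.py, so import langgraph resolves to that file instead of the installed package. A minimal check, run from a different script or the REPL:

import langgraph

# If this prints a path ending in your own langgraph.py rather than
# .../site-packages/langgraph/__init__.py, rename the script (and remove its __pycache__).
print(langgraph.__file__)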

System Info

pip freeze | grep langgraph
langgraph==0.0.26
platform (Ubuntu)
python 3.10

Stream Chat LLM Token By Token is not working

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

I am using the code in this notebook:
https://github.com/langchain-ai/langgraph/blob/main/examples/streaming-tokens.ipynb

streaming code:

from langchain_core.messages import HumanMessage
inputs = [HumanMessage(content="what is the weather in sf")]
async for event in app.astream_events(inputs, version="v1"):
    kind = event["event"]
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            # Empty content in the context of OpenAI means
            # that the model is asking for a tool to be invoked.
            # So we only print non-empty content
            print(content, end="|")
    elif kind == "on_tool_start":
        print("--")
        print(
            f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
        )
    elif kind == "on_tool_end":
        print(f"Done tool: {event['name']}")
        print(f"Tool output was: {event['data'].get('output')}")
        print("--")

Error Message and Stack Trace (if applicable)

the streaming is not working, I am not receiving any output from this part:

    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            # Empty content in the context of OpenAI means
            # that the model is asking for a tool to be invoked.
            # So we only print non-empty content
            print(content, end="|")

Description

the streaming is not working, I am not receiving any output

System Info

langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-openai==0.0.5
langgraph==0.0.21

Streamlit calls not working in callback

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

# Define callback with a streamlit call
@st.cache_data(experimental_allow_widgets=True, persist=True) 
def custom_callback():
   response = st.text_input("Do you want to continue?")
   return response == "y"

def run(...):
   # Calling the callback in run works
   custom_callback()
   handler = HumanApprovalCallbackHandler(custom_callback)
   # Passing it to callbacks will result in a NoSessionContext() error
   response = app.stream(inputs, config={"callbacks": [handler]})

Error Message and Stack Trace (if applicable)

File "lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3887, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3353, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1246, in _call_with_config
context.run(
File "lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3229, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/langgraph/prebuilt/tool_executor.py", line 60, in _execute
output = tool.invoke(tool_invocation.tool_input, config=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/langchain_core/tools.py", line 210, in invoke
return self.run(
^^^^^^^^^
File "lib/python3.11/site-packages/langchain_core/tools.py", line 331, in run
run_manager = callback_manager.on_tool_start(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/langchain_core/callbacks/manager.py", line 1308, in on_tool_start
handle_event(
File "lib/python3.11/site-packages/langchain_core/callbacks/manager.py", line 262, in handle_event
raise e
File "lib/python3.11/site-packages/langchain_core/callbacks/manager.py", line 234, in handle_event
event = getattr(handler, event_name)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/langchain_community/callbacks/human.py", line 57, in on_tool_start
if self._should_check(serialized) and not self._approve(input_str):
^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/streamlit/runtime/caching/cache_utils.py", line 212, in wrapper
return cached_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/streamlit/runtime/caching/cache_utils.py", line 240, in call
with spinner(message, cache=True):
File "versions/3.11.7/lib/python3.11/contextlib.py", line 137, in enter
return next(self.gen)
^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/streamlit/elements/spinner.py", line 56, in spinner
message = st.empty()
^^^^^^^^^^
File "lib/python3.11/site-packages/streamlit/elements/empty.py", line 70, in empty
return self.dg._enqueue("empty", empty_proto)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/streamlit/delta_generator.py", line 530, in _enqueue
_enqueue_message(msg)
File "lib/python3.11/site-packages/streamlit/delta_generator.py", line 869, in _enqueue_message
raise NoSessionContext()
streamlit.errors.NoSessionContext

Description

Adding a callback with a Streamlit call inside to stream or invoke results in a NoSessionContext error.

System Info

langchain : 0.1.0
langgraph: 0.0.20
streamlit: 1.31.0

langgraph.pregel.GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition. You can increase the limit by setting the `recursion_limit` config key.

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

do you know typically what is the root cause of this error and how to resolve it? Thanks.

langgraph.pregel.GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition. You can increase the limit by setting the recursion_limit config key.

Error Message and Stack Trace (if applicable)

do you know typically what is the root cause of this error and how to resolve it? Thanks.

langgraph.pregel.GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition. You can increase the limit by setting the recursion_limit config key.

Description

do you know typically what is the root cause of this error and how to resolve it? Thanks.

langgraph.pregel.GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition. You can increase the limit by setting the recursion_limit config key.
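
This error usually means the graph kept looping without reaching END, for example a conditional edge that never routes to the end. If the run legitimately needs more steps, the limit can be raised through the config dict, as in the other examples in this document; a minimal sketch, assuming a compiled graph named app:

# Pass recursion_limit in the config argument of stream/invoke
for step in app.stream(inputs, {"recursion_limit": 100}):
    print(step)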

System Info

python 311

two node flow to one node

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

[Screenshot attached: CleanShot 2024-02-23 at 16 48 41]

An "up" node and a "side" node flow into one "down" node; the outputs of the two upstream nodes should be passed as the downstream node's parameters.

Error Message and Stack Trace (if applicable)

No response

Description

An "up" node and a "side" node flow into one "down" node; the outputs of the two upstream nodes should be passed as the downstream node's parameters.

System Info

latest.

Async runnables

Quick question: the default langchain.runnables class supports async, but when I try to use async functions in chains with PubSub I get an error traced back to the transform_stream_with_config method:

TypeError: Cannot invoke a coroutine function synchronously.Use ainvoke instead.

I admittedly have not dug into the permchain or runnables code very deeply to see what's actually going on, nor would I probably understand it anyway, but I'm curious if there's any way around this issue.
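
A minimal sketch of the workaround the error message points to, calling the async entry points instead of the sync ones (app and inputs stand in for the actual chain and input):

import asyncio

async def main():
    # ainvoke/astream are the async counterparts of invoke/stream on runnables,
    # which is what the "Use ainvoke instead" TypeError is asking for.
    result = await app.ainvoke(inputs)
    print(result)

asyncio.run(main())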

WebVoyager-Langchain Runtime Error

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code


Error Message and Stack Trace (if applicable)

<ipython-input-8-bdd4c5d78a61>:22: RuntimeWarning: coroutine 'sleep' was never awaited
  asyncio.sleep(3)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
<ipython-input-8-bdd4c5d78a61>:22: RuntimeWarning: coroutine 'sleep' was never awaited
  asyncio.sleep(3)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
---------------------------------------------------------------------------
TimeoutError                              Traceback (most recent call last)
[<ipython-input-27-b597227a6e06>](https://localhost:8080/#) in <cell line: 1>()
----> 1 res = await call_agent("Could you explain the WebVoyager paper (on arxiv)?", page)
      2 print(f"Final response: {res}")

32 frames
[/usr/lib/python3.10/asyncio/futures.py](https://localhost:8080/#) in result(self)
    199         self.__log_traceback = False
    200         if self._exception is not None:
--> 201             raise self._exception.with_traceback(self._exception_tb)
    202         return self._result
    203 

TimeoutError: Timeout 30000ms exceeded.

Description

I wanted to run the whole cycle.

System Info


!python -m langchain_core.sys_info

System Information
------------------
> OS:  Linux
> OS Version:  #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version:  3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]

Package Information
-------------------
> langchain_core: 0.1.21
> langchain: 0.1.5
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_openai: 0.0.5
> langchainhub: 0.1.14
> langgraph: 0.0.23

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:

> langserve

[Chore] upgrading pydantic from v1 to v2 with solution

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

This discussion is not related to any specific python code; this is more like a promotion or idea.

Error Message and Stack Trace (if applicable)

No response

Description

Intro

I am a software engineer at MediaTek, and my project involves using LangChain to address some of our challenges and to conduct research on topics related to LangChain. I believe a member of our team has already initiated contact with the vendor regarding the purchase of a LangSmith License.

Motivation

Today, I delved into the source code and discovered that this package heavily relies on Pydantic, specifically version 1. However, the OpenAI API client is already on Pydantic==2.4.2 (ref), so as developers there is no reason for us not to upgrade.

Observation of current repository and needs

Here are some observations and understandings I have gathered:

  1. In langchain_core, langchain.pydantic_v1 is used solely for invoking pydantic.v1.
  2. There are significant differences between Pydantic v1 and v2, such as:
    • root_validator has been replaced by model_validator.
    • validator has been replaced by field_validator.
    • etc. (a short sketch of this mapping follows this list)
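
A hedged sketch of that v1-to-v2 mapping on a toy model (illustrative only, not taken from the LangChain codebase):

from pydantic import BaseModel, field_validator, model_validator


class Item(BaseModel):
    name: str

    # Pydantic v1: @validator("name")
    @field_validator("name")
    @classmethod
    def name_not_empty(cls, value: str) -> str:
        if not value:
            raise ValueError("name must not be empty")
        return value

    # Pydantic v1: @root_validator(pre=True)
    @model_validator(mode="before")
    @classmethod
    def drop_unknown_keys(cls, data: dict) -> dict:
        return {k: v for k, v in data.items() if k in cls.model_fields}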

Question

Should we consider updating this module?
If so, it would be my honor to undertake this task.

Workflow

If I am to proceed, my approach would include:

  1. Replacing all instances of from langchain_core.pydantic_v1 import XXX with from pydantic import XXX within the langchain codebase.
  2. Making the necessary updates for Pydantic, including changes to model_validator, field_validator, etc.
  3. Keeping langchain_core.pydantic_v1 unchanged to avoid conflicts with other repositories, but issuing a deprecation warning to inform users and developers.

System Info

None

Parallel Tool Calling and LLM Token Streaming Issue

Is there a reason you've been using legacy function calling for OpenAI models in all the langgraph examples instead of tools? Before the latest update with ToolExecutor, I did something similar by subclassing Runnable, overriding invoke, and then calling batch on the tool inputs for concurrent execution with a tools agent, and that worked fine. I'm just curious why you aren't promoting tools over functions with langgraph the way you are for Agents, adding support for tool_call_ids, and so on. With an openai_tools_agent and the new classes, something like this gives the correct output:

def execute_tools_concurrent(data):
    agent_actions = data.pop('agent_outcome')
    tool_actions = [ToolInvocation(tool = action.tool, tool_input=action.tool_input) for action in agent_actions]
    outputs = tool_executor.batch(tool_actions)
    full_output = list(zip(agent_actions, outputs))
    data['intermediate_steps'].extend(full_output)
    return data 

But for building from scratch in langgraph and appending ToolMessages instead of FunctionMessages to the state, the new classes do not support adding tool_call_ids, so you have to extend those classes. I'm currently adding an attribute to ToolInvocationInterface and ToolInvocation for tool_call_id and extending ToolExecutor to output the following from _execute and _aexecute:

return {"id": tool_invocation.tool_call_id, "output": output}

Then you can do this sort of thing:

...
tool_calls = last_message.additional_kwargs['tool_calls']

actions = [ToolInvocationWithID(
        tool=tool_call['function']['name'],
        tool_input=json.loads(tool_call['function']['arguments']), 
        tool_call_id = tool_call['id']
    ) for tool_call in tool_calls]

responses = enhanced_executor.batch(actions)
   
tool_messages = [ToolMessage(content=str(r['output']), tool_call_id=r['id']) for r in responses]

return {"messages": tool_messages}

This does work, but maybe there's a better way? Am I missing something?


My actual issue is I can't get the LLM token streaming to work. I updated langchain, langchain_openai, and langgraph and copied streaming-tokens.ipynb verbatim and I'm getting no /logs/ output at all from astream_log. Are you sure the implementation with:

async def call_model(state):
    messages = state['messages']
    response = await model.ainvoke(messages)
  
    return {"messages": [response]}

is correct? Do you need to change it to astream or something to get the tokens?

[StreamlitCallbackHandler] - Not compatible with LangGraph

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

import streamlit as st
from langgraph.prebuilt import create_agent_executor
from langchain.agents import AgentType, create_openai_functions_agent
from langchain.callbacks.streamlit import StreamlitCallbackHandler
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities.sql_database import SQLDatabase
from langchain_core.prompts.chat  import ChatPromptTemplate, AIMessage, SystemMessage, HumanMessagePromptTemplate, MessagesPlaceholder

st_cb = StreamlitCallbackHandler(st.container())

db = SQLDatabase(engine=engine, include_tables=tables)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_tools = toolkit.get_tools()

messages = [
    SystemMessage(content=SQL_PREFIX),
    HumanMessagePromptTemplate.from_template("{input}"),
    AIMessage(content=SQL_FUNCTIONS_SUFFIX),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
input_variables = ["input", "agent_scratchpad"]
prompt = ChatPromptTemplate(input_variables=input_variables, messages=messages)
sql_agent_runnable = create_openai_functions_agent(llm, sql_tools, prompt)

result = app.invoke({"input": "What is the table about?","chat_history":[]},{"callbacks": [st_cb]})

Error Message and Stack Trace (if applicable)

2024-02-10 11:04:02.381 Thread 'ThreadPoolExecutor-15_0': missing ScriptRunContext
Error in StreamlitCallbackHandler.on_llm_start callback: NoSessionContext()
Error in StreamlitCallbackHandler.on_llm_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
Error in StreamlitCallbackHandler.on_tool_start callback: RuntimeError('Current LLMThought is unexpectedly None!')
Error in StreamlitCallbackHandler.on_tool_end callback: RuntimeError('Current LLMThought is unexpectedly None!')

Description

I am trying to use the StreamlitCallbackHandler with LangGraph, as I can successfully do with LangChain.
Based on my observation, the internal format diverges drastically between LangChain and LangGraph. Does this mean that StreamlitCallbackHandler will not be compatible with LangGraph?

System Info

langchain==0.1.0
langchain-community==0.0.12
langchain-core==0.1.14
langchain-experimental==0.0.49
langchain-openai==0.0.3
langgraph==0.0.20

Human-in-the-loop: TypeError: compile() got an unexpected keyword argument 'interrupt_before'

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

workflow = MessageGraph()

workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)

workflow.set_entry_point("agent")

workflow.add_conditional_edges(
    {
        "continue": "action",
        "end": END,
    },
)

workflow.add_edge("action", "agent")

app = workflow.compile(interrupt_before=["action"])

inputs = [HumanMessage(content=user_input)]
for event in app.stream(inputs, {"configurable": {"thread_id": "2"}}):
    for k, v in event.items():
        if k != "end":
            print(v)

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
File "Z:\MHossain_OneDrive\OneDrive\ChatGPT\AI_Bot\Langgraph\Human_in_the_loop\Human-in-the-loop.py", line 109, in
app = workflow.compile(interrupt_before=["action"])
TypeError: compile() got an unexpected keyword argument 'interrupt_before'

Description

I am trying to use human-in-the-loop in a LangGraph workflow, but it fails with the error above.
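
For comparison, a hedged sketch of the compile call used in the persistence example later in this document, which passes a checkpointer alongside interrupt_before. Whether the pinned langgraph==0.0.21 accepts these keyword arguments is an assumption (newer releases do), so upgrading langgraph may be needed as well:

from langgraph.checkpoint.sqlite import SqliteSaver

memory = SqliteSaver.from_conn_string(":memory:")
app = workflow.compile(checkpointer=memory, interrupt_before=["action"])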

System Info

langchain==0.1.12
langgraph==0.0.21
langchain-cli==0.0.20
langchain-community==0.0.28
langchain-core==0.1.31
langchain-experimental==0.0.49
langchain-openai==0.0.8
langchain-text-splitters==0.0.1

Platform - Windows
Python Version - 3.9

Graph Persistence: when input is not None, the graph resets to the entry point

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

@tool("web_search")
def web_search(query: str) -> str:
    """Search with Google SERP API by a query"""
    search = SerpAPIWrapper()
    return search.run(query)

tools = [web_search]
prompt = hub.pull("hwchase17/openai-functions-agent")
llm = ChatOpenAI(model="gpt-4-turbo-preview", streaming=True)
agent_runnable = create_openai_functions_agent(llm, tools, prompt)


class AgentState(TypedDict):
    input: Annotated[Sequence[BaseMessage], operator.add]
    chat_history: list[BaseMessage]
    agent_outcome: Union[AgentAction, AgentFinish, None]
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]

from langchain_core.agents import AgentFinish
from langgraph.prebuilt.tool_executor import ToolExecutor

tool_executor1 = ToolExecutor(tools)
tool_executor2 = ToolExecutor([])

# Define the agent
def run_agent(data):
    agent_outcome = agent_runnable.invoke(data)
    return {"agent_outcome": agent_outcome}


# Define the function to execute tools
def execute_tools1(data):
    agent_action = data["agent_outcome"]
    output = tool_executor1.invoke(agent_action)
    print('execute_tools1')
    return {"intermediate_steps": [(agent_action, str(output))]}

# Define the function to execute tools
def execute_tools2(data):
    agent_action = data["agent_outcome"]
    output = tool_executor2.invoke(agent_action)

    print('execute_tools2')
    return {"intermediate_steps": [(agent_action, str(output))]}

def should_continue(data):
    if isinstance(data["agent_outcome"], AgentFinish):
        return "end"
    else:
        return "continue"


workflow = StateGraph(AgentState)

workflow.add_node("agent", run_agent)
workflow.add_node("action1", execute_tools1)
workflow.add_node("action2", execute_tools2)

workflow.set_entry_point("agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "action1",
        "end": END,
    },
)

workflow.add_edge("action1", "action2")
workflow.add_edge("action2", "agent")


memory = SqliteSaver.from_conn_string(":memory:")

app = workflow.compile(checkpointer=memory, interrupt_before=["action2"])


inputs = {"input": [HumanMessage(content="what is the weather in sf")], "chat_history": []}


for s in app.stream(inputs, {"configurable": {"thread_id": "3"}}):
    print(list(s.values())[0])
    print("----")

inputs = {"input": [HumanMessage(content="what is the weather in NY")]}

for s in app.stream(inputs, {"configurable": {"thread_id": "3"}}):
    print(list(s.values())[0])
    print("----")

Error Message and Stack Trace (if applicable)

No response

Description

Bug Report: StateGraph Workflow Persistence Issue with Configurable Thread ID

Summary

When using the StateGraph workflow with a configurable thread ID to maintain state across multiple iterations of a stream, the graph does not resume from the expected node following an interruption and the presence of new input. Instead of continuing from the interrupted point with the new input, the workflow restarts from the entry point. This behavior diverges from the documented functionality, where supplying None as input for a subsequent iteration should prompt the graph to continue from the interrupted node.
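
For reference, a minimal sketch of the resume pattern described above, reusing the compiled app and thread ID from the example code: passing None instead of a new input is what signals the graph to continue from the interruption.

# Resume the interrupted run: same thread_id, None as the input
for s in app.stream(None, {"configurable": {"thread_id": "3"}}):
    print(list(s.values())[0])
    print("----")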

Steps to Reproduce

  1. Initialize a StateGraph with an entry point and multiple nodes, including an interruption point (interrupt_before=["action2"]).
  2. Compile the graph with a SqliteSaver configured for in-memory persistence and a configurable thread ID.
  3. Stream inputs through the graph using a specific thread ID, and interrupt the workflow at the predetermined point.
  4. Attempt to resume the workflow by streaming additional inputs with the same thread ID, expecting the workflow to pick up from the point of interruption.

Expected Behavior

Upon streaming new inputs with the same thread ID after an interruption, the workflow should resume from the node immediately following the last executed node before the interruption.

Actual Behavior

The workflow restarts from the entry point node, disregarding the previously executed path and the interruption point, leading to an unexpected re-initialization of the workflow.

Thank you for looking into this issue. Please let me know if there's any more information I can provide to help diagnose and resolve this problem.

System Info

langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-openai==0.0.8
langchainhub==0.1.14
python==3.12
langgraph==0.0.26

how to update graph state

e.g.:

# Define the agent
def run_agent(data):
    agent_outcome = agent_runnable.invoke(data)
    logger.warning(f"Agent outcome: {data}")
    data['define state attr'] = ""
    return {"agent_outcome": agent_outcome}
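
A hedged sketch of the pattern the other examples in this document use: a node updates the graph state through the dictionary it returns, so an extra attribute is set by including it in the return value rather than by mutating data in place (the extra_attr key is purely illustrative and must exist in the state schema):

def run_agent(data):
    agent_outcome = agent_runnable.invoke(data)
    # Return every state key you want updated; LangGraph merges the returned dict
    # into the state according to the schema (e.g. operator.add for appended lists).
    return {
        "agent_outcome": agent_outcome,
        "extra_attr": "",  # illustrative extra state attribute
    }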

How to use the ReAct agent from langchain with langgraph?

Please add an example of ReAct agent (https://python.langchain.com/docs/modules/agents/agent_types/react) usage together with the langgraph framework.

I tried to combine them like so:

# Construct the ReAct agent

my_input = "some example utterance"

agent = create_react_agent(llm, tools, react_template_base)

agent_executer = create_agent_executor(agent, tools)

inputs = {"messages": [HumanMessage(content=my_input)]}

for output in agent_executer.stream(inputs):
    # stream() yields dictionaries with output keyed by node name
    for key, value in output.items():
        print(f"Output from node '{key}':")
        print("---")
        print(value)
    print("\n---\n")

but this doesn't work as expected.

【Question】Why not implement DAG in LCEL?

This is very confusing for users who need to understand both LCEL and langgraph. Why not implement DAG in LCEL?

For comparison, we implemented AWEL (Agentic Workflow Expression Language) orchestration in DB-GPT, and we think AWEL and agents are all you need.


Example

https://github.com/eosphoros-ai/DB-GPT/blob/main/examples/awel/simple_chat_history_example.py

HttpTrigger(node_id=b71f0ccc-4539-445e-8d87-1ec5ef6e7e83)
 -> MapOperator(node_id=23d11ff8-a1d4-4541-a1ff-adfb0eadd614)
   -> ChatHistoryPromptComposerOperator(node_id=3ee4c2af-d625-445b-a0f3-60863404d82e)
     -> LLMBranchOperator(node_id=bdc20002-b9e9-4ce8-9211-5aba83f4c16e)
       -> LLMOperator(node_id=b4c3fe48-a9d8-4968-a128-df5a6b6edd4c, node_name=llm_task)
      |  -> MapOperator(node_id=ce768e92-5299-43ab-ac04-09755b5f19e2)
      |    -> JoinOperator(node_id=f33f239e-90dd-4dd4-bc12-30d6d0912cfb)
       -> StreamingLLMOperator(node_id=ca529d57-1bd7-479c-9047-5cb5512ade8e, node_name=streaming_llm_task)
         -> OpenAIStreamingOutputOperator(node_id=1952037e-775a-4c9c-a8c3-7ca678f15871)
           -> JoinOperator(node_id=f33f239e-90dd-4dd4-bc12-30d6d0912cfb)


with DAG("dbgpt_awel_simple_chat_history") as multi_round_dag:
    # Receive http request and trigger dag to run.
    trigger = HttpTrigger(
        "/examples/simple_history/multi_round/chat/completions",
        methods="POST",
        request_body=TriggerReqBody,
        streaming_predict_func=lambda req: req.stream,
    )
    prompt = ChatPromptTemplate(
        messages=[
            SystemPromptTemplate.from_template("You are a helpful chatbot."),
            MessagesPlaceholder(variable_name="chat_history"),
            HumanPromptTemplate.from_template("{user_input}"),
        ]
    )

    composer_operator = ChatHistoryPromptComposerOperator(
        prompt_template=prompt,
        last_k_round=5,
        storage=InMemoryStorage(),
        message_storage=InMemoryStorage(),
    )

    # Use BaseLLMOperator to generate response.
    llm_task = LLMOperator(task_name="llm_task")
    streaming_llm_task = StreamingLLMOperator(task_name="streaming_llm_task")
    branch_task = LLMBranchOperator(
        stream_task_name="streaming_llm_task", no_stream_task_name="llm_task"
    )
    model_parse_task = MapOperator(lambda out: out.to_dict())
    openai_format_stream_task = OpenAIStreamingOutputOperator()
    result_join_task = JoinOperator(
        combine_function=lambda not_stream_out, stream_out: not_stream_out or stream_out
    )

    req_handle_task = MapOperator(
        lambda req: ChatComposerInput(
            context=ModelRequestContext(
                conv_uid=req.context.conv_uid, stream=req.stream
            ),
            prompt_dict={"user_input": req.messages},
            model_dict={
                "model": req.model,
                "context": req.context,
                "stream": req.stream,
            },
        )
    )

    trigger >> req_handle_task >> composer_operator >> branch_task

    # The branch of no streaming response.
    branch_task >> llm_task >> model_parse_task >> result_join_task
    # The branch of streaming response.
    branch_task >> streaming_llm_task >> openai_format_stream_task >> result_join_task

Recursion error when printing app

from permchain import Channel, Pregel

grow_value = (
    Channel.subscribe_to("value")
    | (lambda x: x + x)
    | Channel.write_to(value=lambda x: x if len(x) < 10 else None)
)

app = Pregel(
    chains={"grow_value": grow_value,},
    input="value",
    output="value",
)

print(app) # <-- fails

LangGraph supervisor keeps calling same node even when FINISH

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

Here is my System Prompt:

system_prompt = (
    " You are a supervisor tasked with managing a conversation between the"
    " following workers:  {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results."
    " If you or any of the other {members} have the final answer or deliverable"
    " prefix your response with FINISH: so you know it is time to stop. Once any of the {members} respond"
    " with a final answer or deliverable return the response to the user and stop the execution"
    )

And here is the response.

Please enter your question: hello
{'supervisor': {'next': 'Conversation'}}
----
{'Conversation': {'messages': [HumanMessage(content='FINISH: Hello there! How can I assist you today?', name='Conversation')]}}
----
{'supervisor': {'next': 'Conversation'}}
----
{'Conversation': {'messages': [HumanMessage(content="FINISH: How's your day going?", name='Conversation')]}}
----
{'supervisor': {'next': 'FINISH'}}
----

Process finished with exit code 0

Error Message and Stack Trace (if applicable)

No response

Description

Hello,
I have a conversational node which is used to respond with normal conversation.
Even when the message is prefixed with FINISH, the supervisor keeps calling the node again.

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 22.3.0: Mon Jan 30 20:39:46 PST 2023; root:xnu-8792.81.3~2/RELEASE_ARM64_T6020
Python Version: 3.11.6 (v3.11.6:8b6ee5ba3b, Oct 2 2023, 11:18:21) [Clang 13.0.0 (clang-1300.0.29.30)]

Package Information

langchain_core: 0.1.18
langchain: 0.1.5
langchain_community: 0.0.17
langchain_experimental: 0.0.50
langchain_openai: 0.0.5
langchainhub: 0.1.14
langgraph: 0.0.21

Packages not installed (Not Necessarily a Problem)

The following packages were not found:

langserve

Required lcel-teacher-eval dataset configuration for langgraph_code_assistant.ipynb is unclear

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

The following code:

# Run eval on base chain
run_id = uuid.uuid4().hex[:4]
project_name = "context-stuffing-no-langgraph"
client.run_on_dataset(
    dataset_name="lcel-teacher-eval",
    llm_or_chain_factory= lambda: (lambda x: x["question"]) | chain_base_case,
    evaluation=evaluation_config,
    project_name=f"{run_id}-{project_name}",
)

Error Message and Stack Trace (if applicable)

{
	"name": "ValueError",
	"message": "Dataset lcel-teacher-eval has no example rows.",
	"stack": "---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[23], line 38
     36 run_id = uuid.uuid4().hex[:4]
     37 project_name = \"context-stuffing-with-langgraph\"
---> 38 client.run_on_dataset(
     39     dataset_name=\"lcel-teacher-eval\",
     40     llm_or_chain_factory=model,
     41     evaluation=evaluation_config,
     42     project_name=f\"{run_id}-{project_name}\",
     43 )

File ~/forks/langchain/langgraph/.venv/lib/python3.10/site-packages/langsmith/client.py:3394, in Client.run_on_dataset(self, dataset_name, llm_or_chain_factory, evaluation, concurrency_level, project_name, project_metadata, verbose, tags, input_mapper, revision_id)
   3389 except ImportError:
   3390     raise ImportError(
   3391         \"The client.run_on_dataset function requires the langchain\"
   3392         \"package to run.\
Install with pip install langchain\"
   3393     )
-> 3394 return _run_on_dataset(
   3395     dataset_name=dataset_name,
   3396     llm_or_chain_factory=llm_or_chain_factory,
   3397     concurrency_level=concurrency_level,
   3398     client=self,
   3399     evaluation=evaluation,
   3400     project_name=project_name,
   3401     project_metadata=project_metadata,
   3402     verbose=verbose,
   3403     tags=tags,
   3404     input_mapper=input_mapper,
   3405     revision_id=revision_id,
   3406 )

File ~/forks/langchain/langgraph/.venv/lib/python3.10/site-packages/langchain/smith/evaluation/runner_utils.py:1297, in run_on_dataset(client, dataset_name, llm_or_chain_factory, evaluation, concurrency_level, project_name, project_metadata, verbose, tags, revision_id, **kwargs)
   1289     warn_deprecated(
   1290         \"0.0.305\",
   1291         message=\"The following arguments are deprecated and \"
   (...)
   1294         removal=\"0.0.305\",
   1295     )
   1296 client = client or Client()
-> 1297 container = _DatasetRunContainer.prepare(
   1298     client,
   1299     dataset_name,
   1300     llm_or_chain_factory,
   1301     project_name,
   1302     evaluation,
   1303     tags,
   1304     input_mapper,
   1305     concurrency_level,
   1306     project_metadata=project_metadata,
   1307     revision_id=revision_id,
   1308 )
   1309 if concurrency_level == 0:
   1310     batch_results = [
   1311         _run_llm_or_chain(
   1312             example,
   (...)
   1317         for example, config in zip(container.examples, container.configs)
   1318     ]

File ~/forks/langchain/langgraph/.venv/lib/python3.10/site-packages/langchain/smith/evaluation/runner_utils.py:1125, in _DatasetRunContainer.prepare(cls, client, dataset_name, llm_or_chain_factory, project_name, evaluation, tags, input_mapper, concurrency_level, project_metadata, revision_id)
   1123         project_metadata = {}
   1124     project_metadata.update({\"revision_id\": revision_id})
-> 1125 wrapped_model, project, dataset, examples = _prepare_eval_run(
   1126     client,
   1127     dataset_name,
   1128     llm_or_chain_factory,
   1129     project_name,
   1130     project_metadata=project_metadata,
   1131     tags=tags,
   1132 )
   1133 tags = tags or []
   1134 for k, v in (project.metadata.get(\"git\") or {}).items():

File ~/forks/langchain/langgraph/.venv/lib/python3.10/site-packages/langchain/smith/evaluation/runner_utils.py:971, in _prepare_eval_run(client, dataset_name, llm_or_chain_factory, project_name, project_metadata, tags)
    969 examples = list(client.list_examples(dataset_id=dataset.id))
    970 if not examples:
--> 971     raise ValueError(f\"Dataset {dataset_name} has no example rows.\")
    973 try:
    974     git_info = get_git_info()

ValueError: Dataset lcel-teacher-eval has no example rows."
}

Description

  • I'm trying to compare the behavior and quality of code assistants both with and without langgraph
  • the major roadblock has to do with proper configuration of the langsmith dataset
  • I have had to manually create the lcel-teacher-eval dataset to get around an initial error regarding the dataset not existing
    • currently I'm getting the error captured above
    • I don't remember the specific difference in configuration (or dependencies), but at one point I was seeing it write to the personal lcel-teacher-eval dataset consistently. In that case, the issue was that the input element was being captured as input:input and not input:question (a sketch of seeding the dataset follows this list).
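
A minimal, hedged sketch of seeding the dataset so run_on_dataset has example rows to evaluate (the question/answer keys are assumptions based on the input:question note above):

from langsmith import Client

client = Client()
dataset = client.create_dataset("lcel-teacher-eval")  # skip if the dataset already exists

# The evaluated chain reads x["question"], so each example needs a "question" input;
# the "answer" output key is illustrative.
client.create_example(
    inputs={"question": "How do I compose two runnables with LCEL?"},
    outputs={"answer": "Pipe them together: runnable_a | runnable_b"},
    dataset_id=dataset.id,
)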

System Info

System Information
------------------
> OS:  Linux
> OS Version:  #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version:  3.10.13 (main, Feb  7 2024, 15:27:48) [GCC 11.4.0]

Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.8
> langchain_community: 0.0.21
> langsmith: 0.1.4
> langchain_openai: 0.0.8
> langchainhub: 0.1.14
> langgraph: 0.0.26

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:

> langserve

plan-to-execute. use the create_openai_tools_agent replace create_openai_functions_agent

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

from langchain import hub
from langchain.agents import create_openai_functions_agent, create_openai_tools_agent
from langchain_openai import ChatOpenAI

from langchain_community.tools.tavily_search import TavilySearchResults

# Instantiate the chat model used by the agent below (model choice is illustrative)
llm = ChatOpenAI(temperature=0)

tools = [TavilySearchResults(max_results=3)]

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-tools-agent")

# Construct the OpenAI Tools agent

agent_runnable = create_openai_tools_agent(llm, tools, prompt)

from langgraph.prebuilt import create_agent_executor

agent_executor = create_agent_executor(agent_runnable, tools)

agent_executor.invoke(
    {"input": "who is the winnner of the us open", "chat_history": []},
)

Error Message and Stack Trace (if applicable)

     50 def _execute(
     51     self, tool_invocation: ToolInvocation, *, config: RunnableConfig
     52 ) -> Any:
---> 53     if tool_invocation.tool not in self.tool_map:
     54         return self.invalid_tool_msg_template.format(
     55             requested_tool_name=tool_invocation.tool,
     56             available_tool_names_str=", ".join([t.name for t in self.tools]),
     57         )
     58     else:

AttributeError: 'list' object has no attribute 'tool'

Description

I don't know how to fix it or how to find the bug.

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 21.6.0: Thu Mar 9 20:08:59 PST 2023; root:xnu-8020.240.18.700.8~1/RELEASE_X86_64
Python Version: 3.11.8 (main, Feb 26 2024, 15:43:17) [Clang 14.0.6 ]

Package Information

langchain_core: 0.1.27
langchain: 0.1.9
langchain_community: 0.0.24
langsmith: 0.1.10
langchain_openai: 0.0.8
langchainhub: 0.1.15
langgraph: 0.0.26

Packages not installed (Not Necessarily a Problem)

The following packages were not found:

langserve

Tool/Agent Creation example

Similar to the existing multi-agent workflows, but starting out with one agent that is able to create agents (giving them instructions within a prompt and selecting tools) and tools.

how to configure the recursion limit

Dear Langgraph,

I encountered this when testing the chatbot-simulation-evaluation example:
langgraph.pregel.GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition. You can increase the limit by setting the recursion_limit config key

Would you please advise how to configure the limit?
Thanks a lot!

Cheers!
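A minimal sketch of how the limit can be raised: pass a config dict with a recursion_limit key as the second argument when invoking or streaming the compiled graph (the llm-compiler example further below does the same). The graph name and input below are hypothetical placeholders for the chatbot-simulation-evaluation example:

# "simulation" stands in for the compiled graph from the
# chatbot-simulation-evaluation example; the input is a placeholder.
for event in simulation.stream(
    [],  # hypothetical initial message list
    {"recursion_limit": 100},  # raise the default limit of 25
):
    print(event)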

DOC: Example llm-compiler

Issue with current documentation:

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

from typing import List
from langgraph.graph import MessageGraph, END
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage

from codegen.agent_runtime.llm_compiler.joiner import joiner
from codegen.agent_runtime.llm_compiler.task import plan_and_schedule

graph_builder = MessageGraph()

# 1.  Define vertices
# We defined plan_and_schedule above already
# Assign each node to a state variable to update
graph_builder.add_node("plan_and_schedule", plan_and_schedule)
graph_builder.add_node("join", joiner)


# Define edges
graph_builder.add_edge("plan_and_schedule", "join")

# This condition determines looping logic


def should_continue(state: List[BaseMessage]):
    if isinstance(state[-1], AIMessage):
        return END
    return "plan_and_schedule"


graph_builder.add_conditional_edges(
    start_key="join",
    # Next, we pass in the function that will determine which node is called next.
    condition=should_continue,
)
graph_builder.set_entry_point("plan_and_schedule")
chain = graph_builder.compile()
steps = chain.stream(
    [
        HumanMessage(
            content="What's the oldest parrot alive, and how much longer is that than the average?"
        )
    ],
    {
        "recursion_limit": 100,
    },
)
for step in steps:
    print(step)
    print("---")

Error Message and Stack Trace

<module>
    graph_builder.add_conditional_edges(
TypeError: Graph.add_conditional_edges() missing 1 required positional argument: 'conditional_edge_mapping'
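The error suggests the installed langgraph version (pinned to ^0.0.20 in the dependencies below) still uses an older add_conditional_edges signature in which the mapping from the condition's return values to node names is a required third argument, while the notebook targets a newer release where condition is accepted directly. A sketch of a workaround under that assumption, keeping the same should_continue function (upgrading langgraph is the other option):

graph_builder.add_conditional_edges(
    "join",
    should_continue,
    # Older releases require an explicit mapping from the condition's
    # return value to the next node (or END).
    {"plan_and_schedule": "plan_and_schedule", END: END},
)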

System Info

python = ">=3.10,<4.0.0"
tenacity = "^8.2.3"
langgraph = "^0.0.20"
langchain = "^0.1.4"
beautifulsoup4 = "^4.12.2"
docker = "^6.1.3"
langchain-community = "^0.0.17"
llama-cpp-python = "^0.2.38"
gpt4all = "^2.1.0"
gguf = "^0.6.0"
boto3 = "^1.34.34"
langchain-openai = "^0.0.5"
gitpython = "^3.1.41"
qdrant-client = "^1.7.3"
langchain-experimental = "^0.0.50"
langchainhub = "^0.1.14"

Idea or request for content:

No response

rewoo.ipynb example answers incorrectly as of now, using default code and settings

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

I tested this notebook and it answered incorrectly: it answered either Melbourne or Italy instead of Sesto, Italy.
In the first case, when I got Melbourne, I had set the LLM to "gpt-3.5-turbo-0125" in this cell:

from langchain_openai import ChatOpenAI
model = ChatOpenAI(temperature=0)
# model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

When I tested it with the default LLM ("gpt-3.5-turbo"), it answered Italy.
Please note the planning result doesn't look the same as in the example at:
https://github.com/langchain-ai/langgraph/blob/main/examples/rewoo/rewoo.ipynb
In my run it is:

Plan: Use Google to search for the winner of the 2024 Australian Open. #E1 = Google[2024 Australian Open winner]
Plan: Use Google to search for the hometown of the 2024 Australian Open winner. #E2 = Google[hometown of 2024 Australian Open winner]

In the example it is:

Plan: Use Google to search for the 2024 Australian Open winner.
#E1 = Google[2024 Australian Open winner]

Plan: Retrieve the name of the 2024 Australian Open winner from the search results.
#E2 = LLM[What is the name of the 2024 Australian Open winner, given #E1]

Plan: Use Google to search for the hometown of the 2024 Australian Open winner.
#E3 = Google[hometown of 2024 Australian Open winner, given #E2]

Plan: Retrieve the hometown of the 2024 Australian Open winner from the search results.
#E4 = LLM[What is the hometown of the 2024 Australian Open winner, given #E3]

So I think a langchain/langgraph update might have caused this.

Error Message and Stack Trace (if applicable)

No response

Description

I tested this notebook and, as of now, with the default code and settings, it answered incorrectly: either Melbourne or Italy instead of Sesto, Italy.

System Info

System Information

OS: Windows
OS Version: 10.0.19045
Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]

Package Information

langchain_core: 0.1.22
langchain: 0.1.7
langchain_community: 0.0.20
langsmith: 0.0.87
langchain_benchmarks: 0.0.2
langchain_cli: 0.0.21
langchain_experimental: 0.0.40
langchain_openai: 0.0.6
langchainhub: 0.1.13
langgraph: 0.0.24
langserve: 0.0.41
