07. State Correction and Replay through Intermediate Intervention
LangGraph provides a way to manually update the state of intermediate steps.
Updating the state lets you control the execution path by modifying the agent's behavior, and even rewrite past steps.
This feature is especially useful when correcting an agent's mistakes, exploring alternative paths, or changing an agent's behavior based on a specific goal.
Note: The agent used in this tutorial defines the same graph as in the previous tutorial.
# Configuration file for managing API keys as environment variables
from dotenv import load_dotenv

# Load API key information
load_dotenv()
True
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH17-LangGraph-Modules")
from typing import Annotated
from typing_extensions import TypedDict

from langchain_teddynote.tools.tavily import TavilySearch
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langchain_teddynote.graphs import visualize_graph


########## 1. State definition ##########
# State definition
class State(TypedDict):
    # The message list, accumulated with the add_messages reducer
    messages: Annotated[list, add_messages]


########## 2. Tool definition and binding ##########
# Initialize the tool
tool = TavilySearch(max_results=3)

# Define the tool list
tools = [tool]

# Initialize the LLM
llm = ChatOpenAI(model="gpt-4o-mini")

# Bind the tools to the LLM
llm_with_tools = llm.bind_tools(tools)


########## 3. Add nodes ##########
# Define the chatbot function
def chatbot(state: State):
    # Invoke the LLM and return the messages
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


# Create a state graph
graph_builder = StateGraph(State)

# Add the chatbot node
graph_builder.add_node("chatbot", chatbot)

# Create the tool node
tool_node = ToolNode(tools=tools)

# Add the tool node
graph_builder.add_node("tools", tool_node)

# Conditional edge
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)

########## 4. Add edges ##########
# tools > chatbot
graph_builder.add_edge("tools", "chatbot")

# START > chatbot
graph_builder.add_edge(START, "chatbot")

# chatbot > END
graph_builder.add_edge("chatbot", END)

########## 5. Compile the graph ##########
# Initialize memory storage
memory = MemorySaver()

# Compile the graph builder
graph = graph_builder.compile(checkpointer=memory)

########## 6. Graph visualization ##########
# Visualize the graph
visualize_graph(graph)
First, print the list of graph channels to see which of them interrupt_before and interrupt_after can be applied to.
The current run has been interrupted just before the tools node (ToolNode).
If you check the most recent message, you can see that it contains the query the ToolNode will use before the search is performed.
Here, the query simply contains the word LangGraph (the original question was "Please research and tell me what LangGraph is!").
Naturally, web search results may not be what we want.
Human intervention (Human-in-the-Loop)
Edit search results in TavilySearch tool
We often find ourselves not liking the results of a ToolMessage.
In particular, answers obtained through web searches can easily contain incorrect information, which can then affect the chatbot's responses.
What if a human wants to intervene, modify the ToolMessage returned by the Tavily web search tool, and then pass it on to the LLM?
Below, I have created some fictitious, modified web search results that differ slightly from the original ones.
Next, we inject the modified search results into ToolMessage.
Important
To edit a message here, you must specify the tool_call_id that matches the Message you want to edit.
StateGraph's update_state method
The update_state method updates the state of the graph with a given value. This method behaves as if the value came from as_node.
as_node (Optional[str]): The node name to consider as the source of the value. Defaults to None.
Return value
RunnableConfig
Key features
Loads the previous state and saves the new state via the checkpointer.
Handles state updates for subgraphs.
If as_node is not specified, finds the node that last updated the state.
Updates the state by executing the writers of the specified node.
Saves the updated state to a checkpoint.
Main Logic
Checks the checkpointer and raises a ValueError if it is not found.
If the update is to a subgraph, the update_state method of that subgraph is called.
Loads a previous checkpoint and determines as_node if necessary.
Updates the state using writers on the specified node.
Saves the updated state as a new checkpoint.
Reference
This method is used when manually updating the state of the graph.
Uses a checkpointer to ensure versioning and persistence of the state.
If as_node is not specified, it is determined automatically, but an error may occur in ambiguous cases.
Writing to SharedValues is not allowed while the state is being updated.
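As a minimal sketch of a typical call (mirroring how update_state is used later in this tutorial; the message content and tool_call_id below are placeholders):

from langchain_core.messages import ToolMessage

graph.update_state(
    # The config identifying the thread / checkpoint to update
    config,
    # The values to merge into the state; the "messages" key is handled by the add_messages reducer
    {"messages": [ToolMessage(content="corrected search result", tool_call_id="...")]},
    # Attribute the update to this node (optional)
    as_node="tools",
)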
The graph is now complete, because we have provided the final response message ourselves!
Since state updates simulate graph steps, they also generate corresponding traces.
The same logic applies here: a message passed to update_state is appended to the state in the same way.
The update_state function acts as if it were one of the nodes in the graph. By default, the update is attributed to the last node that executed, but you can specify the node manually, as shown below. Let's add an update and tell the graph to treat it as if it came from the "chatbot" node.
Below is the code you can use if you also want to modify the state of the final answer.
Now, let's visualize the graph and check the overall output.
Check the current state as before to ensure that the checkpoint reflects the manual update.
Check whether there is a next node to proceed to. You can see that next is the empty tuple (), which means the whole process has completed normally.
Update the message state after an interrupt, then continue
Modify search query in TavilySearch tool
This time, we will look at how to stop with an interrupt before the next node runs, update the state, and then continue.
First, create a new thread_id.
Here we use the generate_random_hash function, which generates a random hash value.
Next, let's update the tool call for the agent. First, let's get the Message ID.
The last message is related to calling the tavily_web_search tool.
The main properties are:
name: Name of the tool
args: Search query
id: Tool Call ID
type: Tool call type (tool_call)
Let's update the query of args among the above property values.
Create a new tool call, new_tool_call, by copying the existing tool call from existing_message.
Since we copy with the copy() method, all property values are carried over.
Then set the desired search query in the query argument.
Important
id uses the id of the existing message as-is. (If the id changes, the message reducer will append a new message instead of replacing the existing one.)
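To illustrate this point about the id, here is a small standalone sketch that calls the add_messages reducer directly (the ids and contents are made up for illustration):

from langchain_core.messages import AIMessage
from langgraph.graph.message import add_messages

base = [AIMessage(content="old answer", id="msg-1")]

# Same id -> the existing message is replaced
replaced = add_messages(base, [AIMessage(content="new answer", id="msg-1")])

# Different id -> the new message is appended
appended = add_messages(base, [AIMessage(content="new answer", id="msg-2")])

print(len(replaced), len(appended))  # 1 2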
You can see that the search query has been updated.
Check the tool_calls of the last message that was updated.
You can see that the query in args has been modified.
You can see that the search is performed with the changed search query "LangGraph site:teddylee777.github.io" instead of the existing search term "LangGraph".
Continue streaming the graph using the existing config and a None input.
In the final state, check the last message in messages (this is the final response message).
Modify and Replay Results from Previous Snapshots
This time, we'll look at how to Replay by modifying the results of a previous snapshot.
After checking the last snapshot, go back to a specific node, modify the state, and then proceed from that node again.
This is called Replay.
First, get the state of the last snapshot.
Check the contents of the selected message.
Check whether the search query has been updated and reflected.
Create an updated AIMessage.
Below is the message before the update.
Update the state using the update_state method in graph.
Save the updated state to updated_state.
Now we stream from the updated state, passing None as the input to replay it.
Prints the final result.
The config used at this time is the initial thread config, not the updated_state config from the replay; it is what we use to retrieve the final state.
from langchain_core.runnables import RunnableConfig
# question
question = "Please research and tell me what LangGraph is!"
# Define the initial input state
input = State(messages=[("user", question)])
# config setting
config = RunnableConfig(
    configurable={"thread_id": "1"},  # thread ID setting
)
# Output a list of graph channels
list(graph.channels)
# Calling a graph stream
events = graph.stream(
    input=input, config=config, interrupt_before=["tools"], stream_mode="values"
)

# Repeating events
for event in events:
    # If a message is included in an event
    if "messages" in event:
        # Pretty output of the last message
        event["messages"][-1].pretty_print()
================================ Human Message =================================
Please research and tell me what LangGraph is!
================================== Ai Message ==================================
Tool Calls:
tavily_web_search (call_uAZwKKbpIcMsKKOHI6aFIMty)
Call ID: call_uAZwKKbpIcMsKKOHI6aFIMty
Args:
query: LangGraph
# Create a graph state snapshot
snapshot = graph.get_state(config)
# Extract the most recent message
last_message = snapshot.values["messages"][-1]
# Output message
last_message.pretty_print()
modified_search_result = """[Modified web search results]
LangGraph enables building stateful multi-actor applications using LLM.
LangGraph is an open source library that provides cycle flow, controllability, persistence, and cloud deployment capabilities.
For a detailed tutorial, see the [LangGraph Tutorial](https://langchain-ai.github.io/langgraph/tutorials/)
and Teddy Note's [Langchain Korean Tutorial](https://wikidocs.net/233785)."""
print(modified_search_result)
[Modified web search results]
LangGraph enables building stateful multi-actor applications using LLM.
LangGraph is an open source library that provides cycle flow, controllability, persistence, and cloud deployment capabilities.
For a detailed tutorial, see the [LangGraph Tutorial](https://langchain-ai.github.io/langgraph/tutorials/)
and Teddy Note's [Langchain Korean Tutorial](https://wikidocs.net/233785).
# Extract `tool_call_id` of `ToolMessage` you want to modify
tool_call_id = last_message.tool_calls[0]["id"]
print(tool_call_id)
call_uAZwKKbpIcMsKKOHI6aFIMty
from langchain_core.messages import AIMessage, ToolMessage
new_messages = [
    # The LLM API requires a ToolMessage that matches the tool call
    ToolMessage(
        content=modified_search_result,
        tool_call_id=tool_call_id,
    ),
    # Optionally, add the content directly as the LLM's response
    # AIMessage(content=modified_search_result),
]
new_messages[-1].pretty_print()
================================= Tool Message =================================
[Modified web search results]
LangGraph enables building stateful multi-actor applications using LLM.
LangGraph is an open source library that provides cycle flow, controllability, persistence, and cloud deployment capabilities.
For a detailed tutorial, see the [LangGraph Tutorial](https://langchain-ai.github.io/langgraph/tutorials/)
and Teddy Note's [Langchain Korean Tutorial](https://wikidocs.net/233785).
graph.update_state(
    # Specify the state (thread) to update
    config,
    # The updated values to provide. Messages in `State` are "append-only": they are added to the existing messages.
    {"messages": new_messages},
    as_node="tools",
)
print("(Print 1 recent message)\n")
print(graph.get_state(config).values["messages"][-1])
(Print 1 recent message)
content='[Modified web search results]\nLangGraph enables building stateful multi-actor applications using LLM.\nLangGraph is an open source library that provides cycle flow, controllability, persistence, and cloud deployment capabilities.\nFor a detailed tutorial, see the [LangGraph Tutorial](https:
snapshot = graph.get_state(config)
snapshot.next
('chatbot',)
# `None` adds nothing to the current state
events = graph.stream(None, config, stream_mode="values")

# Repeating events
for event in events:
    # If a message is included in an event
    if "messages" in event:
        # Pretty output of the last message
        event["messages"][-1].pretty_print()
================================= Tool Message =================================
[Modified web search results]
LangGraph enables building stateful multi-actor applications using LLM.
LangGraph is an open source library that provides cycle flow, controllability, persistence, and cloud deployment capabilities.
For a detailed tutorial, see the [LangGraph Tutorial](https://langchain-ai.github.io/langgraph/tutorials/)
and Teddy Note's [Langchain Korean Tutorial](https://wikidocs.net/233785).
================================== Ai Message ==================================
LangGraph is an open source library that helps build stateful multi-actor applications using large language models (LLMs). It provides cycle flow, controllability, persistence, and cloud deployment features to help developers build complex applications more easily.
You can find a detailed tutorial in the [LangGraph Tutorial](https://langchain-ai.github.io/langgraph/tutorials/) and Teddy Note's [Langchain Korean Tutorial](https://wikidocs.net/233785).
# Specifies the node on which this function will operate. Automatically continues processing as if this node had just been executed.
# graph.update_state(
#     config,
#     {
#         "messages": [
#             AIMessage(content="Finally, we finish by adding a final message.")
#         ]
#     },
#     as_node="chatbot",
# )
from langchain_teddynote.graphs import visualize_graph
visualize_graph(graph)
# Create a graph state snapshot
snapshot = graph.get_state(config)
# Print all messages
for message in snapshot.values["messages"]:
    message.pretty_print()
================================ Human Message =================================
Please research and tell me what LangGraph is!
================================== Ai Message ==================================
Tool Calls:
tavily_web_search (call_uAZwKKbpIcMsKKOHI6aFIMty)
Call ID: call_uAZwKKbpIcMsKKOHI6aFIMty
Args:
query: LangGraph
================================= Tool Message =================================
[Modified web search results]
LangGraph enables building stateful multi-actor applications using LLM.
LangGraph is an open source library that provides cycle flow, controllability, persistence, and cloud deployment capabilities.
For a detailed tutorial, see the [LangGraph Tutorial](https://langchain-ai.github.io/langgraph/tutorials/)
and Teddy Note's [Langchain Korean Tutorial](https://wikidocs.net/233785).
================================== Ai Message ==================================
LangGraph is an open source library that helps build stateful multi-actor applications using large language models (LLMs). It provides cycle flow, controllability, persistence, and cloud deployment capabilities to help developers build complex applications more easily.
You can find detailed tutorials in [LangGraph Tutorial](https://langchain-ai.github.io/langgraph/tutorials/) and Teddy Note's [Langchain Korean Tutorial](https://wikidocs.net/233785).
# Print the next node to execute
print(snapshot.next)
()
from langchain_teddynote.graphs import generate_random_hash
thread_id = generate_random_hash()
print(f"thread_id: {thread_id}")
question = "I want to learn about LangGraph. Please recommend some useful materials!"
# Define the initial input state
input = State(messages=[("user", question)])
# Create a new config
config = {"configurable": {"thread_id": thread_id}}
events = graph.stream(
    input=input,
    config=config,
    interrupt_before=["tools"],
    stream_mode="values",
)

for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
thread_id: 6c95af
================================ Human Message =================================
I want to learn about LangGraph. Please recommend some useful resources!
================================== Ai Message ==================================
Tool Calls:
tavily_web_search (call_xJVDxiPGNJcHmErUMrCEMxdF)
Call ID: call_xJVDxiPGNJcHmErUMrCEMxdF
Args:
query: Introduction and materials of LangGraph
# Copy config
config_copy = config.copy()
from langchain_core.messages import AIMessage
# Get snapshot status
snapshot = graph.get_state(config)
# Get the last message of messages
existing_message = snapshot.values["messages"][-1]
# Print message ID
print("Message ID", existing_message.id)
Message ID run-f78c69ba-6403-45dc-b355-28a2b326471e-0
# First tool call output
print(existing_message.tool_calls[0])
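The cell that builds new_tool_call is not shown here; a minimal sketch, following the description above (copy the existing tool call and replace only the query), could look like this. The query string matches the one used in the output further below.

# Copy the existing tool call (a dict) and replace only the search query.
# The tool call id is kept as-is so it still matches the original call.
new_tool_call = existing_message.tool_calls[0].copy()
new_tool_call["args"] = {"query": "LangGraph site:teddylee777.github.io"}
print(new_tool_call)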
# Create AIMessage
new_message = AIMessage(
    content=existing_message.content,
    tool_calls=[new_tool_call],
    # IMPORTANT! The ID is how you replace a message instead of appending it to the state.
    id=existing_message.id,
)
print(new_message.id)
# Print the modified message
new_message.pretty_print()
# Receiving events from a graph stream
events = graph.stream(None, config, stream_mode="values")

# Processing each event
for event in events:
    # Print the last message if there are any messages
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================== Ai Message ==================================
Tool Calls:
tavily_web_search (call_xJVDxiPGNJcHmErUMrCEMxdF)
Call ID: call_xJVDxiPGNJcHmErUMrCEMxdF
Args:
query: LangGraph site:teddylee777.github.io
================================= Tool Message =================================
Name: tavily_web_search
["https://teddylee777.github.io/langgraph/langgraph-multi-agent-collaboration/The biggest reason for the release of the LangGraph library is \\\"Complex problems are difficult to solve with a single AI agent.\\\" It starts with the sentence: In this article, we will introduce how to effectively solve these problems through LangGraph's multi-agent collaboration. 🔥Notification🔥\n① Teddy Note YouTube -\nGo check it out!\n② LangChain Korean Tutorial\nGo here 👀\n③ LangChain Note Free E-book (wikidocs)\nGo here 🙌\n④ RAG Secret Note LangChain Lecture Open\nGo here 🙌\n⑤ Seoul National University PyTorch Deep Learning Lecture\nGo here 🙌\nLangGraph - Creating an LLM application that performs complex tasks with Multi-Agent Collaboration\nJanuary 29, 2024\n26 minutes requiredRetrieval...\n[LangChain] Guide to building an intelligent search system using agents and tools\nFebruary 9, 2024\n41 minutes required\nIn this article, we will introduce LangChain's Agent We introduce how to perform complex search and data processing tasks using the framework. We use LangSmith to track the Agent's inference steps. We use the search tool (Tavily Search) that the Agent utilizes, a PDF-based search retriever..."]
================================== Ai Message ==================================
Here are some useful resources to learn more about LangGraph:
1. **[LangGraph - Multi-Agent Collaboration](https://teddylee777.github.io/langgraph/langgraph-multi-agent-collaboration/)**
In this article, we introduce the LangGraph library and how to solve complex problems using multi-agent collaboration. In particular, we describe an approach to effectively distribute tasks by creating specialized agents.
2. **[Dynamic document retrieval and processing using the LangGraph Retrieval Agent](https://teddylee777.github.io/langchain/langchain-tutorial-07/)**
In this post, we cover how to use the LangGraph Retrieval Agent to provide a variety of functionality, including language processing, AI model integration, and database management.
3. **[LangChain + Structured Data](https://teddylee777.github.io/langchain/langchain-tutorial-04/)**
This post uses LangChain to analyze data through ChatGPT-based question answering on structured data (CSV, Excel), and explains the process of creating an agent and asking questions in natural language.
Through these materials, you will be able to gain a deeper understanding of the functions and utilization of LangGraph.
# Create an event stream
events = graph.stream(
    {
        "messages": (
            "user",
            "Please answer in Korean very kindly and sincerely about what I have learned so far! Be sure to include the source!",
        )
    },
    config,
    stream_mode="values",
)

# Message event handling
for event in events:
    if "messages" in event:
        # Print last message
        event["messages"][-1].pretty_print()
================================ Human Message =================================
Please answer in Korean very kindly and sincerely about what I have learned so far! Be sure to include sources!
================================== Ai Message ==================================
Of course! I will kindly and sincerely summarize what you have learned so far.
---
### LangGraph overview
LangGraph is a library that enables multi-agent collaboration to solve complex problems. This library approaches problems that are difficult to solve with a single AI agent by dividing and conquering them. That is, it creates agents specialized for each task or domain and lets these agents work together to solve the problem.
### key features
1. **Multi-agent collaboration**:
- LangGraph is designed to allow multiple AI agents to work together to perform a task. Each agent is optimized for a specific domain or task, allowing it to effectively solve complex problems.
2. **Creating specialized agents**:
- Create specialized agents for complex tasks and route tasks to the right experts, allowing each agent to perform optimally.
3. **State-based workflow**:
- LangGraph uses a state graph to manage interactions between agents. It defines the state of each agent and tracks the flow of messages to proceed with tasks.
4. **Integration of tools and features**:
- LangGraph can be used to integrate various tools and functions, for example, it includes functions to execute code or perform specific tasks through the Python REPL tool.
### Use Cases
- **Document Search and Processing**: You can perform dynamic document search and processing through language processing and database management using LangGraph Retrieval Agent.
- **Structured Data Analysis**: You can perform data analysis through question-and-answer on CSV or Excel files. When you ask questions in natural language, the agent processes them and returns results.
### source
- [LangGraph - Multi-Agent Collaboration](https://teddylee777.github.io/langgraph/langgraph-multi-agent-collaboration/)
- [Dynamic document retrieval and processing using the LangGraph Retrieval Agent](https://teddylee777.github.io/langchain/langchain-tutorial-07/)
- [LangChain + Structured Data](https://teddylee777.github.io/langchain/langchain-tutorial-04/)
---
I hope this summary helps! If you have any additional questions, please let me know.
to_replay_state = None

# Iterate over the state history
for state in graph.get_state_history(config):
    messages = state.values["messages"]

    if len(messages) > 0:
        print(state.values["messages"][-1].id)
        # Print the number of messages and the next node
        print("Number of messages: ", len(state.values["messages"]), "Next node: ", state.next)
        print("-" * 80)
        # Selection criterion: the number of chat messages in the state
        if len(state.values["messages"]) == 2:
            # Select this snapshot to replay from
            to_replay_state = state
run-55afc5ee-6d63-4b11-b2a7-2835553e09d6-0
Number of messages: 6 Next node: ()
--------------------------------------------------------------------------------
d45abb6b-b2c4-40f8-86ce-1038fc289d0f
Number of messages: 5 Next node: ('chatbot',)
--------------------------------------------------------------------------------
run-af0f4be0-8950-4c29-b996-93e9332ca1d9-0
Number of messages: 4 Next node: ('__start__',)
--------------------------------------------------------------------------------
run-af0f4be0-8950-4c29-b996-93e9332ca1d9-0
Number of messages: 4 Next node: ()
--------------------------------------------------------------------------------
b599532c-fea7-4dac-918a-787264946d9b
Number of messages: 3 Next node: ('chatbot',)
--------------------------------------------------------------------------------
run-f78c69ba-6403-45dc-b355-28a2b326471e-0
Number of messages: 2 Next node: ('tools',)
--------------------------------------------------------------------------------
run-f78c69ba-6403-45dc-b355-28a2b326471e-0
Number of messages: 2 Next node: ('tools',)
--------------------------------------------------------------------------------
a08330c9-456f-4cf3-b155-194afa04a353
Number of messages: 1 Next node: ('chatbot',)
--------------------------------------------------------------------------------
from langchain_teddynote.messages import display_message_tree
# Get selected messages
existing_message = to_replay_state.values["messages"][-1]
# Message tree output
display_message_tree(existing_message)
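The cell that defines the new tool_call is not shown here either; a minimal sketch, following the same copy-and-edit pattern as before (the query string is taken from the replayed output below), could be:

# Copy the tool call from the selected snapshot and replace only the query.
tool_call = existing_message.tool_calls[0].copy()
tool_call["args"] = {"query": "LangGraph human-in-the-loop workflow site:reddit.com"}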
# Create AIMessage
new_message = AIMessage(
    content=existing_message.content,
    tool_calls=[tool_call],
    # IMPORTANT! The ID is how you replace a message instead of appending it to the state.
    id=existing_message.id,
)
# Check the args of the modified tool call
new_message.tool_calls[0]["args"]
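The cell that produces updated_state is also not shown; based on the earlier description ("Update the state using the update_state method ... Save the updated state to updated_state"), a sketch could be:

# Apply the modified message to the selected snapshot.
# update_state returns a RunnableConfig pointing at the new checkpoint,
# which is passed to graph.stream() below as `updated_state`.
updated_state = graph.update_state(
    to_replay_state.config,
    {"messages": [new_message]},
)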
# Pass updated_state as the config; this is the manually updated state (a new checkpoint)
for event in graph.stream(None, updated_state, stream_mode="values"):
    # If a message is included in an event
    if "messages" in event:
        # Print last message
        event["messages"][-1].pretty_print()
================================== Ai Message ==================================
Tool Calls:
tavily_web_search (call_xJVDxiPGNJcHmErUMrCEMxdF)
Call ID: call_xJVDxiPGNJcHmErUMrCEMxdF
Args:
query: LangGraph human-in-the-loop workflow site:reddit.com
================================= Tool Message =================================
Name: tavily_web_search
["https://www.reddit.com/r/LangChain/comments/1byz3lr/insights_and_learnings_from_building_a_complex/For the chatbot, Chainlit provides everything we need, except background processing. Let's use that. Langchain and LCEL are both flexible and unify the interfaces with the LLMs. We'll need a rather complicated agent workflow, in fact, multiple ones. LangGraph is more flexible than crew.ai or autogen.", "https://www.reddit.com/r/LangChain/comments/1ci3m0k/toolcalling_agents_human_approval_before_tool/Thank you. I've seen both of those examples before - the \\\"human-in-the-loop\\\" uses a linear chain (rather than agents). The \\\"human-as-a-tool\\\" is when a question can't be answered by the LLM / one of the supplied tools, so it asks a human to supplement with info.", "https://www.reddit.com/r/Langchaindev/comments/1dl6pyk/flow_engineering_with_langchainlanggraph_and/The talk among Itamar Friedman (CEO of CodiumAI) and Harrison Chase (CEO of LangChain) explores best practices, insights, examples, and hot takes on flow engineering: Flow Engineering with LangChain/LangGraph and CodiumAI. Flow Engineering can be used for many problems involving reasoning, and can outperform naive prompt engineering."]
================================== Ai Message ==================================
I recommend the following useful resources on LangGraph:
1. **[Insights and Learnings from Building a Complex Multi-Agent System](https://www.reddit.com/r/LangChain/comments/1byz3lr/insights_and_learnings_from_building_a_complex/)** - This post discusses LangGraph's flexibility and complex agent workflows, and emphasizes that LangGraph is more flexible than crew.ai or autogen.
2. **[Tool-calling agents: Human approval before tool invocation?](https://www.reddit.com/r/LangChain/comments/1ci3m0k/toolcalling_agents_human_approval_before_tool/)** - This post explains the 'human-in-the-loop' approach and covers LangGraph use cases.
3. **[Flow Engineering with LangChain/LangGraph and CodiumAI](https://www.reddit.com/r/Langchaindev/comments/1dl6pyk/flow_engineering_with_langchainlanggraph_and/)** - Itamar Friedman and Harrison Chase share best practices and insights on flow engineering, covering the combination of LangGraph and CodiumAI.
These materials will help you gain a deeper understanding of the features and uses of LangGraph.
# Output the final result
for msg in graph.get_state(config).values["messages"]:
    msg.pretty_print()
================================ Human Message =================================
I want to learn about LangGraph. Please recommend some useful resources!
================================== Ai Message ==================================
Tool Calls:
tavily_web_search (call_xJVDxiPGNJcHmErUMrCEMxdF)
Call ID: call_xJVDxiPGNJcHmErUMrCEMxdF
Args:
query: LangGraph human-in-the-loop workflow site:reddit.com
================================= Tool Message =================================
Name: tavily_web_search
["https://www.reddit.com/r/LangChain/comments/1byz3lr/insights_and_learnings_from_building_a_complex/For the chatbot, Chainlit provides everything we need, except background processing. Let's use that. Langchain and LCEL are both flexible and unify the interfaces with the LLMs. We'll need a rather complicated agent workflow, in fact, multiple ones. LangGraph is more flexible than crew.ai or autogen.", "https://www.reddit.com/r/LangChain/comments/1ci3m0k/toolcalling_agents_human_approval_before_tool/Thank you. I've seen both of those examples before - the \\\"human-in-the-loop\\\" uses a linear chain (rather than agents). The \\\"human-as-a-tool\\\" is when a question can't be answered by the LLM / one of the supplied tools, so it asks a human to supplement with info.", "https://www.reddit.com/r/Langchaindev/comments/1dl6pyk/flow_engineering_with_langchainlanggraph_and/The talk among Itamar Friedman (CEO of CodiumAI) and Harrison Chase (CEO of LangChain) explores best practices, insights, examples, and hot takes on flow engineering: Flow Engineering with LangChain/LangGraph and CodiumAI. Flow Engineering can be used for many problems involving reasoning, and can outperform naive prompt engineering."]
================================== Ai Message ==================================
I recommend the following useful resources on LangGraph:
1. **[Insights and Learnings from Building a Complex Multi-Agent System](https://www.reddit.com/r/LangChain/comments/1byz3lr/insights_and_learnings_from_building_a_complex/)** - 이 글에서는 LangGraph의 유연성과 복잡한 에이전트 워크플로우에 대해 논의하고 있습니다. LangGraph가 crew.ai나 autogen보다 더 유연하다는 점을 강조합니다.
2. **[Tool-calling agents: Human approval before tool invocation?](https://www.reddit.com/r/LangChain/comments/1ci3m0k/toolcalling_agents_human_approval_before_tool/)** - 이 글에서는 'human-in-the-loop' 접근 방식에 대해 설명하며, LangGraph의 활용 사례를 다룹니다.
3. **[Flow Engineering with LangChain/LangGraph and CodiumAI](https://www.reddit.com/r/Langchaindev/comments/1dl6pyk/flow_engineering_with_langchainlanggraph_and/)** - Itamar Friedman과 Harrison Chase가 Flow Engineering의 모범 사례와 통찰을 공유하는 내용으로, LangGraph와 CodiumAI의 결합을 다룹니다.
These materials will help you gain a deeper understanding of the functions and uses of LangGraph.