You can accomplish a lot just by updating these state values, but if you want to define complex behavior without relying solely on the message list, you can add extra fields to the state. This tutorial explains how to extend the chatbot by adding new nodes and state fields.
In the previous example, human-in-the-loop was implemented by having the graph always stop with an interrupt whenever a tool was called.
This time, suppose you want the chatbot itself to decide whether to rely on a human.
One way to do this is to create a "human" node that the graph always stops before. This node runs only when the LLM calls the "human" tool. For convenience, we include an ask_human flag in the graph state that the LLM switches on when it calls this tool.
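As a preview, the flag-switching decision boils down to a one-line check on the model's tool calls. Here is a minimal sketch in plain Python (no LLM involved; the dicts below only imitate the shape of LangChain tool calls, a list of entries with a "name" key):

```python
# Sketch of the ask_human flag decision. The dicts imitate the shape of
# LangChain tool calls: a list of {"name": ..., "args": ...} entries.

def should_ask_human(tool_calls: list) -> bool:
    """True when the first tool call targets the 'HumanRequest' tool."""
    return bool(tool_calls) and tool_calls[0]["name"] == "HumanRequest"

print(should_ask_human([{"name": "HumanRequest", "args": {"request": "help"}}]))
print(should_ask_human([{"name": "tavily_search", "args": {"query": "LangGraph"}}]))
print(should_ask_human([]))
```

The chatbot node defined later applies exactly this kind of check to the model's response before returning the updated state.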
# Configuration file for managing API keys as environment variables
from dotenv import load_dotenv

# Load API key information
load_dotenv()
True
# Set up LangSmith tracing. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH17-LangGraph-Modules")
Adding a node that asks a human for input
This time, we add a state field, ask_human, that records whether a human should be consulted mid-conversation.
First, define the schema used when requesting help from a human.
Next, define the chatbot node.
The main change here is that when the chatbot calls the HumanRequest tool, it switches on the ask_human flag.
Next, create a graph builder and, just as before, add the chatbot and tools nodes to the graph.
Setting up the human node
Next, create the human node.
This node acts mainly as a placeholder that triggers an interrupt in the graph. If you do not update the state manually during the interrupt, the node inserts a tool message so the LLM knows that a human was asked but did not respond.
The node also clears the ask_human flag so that the graph does not visit the human node again unless another request is made.
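The placeholder behavior just described can be simulated in plain Python. ToolMsg below is only a stand-in for langchain_core.messages.ToolMessage, used here so the sketch runs without any dependencies:

```python
from dataclasses import dataclass


@dataclass
class ToolMsg:
    """Stand-in for langchain_core.messages.ToolMessage."""
    content: str


def human_node_sketch(messages: list) -> dict:
    new_messages = []
    if not isinstance(messages[-1], ToolMsg):
        # Nobody answered during the interrupt: inject a placeholder
        # so the LLM knows the request went unanswered.
        new_messages.append(ToolMsg("No response from human."))
    # Clear the flag either way so the graph does not revisit this node.
    return {"messages": new_messages, "ask_human": False}


# Resumed without a manual state update -> placeholder is injected
print(human_node_sketch(["ai message"]))
# An expert already appended a ToolMsg -> nothing extra is added
print(human_node_sketch([ToolMsg("Use LangGraph!")]))
```

The real human_node defined below follows the same two branches, only with actual LangChain message objects.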
Reference image
Next, define conditional logic.
select_next_node routes to the human node when the ask_human flag is set. Otherwise, it lets the prebuilt tools_condition function choose the next node.
The tools_condition function simply checks whether the chatbot's response message contains tool_calls.
If it does, it routes to the tools node; otherwise, it ends the graph.
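This routing logic can be sketched end to end in plain Python. The function below is only an approximation of what the prebuilt tools_condition does internally (inspect the last message for tool calls); the dicts stand in for messages:

```python
END = "__end__"  # sentinel string LangGraph uses for graph termination


def route_sketch(state: dict) -> str:
    # The ask_human flag takes priority: go straight to the human node.
    if state["ask_human"]:
        return "human"
    # Approximation of tools_condition: route to the tool node
    # when the last AI message contains tool calls.
    last = state["messages"][-1]
    return "tools" if last.get("tool_calls") else END


print(route_sketch({"ask_human": True, "messages": [{}]}))
print(route_sketch({"ask_human": False,
                    "messages": [{"tool_calls": [{"name": "tavily_search"}]}]}))
print(route_sketch({"ask_human": False, "messages": [{}]}))
```

Note that the returned strings must match the mapping passed to add_conditional_edges, which is why the edge definition below maps "human", "tools", and END explicitly.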
Finally, connect the edge and compile the graph.
Visualize the graph.
The chatbot node can take one of the following actions:
Ask a human for help (chatbot -> select -> human)
Call the search engine tool (chatbot -> select -> tools)
Respond directly (chatbot -> select -> END)
Once a tool call or human request completes, the graph switches back to the chatbot node to continue the conversation.
Notice that the LLM called the provided HumanRequest tool and the interrupt was triggered. Let's check the graph state.
The graph has in fact stopped just before the 'human' node.
In this scenario, you act as the "expert": update the state manually by adding a new ToolMessage containing your input.
Next, respond to the chatbot's request as follows:
Create a ToolMessage containing the response. It will be passed back to the chatbot.
Call update_state to update the graph state manually.
You can then inspect the state to confirm the response was added.
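The hand-off itself can be mimicked without any LLM or graph: treat the state as a dict whose message list is append-only. This is only a sketch of the semantics; the real code below uses graph.update_state, whose add_messages reducer appends rather than overwrites:

```python
def update_state_sketch(state: dict, new_messages: list) -> dict:
    # Mimics the add_messages reducer: updates append to the message
    # list instead of replacing it.
    return {**state, "messages": state["messages"] + new_messages}


state = {"messages": ["user question", "ai HumanRequest call"], "ask_human": True}
state = update_state_sketch(state, ["expert tool message"])
print(state["messages"])
```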
from typing import Annotated
from typing_extensions import TypedDict
from langchain_teddynote.tools.tavily import TavilySearch
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
# message list
messages: Annotated[list, add_messages]
# Add a state that asks whether to ask the person a question
ask_human: bool
from pydantic import BaseModel
class HumanRequest(BaseModel):
"""Forward the conversation to an expert. Use when you can't assist directly or the user needs assistance that exceeds your authority.
To use this function, pass the user's 'request' so that an expert can provide appropriate guidance.
"""
request: str
from langchain_openai import ChatOpenAI
# add tool
tool = TavilySearch(max_results=3)
# Add tool list (HumanRequest tool)
tools = [tool, HumanRequest]
# Add LLM
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
# Tool binding
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
# Generating responses through LLM tool calls
response = llm_with_tools.invoke(state["messages"])
# Reset whether to ask people questions
ask_human = False
# If there is a tool call and the name is 'HumanRequest'
if response.tool_calls and response.tool_calls[0]["name"] == HumanRequest.__name__:
ask_human = True
# Return message and ask_human status
return {"messages": [response], "ask_human": ask_human}
# Initialize state graph
graph_builder = StateGraph(State)
# Add a chatbot node
graph_builder.add_node("chatbot", chatbot)
# Add tool node
graph_builder.add_node("tools", ToolNode(tools=[tool]))
from langchain_core.messages import AIMessage, ToolMessage
# Generate a response message (Function for generating ToolMessage)
def create_response(response: str, ai_message: AIMessage):
return ToolMessage(
content=response,
tool_call_id=ai_message.tool_calls[0]["id"],
)
# Human node processing
def human_node(state: State):
new_messages = []
if not isinstance(state["messages"][-1], ToolMessage):
# If there is no response from the person
new_messages.append(
create_response("No response from human.", state["messages"][-1])
)
return {
# Add new message
"messages": new_messages,
# Unflag
"ask_human": False,
}
# Add a human node to the graph
graph_builder.add_node("human", human_node)
from langgraph.graph import END
# Select next node
def select_next_node(state: State):
# Check if the question is asked to a human
if state["ask_human"]:
return "human"
# Set the same path as before
return tools_condition(state)
# Add conditional edge
graph_builder.add_conditional_edges(
"chatbot",
select_next_node,
{"human": "human", "tools": "tools", END: END},
)
# Add Edge: From 'tools' to 'chatbot'
graph_builder.add_edge("tools", "chatbot")
# Adding an edge: from 'human' to 'chatbot'
graph_builder.add_edge("human", "chatbot")
# Add Edge: From START to 'chatbot'
graph_builder.add_edge(START, "chatbot")
# Initialize memory storage
memory = MemorySaver()
# Compile the graph with the memory checkpointer
graph = graph_builder.compile(
    checkpointer=memory,
    # Interrupt before the 'human' node
    interrupt_before=["human"],
)
from langchain_teddynote.graphs import visualize_graph
visualize_graph(graph)
# user_input = "I need expert help to build this AI agent. Please search and answer."  # (to trigger a web search instead of a human request)
user_input = "I need expert help to build this AI agent. Can you help me?"
# config settings
config = {"configurable": {"thread_id": "1"}}
# Pass the config as the second positional argument to stream() or invoke()
events = graph.stream(
{"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
if "messages" in event:
# Pretty output of the last message
event["messages"][-1].pretty_print()
================================ Human Message =================================
I need expert help to build this AI agent. Can you help me?
================================== Ai Message ==================================
Tool Calls:
HumanRequest (call_y4CLW2orZVftBHfc9Tss0qoV)
Call ID: call_y4CLW2orZVftBHfc9Tss0qoV
Args:
request: I need expert help building an AI agent. Specifically, I need advice on the technical stack, methodology, and the steps to start the project.
# Get a snapshot of the graph state
snapshot = graph.get_state(config)
# The node scheduled to run next
snapshot.next
('human',)
# Extract the AI message
ai_message = snapshot.values["messages"][-1]
# Write the human (expert) response
human_response = (
    "Experts are here to help! We highly recommend checking out LangGraph for building your agents. "
    "It is much more reliable and scalable than simple autonomous agents. "
    "More information can be found at https://wikidocs.net/233785."
)
# Create the tool message
tool_message = create_response(human_response, ai_message)
# Update the graph state
graph.update_state(config, {"messages": [tool_message]})
[HumanMessage(content='I need expert help to build this AI agent. Can you help me?', id='9dde72ce-d719-40b7-804b-25cae897892a'),
 AIMessage(content='', tool_calls=[{'name': 'HumanRequest', 'args': {'request': 'I need expert help building an AI agent. Specifically, I need advice on the technical stack, methodology, and the steps to start the project.'}, 'id': 'call_y4CLW2orZVftBHfc9Tss0qoV', 'type': 'tool_call'}], ...),
 ToolMessage(content='Experts are here to help! We highly recommend checking out LangGraph for building your agents. It is much more reliable and scalable than simple autonomous agents. More information can be found at https://wikidocs.net/233785.', id='398cac20-42dc-4d96-8292-30d5821e006f', tool_call_id='call_y4CLW2orZVftB...')]
# Generating an event stream from a graph
events = graph.stream(None, config, stream_mode="values")
# Processing each event
for event in events:
# Print the last message if there are any messages
if "messages" in event:
event["messages"][-1].pretty_print()
================================= Tool Message =================================
Experts are here to help! We highly recommend checking out LangGraph for building your agents. It is much more reliable and scalable than simple autonomous agents. More information can be found at https://wikidocs.net/233785.
================================= Tool Message =================================
Experts are here to help! We highly recommend checking out LangGraph for building your agents. It is much more reliable and scalable than simple autonomous agents. More information can be found at https://wikidocs.net/233785.
================================== Ai Message ==================================
I received expert guidance on building AI agents. They recommend using LangGraph, which is much more reliable and scalable than simple autonomous agents. You can find more information [here](https://wikidocs.net/233785). Let me know if you need any further help!
# Final status check
state = graph.get_state(config)
# Step-by-step message output
for message in state.values["messages"]:
message.pretty_print()
================================ Human Message =================================
I need expert help to build this AI agent. Can you help me?
================================== Ai Message ==================================
Tool Calls:
HumanRequest (call_y4CLW2orZVftBHfc9Tss0qoV)
Call ID: call_y4CLW2orZVftBHfc9Tss0qoV
Args:
request: I need expert help building an AI agent. Specifically, I need advice on the technical stack, methodology, and the steps to start the project.
================================= Tool Message =================================
Experts are here to help! We highly recommend checking out LangGraph for building your agents. It is much more reliable and scalable than simple autonomous agents. More information can be found at https://wikidocs.net/233785.
================================== Ai Message ==================================
I received expert guidance on building AI agents. They recommend using LangGraph, which is much more reliable and scalable than simple autonomous agents. You can find more information [here](https://wikidocs.net/233785). Let me know if you need any further help!