# 01. Agent conversation simulation (customer response scenario)

## Chatbot conversation simulation <a href="#id-1" id="id-1"></a>

When building a chatbot, for example a customer support assistant, it can be difficult to evaluate its performance properly. Manually interacting with it after every code change is time consuming.

One way to make the evaluation process easier and more reproducible is to **simulate user interactions**.

With LangGraph, it is easy to set this up.

Below is an example of how to create a "Simulated User" to simulate a conversation.

![](https://wikidocs.net/images/page/267816/agent-simulations.png)

Environment setup

```python
# Configuration file for managing the API key as an environment variable  
from dotenv import load_dotenv  

# Load the API key information  
load_dotenv()  
```

```
 True 
```

```python
# Set up LangSmith tracing. https://smith.langchain.com  
# !pip install -qU langchain-teddynote  
from langchain_teddynote import logging  

# Enter the project name.  
logging.langsmith("CH17-LangGraph-Use-Cases")  
```

```
 Start tracking LangSmith.  
[Project name]  
CH17-LangGraph-Use-Cases  
```

### State definition <a href="#state" id="state"></a>

```python
from langgraph.graph.message import add_messages  
from typing import Annotated  
from typing_extensions import TypedDict  


# Define the State  
class State(TypedDict):  
    messages: Annotated[list, add_messages]  # conversation messages between the user and the counselor  
```
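The `add_messages` reducer appends each node's returned messages to the existing list instead of overwriting it. Conceptually it works like the minimal stdlib sketch below, which uses `(role, content)` tuples in place of LangChain message objects (the real reducer additionally handles message IDs and deduplication):

```python
# Minimal sketch of an append-style reducer like add_messages.
def append_reducer(existing: list, update: list) -> list:
    # Merge a node's state update into the accumulated message list.
    return existing + update


state = {"messages": []}
state["messages"] = append_reducer(state["messages"], [("user", "Hello?")])
state["messages"] = append_reducer(
    state["messages"], [("assistant", "Hello! How can I help you?")]
)
print(len(state["messages"]))  # 2
```

This append behavior is what lets the two nodes below keep extending one shared conversation.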

### Define the counselor and customer roles <a href="#id-3" id="id-3"></a>

#### Define the role of the counselor <a href="#id-4" id="id-4"></a>

In the simulation, define the chatbot that plays the **counselor** role.

**Reference**

* The implementation of `call_chatbot` is customizable; the model used inside can also be swapped out for an Agent.
* `call_chatbot` receives messages from the user as input and is given the role of counseling the customer.

*It can be used to generate conversation responses in customer support scenarios.*

```python
from typing import List  
from langchain_teddynote.models import LLMs, get_model_name  
from langchain_openai import ChatOpenAI  
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder  
from langchain_core.messages import HumanMessage, AIMessage, BaseMessage  
from langchain_core.output_parsers import StrOutputParser  

# set model name  
MODEL_NAME = get_model_name(LLMs.GPT4)  


def call_chatbot(messages: List[BaseMessage]) -> dict:  
    # LangChain ChatOpenAI model can be changed to Agent.  
    prompt = ChatPromptTemplate.from_messages(  
        [  
            (  
                "system",  
                "You are a customer support agent for an airline. Answer in Korean.",  
            ),  
            MessagesPlaceholder(variable_name="messages"),  
        ]  
    )  
    model = ChatOpenAI(model=MODEL_NAME, temperature=0.6)  
    chain = prompt | model | StrOutputParser()  
    return chain.invoke({"messages": messages})  
```

`call_chatbot` takes the user's input and produces the chatbot's response.

```python
call_chatbot([("user", "hello?")])  
```

```
 'Hello! How can I help you?' 
```

#### Define the customer role (Simulated User)

Now define the role of the simulated customer, to simulate conversations in a customer support scenario.

The system prompt sets up the interaction between the customer and the customer support agent, and the user instructions supply the details of the scenario.

This configuration is used to simulate the model's responses to specific user requests (e.g. refund requests).

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder  
from langchain_openai import ChatOpenAI  


def create_scenario(name: str, instructions: str):  
    # Define the system prompt: change as needed  
    system_prompt_template = """You are a customer of an airline company. \  
You are interacting with a user who is a customer support person. \  

Your name is {name}.  

# Instructions:  
{instructions}  

[IMPORTANT]  
- When you are finished with the conversation, respond with a single word 'FINISHED'  
- You must speak in Korean."""  

    # Create chat prompt templates by combining conversation messages and system prompts.  
    prompt = ChatPromptTemplate.from_messages(  
        [  
            ("system", system_prompt_template),  
            MessagesPlaceholder(variable_name="messages"),  
        ]  
    )  

    # Partially fill in the prompt using a specific user name and instructions.  
    prompt = prompt.partial(name=name, instructions=instructions)  
    return prompt  
```

Generate a hypothetical scenario, written from the customer's point of view.

Here we define a scenario in which the customer requests a refund.

```python
# defines user instructions.  
instructions = """You are trying to get a refund for the trip you took to Jeju Island. \  
You want them to give you ALL the money back. This trip happened last year."""  

# defines the user name.  
name = "Teddy"  

create_scenario(name, instructions).pretty_print()  
```

```
 ================================ System Message ================================  

You are a customer of an airline company. You are interacting with a user who is a customer support person.  
Your name is {name}.  

# Instructions:  
{instructions}  

[IMPORTANT]  
- When you are finished with the conversation, respond with a single word 'FINISHED'  
- You must speak in Korean.  

============================= Messages Placeholder =============================  

{messages}  
```

```python
# Initialize the OpenAI chatbot model.  
model = ChatOpenAI(model=MODEL_NAME, temperature=0.6)  

# Generate simulated user conversations. 
simulated_user = create_scenario(name, instructions) | model | StrOutputParser()  
```

Call the created `simulated_user` to forward a message to the simulated user.

```python
from langchain_core.messages import HumanMessage  

# send a message to a simulated user  
messages = [HumanMessage(content="Hello, how can I help you?")]  
simulated_user.invoke({"messages": messages})  
```

```
 'Hello. I went on a trip to Jeju last year, and I would like to request a refund for that trip. I want to get all the money back. Can you help me?' 
```

### Define agent simulation <a href="#id-5" id="id-5"></a>

The code below creates a LangGraph workflow to run the simulation.

The main components are:

1. Two nodes: one for the simulated user and one for the chatbot.
2. The graph itself, with conditional stopping criteria.


First, define the nodes in the graph. They take a list of messages as input and return a list of messages to add to the state.\
They are thin wrappers around the chatbot and the simulated user.

**Note:** The tricky part here is telling which message came from which party.

Since both the chatbot and the simulated user are LLMs, both will respond with AI messages. Our state is a list of alternating Human and AI messages, which means one of the nodes needs logic to swap the AI and Human roles.

In this example, we assume that HumanMessages are messages from the simulated user. This means the simulated user node needs logic that exchanges AI and Human messages.

```python
from langchain_core.messages import AIMessage  


# Counselor role 
def ai_assistant_node(messages):  
    # Call the counselor chatbot to respond
    ai_response = call_chatbot(messages)  

    # Return the AI counselor's response  
    return {"messages": [("assistant", ai_response)]}  
```

Call the node of the counselor role.

```python
ai_assistant_node(  
    [  
        ("user", "hello?"),  
        ("assistant", "Hello! How can I help you?"),  
        ("user", "How do I get a refund?"),  
    ]  
)  
```

```
 {'messages': [('assistant', 'The refund process is as follows:\n\n1. **Confirm your reservation**: Please prepare your reservation number and name.\n2. **Contact the customer center**: Call our customer center or contact us via the customer support page on our website.\n3. **Submit a refund request**: A refund request form must be completed and submitted. If necessary, you can request it via email or an online form.\n4. **Refund processing**: Once a request is received, the refund process will proceed. Refund processing usually takes 7-14 days.\n\nIf you have any further questions, please let us know!')]} 
```

Next, let's define a node for our simulated user.

**Reference**

* This process includes a small piece of logic that swaps the roles of the messages.

```python
def _swap_roles(messages):  
    # Swap message roles: at the simulated user step, AI -> Human and Human -> AI.  
    new_messages = []  
    for m in messages:  
        if isinstance(m, AIMessage):  
            # If it is an AIMessage, convert it to a HumanMessage. 
            new_messages.append(HumanMessage(content=m.content))  
        else:  
            # If it is a HumanMessage, convert it to an AIMessage.  
            new_messages.append(AIMessage(content=m.content))  
    return new_messages  


# Defining the Counselor Role (AI Assistant) Node  
def ai_assistant_node(state: State):  
    # Call the counselor to respond  
    ai_response = call_chatbot(state["messages"])  

    # Return the AI counselor's response  
    return {"messages": [("assistant", ai_response)]}  


# Defining a Simulated User node  
def simulated_user_node(state: State):  
    # Exchange message types: AI -> Human, Human -> AI  
    new_messages = _swap_roles(state["messages"])  

    # Calling a simulated user  
    response = simulated_user.invoke({"messages": new_messages})  
    return {"messages": [("user", response)]}  
```
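To see what `_swap_roles` accomplishes, here is a minimal stdlib sketch of the same idea, using `(role, content)` tuples in place of LangChain message objects:

```python
# Sketch of the role swap: from the simulated user's perspective,
# the counselor's AI messages must look like Human input, and vice versa.
def swap_roles(messages):
    swapped = []
    for role, content in messages:
        # ai -> human, human -> ai
        swapped.append(("human", content) if role == "ai" else ("ai", content))
    return swapped


conversation = [("ai", "Hello! How can I help you?"), ("human", "I'd like a refund.")]
print(swap_roles(conversation))
```

After the swap, the simulated user's LLM sees the counselor's turns as incoming human input, so it responds in character as the customer.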

### Edge definition <a href="#id-7" id="id-7"></a>

Now we need to define the edge logic. The main logic runs after the simulated user finishes, and should lead to one of two outcomes:

* Continue by calling the customer support bot ("continue")
* End the conversation ("end")

So what is the logic that ends the conversation? We define it as either the simulated human responding with `FINISHED` (see the system prompt), or the conversation exceeding 6 messages (an arbitrary limit to keep this example short).\
The `should_continue` function takes the state as an argument and returns 'end' if the message list is longer than 6 or the content of the last message is 'FINISHED'.

Otherwise, it returns 'continue' to keep processing.

```python
def should_continue(state: State):  
    # If the length of the message list is greater than 6, 'end' is returned.  
    if len(state["messages"]) > 6:  
        return "end"  
    # If the last message's content is 'FINISHED', return 'end'.
    elif state["messages"][-1].content == "FINISHED":  
        return "end"  
    # If the above conditions are not met, it returns 'continue'.  
    else:  
        return "continue"  
```
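A quick sanity check of this stopping logic, restated as a self-contained sketch (`Msg` is a hypothetical stand-in for LangChain message objects, which only needs a `.content` attribute here):

```python
from dataclasses import dataclass


@dataclass
class Msg:
    # Stand-in for a LangChain message: only .content is needed.
    content: str


def should_continue(state):
    # End after more than 6 messages, or when the last message is 'FINISHED'.
    if len(state["messages"]) > 6:
        return "end"
    elif state["messages"][-1].content == "FINISHED":
        return "end"
    return "continue"


print(should_continue({"messages": [Msg("hi")]}))                      # continue
print(should_continue({"messages": [Msg("FINISHED")]}))                # end
print(should_continue({"messages": [Msg(str(i)) for i in range(7)]}))  # end
```

Either stop condition on its own is enough to route the graph to `END`.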


### Graph definition <a href="#id-8" id="id-8"></a>

Now define the graph that sets the simulation.

The `StateGraph` class is used to construct the graph and simulate the interaction between the chatbot and the simulated user.

```python
from langgraph.graph import END, StateGraph  

# Creating a StateGraph instance  
graph_builder = StateGraph(State)  

# Node definition  
graph_builder.add_node("simulated_user", simulated_user_node)  
graph_builder.add_node("ai_assistant", ai_assistant_node)  

# Edge definition (chatbot -> simulated user)  
graph_builder.add_edge("ai_assistant", "simulated_user")  

# Conditional edge definition  
graph_builder.add_conditional_edges(  
    "simulated_user",  
    should_continue,  
    {  
        "end": END,  # Stop the simulation when the termination condition is met.  
        "continue": "ai_assistant",  # If the termination condition is not met, pass the message to the counselor role node.  
    },  
)  

# Set a starting point  
graph_builder.set_entry_point("ai_assistant")  

# Compile the graph  
simulation = graph_builder.compile()  
```

```python
from langchain_teddynote.graphs import visualize_graph  

visualize_graph(simulation)  
```

```python
from langchain_core.runnables import RunnableConfig  
from langchain_teddynote.messages import stream_graph, random_uuid  


# config settings (max recursion count, thread_id)  
config = RunnableConfig(recursion_limit=10, configurable={"thread_id": random_uuid()})  

# Set input message  
inputs = {  
    "messages": [HumanMessage(content="Hello? I'm quite upset right now^^")]  
}  

# Graph Streaming 
stream_graph(simulation, inputs, config, node_names=["simulated_user", "ai_assistant"])  
```

```
==================================================  
🔄 Node: ai_assistant🔄  
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
Hello! I am so sorry for the inconvenience. Please tell me what you are upset with, and I will do my best to help.  
==================================================  
🔄 Node: simulated_user🔄  
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
I would like to request a refund for my trip to Jeju last year. I want to get all the money back.  
==================================================  
🔄 Node: ai_assistant🔄  
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
Thank you for telling me about the refund request. Refunds for travel to Jeju last year may vary depending on the type of ticket you booked and the cancellation policy. Please let us know what reason you want a refund along with your reservation number, we can provide you with more accurate information. We will also guide you with any additional documents or procedures.  
==================================================  
🔄 Node: simulated_user🔄  
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
The reservation number is 12345678. I have a problem while traveling and I want a refund. I hope you refund all costs.  
==================================================  
🔄 Node: ai_assistant🔄  
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
Let's check with reservation number 12345678. I am really sorry for the problem that occurred during the trip. Requests for refunds may be handled differently depending on the cancellation policy of the ticket you booked.  

Some information is required to proceed with the refund process:  

1. Details about the problems you encountered during the trip  
2. Your desired refund method (if you want a full refund, clarifying the reason will help)  

Please provide this information and we will help you expedite it. Thank you.  
==================================================  
🔄 Node: simulated_user🔄  
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
The flight was delayed during the trip, which ruined my plans and incurred additional costs. So I am requesting a full refund. Please. 
```
