# 09. Delete message (RemoveMessage)

## How to delete messages <a href="#id-1" id="id-1"></a>

One of the most common graph states is a list of messages. Normally, you only append messages to that state, but sometimes you may need to **remove messages**.

The `RemoveMessage` modifier exists for this purpose. For it to work, the state key must have a `reducer` that understands it.

The built-in `MessagesState` has a `messages` key whose reducer accepts `RemoveMessage` modifiers.

When that reducer receives a `RemoveMessage`, it deletes the message with the matching id from the key.
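
The reducer's behavior can be sketched in plain Python. This is a simplified illustration only, not the actual LangGraph implementation: the `Msg` class and its `remove` flag are hypothetical stand-ins for LangChain message objects and `RemoveMessage`.

```python
# Simplified sketch of a RemoveMessage-aware reducer: updates carrying a
# remove marker delete the existing entry with the same id; all other
# updates are appended as usual.
from dataclasses import dataclass


@dataclass
class Msg:
    id: str
    content: str
    remove: bool = False  # hypothetical marker standing in for RemoveMessage


def reduce_messages(existing: list[Msg], updates: list[Msg]) -> list[Msg]:
    result = list(existing)
    for upd in updates:
        if upd.remove:
            # A "remove" update deletes the message with the matching id.
            result = [m for m in result if m.id != upd.id]
        else:
            result.append(upd)
    return result


history = [Msg("1", "Hello!"), Msg("2", "Hi, Teddy!")]
history = reduce_messages(history, [Msg("1", "", remove=True)])
print([m.id for m in history])  # ['2']
```

The real reducer behind `MessagesState` works on LangChain message objects and matches on their `id` field in the same way.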

### Settings <a href="#id-2" id="id-2"></a>

First, let's build a simple graph that uses messages. Note that it uses `MessagesState`, which includes the required `reducer`.

```python
# Configuration file for managing API keys as environment variables
from dotenv import load_dotenv

# Load API key information
load_dotenv()
```

```
 True 
```

```python
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH17-LangGraph-Modules")
```

```
 Start tracking LangSmith. 
[Project name] 
CH17-LangGraph-Modules 
```

### Build a basic LangGraph for tutorial progress <a href="#langgraph" id="langgraph"></a>

Build the basic LangGraph needed to use the `RemoveMessage` modifier.

```python
from typing import Literal

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import MessagesState, StateGraph, START, END
from langgraph.prebuilt import ToolNode, tools_condition

# Initialize memory object for checkpoint storage
memory = MemorySaver()


# Define a tool function that mimics the web search function.
@tool
def search(query: str):
    """Call to surf on the web."""
    return "Web search results: You can check out the LangGraph Korean tutorial at https://wikidocs.net/233785."


# Create a tool list and initialize the tool node
tools = [search]
tool_node = ToolNode(tools)

# Model Initialization and Tool Binding
model = ChatOpenAI(model_name="gpt-4o-mini")
bound_model = model.bind_tools(tools)


# Function to determine the next execution node based on the conversation state
def should_continue(state: MessagesState):
    last_message = state["messages"][-1]
    if not last_message.tool_calls:
        return END
    return "tool"


# LLM model call and response processing functions
def call_model(state: MessagesState):
    response = bound_model.invoke(state["messages"])
    return {"messages": response}


# Initializing a state-based workflow graph
workflow = StateGraph(MessagesState)

# Adding Agents and Action Nodes
workflow.add_node("agent", call_model)
workflow.add_node("tool", tool_node)

# Set the starting point to the agent node
workflow.add_edge(START, "agent")

# Conditional Edge Setting: Defining the execution flow after the agent node
workflow.add_conditional_edges("agent", should_continue, {"tool": "tool", END: END})

# Added an edge that returns to the agent after running the tool
workflow.add_edge("tool", "agent")

# Compile the final executable workflow with checkpoints
app = workflow.compile(checkpointer=memory)
```

Visualize the graph.

```python
from langchain_teddynote.graphs import visualize_graph

visualize_graph(app)
```

![](https://wikidocs.net/images/page/265749/langgraph-11.jpeg)

```python
from langchain_core.messages import HumanMessage

# Initialize default settings object with thread ID 1
config = {"configurable": {"thread_id": "1"}}

# Perform the first question
input_message = HumanMessage(
    content="Hello! My name is Teddy. Nice to meet you."
)

# Process messages and output responses in stream mode, display details of the last message
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
```

```
 ================================ Human Message ================================= 
Hello! My name is Teddy. Nice to meet you. 
================================== Ai Message ================================== 
Hello, Teddy! Nice to meet you. How can I help you? 
```

```python
# Ask follow-up questions
input_message = HumanMessage(content="What is my name?")

# Process the second message in stream mode and output the response
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
```

```
 ================================ Human Message ================================= 
What is my name? 
================================== Ai Message ================================== 
You said Teddy! Is that correct? 
```

```python
# Step-by-step status check
messages = app.get_state(config).values["messages"]
for message in messages:
    message.pretty_print()
```

```
 ================================ Human Message ================================= 
Hello! My name is Teddy. Nice to meet you. 
================================== Ai Message ================================== 
Hello, Teddy! Nice to meet you. How can I help you? 
================================ Human Message ================================= 
What is my name? 
================================== Ai Message ================================== 
You said Teddy! Is that correct? 
```

### Delete message using RemoveMessage modifier <a href="#removemessage" id="removemessage"></a>

First, let's look at how to delete messages manually. Check the state of the current thread.

```python
# Extract and save message list from app status
messages = app.get_state(config).values["messages"]
# Return a list of messages
for message in messages:
    message.pretty_print()
```

```
 ================================== Ai Message ================================== 
Hello, Teddy! Nice to meet you. How can I help you? 
================================ Human Message ================================= 
What is my name? 
================================== Ai Message ================================== 
You said Teddy! Is that correct? 
```

When you call `update_state` and pass the id of the first message, that message is deleted.

```python
from langchain_core.messages import RemoveMessage

# Remove the first message from the message array based on ID and update the app state.
app.update_state(config, {"messages": RemoveMessage(id=messages[0].id)})
```

```
 {'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef99ffd-6687-6200-8005-d84565cbcc69'}} 
```

Now if you check the messages, you can confirm that the first message has been deleted.

```python
# Extract message list from app status and view saved conversation history
messages = app.get_state(config).values["messages"]
for message in messages:
    message.pretty_print()
```

```
 ================================== Ai Message ================================== 
Hello, Teddy! Nice to meet you. How can I help you? 
================================ Human Message ================================= 
What is my name? 
================================== Ai Message ================================== 
You said Teddy! Is that correct? 
```

### Dynamically delete more messages <a href="#id-3" id="id-3"></a>

You can also delete messages programmatically from inside the graph.

Let's modify the graph so that old messages (anything older than the last three) are deleted when graph execution ends.

```python
from langchain_core.messages import RemoveMessage
from langgraph.graph import END


# When the number of messages exceeds 3, delete old messages and keep only the latest messages.
def delete_messages(state):
    messages = state["messages"]
    if len(messages) > 3:
        return {"messages": [RemoveMessage(id=m.id) for m in messages[:-3]]}


# Logic to determine the next execution node based on message status
def should_continue(state: MessagesState) -> Literal["action", "delete_messages"]:
    """Return the next node to execute."""
    last_message = state["messages"][-1]
    # Execute message deletion function if no function call is made
    if not last_message.tool_calls:
        return "delete_messages"
    # Execute an action when a function call is made
    return "action"


# Defining a message state-based workflow graph
workflow = StateGraph(MessagesState)

# Adding Agents and Action Nodes
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)

# Add a Delete Message node
workflow.add_node(delete_messages)

# Connecting from the start node to the agent node
workflow.add_edge(START, "agent")

# Controlling flow between nodes by adding conditional edges
workflow.add_conditional_edges(
    "agent",
    should_continue,
)

# Connecting from action node to agent node
workflow.add_edge("action", "agent")

# Connect from the message delete node to the end node
workflow.add_edge("delete_messages", END)

# Compile workflow using memory checkpointer
app = workflow.compile(checkpointer=memory)
```

Visualize the graph.

```python
from langchain_teddynote.graphs import visualize_graph

visualize_graph(app)
```

![](https://wikidocs.net/images/page/265749/langgraph-12.jpeg)

Now let's try it out. Call the graph twice, then check the state.

```python
# Import HumanMessage class for LangChain message processing
from langchain_core.messages import HumanMessage

# Initialize a settings object containing a thread ID.
config = {"configurable": {"thread_id": "2"}}

# Perform the first question
input_message = HumanMessage(
    content="Hello! My name is Teddy. Nice to meet you."
)
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    print([(message.type, message.content) for message in event["messages"]])
```

```
 [('human', 'Hello! My name is Teddy. Nice to meet you.')] 
[('human', 'Hello! My name is Teddy. Nice to meet you.'), ('ai', 'Hello, Teddy! Nice to meet you. How can I help you?')] 
```

```python
# Perform the second question
input_message = HumanMessage(content="What is my name?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    print([(message.type, message.content) for message in event["messages"]])
```

```
 [('human', 'Hello! My name is Teddy. Nice to meet you.'), ('ai', 'Hello, Teddy! Nice to meet you. How can I help you?'), ('human', 'What is my name?')] 
[('human', 'Hello! My name is Teddy. Nice to meet you.'), ('ai', 'Hello, Teddy! Nice to meet you. How can I help you?'), ('human', 'What is my name?'), ('ai', 'You said Teddy! Is that correct?')] 
[('ai', 'Hello, Teddy! Nice to meet you. How can I help you?'), ('human', 'What is my name?'), ('ai', 'You said Teddy! Is that correct?')] 
```

When you check the final state, you can see that only three messages remain.

This is because the older messages were just deleted.

```python
# Extract and save message list from app status
messages = app.get_state(config).values["messages"]
# Return a list of messages
for message in messages:
    message.pretty_print()
```

```
 ================================== Ai Message ================================== 
Hello, Teddy! Nice to meet you. How can I help you? 
================================ Human Message ================================= 
What is my name? 
================================== Ai Message ================================== 
You said Teddy! Is that correct? 
```
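
The trimming performed by `delete_messages` comes down to plain list slicing: `messages[:-3]` selects every entry older than the last three, and those are exactly the ids wrapped in `RemoveMessage`. A minimal sketch with plain strings in place of message objects:

```python
# messages[:-3] selects everything except the last three entries;
# delete_messages wraps exactly these entries in RemoveMessage.
history = ["human-1", "ai-1", "human-2", "ai-2"]

to_delete = history[:-3]  # entries older than the last three
to_keep = history[-3:]    # the three most recent entries

print(to_delete)  # ['human-1']
print(to_keep)    # ['ai-1', 'human-2', 'ai-2']
```

With four messages in the thread, only the oldest one is deleted, which matches the final state shown above.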

