# 06. Multi-Agent Collaboration Network

## Multi-agent collaboration network <a href="#id-1" id="id-1"></a>

This tutorial covers how to implement a **multi-agent network** in LangGraph.\
A multi-agent network is an architecture that takes a "divide and conquer" approach, splitting a complex task across multiple specialized agents.

This solves the inefficiency of a single agent juggling many tools, and lets each agent solve problems effectively within its own area of expertise.

Inspired by the [AutoGen paper](https://arxiv.org/abs/2308.08155), this tutorial walks step by step through building such a multi-agent network with LangGraph. It also introduces how to use LangSmith to improve project performance and quickly identify problems.

**Why a multi-agent network?**

A single agent can be efficient when it uses a limited number of tools within a specific domain. But when one agent handles too many tools:

1. the logic for selecting and using tools becomes complicated, and
2. the amount of information the agent must process at once makes it inefficient.

The "divide and conquer" approach lets each agent focus on a specific task or area of expertise, with the overall work divided across a network.\
Each agent handles what it does well, delegates tasks to other specialist agents when needed, or uses its tools as appropriate.

***

**Main contents**

* **Agent creation**: how to define agents and set them as nodes of a LangGraph graph
* **Tool definition**: how to define the tools the agents will use and add them as nodes
* **Graph generation**: how to build a multi-agent network graph by connecting agents and tools
* **State definition**: how to define the graph state and manage the state information each agent's behavior requires
* **Agent node definition**: how to define each specialist agent as a node
* **Tool node definition**: how to define tools as nodes so agents can use them
* **Edge logic definition**: how to set up logic that branches to another agent or tool based on an agent's result
* **Graph definition**: how to assemble the final graph from the agents, tools, state, and edge logic defined above
* **Graph execution**: how to invoke the configured graph and perform the actual work

***

**Reference**

The pattern presented in this tutorial is one example of a design pattern for building a complex network of agents in LangGraph.\
We recommend adapting this pattern to your actual application, or combining it with other basic patterns suggested by the LangGraph documentation, to get the best performance.

**Main references**\
\- [LangGraph multi-agent network concept](https://langchain-ai.github.io/langgraph/concepts/multi_agent/#network)\
\- [AutoGen paper: Enabling Next-Gen LLM Applications via Multi-Agent Conversation (Wu et al.)](https://arxiv.org/abs/2308.08155)

### Environment setup <a href="#id-2" id="id-2"></a>

```python
# Configuration file for managing API keys as environment variables
from dotenv import load_dotenv

# Load API key information
load_dotenv()
```

```
 True 
```

```python
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH17-LangGraph-Use-Cases")
```

```
 Start tracking LangSmith. 
[Project name] 
CH17-LangGraph-Use-Cases 
```

Specify the model name to use for this agent.

```python
from langchain_teddynote.models import get_model_name, LLMs

# get the latest model name
MODEL_NAME = get_model_name(LLMs.GPT4o)

print(MODEL_NAME)
```

```
 gpt-4o
```

### State definition <a href="#id-3" id="id-3"></a>

`messages` is a list of messages shared between agents, and `sender` is the sender of the last message.

```python
import operator
from typing import Annotated, Sequence
from typing_extensions import TypedDict
from langchain_core.messages import BaseMessage


# state definition
class AgentState(TypedDict):
    messages: Annotated[
        Sequence[BaseMessage], operator.add
    ]  # list of messages shared between agents
    sender: Annotated[str, "The sender of the last message"]  # the sender of the last message
```
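The `operator.add` reducer is what makes `messages` an append-only shared history: when a node returns `{"messages": [...]}`, LangGraph merges the update into the existing list rather than replacing it. A plain-Python illustration of that merge, with dicts standing in for `BaseMessage` objects:

```python
import operator

# LangGraph applies the reducer annotated on a state key when merging a
# node's output into the shared state. For `messages`, `operator.add`
# concatenates lists, so each node's messages are appended to the history.
existing = [{"role": "user", "content": "question"}]
update = [{"role": "assistant", "content": "partial answer"}]

merged = operator.add(existing, update)  # equivalent to existing + update
print(len(merged))  # 2: the history grows, it is never overwritten
```

Because every agent reads and appends to the same list, each one sees the full conversation so far, which is what allows hand-offs between specialists.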

### Tool definition <a href="#id-4" id="id-4"></a>

Define the tools the agents will use.

* `TavilySearch` is a tool for retrieving information from the internet. The `Research Agent` uses it to look up the information it needs.
* `PythonREPL` is a tool that runs Python code. The `Chart Generator Agent` uses it to generate charts.

````python
from typing import Annotated

from langchain_teddynote.tools.tavily import TavilySearch
from langchain_core.tools import tool
from langchain_experimental.utilities import PythonREPL

# Tavily Search Tool Definition
tavily_tool = TavilySearch(max_results=5)

# Python REPL instance for executing code
python_repl = PythonREPL()


# Defining a tool to run Python code
@tool
def python_repl_tool(
    code: Annotated[str, "The python code to execute to generate your chart."],
):
    """Use this to execute python code. If you want to see the output of a value,
    you should print it out with `print(...)`. This is visible to the user."""
    try:
        # Execute given code in Python REPL and return results
        result = python_repl.run(code)
    except BaseException as e:
        return f"Failed to execute code. Error: {repr(e)}"
    # Returns a success message with the results when execution is successful.
    result_str = f"Successfully executed:\n```python\n{code}\n```\nStdout: {result}"
    return (
        result_str + "\n\nIf you have completed all tasks, respond with FINAL ANSWER."
    )
````
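`PythonREPL.run` executes the code string and returns whatever it printed, and the `try`/`except BaseException` wrapper above turns any failure into a string the agent can read instead of crashing the graph. A minimal stdlib sketch of that same capture-and-report pattern (the `run_code` helper is a hypothetical stand-in, not the LangChain API):

```python
import io
from contextlib import redirect_stdout


def run_code(code: str) -> str:
    """Hypothetical stand-in for PythonREPL.run: execute `code`
    and return whatever it printed to stdout."""
    buffer = io.StringIO()
    try:
        with redirect_stdout(buffer):
            exec(code, {})  # fresh globals; no state carried between calls
    except BaseException as e:
        # Report the error as text so a calling agent can react to it
        return f"Failed to execute code. Error: {repr(e)}"
    return buffer.getvalue()


print(run_code("print(21 * 2)"))  # -> "42\n"
```

Returning errors as strings rather than raising is deliberate: the ReAct agent receives the message as a tool observation and can decide to fix its code and retry.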

### Agent creation <a href="#id-5" id="id-5"></a>

#### Research Agent <a href="#research-agent" id="research-agent"></a>

Create a research agent that uses the `TavilySearch` tool. This agent researches the information you need.

```python
def make_system_prompt(suffix: str) -> str:
    return (
        "You are a helpful AI assistant, collaborating with other assistants."
        " Use the provided tools to progress towards answering the question."
        " If you are unable to fully answer, that's OK, another assistant with different tools "
        " will help where you left off. Execute what you can to make progress."
        " If you or any of the other assistants have the final answer or deliverable,"
        " prefix your response with FINAL ANSWER so the team knows to stop."
        f"\n{suffix}"
    )
```

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph.graph import MessagesState

# LLM definition
llm = ChatOpenAI(model=MODEL_NAME)

# Create a Research Agent
research_agent = create_react_agent(
    llm,
    tools=[tavily_tool],
    state_modifier=make_system_prompt(
        "You can only do research. You are working with a chart generator colleague."
    ),
)


# Research Agent Node Definition
def research_node(state: MessagesState) -> MessagesState:
    result = research_agent.invoke(state)

    # Convert last message to HumanMessage
    last_message = HumanMessage(
        content=result["messages"][-1].content, name="researcher"
    )
    return {
        # Return a list of messages from Research Agent
        "messages": [last_message],
    }
```

#### Chart Generator Agent <a href="#chart-generator-agent" id="chart-generator-agent"></a>

Create a chart generator agent that uses the `PythonREPL` tool. This agent draws the charts.

```python
chart_generator_system_prompt = """
You can only generate charts. You are working with a researcher colleague.
Be sure to use the following font code in your code when generating charts.

##### font setting #####
import platform

import matplotlib.pyplot as plt
import matplotlib.font_manager as fm

# OS detection
current_os = platform.system()

if current_os == "Windows":
    # Windows environment font settings
    font_path = "C:/Windows/Fonts/malgun.ttf"  # Malgun Gothic font path
    fontprop = fm.FontProperties(fname=font_path, size=12)
    plt.rc("font", family=fontprop.get_name())
elif current_os == "Darwin":  # macOS
    # Mac environment font settings
    plt.rcParams["font.family"] = "AppleGothic"
else:  # Other OS such as Linux
    # Try setting the default Korean font
    try:
        plt.rcParams["font.family"] = "NanumGothic"
    except:
        print("The Korean font could not be found. The system default font will be used.")

##### Prevent broken minus sign rendering #####
plt.rcParams["axes.unicode_minus"] = False  # prevent broken minus sign rendering
"""

# Create a Chart Generator Agent
chart_agent = create_react_agent(
    llm,
    [python_repl_tool],
    state_modifier=make_system_prompt(chart_generator_system_prompt),
)
```

```python
def chart_node(state: MessagesState) -> MessagesState:
    result = chart_agent.invoke(state)

    # Convert last message to HumanMessage
    last_message = HumanMessage(
        content=result["messages"][-1].content, name="chart_generator"
    )
    return {
        # Return the chart agent's final response as a message list
        "messages": [last_message],
    }
```

```python
from langgraph.graph import END


def router(state: MessagesState):
    # This is the router
    messages = state["messages"]
    last_message = messages[-1]
    if "FINAL ANSWER" in last_message.content:
        # Any agent decided the work is done
        return END
    return "continue"
```
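The routing decision is just a substring check on the most recent message. A standalone sketch of that decision, using a minimal `Msg` class as a stand-in for `BaseMessage` and a string sentinel replacing langgraph's `END` constant (both are assumptions for illustration):

```python
# Minimal stand-ins: Msg mimics a message with a .content attribute,
# and END replaces langgraph's END sentinel.
class Msg:
    def __init__(self, content: str):
        self.content = content


END = "__end__"


def route(messages) -> str:
    # Mirror of the router above: stop once any agent emits FINAL ANSWER.
    if "FINAL ANSWER" in messages[-1].content:
        return END
    return "continue"


print(route([Msg("I need more data")]))         # -> continue
print(route([Msg("FINAL ANSWER: see chart")]))  # -> __end__
```

This is why `make_system_prompt` instructs every agent to prefix its final deliverable with `FINAL ANSWER`: the string is the shared stop protocol between the agents and the router.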

### Graph generation <a href="#id-6" id="id-6"></a>

#### Define agent nodes and edges <a href="#id-7" id="id-7"></a>

Now define the nodes and edges of the graph. First, register the agent nodes, then connect them with conditional edges driven by the router.

```python
from langchain_core.messages import HumanMessage, ToolMessage
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

workflow = StateGraph(MessagesState)
workflow.add_node("researcher", research_node)
workflow.add_node("chart_generator", chart_node)

workflow.add_conditional_edges(
    "researcher",
    router,
    {"continue": "chart_generator", END: END},
)
workflow.add_conditional_edges(
    "chart_generator",
    router,
    {"continue": "researcher", END: END},
)

workflow.add_edge(START, "researcher")
app = workflow.compile(checkpointer=MemorySaver())
```
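Before running the compiled graph, it may help to see its control flow as a plain loop: the researcher and chart generator alternate, and the router's FINAL ANSWER check decides when to stop. A hand-rolled simulation with hypothetical stub nodes standing in for the real agents:

```python
# Stub nodes (assumptions for illustration): each appends a message
# string to the shared history, like the real agent nodes do.
def researcher(history):
    return history + ["research notes"]


def chart_generator(history):
    # Once research notes exist, the chart agent can finish the task.
    done = "research notes" in history
    return history + (["FINAL ANSWER"] if done else ["need data"])


nodes = {"researcher": researcher, "chart_generator": chart_generator}
# The conditional-edge map: on "continue", control hands off to the peer.
successor = {"researcher": "chart_generator", "chart_generator": "researcher"}

node, history = "researcher", []  # START -> researcher
while True:
    history = nodes[node](history)
    if "FINAL ANSWER" in history[-1]:  # router: stop on FINAL ANSWER
        break
    node = successor[node]             # conditional edge: hand off

print(history)  # ['research notes', 'FINAL ANSWER']
```

The compiled `app` implements exactly this alternation, except that state merging, checkpointing (`MemorySaver`), and the recursion limit are handled by LangGraph.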

```python
from langchain_teddynote.graphs import visualize_graph

visualize_graph(app, xray=True)
```

This renders the structure of the compiled graph.

```
 Python REPL can execute arbitrary code. Use with caution.
```

```
 ================================================== 
🔄 Node: agent in [researcher] 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
================================== Ai Message ================================== 
Tool Calls: 
  tavily_web_search (call_pliuzgOBQ8Nkvq58KcDW5yIE) 
 Call ID: call_pliuzgOBQ8Nkvq58KcDW5yIE 
  Args: 
    query: South Korea GDP per capita 2010-2023 data 
  tavily_web_search (call_CuCuRPz7SrW7wGKTHkdIUFGv) 
 Call ID: call_CuCuRPz7SrW7wGKTHkdIUFGv 
  Args: 
    query: South Korea GDP per capita forecast 2024 
================================================== 

================================================== 
🔄 Node: agent in [researcher] 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
================================== Ai Message ================================== 

I found historical and forecasted data for South Korea's GDP per capita from 2010 to 2024. 

From 2010 to 2023, the GDP per capita in South Korea (in current US dollars) was as follows: 

- 2010: $23,079 
- 2011: $25,098 
- 2012: $25,459 
- 2013: $27,180 
- 2014: $29,253 
- 2015: $28,737 
- 2016: $29,280 
- 2017: $31,601 
- 2018: $33,447 
- 2019: $31,902 
- 2020: $31,721 
- 2021: $35,126 
- 2022: $32,395 
- 2023: $33,121 

For 2024, the GDP per capita is forecasted to be approximately $36,132 according to the IMF World Economic Outlook. 

With this data, the chart generator colleague can create a visualization of the GDP per capita trend from 2010 to 2024. 
================================================== 

================================================== 
🔄 Node: researcher 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
================================ Human Message ================================= 
Name: researcher 

I found historical and forecasted data for South Korea's GDP per capita from 2010 to 2024. 

From 2010 to 2023, the GDP per capita in South Korea (in current US dollars) was as follows: 

- 2010: $23,079 
- 2011: $25,098 
- 2012: $25,459 
- 2013: $27,180 
- 2014: $29,253 
- 2015: $28,737 
- 2016: $29,280 
- 2017: $31,601 
- 2018: $33,447 
- 2019: $31,902 
- 2020: $31,721 
- 2021: $35,126 
- 2022: $32,395 
- 2023: $33,121 

For 2024, the GDP per capita is forecasted to be approximately $36,132 according to the IMF World Economic Outlook. 

With this data, the chart generator colleague can create a visualization of the GDP per capita trend from 2010 to 2024. 
================================================== 
```

```python
from langchain_core.runnables import RunnableConfig
from langchain_teddynote.messages import random_uuid, invoke_graph

# config settings (max recursion count, thread_id)
config = RunnableConfig(recursion_limit=10, configurable={"thread_id": random_uuid()})

# enter your question
inputs = {
    "messages": [
        HumanMessage(
            content="2010년 ~ 2024년까지의 대한민국의 1인당 GDP 추이를 그래프로 시각화 해주세요."
        )
    ],
}

# running the graph
invoke_graph(app, inputs, config, node_names=["researcher", "chart_generator", "agent"])
```

```
================================================== 
🔄 Node: agent in [chart_generator] 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
================================== Ai Message ================================== 
Tool Calls: 
  python_repl_tool (call_MsNHykNMvZNA5EINQ8MpPGLQ) 
 Call ID: call_MsNHykNMvZNA5EINQ8MpPGLQ 
  Args: 
    code: import matplotlib.pyplot as plt
import matplotlib.font_manager as fm

# Data for South Korea's GDP per capita from 2010 to 2024
years = list(range(2010, 2025))
gdp_per_capita = [
    23079,  # 2010
    25098,  # 2011
    25459,  # 2012
    27180,  # 2013
    29253,  # 2014
    28737,  # 2015
    29280,  # 2016
    31601,  # 2017
    33447,  # 2018
    31902,  # 2019
    31721,  # 2020
    35126,  # 2021
    32395,  # 2022
    33121,  # 2023
    36132   # 2024 (forecast)
]

##### Font settings #####
import platform

# OS detection
current_os = platform.system()

if current_os == "Windows":
    # Windows environment font settings
    font_path = "C:/Windows/Fonts/malgun.ttf"  # Malgun Gothic font path
    fontprop = fm.FontProperties(fname=font_path, size=12)
    plt.rc("font", family=fontprop.get_name())
elif current_os == "Darwin":  # macOS
    # Mac environment font settings
    plt.rcParams["font.family"] = "AppleGothic"
else:  # Other OS such as Linux
    # Try setting the default Korean font
    try:
        plt.rcParams["font.family"] = "NanumGothic"
    except:
        print("Korean font not found. The system default font will be used.")

##### Prevent broken minus sign rendering #####
plt.rcParams["axes.unicode_minus"] = False

# Plotting the GDP per capita trend
plt.figure(figsize=(10, 6))
plt.plot(years, gdp_per_capita, marker='o')
plt.title("South Korea GDP per Capita (2010-2024)")
plt.xlabel("Year")
plt.ylabel("GDP per capita (US dollars)")
plt.grid(True)
plt.xticks(years)
plt.tight_layout()
plt.show()
================================================== 

================================================== 
🔄 Node: agent in [chart_generator] 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
================================== Ai Message ================================== 

FINAL ANSWER 
================================================== 

================================================== 
🔄 Node: chart_generator 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
================================ Human Message ================================= 
Name: chart_generator 

FINAL ANSWER 
================================================== 
```

