# 03. Adding a Relevance Checker Module

## Adding a relevance check module <a href="#id-1" id="id-1"></a>

**Steps**

1. Perform naive RAG
2. (This tutorial) Add a relevance check on the retrieved documents before answering

**Reference**

* This tutorial extends the previous one, so some parts overlap. Please refer to the previous tutorial for any explanation that is abbreviated here.

![](https://wikidocs.net/images/page/267810/langgraph-add-relevance-check.png)

### Environment setup <a href="#id-2" id="id-2"></a>

```python
# !pip install -U langchain-teddynote  
```

```python
# Configuration file for managing API keys as environment variables
from dotenv import load_dotenv  

# Load API key information
load_dotenv()  
```

```
 True 
```

```python
# Set up LangSmith tracking. https://smith.langchain.com  
# !pip install -qU langchain-teddynote  
from langchain_teddynote import logging  

# Enter a project name. 
logging.langsmith("CH17-LangGraph-Structures")  
```

```
 Start tracking LangSmith.  
[Project name]  
CH17-LangGraph-Structures  
```

### Basic PDF-based Retrieval Chain creation <a href="#pdf-retrieval-chain" id="pdf-retrieval-chain"></a>

Here, we create a Retrieval Chain based on a PDF document, using the simplest possible structure.

In LangGraph, however, the Retriever and the Chain are created separately. Only then can each node apply detailed processing to them.

**Reference**

* Since this was covered in the previous tutorial, the detailed explanation is omitted.

```python
from rag.pdf import PDFRetrievalChain  

# Load a PDF document.
pdf = PDFRetrievalChain(["data/SPRI_AI_Brief_2023년12월호_F.pdf"]).create_chain()  

# Create a retriever and a chain.  
pdf_retriever = pdf.retriever  
pdf_chain = pdf.chain  
```
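Keeping the retriever and the chain separate means each can be invoked (and inspected, filtered, or retried) on its own, which is exactly what the graph nodes below rely on. A minimal sketch of that idea with stand-in components (`StubRetriever` and `stub_chain` are illustrative, not part of `langchain-teddynote`):

```python
# Sketch of why the retriever and the chain are kept separate:
# each piece can be called and checked independently.
# The classes below are stand-ins, not the real PDFRetrievalChain.

class StubRetriever:
    """Returns canned documents matching any query word."""
    def __init__(self, docs):
        self.docs = docs

    def invoke(self, query: str) -> list[str]:
        # A real retriever would run a vector search here.
        return [d for d in self.docs if any(w in d for w in query.split())]


def stub_chain(inputs: dict) -> str:
    """Stands in for the prompt | llm | parser chain."""
    return f"Answer based on {len(inputs['context'])} document(s)."


retriever = StubRetriever(["Anthropic raised funding.", "Weather is sunny."])

# Step 1: retrieval on its own -- its output can be checked (e.g. for relevance)
docs = retriever.invoke("Anthropic funding")
# Step 2: generation on its own, fed the (possibly filtered) context
answer = stub_chain({"question": "Anthropic funding", "context": docs})
```

Because the two steps are decoupled, a relevance check node can sit between them, as the graph below does.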

### State definition <a href="#state" id="state"></a>

`State` : defines the state shared between the nodes of the graph.

It typically uses the `TypedDict` format.\
This time, we add the result of the relevance check to the state.

```python
from typing import Annotated, TypedDict  
from langgraph.graph.message import add_messages  


# GraphState State Definition
class GraphState(TypedDict):  
    question: Annotated[str, "Question"]  # question  
    context: Annotated[str, "Context"]  # Search results for the document  
    answer: Annotated[str, "Answer"]  # answer 
    messages: Annotated[list, add_messages]  # Message (cumulative list)  
    relevance: Annotated[str, "Relevance"]  # relevance  
```
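Note that `messages` uses `Annotated[list, add_messages]`: updates to this key are accumulated across node returns, while plain keys like `question` or `answer` are simply overwritten. A toy sketch of that reducer idea in plain Python (this is an illustration, not LangGraph's actual implementation):

```python
# Toy illustration of the reducer idea behind Annotated[list, add_messages]:
# keys with a reducer are merged, keys without one are overwritten.

def add_messages_reducer(existing: list, new: list) -> list:
    # LangGraph's real add_messages also deduplicates by message id;
    # here we simply append.
    return existing + new

def apply_update(state: dict, update: dict, reducers: dict) -> dict:
    merged = dict(state)
    for key, value in update.items():
        if key in reducers:
            merged[key] = reducers[key](merged.get(key, []), value)
        else:
            merged[key] = value  # plain keys are overwritten
    return merged

reducers = {"messages": add_messages_reducer}
state = {"question": "q1", "messages": [("user", "q1")]}

# A node returns a partial state update, as the nodes in this tutorial do.
state = apply_update(
    state, {"messages": [("assistant", "a1")], "answer": "a1"}, reducers
)
# messages now holds both entries; answer was simply set.
```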

### Node definition <a href="#node" id="node"></a>

* `Nodes` : nodes that handle each step, usually implemented as Python functions. Both input and output are state values.

**Reference**

* Each node takes the `State` as input, performs its defined logic, and returns an updated `State`.

```python
from langchain_openai import ChatOpenAI  
from langchain_teddynote.evaluator import GroundednessChecker  
from langchain_teddynote.messages import messages_to_history  
from rag.utils import format_docs  


# Document Search Node
def retrieve_document(state: GraphState) -> GraphState:  
    # Get the question from the state.  
    latest_question = state["question"]  

    # Search the documentation to find relevant articles.
    retrieved_docs = pdf_retriever.invoke(latest_question)  

    # Formats the retrieved document (for input into the prompt) 
    retrieved_docs = format_docs(retrieved_docs)  

    # Stores the searched document in the context key.
    return GraphState(context=retrieved_docs)  


# Generate Answer Node  
def llm_answer(state: GraphState) -> GraphState:  
    # Get the question from the state.  
    latest_question = state["question"]  

    # Get the searched documents in status.  
    context = state["context"]  

    # Call the chain to generate an answer.  
    response = pdf_chain.invoke(  
        {  
            "question": latest_question,  
            "context": context,  
            "chat_history": messages_to_history(state["messages"]),  
        }  
    )  

    # Stores generated answers, (user's questions, answers) messages in the state.  
    return GraphState(  
        answer=response, messages=[("user", latest_question), ("assistant", response)]  
    )  


# Relevance check node  
def relevance_check(state: GraphState) -> GraphState:  
    # Create a relevance evaluator. 
    question_answer_relevant = GroundednessChecker(  
        llm=ChatOpenAI(model="gpt-4o-mini", temperature=0), target="question-retrieval"  
    ).create()  

    # Run a relevance check ("yes" or "no")  
    response = question_answer_relevant.invoke(  
        {"question": state["question"], "context": state["context"]}  
    )  

    print("==== [RELEVANCE CHECK] ====")  
    print(response.score)  

    # Note: the relevance evaluator here can be replaced with your own prompt.
    # Create your own groundedness check and try it out!
    return GraphState(relevance=response.score)  


# Function to check relevance (router) 
def is_relevant(state: GraphState) -> str:  
    return state["relevance"]  
```
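As the note in `relevance_check` suggests, the evaluator can be swapped for your own groundedness check. A minimal sketch of the idea with a stubbed LLM call (the prompt wording and the `fake_llm` function are illustrative; in practice you would call a real chat model and parse its reply):

```python
# Sketch of a hand-rolled relevance check: build a yes/no prompt,
# ask an LLM, and normalize the reply to "yes" or "no".
# fake_llm is a stand-in for a real chat-model call.

RELEVANCE_PROMPT = (
    "Is the following context relevant to the question?\n"
    "Answer with exactly 'yes' or 'no'.\n\n"
    "Question: {question}\n\nContext: {context}"
)

def fake_llm(prompt: str) -> str:
    # Stand-in: a real implementation would call e.g. ChatOpenAI here.
    return "Yes." if "Anthropic" in prompt else "No."

def check_relevance(question: str, context: str) -> str:
    prompt = RELEVANCE_PROMPT.format(question=question, context=context)
    reply = fake_llm(prompt).strip().lower()
    # Normalize anything that starts with "yes" to "yes", else "no".
    return "yes" if reply.startswith("yes") else "no"

score = check_relevance("Who invested in Anthropic?", "Anthropic raised funding.")
```

The normalized `"yes"`/`"no"` string is exactly what the router function and the conditional edge mapping below expect.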

### Edges <a href="#edges" id="edges"></a>

* `Edges` : Python functions that determine the next `Node` to run based on the current `State`.

There are general edges, conditional edges, and more.

```python
from langgraph.graph import END, StateGraph  
from langgraph.checkpoint.memory import MemorySaver  

# Graph Definition 
workflow = StateGraph(GraphState)  

# Add node  
workflow.add_node("retrieve", retrieve_document)  
# Add a relevance check node  
workflow.add_node("relevance_check", relevance_check)  
workflow.add_node("llm_answer", llm_answer)  

# Add edges  
workflow.add_edge("retrieve", "relevance_check")  # retrieve -> relevance check  


# Add a conditional edge. 
workflow.add_conditional_edges(  
    "relevance_check",  # The result from the relevance check node is passed to the is_relevant function.  
    is_relevant,  
    {  
        "yes": "llm_answer",  # If relevant, it will generate an answer.  
        "no": "retrieve",  # If it's not relevant, try again.  
    },  
)  

# Setting the graph entry point  
workflow.set_entry_point("retrieve")  

# Set checkpoint  
memory = MemorySaver()  

# Compile the graph  
app = workflow.compile(checkpointer=memory)  
```

Visualize the compiled graph.

```python
from langchain_teddynote.graphs import visualize_graph  

visualize_graph(app)  
```

### Graph execution <a href="#id-3" id="id-3"></a>

* `config` : passes the configuration needed when running the graph.
* `recursion_limit` : sets the maximum recursion depth when running the graph.
* `inputs` : passes the input required when running the graph.

```python
from langchain_core.runnables import RunnableConfig  
from langchain_teddynote.messages import stream_graph, random_uuid  

# config settings (max recursion count, thread_id)  
config = RunnableConfig(recursion_limit=20, configurable={"thread_id": random_uuid()})  

# Enter your question  
inputs = GraphState(question="Please tell me which companies invested in Anthropic and how much they invested.")  

# Running the graph 
stream_graph(app, inputs, config, ["relevance_check", "llm_answer"])  
```

```
  
==================================================  
🔄 Node: relevance_check🔄  
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
==== [RELEVANCE CHECK] ====  
yes  

==================================================  
🔄 Node: llm_answer🔄  
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
Google has agreed to invest up to $2 billion in Anthropic, of which $500 million was invested upfront. Amazon has announced plans to invest up to $4 billion in Anthropic.  

**Source**  
- data/SPRI_AI_Brief_2023년12월호_F.pdf (page 14) 
```

```python
outputs = app.get_state(config).values  

print(f'Question: {outputs["question"]}')  
print("===" * 20)  
print(f'Answer:\n{outputs["answer"]}')  
```

```
 Question: Please tell me which companies invested in Anthropic and how much they invested.  
============================================================  
Answer:  
Google has agreed to invest up to $2 billion in Anthropic, of which $500 million was invested upfront. Amazon has announced plans to invest up to $4 billion in Anthropic.  

**Source**  
- data/SPRI_AI_Brief_2023년12월호_F.pdf (page 14)  
```

```python
print(outputs["relevance"])  
```

```
 yes  
```

However, if the search results fail the `relevance_check`, the same query enters the retrieve node again.

Since the same query produces the same search results, this leads to an infinite recursive loop.

To guard against this, we set a maximum recursion depth with `recursion_limit`, and handle the resulting `GraphRecursionError` for error processing.

The next tutorial will cover how to solve this recursive problem.

```python
from langgraph.errors import GraphRecursionError  
from langchain_core.runnables import RunnableConfig  

# config settings (max recursion count, thread_id)  
config = RunnableConfig(recursion_limit=10, configurable={"thread_id": random_uuid()})  

# Enter your question 
inputs = GraphState(question="Please tell me about Teddy Note's Langchain tutorial.")  

try:  
    # Running the graph  
    stream_graph(app, inputs, config, ["relevance_check", "llm_answer"])  
except GraphRecursionError as recursion_error:  
    print(f"GraphRecursionError: {recursion_error}")  
```

```
  
==================================================  
🔄 Node: relevance_check🔄  
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
==== [RELEVANCE CHECK] ====  
no  
==== [RELEVANCE CHECK] ====  
no  
==== [RELEVANCE CHECK] ====  
no  
==== [RELEVANCE CHECK] ====  
no  
==== [RELEVANCE CHECK] ====  
no  
GraphRecursionError: Recursion limit of 10 reached without hitting a stop condition. You can increase the limit by setting the `recursion_limit` config key.  
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/GRAPH_RECURSION_LIMIT  
```
