# 04. Self-RAG

## Self-RAG <a href="#self-rag" id="self-rag"></a>

This tutorial explains, step by step, how to implement **Self-RAG**, a Retrieval-Augmented Generation (RAG) strategy, using [LangGraph](https://langchain-ai.github.io/langgraph/).

Self-RAG is a RAG strategy that adds self-reflection and self-evaluation over both the retrieved documents and the generated responses, which can improve the performance of RAG-based systems.

![](https://wikidocs.net/images/page/270687/langgraph-self-rag.png)

**What is Self-RAG?**

**Self-RAG** is a RAG strategy that adds verification steps for both the retrieved documents and the generated responses.\
Whereas a traditional RAG pipeline simply has the LLM generate an answer from the retrieved information, Self-RAG performs **self-evaluation** at the following points:

1. Retrieval decision: determine whether additional retrieval is needed for the current question.
2. Retrieval relevance: check whether the retrieved document chunks actually help answer the question.
3. Answer groundedness: evaluate whether the generated answer is sufficiently supported by the provided document chunks.
4. Answer quality: measure whether the generated answer actually resolves the question.

This goes beyond plain retrieve-and-generate: the system monitors and improves the quality and factuality of its own responses.
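The four self-evaluation steps above can be sketched as a plain-Python control loop. This is a minimal illustration, not the paper's algorithm or LangGraph code: the grader functions are toy stand-ins for LLM calls, and all names here are hypothetical.

```python
from typing import Callable, List


def self_rag_answer(
    question: str,
    retrieve: Callable[[str], List[str]],
    generate: Callable[[str, List[str]], str],
    grade_relevance: Callable[[str, str], bool],
    grade_support: Callable[[List[str], str], bool],
    grade_answer: Callable[[str, str], bool],
    max_retries: int = 3,
) -> str:
    """Retrieve -> filter -> generate -> verify, retrying on failure."""
    for _ in range(max_retries):
        docs = retrieve(question)
        # Step 2: keep only the chunks the relevance grader accepts
        docs = [d for d in docs if grade_relevance(question, d)]
        if not docs:
            continue  # nothing relevant; a real system would rewrite the query here
        answer = generate(question, docs)
        # Step 3 (groundedness) and step 4 (answer quality)
        if grade_support(docs, answer) and grade_answer(question, answer):
            return answer
    return "I could not produce a grounded answer."


# Toy stand-ins for the retriever, the LLM, and the graders
corpus = ["Samsung Gauss is a generative AI model.", "The weather is sunny."]
answer = self_rag_answer(
    "What is Samsung Gauss?",
    retrieve=lambda q: corpus,
    generate=lambda q, docs: docs[0],
    grade_relevance=lambda q, d: "Gauss" in d,
    grade_support=lambda docs, a: a in docs,
    grade_answer=lambda q, a: "Gauss" in a,
)
print(answer)  # → Samsung Gauss is a generative AI model.
```

The rest of this tutorial replaces each stub with an LLM-backed grader and moves the control flow into a LangGraph graph.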

[Self-RAG paper](https://arxiv.org/abs/2310.11511)

***

**Key decisions in Self-RAG**

The paper frames Self-RAG as the following decision process.

**Retrieval decision (whether to call the retriever)**

* Input: `x (question)` or `(x (question), y (generation))`
* Output: `yes, no, continue`\
  This step decides whether to run an additional retrieval, proceed without retrieving, or continue generating with what has already been retrieved.

**Relevance Assessment (Retrieval Grader)**

* Input: ( `x (question)` , `d (chunk)` ) for each `d` in `D`
* Output: `relevant` or `irrelevant`\
  Determine whether the retrieved document chunks actually provide useful information for answering the question.

**Groundedness Check (Hallucination Grader)**

* Input: `x (question)` , `d (chunk)` , `y (generation)` for each `d` in `D`
* Output: `{fully supported, partially supported, no support}`\
  Determine whether the generated answer is factually grounded in the retrieval results, or whether a hallucination has occurred.

**Answer Quality Assessment (Answer Grader)**

* Input: `x (question)` , `y (generation)`
* Output: `{5, 4, 3, 2, 1}`\
  Score how well the generated answer resolves the question.
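The decision vocabularies above can be combined into a single accept/reject rule. The sketch below is illustrative only: the type aliases mirror the paper's token sets, but `should_accept` and its utility threshold of 4 are this tutorial's own simplification, not the paper's inference procedure.

```python
from typing import Literal

# The paper's reflection-token vocabularies as simple type aliases
RetrieveDecision = Literal["yes", "no", "continue"]
Relevance = Literal["relevant", "irrelevant"]
Support = Literal["fully supported", "partially supported", "no support"]


def should_accept(doc_grade: str, support: str, utility: int) -> bool:
    """Accept an answer only if its evidence is relevant, the answer has at
    least partial support, and its utility score is high enough."""
    return (
        doc_grade == "relevant"
        and support != "no support"
        and utility >= 4
    )


print(should_accept("relevant", "fully supported", 5))  # → True
print(should_accept("relevant", "no support", 5))       # → False
```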

***

**What this tutorial covers**

This tutorial implements some of the Self-RAG ideas using LangGraph.\
You will build the strategy through the following steps:

* **Retriever** : retrieve documents
* **Retrieval Grader** : evaluate the relevance of the retrieved documents
* **Generate** : generate an answer to the question
* **Hallucination Grader** : check whether the generated answer is hallucinated
* **Answer Grader** : evaluate whether the answer addresses the question
* **Question Re-writer** : rewrite the question for better retrieval
* **Graph creation and execution** : Build and run graphs with defined nodes

***

**Reference**

* [LangGraph official document](https://langchain-ai.github.io/langgraph/)
* [Self-RAG paper](https://arxiv.org/abs/2310.11511)

### Environment setup <a href="#id-1" id="id-1"></a>

```python
# Configuration file for managing API keys as environment variables
from dotenv import load_dotenv

# Load API key information
load_dotenv()
```

```
 True 
```

```python
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH17-LangGraph-Use-Cases")
```

```
 Start tracking LangSmith. 
[Project name] 
CH17-LangGraph-Use-Cases 
```

### Basic PDF-based Retrieval Chain creation <a href="#pdf-retrieval-chain" id="pdf-retrieval-chain"></a>

Here, we create a Retrieval Chain based on a PDF document, with the simplest possible structure.

For LangGraph, however, we create the retriever and the chain separately, so that each node can process them in detail.

**Reference**

* This was covered in a previous tutorial, so the detailed explanation is omitted here.

**Document used for practice**

SPRi AI Brief, December 2023 (Software Policy & Research Institute)

* Authors: Jaeheung Lee (AI Policy Lab), Ji-soo Lee (AI Policy Lab)
* Link: <https://spri.kr/posts/view/23669>
* File name: `SPRI_AI_Brief_2023년12월호_F.pdf`

*Copy the downloaded file into the `data` folder.*

```python
from rag.pdf import PDFRetrievalChain

# Load a PDF document.
pdf = PDFRetrievalChain(["data/SPRI_AI_Brief_2023년12월호_F.pdf"]).create_chain()

# Create a retriever and a chain.
pdf_retriever = pdf.retriever
pdf_chain = pdf.chain
```

### Document Search Evaluator (Retrieval Grader) <a href="#retrieval-grader" id="retrieval-grader"></a>

We define the retrieval grader in advance so that document relevance can be evaluated later in the `grade_documents` node.

```python
from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_teddynote.models import get_model_name, LLMs

# Set the latest model name
MODEL_NAME = get_model_name(LLMs.GPT4o)


# Data model definition: A data model to evaluate the relevance of retrieved documents as a binary score.
class GradeDocuments(BaseModel):
    """A binary score to determine the relevance of the retrieved documents."""

    # A field indicating 'yes' or 'no' whether the document is relevant to the question.
    binary_score: str = Field(
        description="Documents are relevant to the question, 'yes' or 'no'"
    )


# LLM Initialization
llm = ChatOpenAI(model=MODEL_NAME, temperature=0)

# Generating structured output for LLM using the GradeDocuments data model
structured_llm_grader = llm.with_structured_output(GradeDocuments)

# Define system prompt: Define the system role that evaluates whether the retrieved document is relevant to the user's question.
system = """You are a grader assessing relevance of a retrieved document to a user question. \n 
    It does not need to be a stringent test. The goal is to filter out erroneous retrievals. \n
    If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant. \n
    Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question."""

# Create a chat prompt template
grade_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "Retrieved document: \n\n {document} \n\n User question: {question}"),
    ]
)

# Create a search evaluator
retrieval_grader = grade_prompt | structured_llm_grader
```

Run `retrieval_grader` to evaluate the relevance of a retrieved document.

```python
# Define the question
question = "The name of the generative AI developed by Samsung Electronics is?"

# document search
docs = pdf_retriever.invoke(question)

# Extract page content of the second document among the searched documents
doc_txt = docs[1].page_content

# Calling the search evaluator and outputting the results
print(retrieval_grader.invoke({"question": question, "document": doc_txt}))
```

```
 binary_score='yes' 
```

### Answer generation chain <a href="#id-2" id="id-2"></a>

The answer generation chain generates an answer based on the retrieved documents.

It is the familiar naive RAG chain.

```python
from langchain import hub
from langchain_core.output_parsers import StrOutputParser

# Get the prompt from LangChain Hub
prompt = hub.pull("teddynote/rag-prompt")

# Initialize the base LLM with the model name and temperature
llm = ChatOpenAI(model_name=MODEL_NAME, temperature=0)


# Document formatting function
def format_docs(docs):
    return "\n\n".join(
        [
            f'<document><content>{doc.page_content}</content><source>{doc.metadata["source"]}</source><page>{doc.metadata["page"]+1}</page></document>'
            for doc in docs
        ]
    )


# Create the RAG chain
rag_chain = prompt | llm | StrOutputParser()

# Run the chain
generation = rag_chain.invoke({"context": format_docs(docs), "question": question})
print(generation)
```

```
 The name of the generative AI developed by Samsung Electronics is 'Samsung Gauss'. 

**Source** 
- data/SPRI_AI_Brief_2023년12월호_F.pdf (page 13) 
```

### Evaluate whether the answer is a hallucination <a href="#id-3" id="id-3"></a>

`groundedness_grader` evaluates whether the generated answer is grounded in the retrieved `context`.

`yes` means the answer is free of hallucination; conversely, `no` means the answer is judged to be a hallucination.

```python
from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI


# Data model definition: evaluates whether the generated answer is grounded in facts, as a binary score.
class Groundednesss(BaseModel):
    """A binary score indicating whether the generated answer is grounded in the facts."""

    # A field indicating 'yes' or 'no' whether the answer is based on facts.
    binary_score: str = Field(
        description="Answer is grounded in the facts, 'yes' or 'no'"
    )


# LLM initialization
llm = ChatOpenAI(model=MODEL_NAME, temperature=0)

# Set up the LLM with structured output
structured_llm_grader = llm.with_structured_output(Groundednesss)

# Define system prompt
system = """You are a grader assessing whether an LLM generation is grounded in / supported by a set of retrieved facts. \n 
Give a binary score 'yes' or 'no'. 'Yes' means that the answer is grounded in / supported by the set of facts."""

# Create a chat prompt template
groundedness_checking_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "Set of facts: \n\n {documents} \n\n LLM generation: {generation}"),
    ]
)

# Create the hallucination evaluator for the answer
groundedness_grader = groundedness_checking_prompt | structured_llm_grader
```

```python
# Call the hallucination evaluator (yes: grounded in facts, no: not grounded)
groundedness_grader.invoke({"documents": format_docs(docs), "generation": generation})
```

```
 Groundednesss (binary_score='yes') 
```

### Evaluate the relevance of the answer <a href="#id-4" id="id-4"></a>

Evaluate whether the generated answer is actually relevant to the question.

`yes` means it is relevant; `no` means it is not.

```python
from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI


class GradeAnswer(BaseModel):
    """A binary score indicating whether the question is addressed."""

    # Assess the relevance of the answer: 'yes' or 'no' (yes: relevant, no: not relevant)
    binary_score: str = Field(
        description="Answer addresses the question, 'yes' or 'no'"
    )


llm = ChatOpenAI(model=MODEL_NAME, temperature=0)

# Bind GradeAnswer to the LLM
structured_llm_grader = llm.with_structured_output(GradeAnswer)

# Define system prompt
system = """You are a grader assessing whether an answer addresses / resolves a question \n 
     Give a binary score 'yes' or 'no'. 'Yes' means that the answer resolves the question."""

# Generate prompt
answer_grader_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "User question: \n\n {question} \n\n LLM generation: {generation}"),
    ]
)

# Create the answer evaluator
answer_grader = answer_grader_prompt | structured_llm_grader
```

```python
# Call the answer evaluator (yes: resolves the question, no: does not)
answer_grader.invoke({"question": question, "generation": generation})
```

```
 GradeAnswer (binary_score='yes') 
```

### Question Rewriter <a href="#question-rewriter" id="question-rewriter"></a>

Rewrite the user's question into a form better suited for vector-store retrieval.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser


llm = ChatOpenAI(model=MODEL_NAME, temperature=0)

# Define system prompt
# Converts the input question into a form optimized for vector-store retrieval.
system = """You are a question re-writer that converts an input question to a better version that is optimized \n 
     for vectorstore retrieval. Look at the input and try to reason about the underlying semantic intent / meaning."""

# Create a prompt template containing the system message and the initial question
re_write_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        (
            "human",
            "Here is the initial question: \n\n {question} \n Formulate an improved question.",
        ),
    ]
)

# Create the question rewriter
question_rewriter = re_write_prompt | llm | StrOutputParser()
```

```python
# Call the question rewriter
question_rewriter.invoke({"question": question})
```

```
 'What is the name of the generative AI developed by Samsung Electronics?' 
```

### State definition <a href="#id-5" id="id-5"></a>

Define the state shared between the graph's nodes.

```python
from typing import List
from typing_extensions import TypedDict, Annotated


# Class representing the state of the graph
class GraphState(TypedDict):
    # A string representing the question
    question: Annotated[str, "Question"]
    # A string representing the response generated by the LLM
    generation: Annotated[str, "LLM Generation"]
    # A list representing the retrieved documents
    documents: Annotated[List[str], "Retrieved Documents"]
```

* `question` : the question entered by the user
* `generation` : the generated answer
* `documents` : the list of retrieved documents

### Node definition

Define the graph's nodes:

* `retrieve` : document retrieval
* `grade_documents` : document relevance evaluation
* `generate` : answer generation
* `transform_query` : question rewriting

```python
# document search
def retrieve(state):
    print("==== [RETRIEVE] ====")
    question = state["question"]

    # perform a search
    documents = pdf_retriever.invoke(question)
    return {"documents": documents}


# generate answer
def generate(state):
    print("==== [GENERATE] ====")
    question = state["question"]
    documents = state["documents"]

    # create RAG
    generation = rag_chain.invoke({"context": documents, "question": question})
    return {"generation": generation}


# Assessing the relevance of retrieved documents
def grade_documents(state):
    print("==== [GRADE DOCUMENTS] ====")
    question = state["question"]
    documents = state["documents"]

    # Score each document
    filtered_docs = []
    for d in documents:
        score = retrieval_grader.invoke(
            {"question": question, "document": d.page_content}
        )
        grade = score.binary_score
        if grade == "yes":
            print("==== GRADE: DOCUMENT RELEVANT ====")
            filtered_docs.append(d)
        else:
            print("==== GRADE: DOCUMENT NOT RELEVANT ====")
            continue
    return {"documents": filtered_docs}


# Question conversion
def transform_query(state):
    print("==== [TRANSFORM QUERY] ====")
    question = state["question"]

    # Rewrite the question
    better_question = question_rewriter.invoke({"question": question})
    return {"question": better_question}
```

### Conditional edge definition <a href="#id-7" id="id-7"></a>

The `decide_to_generate` function decides, based on the document relevance grades, whether to generate an answer or rewrite the query.

The `grade_generation_v_documents_and_question` function checks whether the generated answer is grounded in the documents and whether it addresses the question.

```python
# Decide whether to generate a response
def decide_to_generate(state):
    print("==== [ASSESS GRADED DOCUMENTS] ====")
    filtered_documents = state["documents"]

    if not filtered_documents:
        # If all documents are irrelevant
        # Create a new query

        print(
            "==== [DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, TRANSFORM QUERY] ===="
        )
        return "transform_query"
    else:
        # Generate an answer if there is a relevant document
        print("==== [DECISION: GENERATE] ====")
        return "generate"


# Evaluate the relevance of generated answers to documents and questions
def grade_generation_v_documents_and_question(state):
    print("==== [CHECK HALLUCINATIONS] ====")
    question = state["question"]
    documents = state["documents"]
    generation = state["generation"]

    score = groundedness_grader.invoke(
        {"documents": documents, "generation": generation}
    )
    grade = score.binary_score

    # Check for hallucinations
    if grade == "yes":
        print("==== [DECISION: GENERATION IS GROUNDED IN DOCUMENTS] ====")
        # Check if the question is resolved
        print("==== [GRADE GENERATION vs QUESTION] ====")
        score = answer_grader.invoke({"question": question, "generation": generation})
        grade = score.binary_score
        if grade == "yes":
            print("==== [DECISION: GENERATION ADDRESSES QUESTION] ====")
            return "relevant"
        else:
            print("==== [DECISION: GENERATION DOES NOT ADDRESS QUESTION] ====")
            return "not relevant"
    else:
        print("==== [DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY] ====")
        return "hallucination"
```

### Graph creation

Build the graph from the previously defined nodes and edges.

```python
from langgraph.graph import END, StateGraph, START
from langgraph.checkpoint.memory import MemorySaver

# Initialize graph state
workflow = StateGraph(GraphState)

# node definition
workflow.add_node("retrieve", retrieve)  # retrieve
workflow.add_node("grade_documents", grade_documents)  # grade documents
workflow.add_node("generate", generate)  # generate
workflow.add_node("transform_query", transform_query)  # transform_query

# edge definition
workflow.add_edge(START, "retrieve")
workflow.add_edge("retrieve", "grade_documents")

# Add conditional edges to the document evaluation node
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {
        "transform_query": "transform_query",
        "generate": "generate",
    },
)

# edge definition
workflow.add_edge("transform_query", "retrieve")

# Add conditional edge in answer generation node
workflow.add_conditional_edges(
    "generate",
    grade_generation_v_documents_and_question,
    {
        "hallucination": "generate",
        "relevant": END,
        "not relevant": "transform_query",
    },
)

# compile the graph
app = workflow.compile(checkpointer=MemorySaver())
```

```python
from langchain_teddynote.graphs import visualize_graph

visualize_graph(app)
```

Visualize the graph.

### Graph execution

Run the graph you created.

```python
from langchain_core.runnables import RunnableConfig
from langchain_teddynote.messages import stream_graph, invoke_graph, random_uuid

# config settings (max recursion count, thread_id)
config = RunnableConfig(recursion_limit=10, configurable={"thread_id": random_uuid()})

# enter your question
inputs = {
    "question": "The name of the generative AI developed by Samsung Electronics is?",
}

# Running the graph
invoke_graph(
    app, inputs, config, ["retrieve", "transform_query", "grade_documents", "generate"]
)
```

```
 ==== [RETRIEVE] ==== 

================================================== 
🔄 Node: retrieve 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
page_content=' SPRi AI Brief | 
2023-December 
Samsung Electronics unveils self-developed AI ‘Samsung Gauss ’ 
KEY Contents 
n Create a self-development consisting of 3 models of languages, codes, and images that the Samsung can operate on the on-Device 
AI model ‘Samsung Gauss ’ released 
n Samsung plans to phase out Samsung Gauss in a variety of products, with on-dice operation possible 
Samsung Gauss has the advantage that there is no risk of user information leaking outward 
£Samsung Gauss, Ondice Operation Support, consisting of three models of language, code, and images' metadata={'source':'data/SPRI_AI_Brief_2023 Year 12 _F.pdf','file_path':'data/SPRI_AI_Brief_20231 
(...omitted...) 
The purpose of the ∙ framework is to define AGI's performance, general purpose, and level of autonomy to compare and evaluate risks between models, AGI 
To provide a common criterion for measuring progress toward achievement 
n researchers derive the 6-point principle below to establish the necessary criteria for the definition of the AGI concept' metadata={'source':'data/SPRI_AI_Brief_2023 December issue_F.pdf','file_path':'data/SPRI_AI_Brief_202 Month _F 
================================================== 
==== [GRADE DOCUMENTS] ==== 
==== GRADE: DOCUMENT RELEVANT ==== 
==== GRADE: DOCUMENT RELEVANT ==== 
==== GRADE: DOCUMENT RELEVANT ==== 
==== GRADE: DOCUMENT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT RELEVANT ==== 
==== GRADE: DOCUMENT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== [ASSESS GRADED DOCUMENTS] ==== 
==== [DECISION: GENERATE] ==== 

================================================== 
🔄 Node: grade_documents 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
page_content=' SPRi AI Brief | 
2023-December 
Samsung Electronics unveils self-developed AI ‘Samsung Gauss ’ 
KEY Contents 
n Create a self-development consisting of 3 models of languages, codes, and images that the Samsung can operate on the on-Device 
AI model ‘Samsung Gauss ’ released 
n Samsung plans to phase out Samsung Gauss in a variety of products, with on-dice operation possible 
(...omitted...) 
n Samsung Gauss △A language model that generates text △Code model that generates code △Generates images 
Composed of 3 models of image models 
The ∙ language model consists of a variety of models for cloud and on-device destinations, composing mail, summarizing documents, and translating 
'2.0','Data/SPRI_AI_Brief_2023 Year's December','file_path':'data/SPRI_AI_Brief_2023 Year','page': 12,'total_pages' 
================================================== 
==== [GENERATE] ==== 
==== [CHECK HALLUCINATIONS] ==== 
==== [DECISION: GENERATION IS GROUNDED IN DOCUMENTS] ==== 
==== [GRADE GENERATION vs QUESTION] ==== 
==== [DECISION: GENERATION ADDRESSES QUESTION] ==== 

================================================== 
🔄 Node: generate 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
generation: 
The name of the generative AI developed by Samsung Electronics is 'Samsung Gauss'. 

**Source** 
- data/SPRI_AI_Brief_2023 December _F.pdf (page 12) 
================================================== 
```

If the relevance check keeps failing for a user's question, the graph can fall into an endless rewrite-and-retrieve loop, as follows:

```python
from langgraph.errors import GraphRecursionError

# config settings (max recursion count, thread_id)
config = RunnableConfig(recursion_limit=10, configurable={"thread_id": random_uuid()})

# enter your question
inputs = {
    "question": "The name of the generative AI developed by Teddy Note is?",
}

try:
    # Running the graph
    stream_graph(
        app,
        inputs,
        config,
        ["retrieve", "transform_query", "grade_documents", "generate"],
    )
except GraphRecursionError as recursion_error:
    print(f"GraphRecursionError: {recursion_error}")
```

```
 ==== [RETRIEVE] ==== 
==== [GRADE DOCUMENTS] ==== 

================================================== 
🔄 Node: grade_documents 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== [ASSESS GRADED DOCUMENTS] ==== 
==== [DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, TRANSFORM QUERY] ==== 
==== [TRANSFORM QUERY] ==== 

================================================== 
🔄 Node: transform_query 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
What is the name of the generative AI developed by TeddyNote?==== [RETRIEVE] ==== 
==== [GRADE DOCUMENTS] ==== 

================================================== 
🔄 Node: grade_documents 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== [ASSESS GRADED DOCUMENTS] ==== 
==== [DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, TRANSFORM QUERY] ==== 
==== [TRANSFORM QUERY] ==== 

================================================== 
🔄 Node: transform_query 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
What is the generative AI created by TeddyNote called?==== [RETRIEVE] ==== 
==== [GRADE DOCUMENTS] ==== 

================================================== 
🔄 Node: grade_documents 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== GRADE: DOCUMENT NOT RELEVANT ==== 
==== [ASSESS GRADED DOCUMENTS] ==== 
==== [DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, TRANSFORM QUERY] ==== 
==== [TRANSFORM QUERY] ==== 

================================================== 
🔄 Node: transform_query 🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
What is the name of the generative AI developed by TeddyNote?==== [RETRIEVE] ==== 
GraphRecursionError: Recursion limit of 10 reached without hitting a stop condition. You can increase the limit by setting the `recursion_limit` config key. 
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/GRAPH_RECURSION_LIMIT 
```

A logic fix is needed so that the graph can exit gracefully instead of looping like this.
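One simple fix is to count the query rewrites in the state and give up gracefully after a cap, instead of relying on the recursion limit. The sketch below demonstrates the idea in plain Python; the function, its name, and the cap of 3 are this tutorial's own illustration, not part of the original graph.

```python
MAX_REWRITES = 3


def run_with_rewrite_cap(question, retrieve_relevant, rewrite):
    """Retry query rewriting at most MAX_REWRITES times, then exit
    gracefully instead of hitting the recursion limit."""
    rewrites = 0
    while True:
        docs = retrieve_relevant(question)
        if docs:
            return f"answered from {len(docs)} documents"
        if rewrites >= MAX_REWRITES:
            return "no relevant documents found"  # graceful exit
        question = rewrite(question)
        rewrites += 1


# Toy components: retrieval always fails, rewriting just annotates the query
result = run_with_rewrite_cap(
    "The name of the generative AI developed by Teddy Note is?",
    retrieve_relevant=lambda q: [],
    rewrite=lambda q: q + " (rewritten)",
)
print(result)  # → no relevant documents found
```

In the LangGraph version, the same idea could be applied by adding a rewrite counter to `GraphState` and having `decide_to_generate` route to `END` once the counter exceeds the cap.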
