# 07. Adaptive RAG

## Adaptive RAG <a href="#adaptive-rag" id="adaptive-rag"></a>

This tutorial covers the implementation of Adaptive RAG (Adaptive Retrieval-Augmented Generation). Adaptive RAG is a strategy that combines query analysis with Self-Reflective RAG to retrieve and generate information from various data sources. This tutorial uses LangGraph to implement routing between web search and a self-correcting RAG.

The goal of this tutorial is to understand the concept of Adaptive RAG and learn how to implement it with LangGraph: questions about recent events are answered via web search, while questions about the indexed documents are handled by a self-correcting RAG.

**Mainly covered**

* **Create Index** : Index creation and document loading
* **LLMs** : Routing queries and document evaluation using LLM
* **Web Search Tool** : Web search tool settings
* **Construct the Graph** : Graph status and flow definition
* **Compile Graph** : Graph compilation and workflow building
* **Use Graph** : Graph execution and results verification

***

**Adaptive RAG** is a RAG strategy that combines (1) [query analysis](https://blog.langchain.dev/query-construction/) with (2) [Self-Reflective RAG](https://blog.langchain.dev/agentic-rag-with-langgraph/).

In the paper [Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity](https://arxiv.org/abs/2403.14403), query analysis routes each question to one of the following strategies:

* `No Retrieval`
* `Single-shot RAG`
* `Iterative RAG`

We implement this idea using LangGraph.
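The paper's three-way routing can be illustrated with a small dispatch function. This is only a conceptual sketch with a hypothetical heuristic classifier (the paper trains a small model to predict question complexity; none of these names come from the paper or this tutorial):

```python
# Conceptual sketch only: a toy stand-in for the paper's complexity classifier.
def classify_complexity(question: str) -> str:
    """Toy heuristic: 'A' = simple, 'B' = moderate, 'C' = complex (multi-hop)."""
    hops = question.count("?") + question.lower().count(" and ")
    if len(question.split()) < 3:
        return "A"
    return "C" if hops > 1 else "B"


def route(question: str) -> str:
    # Map the predicted complexity to one of the paper's three strategies.
    strategy = {
        "A": "No Retrieval",     # answer directly from the LLM
        "B": "Single-shot RAG",  # one retrieval pass, then generate
        "C": "Iterative RAG",    # retrieve and reason over multiple steps
    }
    return strategy[classify_complexity(question)]


print(route("What is Samsung Gauss?"))  # Single-shot RAG
```

In the implementation below, an LLM with structured output plays the role of this classifier.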

In this implementation, we perform the following routing:

* **Web search** : used for questions about recent events
* **Self-correcting RAG** : used for questions about the indexed documents

![](https://wikidocs.net/images/page/267814/langgraph-adaptive-rag.png)

### Setup <a href="#id-1" id="id-1"></a>

```python
# API Configuration file for managing API keys as environment variables
from dotenv import load_dotenv

# Load API key information
load_dotenv()
```

```
 True 
```

```python
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH17-LangGraph-Structures")
```

```
 Start tracking LangSmith. 
[Project name] 
CH17-LangGraph-Structures 
```

### Basic PDF-based Retrieval Chain creation <a href="#pdf-retrieval-chain" id="pdf-retrieval-chain"></a>

Here, we create a Retrieval Chain based on PDF documents. It is the simplest possible Retrieval Chain.

Note that for LangGraph we create the Retriever and the Chain separately. Only then can each node apply its own detailed processing.

**Reference**

* As this was covered in the previous tutorial, the detailed description is omitted.

```python
from rag.pdf import PDFRetrievalChain

# Load a PDF document.
pdf = PDFRetrievalChain(["data/SPRI_AI_Brief_2023년12월호_F.pdf"]).create_chain()

# retriever generation
pdf_retriever = pdf.retriever

# chain generation
pdf_chain = pdf.chain
```

### Query routing and document evaluation <a href="#id-2" id="id-2"></a>

In the **LLMs** phase, we perform **query routing** and **document evaluation**. As key parts of **Adaptive RAG**, these steps contribute to efficient information retrieval and generation.

* **Query routing** : Analyze the user's query and route it to the appropriate information source. This sets the optimal search path for the query's intent.
* **Document evaluation** : Evaluate the quality and relevance of the retrieved documents to improve the accuracy of the final answer. This step is essential for getting the most out of the **LLMs**.

Together, these steps support the core functions of **Adaptive RAG** and aim to provide accurate and reliable answers.

```python
from typing import Literal

from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_teddynote.models import get_model_name, LLMs

# Get the latest LLM model name
MODEL_NAME = get_model_name(LLMs.GPT4)


# A data model that routes user queries to the most relevant data source.
class RouteQuery(BaseModel):
    """Route a user query to the most relevant datasource."""

    # Literal type field for selecting data source
    datasource: Literal["vectorstore", "web_search"] = Field(
        ...,
        description="Given a user question choose to route it to web search or a vectorstore.",
    )


# Generating structured output via LLM initialization and function calls
llm = ChatOpenAI(model=MODEL_NAME, temperature=0)
structured_llm_router = llm.with_structured_output(RouteQuery)

# Create prompt templates that include system messages and user questions
system = """You are an expert at routing a user question to a vectorstore or web search.
The vectorstore contains documents related to DEC 2023 AI Brief Report(SPRI) with Samsung Gause, Anthropic, etc.
Use the vectorstore for questions on these topics. Otherwise, use web-search."""

# Creating a prompt template for routing
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "{question}"),
    ]
)

# Create a question router by combining prompt templates and a structured LLM router.
question_router = route_prompt | structured_llm_router
```

The following tests query routing and checks the results.

```python
# Questions that require document search
print(
    question_router.invoke(
        {"question": "The name of the generative AI created by Samsung Electronics in AI Brief is?"}
    )
)
```

```
 datasource='vectorstore'
```

```python
# Questions that require web search
print(question_router.invoke({"question": "Find the best dim sum restaurant in Pangyo"}))
```

```
 datasource='web_search'
```

#### Search Evaluator (Retrieval Grader) <a href="#retrieval-grader" id="retrieval-grader"></a>

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate


# Defining a data model for document evaluation
class GradeDocuments(BaseModel):
    """Binary score for relevance check on retrieved documents."""

    binary_score: str = Field(
        description="Documents are relevant to the question, 'yes' or 'no'"
    )


# Generating structured output via LLM initialization and function calls
llm = ChatOpenAI(model=MODEL_NAME, temperature=0)
structured_llm_grader = llm.with_structured_output(GradeDocuments)

# Create prompt templates that include system messages and user questions
system = """You are a grader assessing relevance of a retrieved document to a user question. \n 
    If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant. \n
    It does not need to be a stringent test. The goal is to filter out erroneous retrievals. \n
    Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question."""

grade_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "Retrieved document: \n\n {document} \n\n User question: {question}"),
    ]
)

# Create a document search results evaluator
retrieval_grader = grade_prompt | structured_llm_grader
```

Use the created `retrieval_grader` to evaluate the retrieved documents.

```python
# Set user questions
question = "The name of the generative AI created by Samsung Electronics is?"

# Find related documents for your question
docs = pdf_retriever.invoke(question)

# Get the contents of the searched document
retrieved_doc = docs[1].page_content

# Output evaluation results
print(retrieval_grader.invoke({"question": question, "document": retrieved_doc}))
```

```
 binary_score='yes'
```

#### Create RAG chain to generate answers <a href="#rag" id="rag"></a>

```python
from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Get prompts from LangChain Hub (RAG prompts can be freely modified)
prompt = hub.pull("teddynote/rag-prompt")

# LLM Initialization
llm = ChatOpenAI(model_name=MODEL_NAME, temperature=0)


# Document formatting functions
def format_docs(docs):
    return "\n\n".join(
        [
            f'<document><content>{doc.page_content}</content><source>{doc.metadata["source"]}</source><page>{doc.metadata["page"]+1}</page></document>'
            for doc in docs
        ]
    )


# Create a RAG chain
rag_chain = prompt | llm | StrOutputParser()
```
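`format_docs` wraps each document in lightweight XML-style tags and converts the 0-based page index to a 1-based page number. Here is a quick standalone check with a stub document class (the stub is a hypothetical stand-in for LangChain's `Document`):

```python
class StubDoc:
    """Hypothetical stand-in for a LangChain Document."""

    def __init__(self, content, source, page):
        self.page_content = content
        self.metadata = {"source": source, "page": page}


def format_docs(docs):
    # Same formatting as above: tag content, source, and 1-based page number.
    return "\n\n".join(
        f'<document><content>{d.page_content}</content>'
        f'<source>{d.metadata["source"]}</source>'
        f'<page>{d.metadata["page"] + 1}</page></document>'
        for d in docs
    )


print(format_docs([StubDoc("Samsung Gauss is ...", "data/brief.pdf", 12)]))
# page index 12 is rendered as page 13
```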

Now pass the question to the created `rag_chain` to generate an answer.

```python
# Passing a question to the RAG chain to generate an answer
generation = rag_chain.invoke({"context": format_docs(docs), "question": question})
print(generation)
```

```
 The name of the generative AI created by Samsung Electronics is 'Samsung Gauss'. 

**Source** 
- data/SPRI_AI_Brief_2023년12월호_F.pdf (page 13) 
```

#### Add Hallucination checker to answer <a href="#hallucination" id="hallucination"></a>

```python
# Defining a data model for hallucination checking
class GradeHallucinations(BaseModel):
    """Binary score for hallucination present in generation answer."""

    binary_score: str = Field(
        description="Answer is grounded in the facts, 'yes' or 'no'"
    )


# Initializing LLM via function call
llm = ChatOpenAI(model=MODEL_NAME, temperature=0)
structured_llm_grader = llm.with_structured_output(GradeHallucinations)

# set prompt
system = """You are a grader assessing whether an LLM generation is grounded in / supported by a set of retrieved facts. \n 
    Give a binary score 'yes' or 'no'. 'Yes' means that the answer is grounded in / supported by the set of facts."""

# Create a prompt template
hallucination_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "Set of facts: \n\n {documents} \n\n LLM generation: {generation}"),
    ]
)

# Creating a hallucination evaluator
hallucination_grader = hallucination_prompt | structured_llm_grader
```

Use the created `hallucination_grader` to check the generated answer for hallucinations.

```python
# Use the evaluator to check whether the generated answer contains hallucinations
hallucination_grader.invoke({"documents": docs, "generation": generation})
```

```
 GradeHallucinations(binary_score='no') 
```

```python
class GradeAnswer(BaseModel):
    """Binary scoring to evaluate the appropriateness of answers to questions"""

    binary_score: str = Field(
        description="Indicate 'yes' or 'no' whether the answer solves the question"
    )


# Initializing LLM via function call
llm = ChatOpenAI(model=MODEL_NAME, temperature=0)
structured_llm_grader = llm.with_structured_output(GradeAnswer)

# Set prompt
system = """You are a grader assessing whether an answer addresses / resolves a question \n 
     Give a binary score 'yes' or 'no'. 'Yes' means that the answer resolves the question."""
answer_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "User question: \n\n {question} \n\n LLM generation: {generation}"),
    ]
)

# Create an answer evaluator by combining prompt templates and structured LLM evaluators
answer_grader = answer_prompt | structured_llm_grader
```

```python
# Evaluate whether the generated answer solves the question using an evaluator.
answer_grader.invoke({"question": question, "generation": generation})
```

```
 GradeAnswer(binary_score='yes') 
```

#### Query Rewriter <a href="#query-rewriter" id="query-rewriter"></a>

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# LLM Initialization
llm = ChatOpenAI(model=MODEL_NAME, temperature=0)

# Define Query Rewriter prompt (you can modify it freely)
system = """You are a question re-writer that converts an input question to a better version that is optimized \n 
for vectorstore retrieval. Look at the input and try to reason about the underlying semantic intent / meaning."""

# Creating a Query Rewriter Prompt Template
re_write_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        (
            "human",
            "Here is the initial question: \n\n {question} \n Formulate an improved question.",
        ),
    ]
)

# Create a Query Rewriter
question_rewriter = re_write_prompt | llm | StrOutputParser()
```

Pass the question to the created `question_rewriter` to produce an improved question.

```python
# Create improved questions by submitting your questions to the question rewriter
question_rewriter.invoke({"question": question})
```

```
 'What is the name of the generative AI developed by Samsung Electronics?' 
```

#### Web search tools <a href="#id-3" id="id-3"></a>

The **web search tool** is an important component of **Adaptive RAG**, used to retrieve up-to-date information. It helps users get quick and accurate answers to questions about recent events.

* **Setup** : Configure the web search tool so the latest information is available.
* **Search** : Search the web for information relevant to the query.
* **Result analysis** : Analyze the search results to provide the best answer to the question.

```python
from langchain_teddynote.tools.tavily import TavilySearch

# Create a web search tool
web_search_tool = TavilySearch(max_results=3)
```

Run the web search tool to check the results.

```python
# Calling a web search tool
result = web_search_tool.search("Please tell me the URL of the Teddy Note Wikidocs Langchain tutorial")
print(result)
```

```
 [{'title':'langchain + PDF document summary, Map-Reduce (7) -Tedinot','url':'https://teddylee777.github.io/langchain/langchain-tutorial-07/','content':'🔥Notification🔥 ① Tedinot YouTube-Go to see! ② LangChain Korean Tutorial Shortcuts 👀 ③ Langchain Notes Free Electronic Books (wikidocs) Shortcuts 🙌 ④ RAG Unlawful Notes LangChain Lecture Opens Shortcuts 🙌 ⑤ Seoul PyTorch Deep-Running Lecture Shortcut < Langchain + PDF document summary, Map-Reduce (7)','score': 0.9747731,'raw_content':'🔥 notification 🔥\n① Tedinot YouTube -\n Go to go!\n② LangChain Korean Tutorial\n Shortcut 👀\n③ Langchain Notes Free Electronic Books (wikidocs)\n Shortcut 🙌\n Langchain + PDF Document, Map-Reduce... '}]
```

```python
# Check the first result in web search results
result[0]
```

```
 {'title':'Langchain + PDF document summary, Map-Reduce (7) -Tedinot','url':'https://teddylee777.github.io/langchain/langchain-tutorial-07/','content':'🔥Notification🔥 ① Tedinot YouTube-Go to see! ② LangChain Korean Tutorial Shortcuts 👀 ③ Langchain Notes Free Electronic Books (wikidocs) Shortcuts 🙌 ④ RAG Unlawful Notes LangChain Lecture Opens Shortcuts 🙌 ⑤ Seoul PyTorch Deep-Running Lecture Shortcut < Langchain + PDF document summary, Map-Reduce (7)','score': 0.9747731,'raw_content':'🔥 notification 🔥\n① Tedinot YouTube -\n Go to go!\n② LangChain Korean Tutorial\n Shortcut 👀\n③ Langchain Notes Free Electronic Books (wikidocs)\n Shortcut 🙌\n Langchain + PDF Document, Map-Reduce (7)... '} 
```

### Graph configuration <a href="#id-4" id="id-4"></a>

#### Graph status definition <a href="#id-5" id="id-5"></a>

```python
from typing import List
from typing_extensions import TypedDict, Annotated


# Defining the state of a graph
class GraphState(TypedDict):
    """
    A data model representing the state of a graph

    Attributes:
        question: question
        generation: LLM Generated Answers
        documents: Document List
    """

    question: Annotated[str, "User question"]
    generation: Annotated[str, "LLM generated answer"]
    documents: Annotated[List[str], "List of documents"]
```
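Each node in the graph returns only the keys it updates, and LangGraph merges that partial dict into the current state. The merge semantics can be sketched in pure Python (illustrative only, not LangGraph internals; `apply_update` is a hypothetical helper):

```python
from typing import List, TypedDict


class GraphState(TypedDict, total=False):
    question: str
    generation: str
    documents: List[str]


def apply_update(state: GraphState, update: GraphState) -> GraphState:
    # Keys returned by a node overwrite old values; other keys carry over.
    return {**state, **update}


state: GraphState = {"question": "What is Samsung Gauss?"}
# A retrieve-like node returns only the keys it changed:
state = apply_update(state, {"documents": ["doc about Samsung Gauss"]})
state = apply_update(state, {"generation": "Samsung Gauss is ..."})
print(state["question"])  # the unchanged key survives both merges
```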

### Graph flow definition <a href="#id-6" id="id-6"></a>

Defining the **graph flow** clarifies how **Adaptive RAG** operates. At this stage, we set up the graph's states and transitions to make query processing efficient.

* **State definition** : Clearly define each state of the graph to track the progress of a query.
* **Transition setup** : Set the transitions between states so that a query proceeds along the appropriate path.
* **Flow optimization** : Optimize the flow of the graph to improve the accuracy of retrieval and generation.

#### Node definition <a href="#id-7" id="id-7"></a>

```python
from langchain_core.documents import Document


# document search node
def retrieve(state):
    print("==== [RETRIEVE] ====")
    question = state["question"]

    # Perform a document search
    documents = pdf_retriever.invoke(question)
    return {"documents": documents, "question": question}


# Generate Answer Node
def generate(state):
    print("==== [GENERATE] ====")
    # Get questions and document search results
    question = state["question"]
    documents = state["documents"]

    # Generate RAG answers
    generation = rag_chain.invoke({"context": documents, "question": question})
    return {"documents": documents, "question": question, "generation": generation}


# Document Relevance Evaluation Node
def grade_documents(state):
    print("==== [CHECK DOCUMENT RELEVANCE TO QUESTION] ====")
    # Get questions and document search results
    question = state["question"]
    documents = state["documents"]

    # Calculate relevance score for each document
    filtered_docs = []
    for d in documents:
        score = retrieval_grader.invoke(
            {"question": question, "document": d.page_content}
        )
        grade = score.binary_score
        if grade == "yes":
            print("---GRADE: DOCUMENT RELEVANT---")
            # Add relevant documents
            filtered_docs.append(d)
        else:
            # Skip irrelevant documents
            print("---GRADE: DOCUMENT NOT RELEVANT---")
            continue
    return {"documents": filtered_docs, "question": question}


# question rewrite node
def transform_query(state):
    print("==== [TRANSFORM QUERY] ====")
    # get the question and current documents
    question = state["question"]
    documents = state["documents"]

    # rewrite the question
    better_question = question_rewriter.invoke({"question": question})
    return {"documents": documents, "question": better_question}


# web search node
def web_search(state):
    print("==== [WEB SEARCH] ====")
    # get the question
    question = state["question"]

    # perform a web search
    web_results = web_search_tool.invoke({"query": question})
    web_results_docs = [
        Document(
            page_content=web_result["content"],
            metadata={"source": web_result["url"]},
        )
        for web_result in web_results
    ]

    return {"documents": web_results_docs, "question": question}
```

#### Edge definition <a href="#id-8" id="id-8"></a>

```python
# question routing node
def route_question(state):
    print("==== [ROUTE QUESTION] ====")
    # bring question
    question = state["question"]
    # question routing
    source = question_router.invoke({"question": question})
    # node routing based on question routing results
    if source.datasource == "web_search":
        print("==== [ROUTE QUESTION TO WEB SEARCH] ====")
        return "web_search"
    elif source.datasource == "vectorstore":
        print("==== [ROUTE QUESTION TO VECTORSTORE] ====")
        return "vectorstore"


# document relevance evaluation node
def decide_to_generate(state):
    print("==== [DECISION TO GENERATE] ====")
    # get question and document search results
    question = state["question"]
    filtered_documents = state["documents"]

    if not filtered_documents:
        # rewrite the question if all documents are irrelevant
        print(
            "==== [DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, TRANSFORM QUERY] ===="
        )
        return "transform_query"
    else:
        # Generate an answer if there is a relevant document
        print("==== [DECISION: GENERATE] ====")
        return "generate"


def hallucination_check(state):
    print("==== [CHECK HALLUCINATIONS] ====")
    # Get questions and document search results
    question = state["question"]
    documents = state["documents"]
    generation = state["generation"]

    # hallucination assessment
    score = hallucination_grader.invoke(
        {"documents": documents, "generation": generation}
    )
    grade = score.binary_score

    # Branch on the hallucination assessment result
    if grade == "yes":
        print("==== [DECISION: GENERATION IS GROUNDED IN DOCUMENTS] ====")

        # Assessing the relevance of answers
        print("==== [GRADE GENERATED ANSWER vs QUESTION] ====")
        score = answer_grader.invoke({"question": question, "generation": generation})
        grade = score.binary_score

        # Processing according to relevance assessment results
        if grade == "yes":
            print("==== [DECISION: GENERATED ANSWER ADDRESSES QUESTION] ====")
            return "relevant"
        else:
            print("==== [DECISION: GENERATED ANSWER DOES NOT ADDRESS QUESTION] ====")
            return "not relevant"
    else:
        print("==== [DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY] ====")
        return "hallucination"
```
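Conceptually, these routing functions act as the selectors for `add_conditional_edges`: the string a function returns is looked up in a mapping from labels to node names. A minimal pure-Python sketch of that dispatch (hypothetical names; LangGraph performs this lookup internally):

```python
def dispatch(state, router, path_map):
    # The router inspects the state and returns a label;
    # the label selects the next node from the mapping.
    return path_map[router(state)]


def toy_router(state):
    # Toy stand-in for route_question.
    return "web_search" if "latest" in state["question"] else "vectorstore"


path_map = {"web_search": "web_search", "vectorstore": "retrieve"}

print(dispatch({"question": "latest news on AI"}, toy_router, path_map))
print(dispatch({"question": "Samsung Gauss in the report"}, toy_router, path_map))
```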

#### Graph compilation <a href="#id-9" id="id-9"></a>

In the **graph compilation** phase, we build the **Adaptive RAG** workflow and make it executable. This process defines the overall flow of query processing by connecting the graph's nodes and edges.

* **Node definition** : Define each node to clarify the graph's states and transitions.
* **Edge setup** : Set the edges between nodes so that a query proceeds along the appropriate path.
* **Workflow build** : Build the entire graph flow to maximize the efficiency of retrieval and generation.

```python
from langgraph.graph import END, StateGraph, START
from langgraph.checkpoint.memory import MemorySaver

# Initialize graph state
workflow = StateGraph(GraphState)

# node definition
workflow.add_node("web_search", web_search)  # web search
workflow.add_node("retrieve", retrieve)  # document search
workflow.add_node("grade_documents", grade_documents)  # document evaluation
workflow.add_node("generate", generate)  # generate answer
workflow.add_node("transform_query", transform_query)  # query transformation

# graph build
workflow.add_conditional_edges(
    START,
    route_question,
    {
        "web_search": "web_search",  # route to web search
        "vectorstore": "retrieve",  # route to vector store
    },
)
workflow.add_edge("web_search", "generate")  # generate answer after web search
workflow.add_edge("retrieve", "grade_documents")  # evaluate documents after retrieval
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {
        "transform_query": "transform_query",  # rewrite the query when no document is relevant
        "generate": "generate",  # generate the answer
    },
)
workflow.add_edge("transform_query", "retrieve")  # retrieve documents after query transformation
workflow.add_conditional_edges(
    "generate",
    hallucination_check,
    {
        "hallucination": "generate",  # regenerate when the answer is not grounded
        "relevant": END,  # finish when the answer passes the relevance check
        "not relevant": "transform_query",  # transform the query when the answer fails the relevance check
    },
)

# compile graph
app = workflow.compile(checkpointer=MemorySaver())
```

```python
from langchain_teddynote.graphs import visualize_graph

visualize_graph(app)
```

### Using graph <a href="#id-10" id="id-10"></a>

In the **graph usage** phase, we run **Adaptive RAG** and check the query-processing results. Each query flows along the graph's nodes and edges to produce the final answer.

* **Graph execution** : Run the defined graph and follow the flow of the query.
* **Result check** : Review the output produced after running the graph to make sure the query was handled properly.
* **Result analysis** : Analyze the generated answer to evaluate whether it satisfies the purpose of the query.

```python
from langchain_teddynote.messages import stream_graph
from langchain_core.runnables import RunnableConfig
import uuid

# config settings (max recursion count, thread_id)
config = RunnableConfig(recursion_limit=20, configurable={"thread_id": uuid.uuid4()})

# enter your question
inputs = {
    "question": "The name of the generative AI developed by Samsung Electronics is?",
}

# running the graph
stream_graph(app, inputs, config, ["agent", "rewrite", "generate"])
```

```
 ==== [ROUTE QUESTION] ==== 
==== [ROUTE QUESTION TO VECTORSTORE] ==== 
==== [RETRIEVE] ==== 
==== [CHECK DOCUMENT RELEVANCE TO QUESTION] ==== 
---GRADE: DOCUMENT RELEVANT--- 
---GRADE: DOCUMENT RELEVANT--- 
---GRADE: DOCUMENT RELEVANT--- 
---GRADE: DOCUMENT RELEVANT--- 
---GRADE: DOCUMENT NOT RELEVANT--- 
---GRADE: DOCUMENT NOT RELEVANT--- 
---GRADE: DOCUMENT NOT RELEVANT--- 
---GRADE: DOCUMENT RELEVANT--- 
---GRADE: DOCUMENT NOT RELEVANT--- 
---GRADE: DOCUMENT NOT RELEVANT--- 
==== [DECISION TO GENERATE] ==== 
==== [DECISION: GENERATE] ==== 
==== [GENERATE] ==== 

================================================== 
🔄 Node: generate🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
The name of the generative AI developed by Samsung Electronics is 'Samsung Gauss'. 

**Source** 
- data/SPRI_AI_Brief_2023년12월호_F.pdf (page 12)==== [CHECK HALLUCINATIONS] ==== 
==== [DECISION: GENERATION IS GROUNDED IN DOCUMENTS] ==== 
==== [GRADE GENERATED ANSWER vs QUESTION] ==== 
==== [DECISION: GENERATED ANSWER ADDRESSES QUESTION] ==== 
```

```python
# enter your question
inputs = {
    "question": "Who will win the 2024 Nobel Prize in Literature?",
}

# running the graph
stream_graph(app, inputs, config, ["agent", "rewrite", "generate"])
```

```
 ==== [ROUTE QUESTION] ==== 
==== [ROUTE QUESTION TO WEB SEARCH] ==== 
==== [WEB SEARCH] ==== 
==== [GENERATE] ==== 

================================================== 
🔄 Node: generate🔄 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -  
The 2024 Nobel Prize in Literature was awarded to the Korean writer Han Kang. 

**Source** 
-http://www.newsnjeju.com/news/articleView.html?idxno=176273 
-https://www.ytn.co.kr/_ln/0104_202410102023354769 
-https://imnews.imbc.com/replay/2024/nwtoday/article/6645026==== [CHECK HALLUCINATIONS] ==== 
_36523.html==== [DECISION: GENERATION IS GROUNDED IN DOCUMENTS] ==== 
==== [GRADE GENERATED ANSWER vs QUESTION] ==== 
==== [DECISION: GENERATED ANSWER ADDRESSES QUESTION] ==== 
```

<br>
