06. Embedding-based evaluation (embedding_distance)

Embedding-based Evaluator (embedding_distance)

Creates an evaluator that measures the embedding distance between the generated answer and the reference answer.

# installation
# !pip install -U langsmith langchain-teddynote
# Configuration file for managing API KEY as environment variable
from dotenv import load_dotenv

# Load API KEY information
load_dotenv()
 True 
# LangSmith set up tracking. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH16-Evaluations")
 Start tracking LangSmith. 
[Project name] 
CH16-Evaluations 

Define functions for RAG performance testing

We will create a RAG system to use for testing.

Create a function named `ask_question`. It receives a dictionary as `inputs` and returns a dictionary as `answer`.
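A rough sketch of that interface is shown below. The retrieval and generation internals are stubbed out with a placeholder; in practice they would be backed by a real RAG chain (e.g. a LangChain retriever plus an LLM).

```python
def ask_question(inputs: dict) -> dict:
    """Receive a dictionary with a 'question' key and return a
    dictionary with an 'answer' key."""
    question = inputs["question"]
    # Placeholder for a real RAG pipeline call, e.g. chain.invoke(question)
    answer = f"(generated answer for: {question})"
    return {"answer": answer}

print(ask_question({"question": "What is LangSmith?"}))
```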

Embedding distance-based Evaluator

If multiple embedding models are used for one metric, the results are averaged.

(Example)
- cosine : BGE-m3
- euclidean : OpenAI, Upstage

For `euclidean`, the scores from each model (OpenAI and Upstage) are averaged.
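The setup above can be sketched with LangSmith's `LangChainStringEvaluator`. This is a hedged configuration sketch, not the tutorial's exact code: it assumes the `langsmith`, `langchain`, `langchain-openai`, and `langchain-upstage` packages are installed and API keys are configured, and the model names and dataset name are illustrative.

```python
from langchain.evaluation import EmbeddingDistance
from langchain_openai import OpenAIEmbeddings
from langchain_upstage import UpstageEmbeddings
from langsmith.evaluation import LangChainStringEvaluator, evaluate

# Two evaluators share the euclidean metric but use different embedding
# models; LangSmith averages the results reported under the same metric.
openai_euclidean = LangChainStringEvaluator(
    "embedding_distance",
    config={
        "embeddings": OpenAIEmbeddings(model="text-embedding-3-small"),
        "distance_metric": EmbeddingDistance.EUCLIDEAN,
    },
)
upstage_euclidean = LangChainStringEvaluator(
    "embedding_distance",
    config={
        "embeddings": UpstageEmbeddings(model="solar-embedding-1-large-query"),
        "distance_metric": EmbeddingDistance.EUCLIDEAN,
    },
)

# Run both evaluators over a dataset (name is hypothetical), calling the
# ask_question function defined earlier for each example.
experiment_results = evaluate(
    ask_question,
    data="RAG_EVAL_DATASET",
    evaluators=[openai_euclidean, upstage_euclidean],
)
```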
