14. Automating evaluation with online evaluators

Online Evaluators

Sometimes you want to automatically evaluate the runs recorded in your project.

# Installation
# !pip install -qU langsmith langchain-teddynote
# Configuration file for managing the API key as an environment variable
from dotenv import load_dotenv

# Load the API key information
load_dotenv()
 True 
# Set up LangSmith tracing. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging

# Enter the project name.
logging.langsmith("CH16-Auto-Evaluation-Test")
 Start tracking LangSmith. 
[Project name] 
CH16-Auto-Evaluation-Test 

Chain setup for online evaluation

Run a test chain and confirm that its results appear under Runs in the project.
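A minimal sketch of such a test chain is shown below. The retriever and model calls are stubbed with plain functions (hypothetical names, not part of LangSmith or langchain-teddynote); the point is that the run's output dictionary exposes both `context` and `answer` keys, which the online evaluator can later reference as `output.context` and `output.answer`.

```python
def retrieve_context(question: str) -> str:
    # Hypothetical retriever stub: return text relevant to the question.
    return "LangSmith online evaluators score incoming runs automatically."


def generate_answer(question: str, context: str) -> str:
    # Hypothetical LLM stub: answer the question using the context.
    return f"Answer to '{question}' based on: {context}"


def run_chain(question: str) -> dict:
    context = retrieve_context(question)
    answer = generate_answer(question, context)
    # Returning both keys makes them visible in the LangSmith run as
    # output.context and output.answer.
    return {"context": context, "answer": answer}


result = run_chain("What do online evaluators do?")
print(sorted(result.keys()))  # → ['answer', 'context']
```

In a real chain the stubs would be replaced by your retriever and LLM call; only the shape of the returned dictionary matters for the evaluator mapping.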

Online LLM-as-judge creation


Specify Secrets & API Keys (OpenAI API Key)

Provider, Model, Prompt Settings


For facts, specify output.context (adjust to match your chain's output)

For answer, specify output.answer (adjust to match your chain's output)

Use Preview to verify that the data is mapped to the correct fields
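As an illustration of what the judge evaluates, the prompt below uses `{context}` and `{answer}` as the template variables mapped to `output.context` and `output.answer`. The wording is a sketch, not LangSmith's built-in template.

```python
# Hedged sketch of an LLM-as-judge prompt; the variable names match the
# mapping configured in the online evaluator, the wording is illustrative.
JUDGE_PROMPT = """You are grading an answer against retrieved facts.

Facts:
{context}

Answer:
{answer}

Is the answer fully supported by the facts? Respond with 0 or 1."""

# Rendering the template the way the evaluator would before calling the judge model.
rendered = JUDGE_PROMPT.format(
    context="LangSmith runs online evaluators on incoming runs.",
    answer="Online evaluators score runs as they arrive.",
)
print("{context}" in rendered)  # → False (variables were filled in)
```

The Preview step in the UI performs the same check: it renders the prompt with a sample run so you can see that each variable received the intended field.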
