# 05. LangChain Expression Language (LCEL)

### Basic example: Prompt + Model + Output parser <a href="#id-1" id="id-1"></a>

The most basic and common use case is linking a prompt template and a model together. To see how this works, let's create a chain that asks for the capital of a given country.

```python
# Configuration file for managing the API KEY as an environment variable
from dotenv import load_dotenv

# Load the API KEY information
load_dotenv()
```

```
 True 
```

```python
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install -qU langchain-teddynote
from langchain_teddynote import logging

# Enter the project name.
logging.langsmith("CH01-Basic")
```

```
Start tracking LangSmith.
[Project name]
CH01-Basic
```

### Use of prompt templates <a href="#id-2" id="id-2"></a>

`PromptTemplate`

* A template used to create a complete prompt string from the user's input variables
* Usage
  * `template`: the template string. Braces `{}` within this string indicate variables.
  * `input_variables`: a list defining the names of the variables inside the braces.

`input_variables`

* `input_variables` is a list that defines the names of the variables used in the `PromptTemplate`.
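To make the two attributes concrete, here is a toy sketch of what a prompt template does under the hood (an illustration only, not LangChain's actual implementation): it collects the `{}` placeholder names from the template string, which is what `input_variables` holds, and substitutes values into them.

```python
from string import Formatter

def extract_input_variables(template: str) -> list:
    """Collect the brace-placeholder names, mimicking input_variables."""
    return [field for _, field, _, _ in Formatter().parse(template) if field]

template = "What is the capital of {country}?"
print(extract_input_variables(template))       # ['country']
print(template.format(country="South Korea"))  # What is the capital of South Korea?
```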

```python
from langchain_teddynote.messages import stream_response  # streaming output
from langchain_core.prompts import PromptTemplate
```

Create a `PromptTemplate` object using the `from_template()` method.

```python
# Define the template
template = "What is the capital of {country}?"

# Create a PromptTemplate object using the from_template method
prompt_template = PromptTemplate.from_template(template)
prompt_template
```

```
PromptTemplate(input_variables=['country'], template='What is the capital of {country}?')
```

```python
# Generate the prompt
prompt = prompt_template.format(country="South Korea")
prompt
```

```
'What is the capital of South Korea?'
```

```python
# Generate the prompt
prompt = prompt_template.format(country="the United States")
prompt
```

```
'What is the capital of the United States?'
```

```python
from langchain_openai import ChatOpenAI

# Initialize the model (a low temperature gives more deterministic answers)
model = ChatOpenAI(
    model="gpt-3.5-turbo",
    max_tokens=2048,
    temperature=0.1,
)
```

<br>

### Chain creation <a href="#chain" id="chain"></a>

#### LCEL (LangChain Expression Language) <a href="#lcellangchain-expression-language" id="lcellangchain-expression-language"></a>

![](https://wikidocs.net/images/page/233344/lcel.png)

Here we use LCEL to combine the various components into a single chain.

```python
chain = prompt | model | output_parser
```

The `|` symbol is similar to the [Unix pipe operator](https://en.wikipedia.org/wiki/Pipeline_\(Unix\)): it connects different components, passing the output of one component as the input to the next.

In this chain, the user input is passed to the prompt template, and the prompt template's output is passed to the model. Looking at each component individually helps you understand what is going on.
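The idea behind the `|` operator can be captured in a few lines of plain Python. Below is a minimal toy sketch (not LangChain's actual `Runnable` implementation) in which each component wraps a function and `|` chains them so one component's output feeds the next:

```python
class Runnable:
    """Wrap a function so components can be chained with `|` (toy sketch)."""
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # (self | other): run self first, feed its output into other
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Stand-in "prompt" and "model" components (hypothetical, for illustration only)
prompt = Runnable(lambda d: f"Please explain {d['topic']} in simple terms.")
model = Runnable(lambda p: f"[model answer to: {p}]")

chain = prompt | model
print(chain.invoke({"topic": "gravity"}))
# [model answer to: Please explain gravity in simple terms.]
```

The real LCEL runnables add streaming, batching, and async on top of this, but the composition rule is the same: `invoke` on the chain runs each component in order.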

```python
# Create the prompt as a PromptTemplate object.
prompt = PromptTemplate.from_template("Please explain {topic} in simple terms.")

model = ChatOpenAI()

chain = prompt | model
```

#### invoke() call <a href="#invoke" id="invoke"></a>

* Pass the input as a Python dictionary (key: value pairs).
* Calling the `invoke()` method passes the input value through the chain.

```python
# Set the topic in the input dictionary to 'the learning principles of AI models'.
input = {"topic": "the learning principles of AI models"}
```

```python
# Connect the prompt and model objects with the pipe (|) operator, then pass the input using the invoke method.
# This returns the message generated by the AI model.
chain.invoke(input)
```

```
AIMessage(content='An artificial intelligence model learns patterns from data. The model accepts input data and adjusts its internal weights to output the desired result. During training, the model uses the input and answer data to calculate the error, and updates the weights in the direction that minimizes this error. This repeated learning lets the model learn patterns from the input data and predict accurate results.', response_metadata={'token_usage': {'completion_tokens': 214, 'prompt_tokens': 33, 'total_tokens': 247}, ...})
```

Below is an example of streaming the output.

```python
# Request streaming output
answer = chain.stream(input)
# Stream the output
stream_response(answer)
```

```
The learning principle of an artificial intelligence model is the process of accepting data as input, learning patterns from it, and making predictions or classifications based on those patterns.

The learning process typically uses an artificial neural network consisting of an input layer, a hidden layer, and an output layer: data enters through the input layer, passes through the hidden layer, and the result comes out of the output layer.

The model learns by adjusting its weights to minimize the error on the given data. To do this, it makes predictions for the given data, calculates the error against the actual values, and then updates the weights to reduce this error.

This repeated process allows the model to learn the patterns in the data and make accurate predictions on new data. A model trained in this way can make generalized predictions about new data.
```

#### Output Parser <a href="#output-parser" id="output-parser"></a>

```python
from langchain_core.output_parsers import StrOutputParser

output_parser = StrOutputParser()
```

Add an output parser to the chain.
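Without the parser, the chain returns a message object; `StrOutputParser` extracts its plain string content so downstream code doesn't have to unwrap it. A minimal sketch of that idea, using toy stand-ins rather than LangChain's actual classes:

```python
class FakeMessage:
    """Toy stand-in for the AIMessage object a chat model returns."""
    def __init__(self, content):
        self.content = content

def fake_model(prompt):
    # A chat model returns a message *object*, not a plain string
    return FakeMessage(f"[answer to: {prompt}]")

def str_output_parser(message):
    # StrOutputParser's essential job: message object -> plain string
    return message.content

message = fake_model("What is LCEL?")
print(type(message).__name__)      # FakeMessage
print(str_output_parser(message))  # [answer to: What is LCEL?]
```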

```python
# Connect the prompt, model, and output parser into a processing chain.
chain = prompt | model | output_parser
```

```python
# Pass the input using the chain object's invoke method.
input = {"topic": "the learning principles of AI models"}
chain.invoke(input)
```

```
'An artificial intelligence model learns patterns by receiving data as input. The model is trained to take input data and adjust its internal weights to output the desired results. In doing so, the model learns the relationship between input data and output data so it can predict outputs for new inputs. This process is repeated, and the model gradually improves its accuracy through training. In this way, an AI model can improve its ability to judge and predict based on the given data.'
```

```python
# Request streaming output
answer = chain.stream(input)
# Stream the output
stream_response(answer)
```

```
The learning principle of an artificial intelligence model is the process of learning patterns using data. First, the model accepts and processes the input data, comparing it with the answer data to calculate the error. To minimize this error, the model gradually learns accurate patterns while adjusting its weights and biases. Repeating this process so that the model can make accurate predictions about the data is the core principle of an artificial intelligence model.
```

#### Apply by changing the template <a href="#id-3" id="id-3"></a>

* You can **change** the prompt below and test it.
* You can also change the `model_name` to test different models.

```python
template = """
You are an English teacher with 10 years of experience. Please write an English conversation for the situation below, following [FORMAT].

Situation:
{question}

FORMAT:
- English conversation:
- Korean translation:
"""

# Create the prompt using the prompt template.
prompt = PromptTemplate.from_template(template)

# Initialize the ChatOpenAI chat model.
model = ChatOpenAI(model_name="gpt-4-turbo")

# Initialize the string output parser.
output_parser = StrOutputParser()
```

```python
# Build the chain.
chain = prompt | model | output_parser
```

```python
# Run the completed chain to get an answer.
# Request streaming output
answer = chain.stream({"question": "I want to go to a restaurant and order food"})
# Stream the output
stream_response(answer)
```

```
- English conversation:
- Hello, could I see the menu, please?
- I'd like to order the grilled salmon and a side of mashed potatoes.
- Could I have a glass of water as well?
- Thank you!

- Korean translation:
- Hello, can I see the menu?
- I'd like to order the grilled salmon and mashed potatoes.
- Could I have a glass of water?
- Thank you!
```

```python
# This time, set the question to 'ordering pizza in the United States' and run it.
# Request streaming output
answer = chain.stream({"question": "ordering pizza in the United States"})
# Stream the output
stream_response(answer)
```

```
- English conversation:
- Employee: "Hello, Tony's Pizza. How can I help you?"
- Customer: "Hi, I'd like to place an order for delivery, please."
- Employee: "Sure thing! What would you like to order?"
- Customer: "I'll have a large pepperoni pizza with extra cheese and a side of garlic bread."
- Employee: "Anything to drink?"
- Customer: "Yes, a 2-liter bottle of Coke, please."
- Employee: "Alright, your total comes to $22.50. Can I have your delivery address?"
- Customer: "It's 742 Evergreen Terrace."
- Employee: "Thank you. Your order will be there in about 30-45 minutes. Is there anything else I can help you with?"
- Customer: "No, that's everything. Thank you!"
- Employee: "Thank you for choosing Tony's Pizza. Have a great day!"

- Korean translation:
- Employee: "Hello, this is Tony's Pizza. How can I help you?"
- Customer: "Hello, I'd like to order delivery."
- Employee: "Yes, what would you like to order?"
- Customer: "A large pepperoni pizza with extra cheese, and a garlic bread, please."
- Employee: "Would you like a drink?"
- Customer: "Yes, a 2-liter bottle of Coke, please."
- Employee: "Okay, the total is $22.50. Could you please provide the delivery address?"
- Customer: "742 Evergreen Terrace."
- Employee: "Thank you. Your order will arrive in approximately 30-45 minutes. Do you need anything else?"
- Customer: "No, that's everything. Thank you!"
- Employee: "Thank you for choosing Tony's Pizza. Have a nice day!"
```

<br>
