# 03. StructuredOutputParser

## StructuredOutputParser <a href="#structuredoutputparser" id="structuredoutputparser"></a>

This output parser returns the LLM's answer as a `dict`. It is useful when you want the model to return multiple fields, each held as a key/value pair in a defined format.

Pydantic/JSON parsers are more powerful, but this parser is useful for less capable models (e.g., local models with fewer parameters and lower capability than GPT or Claude models).

**Reference**

With local models, the `Pydantic` parser often fails to work, so `StructuredOutputParser` can be used as an alternative.

```
from dotenv import load_dotenv

load_dotenv()
```

```
True
```

```
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH03-OutputParser")
```

```
LangSmith tracking started.
[Project name] 
CH03-OutputParser
```

```
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
```

* The `ResponseSchema` class is used to define a response schema containing the answer to the user's question and a description of the source (website) used.
* `StructuredOutputParser` is initialized with `response_schemas`, structuring the output according to the defined response schemas.

```
# Define the response schemas
response_schemas = [
    ResponseSchema(name="answer", description="Answer to the user's question"),
    ResponseSchema(
        name="source",
        description="The `source` used to answer the user's question; it must be a website address",
    ),
]
# Initialize a structured output parser based on the response schema.
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
```
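Under the hood, each `ResponseSchema` becomes one field line in the format instructions the parser generates. The following is a rough, hand-rolled approximation in plain Python of how those instructions are shaped from (name, description) pairs; the library's exact wording may differ.

```python
# Approximate the structure of StructuredOutputParser's format instructions.
# Field names and descriptions are taken from the example above; the exact
# wording the library emits is an assumption here, not a verbatim copy.
schemas = [
    ("answer", "Answer to the user's question"),
    ("source", "Website address used as the source of the answer"),
]

lines = [
    "The output should be a markdown code snippet formatted in the "
    'following schema, including the leading and trailing "```json" and "```":',
    "",
    "```json",
    "{",
]
for i, (name, desc) in enumerate(schemas):
    comma = "," if i < len(schemas) - 1 else ""
    lines.append(f'\t"{name}": string  // {desc}{comma}')
lines += ["}", "```"]

instructions = "\n".join(lines)
print(instructions)
```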

Next, retrieve a string containing instructions on how the response should be formatted (the schemas), and insert it into the prompt.

```
# Retrieve the output format instructions.
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
    # Set up the template to answer the user's question as well as possible.
    template="answer the users question as best as possible.\n{format_instructions}\n{question}",
    # Use 'question' as the input variable.
    input_variables=["question"],
    # Use 'format_instructions' as a partial variable.
    partial_variables={"format_instructions": format_instructions},
)
```
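Conceptually, `partial_variables` pre-fills part of the template once, so only `question` remains to be supplied at invoke time. A toy sketch of that idea using plain `str.format` (a stand-in for illustration, not the LangChain API):

```python
# The same template string used in the PromptTemplate above.
template = "answer the users question as best as possible.\n{format_instructions}\n{question}"

# Pre-fill the partial variable once (toy stand-in for partial_variables;
# this instruction text is a made-up placeholder).
format_instructions = "Return a ```json block with keys 'answer' and 'source'."
partially_filled = template.replace("{format_instructions}", format_instructions)

# Only 'question' is supplied per call.
prompt_text = partially_filled.format(question="What is the capital of South Korea?")
print(prompt_text)
```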

```
model = ChatOpenAI(temperature=0)  # Initialize the ChatOpenAI model
chain = prompt | model | output_parser  # Connect the prompt, model, and output parser
```

```
# Ask what is the capital of South Korea.
chain.invoke({"question": "What is the capital of South Korea?"})
```

```
{'answer': 'Seoul', 'source': 'https://ko.wikipedia.org/wiki/%EC%84%9C%EC%9A%B8'}
```
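At the end of the chain, the parser's job is essentially to pull the JSON payload out of the model's fenced reply and turn it into a `dict`. A simplified sketch of that step in plain Python (the library also handles edge cases this version ignores):

```python
import json
import re


def parse_structured_reply(text: str) -> dict:
    # Extract the contents of a ```json ... ``` fenced block, if present;
    # otherwise try to parse the whole text as JSON.
    match = re.search(r"```json\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)


# An example reply shaped like the model output above.
reply = """```json
{
    "answer": "Seoul",
    "source": "https://ko.wikipedia.org/wiki/%EC%84%9C%EC%9A%B8"
}
```"""
print(parse_structured_reply(reply))
# → {'answer': 'Seoul', 'source': 'https://ko.wikipedia.org/wiki/%EC%84%9C%EC%9A%B8'}
```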

Using the `chain.stream` method, we receive a streamed response to the question "What are King Sejong's achievements?"

```
for s in chain.stream({"question": "What are King Sejong's achievements?"}):
    # Streaming Output
    print(s)
```

```
{'answer': "King Sejong the Great's achievements include creating Hangeul and developing culture.", 'source': 'https://ko.wikipedia.org/wiki/%EC%84%B8%EC%A2%85%EB%8C%80%EC%99%95'}
```
