# 04. JSON output parser (JsonOutputParser)

## JsonOutputParser <a href="#jsonoutputparser" id="jsonoutputparser"></a>

This output parser lets the user specify a desired JSON schema and have the LLM shape its output to match that schema.

For the LLM to process data accurately and efficiently and generate JSON in the desired form, the model needs sufficient capacity (capacity here roughly means intelligence; for example, llama-70B is more capable than llama-8B).

```
from dotenv import load_dotenv

load_dotenv()
```

```
True
```

```
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH03-OutputParser")
```

```
LangSmith tracking started.
[Project name]
CH03-OutputParser
```

```
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import JsonOutputParser
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
```

```
# Initialize the OpenAI model.
model = ChatOpenAI(temperature=0, model_name="gpt-4o")
```

Define the desired output structure.

```
# Define the desired data structure.
class Topic(BaseModel):
    description: str = Field(description="A brief description of the topic")
    hashtags: str = Field(description="Keywords in hashtag format (2 or more)")
```
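The parser's `get_format_instructions()` works by embedding the JSON schema derived from the Pydantic model into a text instruction for the LLM. As a rough illustration (the exact wording LangChain generates differs), the idea boils down to something like this standard-library sketch:

```python
import json

# Hypothetical, simplified schema mirroring the Topic model above.
topic_schema = {
    "properties": {
        "description": {
            "type": "string",
            "description": "A brief description of the topic",
        },
        "hashtags": {
            "type": "string",
            "description": "Keywords in hashtag format (2 or more)",
        },
    },
    "required": ["description", "hashtags"],
}

# Build a plain-text instruction embedding the schema.
instructions = (
    "Return a JSON object that conforms to the schema below.\n"
    + json.dumps(topic_schema, indent=2)
)
print(instructions)
```

You never write this string by hand; the real one is generated from the Pydantic model's JSON schema. The sketch only shows what gets injected into the prompt.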

Set up the parser using `JsonOutputParser`, and inject the format instructions into the prompt template.

```
# Write a query
question = "Tell us about the seriousness of global warming."

# Set up the parser and inject instructions into the prompt template.
parser = JsonOutputParser(pydantic_object=Topic)


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a friendly AI assistant. Answer questions concisely."),
        ("user", "#Format: {format_instructions}\n\n#Question: {question}"),
    ]
)

prompt = prompt.partial(format_instructions=parser.get_format_instructions())

chain = prompt | model | parser  # Compose the chain.

chain.invoke({"question": question})  # Execute a query by calling a chain
```

```
{'description': 'Global warming is a phenomenon in which the average temperature of the Earth continues to rise, mainly due to greenhouse gas emissions caused by human activity. This causes polar glaciers to melt, sea levels to rise, and natural disasters from climate change to occur more frequently.', 'hashtags': '#globalwarming #climatechange #greenhousegas'}
```
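Note that `JsonOutputParser` returns a plain Python `dict` (not a `Topic` instance), so fields are accessed by key. A small sketch using a hypothetical result shaped like the output above:

```python
# Hypothetical result shaped like the chain output above.
result = {
    "description": "Global warming is the continued rise in the Earth's average temperature.",
    "hashtags": "#globalwarming #climatechange #greenhousegas",
}

# Fields are ordinary dict entries.
print(result["description"])

# The hashtags field is a single string; split it to get individual tags.
tags = result["hashtags"].split()
print(tags)  # → ['#globalwarming', '#climatechange', '#greenhousegas']
```

If you need validated `Topic` instances instead of dicts, that is what `PydanticOutputParser` is for; `JsonOutputParser` only uses the Pydantic model to generate format instructions.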

### Using without Pydantic <a href="#pydantic" id="pydantic"></a>

You can also use `JsonOutputParser` without Pydantic. In that case the prompt still asks the model to return JSON, but it gives no specifics about what the schema should look like.

```
# Write a query
question = "Tell us about global warming. Please put a description of global warming in `description` and related keywords in `hashtags`."

# Initialize the JSON output parser
parser = JsonOutputParser()

# Set a prompt template.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a friendly AI assistant. Answer questions concisely."),
        ("user", "#Format: {format_instructions}\n\n#Question: {question}"),
    ]
)

# Inject instructions into the prompt.
prompt = prompt.partial(format_instructions=parser.get_format_instructions())

# Create a chain that connects prompts, models, and parsers
chain = prompt | model | parser

# Execute a query by calling a chain
response = chain.invoke({"question": question})

# Check the output.
print(response)
```

```
{'description': 'Global warming refers to the rise in the average temperature of the Earth due to an increase in the concentration of greenhouse gases in the atmosphere. This leads to a variety of environmental issues, including climate change, sea level rise, and shrinking polar glaciers.', 'hashtags': ['#globalwarming', '#climatechange', '#greenhousegas', '#sealevelrise', '#environmentalissues']}
```
