02. Caching (Cache)

Caching

LangChain provides an optional caching layer for LLM calls.

This is useful for two reasons:

  • If you frequently request the same completion, caching can reduce costs by cutting down the number of API calls made to the LLM provider.

  • Caching can speed up your application by reducing the number of API calls made to the LLM provider.

```python
# Configuration file for managing the API key as an environment variable
from dotenv import load_dotenv

# Load the API key information
load_dotenv()
```

```
True
```
```python
# Set up LangSmith tracking. https://smith.langchain.com
# !pip install langchain-teddynote
from langchain_teddynote import logging

# Enter a project name.
logging.langsmith("CH04-Models")
```

```
LangSmith tracking started.
[Project name]
CH04-Models
```

Create the model and prompt

InMemoryCache

With `InMemoryCache`, the answer to a question is stored in memory; when the same question is asked again, the cached answer is returned instead of making a new LLM call.

SQLite Cache
