03. ConversationTokenBufferMemory


ConversationTokenBufferMemory keeps a buffer of recent conversation history in memory, and uses token length rather than the number of interactions to determine when to flush the conversation.

# Load the configuration file that manages the API KEY as an environment variable
from dotenv import load_dotenv

# Load the API KEY information
load_dotenv()
True
  • max_token_limit : sets the maximum token length of the conversation to keep in the buffer.

from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI


# Create the LLM model
llm = ChatOpenAI()

# Configure the memory
memory = ConversationTokenBufferMemory(
    llm=llm, max_token_limit=150, return_messages=True  # limit the maximum token length to 150
)

Let's add some arbitrary conversations. With the maximum token length set to 150, we can see how the buffer behaves when the conversations are saved, as in the sketch below.
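As a minimal sketch (the conversation content here is arbitrary sample data), turns are saved with save_context and the buffer is inspected with load_memory_variables:

# Save several example exchanges (arbitrary sample content)
memory.save_context(
    inputs={"human": "Hello, I'd like to open a bank account. What do I need?"},
    outputs={"ai": "You will need a photo ID and proof of address."},
)
memory.save_context(
    inputs={"human": "Can I open the account online as well?"},
    outputs={"ai": "Yes, you can complete the application on our website."},
)
memory.save_context(
    inputs={"human": "How long does the approval usually take?"},
    outputs={"ai": "Approval normally takes one to two business days."},
)

# Inspect the buffer: only the most recent turns whose combined token
# count fits within max_token_limit=150 are kept; older turns are flushed.
print(memory.load_memory_variables({})["history"])

Because pruning is driven by the token count rather than the number of turns, a single long exchange can evict several short earlier ones from the buffer.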
