This tutorial shows how to dynamically set various options when calling a chain.
Dynamic configuration can be done in the following two ways.
First, configurable_fields: a method that lets you configure specific fields of a runnable object.
Second, configurable_alternatives: a method that lets you register alternatives for a specific runnable object, selectable at runtime.
configurable_fields
configurable_fields defines which fields of a runnable can have their values set dynamically.
Dynamic attribute assignment
When using ChatOpenAI, we can adjust settings such as model_name.
model_name is the attribute used to specify the GPT version. For example, you can select a model by setting it to gpt-4o or gpt-4o-mini.
To designate a model dynamically rather than fixing model_name, you can use ConfigurableField to convert it into a dynamically settable attribute.
# Configuration file for managing API keys as environment variables
from dotenv import load_dotenv

# Load API key information
load_dotenv()
Using the configurable_fields method, specify the model_name attribute as a dynamically configurable field.
When calling model.invoke(), you can set it dynamically in the format config={"configurable": {"key": "value"}}.
This time, let's use the gpt-4o-mini model. Check the output to confirm that the model has changed.
You can also set configurable parameters using the model object's with_config() method. This works the same way as before.
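The relationship between call-time config and with_config can be illustrated with a small plain-Python sketch. This is not the LangChain implementation; the ConfigurableModel class and the gpt_version key are illustrative only. The idea: a configurable field is just a value looked up from config={"configurable": {...}} at call time, and with_config() pre-binds that same dictionary.

```python
# Illustrative sketch of configurable fields -- NOT the LangChain implementation.
class ConfigurableModel:
    def __init__(self, model_name="gpt-4o", config=None):
        self.model_name = model_name       # default value of the field
        self._bound_config = config or {}  # set by with_config()

    def with_config(self, configurable):
        # Return a copy with the overrides pre-bound, like Runnable.with_config().
        return ConfigurableModel(self.model_name, {"configurable": configurable})

    def invoke(self, prompt, config=None):
        # Merge pre-bound and call-time config; call-time values win in this sketch.
        merged = {
            **self._bound_config.get("configurable", {}),
            **(config or {}).get("configurable", {}),
        }
        name = merged.get("gpt_version", self.model_name)
        return f"[{name}] response to: {prompt}"


model = ConfigurableModel()
model.invoke("hi")  # default: uses gpt-4o
model.invoke("hi", config={"configurable": {"gpt_version": "gpt-4o-mini"}})
model.with_config({"gpt_version": "gpt-3.5-turbo"}).invoke("hi")
```

Both routes end up feeding the same configurable dictionary to the call, which is why the invoke-time config and with_config examples in this tutorial behave identically.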
You can use this in the same way when the model is part of a chain.
HubRunnable: Changing the settings of a LangChain Hub prompt
Using HubRunnable makes it easy to switch between prompts registered on the LangChain Hub.
If you call the prompt.invoke() method without specifying with_config separately, the prompt registered during the initial setup is pulled from the Hub.
Configurable Alternatives: Setting alternatives for the Runnable object itself
This configures alternatives for a Runnable that can be selected at runtime.
Configurable alternatives
A configurable language model such as ChatAnthropic gives you the flexibility to adapt to a variety of tasks and contexts.
To change configuration values dynamically, set the model parameters as a ConfigurableField object.
model: specifies the default language model to use.
temperature: a value between 0 and 1 that controls sampling randomness. Lower values produce more deterministic and repetitive output; higher values produce more diverse and creative output.
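Before moving to the real ChatAnthropic example, here is a minimal plain-Python sketch of the idea behind configurable_alternatives. The AlternativeRunnable class and the lambdas are illustrative only, not LangChain APIs: alternatives are registered under named keys, and the key read from config selects which one runs.

```python
# Illustrative sketch of configurable alternatives -- NOT the LangChain implementation.
class AlternativeRunnable:
    def __init__(self, default_key, alternatives, field_id="llm"):
        self.field_id = field_id          # id used inside config={"configurable": ...}
        self.default_key = default_key    # used when no override is given
        self.alternatives = alternatives  # mapping: key -> callable

    def invoke(self, prompt, config=None):
        # Pick the alternative named in config, falling back to the default key.
        chosen = (config or {}).get("configurable", {}).get(
            self.field_id, self.default_key
        )
        return self.alternatives[chosen](prompt)


llm = AlternativeRunnable(
    default_key="anthropic",
    alternatives={
        "anthropic": lambda p: f"claude answers: {p}",
        "openai": lambda p: f"gpt answers: {p}",
    },
)

llm.invoke("hello")  # default alternative: anthropic
llm.invoke("hello", config={"configurable": {"llm": "openai"}})
```

This mirrors the structure of the real examples below: a default_key names the fallback, and each keyword argument registers one selectable alternative.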
How to set up alternatives for LLM objects
Let's take a look at how to do this with a large language model (LLM).
[Note]
An Anthropic API key must be issued and set in order to use the ChatAnthropic model.
Link: https://console.anthropic.com/dashboard
Uncomment the cell below and set the API key, or set it in your .env file.
This sets the ANTHROPIC_API_KEY environment variable.
The chain.invoke() method calls the chain using the default LLM, ChatAnthropic.
You can specify a different llm using chain.with_config(configurable={"llm": "model"}).
Change the chain's configuration to specify gpt4o as the language model.
Change the chain's configuration to specify anthropic as the language model.
How to set an alternative for prompts
Prompts can be given alternatives in a way similar to the LLM alternatives above.
If no configuration is changed, the default prompt is used.
Call a different prompt using with_config.
This time, use the eng prompt to request a translation. The input variable to pass in this case is input.
Changing both the prompt & the LLM
You can configure multiple components at once using prompts and LLMs.
Here is an example that configures both the prompt and the LLM.
Save settings
You can easily save a configured chain as a separate object. For example, after configuring a customized chain for a specific task, you can save it as a reusable object and use it for similar tasks later.
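The saving pattern can be sketched in plain Python (the Chain class here is illustrative, not a LangChain API): with_config returns a new object with the overrides bound, leaving the original unchanged, so the configured copy can be stored in a variable and reused.

```python
# Illustrative sketch of saving a configured chain -- NOT the LangChain implementation.
class Chain:
    def __init__(self, overrides=None):
        self.overrides = overrides or {}

    def with_config(self, configurable):
        # Return a NEW chain with the overrides bound; the original is untouched.
        return Chain({**self.overrides, **configurable})

    def invoke(self, inputs):
        llm = self.overrides.get("llm", "anthropic")
        prompt = self.overrides.get("prompt", "description")
        return f"{llm}/{prompt} -> {inputs['company']}"


chain = Chain()
# Save the configured chain as a separate, reusable object.
gpt4_competitor_chain = chain.with_config({"llm": "gpt4", "prompt": "competitor"})

chain.invoke({"company": "apple"})                  # still uses the defaults
gpt4_competitor_chain.invoke({"company": "apple"})  # uses the saved settings
```

Because with_config does not mutate the original chain, you can derive any number of differently configured chains from one base chain.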
# LangSmith set up tracking. https://smith.langchain.com
# !pip install langchain-teddynote
from langchain_teddynote import logging
# Enter a project name.
logging.langsmith("LCEL-Advanced")
from langchain.prompts import PromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
model = ChatOpenAI(temperature=0, model_name="gpt-4o")
model.invoke("What is the capital of South Korea?").__dict__
{'content': 'The capital of Korea is Seoul.', 'additional_kwargs': {...}, 'response_metadata': {...}, 'usage_metadata': {'input_tokens': 15, 'output_tokens': 28, 'total_tokens': 43}}
model = ChatOpenAI(temperature=0).configurable_fields(
model_name=ConfigurableField( # model_name is a field originally from ChatOpenAI.
id="gpt_version", # Set the id of model_name.
name="Version of GPT", # sets the name of model_name.
# Sets the description of model_name.
description="Official model name of GPTs. ex) gpt-4o, gpt-4o-mini",
)
)
model.invoke(
"What is the capital of South Korea?",
# Set gpt_version to gpt-3.5-turbo.
config={"configurable": {"gpt_version": "gpt-3.5-turbo"}},
).__dict__
model.invoke(
# Set gpt_version to gpt-4o-mini.
"What is the capital of South Korea?",
config={"configurable": {"gpt_version": "gpt-4o-mini"}},
).__dict__
# Create a prompt template from a template.
prompt = PromptTemplate.from_template("Select a random number greater than {x}.")
chain = (
prompt | model
) # Create a chain by connecting prompts and models. The output of the prompt is passed to the input of the model.
chain.invoke({"x": 0}).__dict__  # Call the chain, passing 0 as the input variable "x".
# You can call a chain by specifying settings when calling the chain.
chain.with_config(configurable={"gpt_version": "gpt-4o"}).invoke({"x": 0}).__dict__
{'content': 'There are many ways to choose a random number greater than 0. Here is an example using the Python random module: random.random() returns a random number in [0.0, 1.0), which is always greater than 0, so no extra conditional is required. If you want an integer greater than 0 within a certain range instead, use random.randint(); for example, random.randint(1, 100) selects a random number between 1 and 100.', 'additional_kwargs': {...}, 'response_metadata': {'token_usage': {..., 'total_tokens': 279}, 'model_name': 'gpt-4o-2024-05-13', ...}, 'type': 'ai', 'name': None, 'id': 'run-127c5be7-31b5-42dd-a595-bede51b65fe4-0', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], ...}
from langchain.runnables.hub import HubRunnable
prompt = HubRunnable("teddynote/rag-prompt-korean").configurable_fields(
# ConfigurableField that sets the owner repository commit
owner_repo_commit=ConfigurableField(
# ID of the field
id="hub_commit",
# Name of the field
name="Hub Commit",
# Description of the field
description="Korean RAG prompt by teddynote",
)
)
prompt
/Users/teddy/Library/Caches/pypoetry/virtualenvs/langchain-kr-lwwSZlnu-py3.11/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: LangChain is under active development, so the API may change.
warn_beta(
RunnableConfigurableFields(default=HubRunnable(bound=ChatPromptTemplate(input_variables=['context', 'question'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(template="You are a friendly AI assistant performing Question-Answering. Your mission is to answer the given question using the given context. ... Answer in Korean. However, keep technical terms and names untranslated. Don't narrate the answer, just answer the question. Let's think step-by-step.")), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template='#Question: \n{question}\n\n#Context: \n{context}\n\n#Answer:'))])), fields={'owner_repo_commit': ConfigurableField(id='hub_commit', name='Hub Commit', description='Korean RAG prompt by teddynote', is_shared=False)})
# Call the invoke method of the prompt object, passing the "question" and "context" parameters.
prompt.invoke({"question": "Hello", "context": "World"}).messages
[SystemMessage(content="You are a friendly AI assistant performing Question-Answering. Your mission is to answer the given question using the given context. Search the context and then use it to answer the question. If you can't find the answer in the given context, or if you don't know the answer, answer 'The information about the question cannot be found in the given information.' Answer in Korean. However, keep technical terms and names untranslated. Don't narrate the answer, just answer the question. Let's think step-by-step."), HumanMessage(content='#Question: \nHello \n\n#Context: \nWorld \n\n#Answer:')]
prompt.with_config(
# Set hub_commit to teddynote/simple-summary-korean.
configurable={"hub_commit": "teddynote/simple-summary-korean"}
).invoke({"context": "Hello"})
ChatPromptValue(messages=[HumanMessage(content='Summarize the following based on the given content. Be sure to write the answer in Korean.\n\nCONTEXT: Hello\n\nSUMMARY:')])
# import os
# os.environ["ANTHROPIC_API_KEY"] = "Enter your ANTHROPIC API KEY."
from langchain.prompts import PromptTemplate
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
llm = ChatAnthropic(
temperature=0, model="claude-3-5-sonnet-20240620"
).configurable_alternatives(
# Give this field an id.
# When constructing the final executable object, you can use this id to construct this field.
ConfigurableField(id="llm"),
# Set the primary key.
# If this key is specified, the default LLM (ChatAnthropic) initialized above will be used.
default_key="anthropic",
# Adds a new option named 'openai', which is equivalent to `ChatOpenAI()`.
openai=ChatOpenAI(model="gpt-4o-mini"),
# Adds a new option named 'gpt4o', which is equivalent to `ChatOpenAI(model="gpt-4o")`.
gpt4o=ChatOpenAI(model="gpt-4o"),
# You can add more configuration options here.
)
prompt = PromptTemplate.from_template("Please briefly explain {topic}.")
chain = prompt | llm
# By default, the chain calls Anthropic.
chain.invoke({"topic": "new jeans"}).__dict__
{'content': 'NewJeans is a five-member Korean girl group that debuted on July 22, 2022. Key facts:\n\n1. Label: ADOR (a subsidiary of HYBE Labels)\n2. Members: Minji, Hanni, Danielle, Haerin, Hyein\n3. Debut songs: "Attention", "Hype Boy", "Cookie"\n4. Features:\n - Fresh teenage image\n - Y2K and 90s sensibility reinterpreted in a modern musical style\n - Unique marketing strategy (a surprise debut without prior publicity)\n5. Main achievements:\n - Debut album entered the Billboard 200 chart\n - Multiple No. 1s on music charts\n - Various rookie awards\n\nNewJeans quickly gained popularity after debut and has established itself as a representative 4th-generation K-pop group.', 'additional_kwargs': {...}, 'response_metadata': {'usage': {'input_tokens': 30, 'output_tokens': 390}, ...}, 'type': 'ai', 'name': None, 'id': 'run-68b3570d-a0d4-4074-9d69-cbf40b3caf8b-0', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], ...}
# Called by changing the settings of the chain.
chain.with_config(configurable={"llm": "openai"}).invoke({"topic": "new jeans"}).__dict__
{'content': 'NewJeans is a Korean girl group that debuted in August 2022 under ADOR, a subsidiary of HYBE. The group consists of Minji, Hanni, Danielle, Haerin, and Hyein. NewJeans quickly gained popularity with their distinctive musical style and trendy fashion, releasing several hits including "Attention", "Hype Boy", "Cookie", and "Ditto". They are noted for their fresh image and personality in the K-pop scene and have built a global fan base.', 'additional_kwargs': {...}, 'response_metadata': {...}, 'type': 'ai', 'name': None, 'id': 'run-12312712-a1f8-4f9e-ace2-e800224b0511-0', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': {...}}
# Called by changing the settings of the chain.
chain.with_config(configurable={"llm": "gpt4o"}).invoke({"topic": "new jeans"}).__dict__
{'content': 'NewJeans is a Korean girl group that debuted in 2022. The group was planned and produced by ADOR, a label under HYBE, in a project led by producer Min Heejin. NewJeans is noted for its distinctive musical style, fashion sense, and polished performances.', 'additional_kwargs': {...}, 'response_metadata': {...}, 'type': 'ai', 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': {'input_tokens': 18, 'output_tokens': 132, 'total_tokens': 150}}
# Called by changing the settings of the chain.
chain.with_config(configurable={"llm": "anthropic"}).invoke(
{"topic": "new jeans"}
).__dict__
{'content': 'NewJeans is a five-member Korean girl group that debuted on July 22, 2022. Key facts:\n\n1. Label: ADOR (a subsidiary of HYBE Labels)\n2. Members: Minji, Hanni, Danielle, Haerin, Hyein\n3. Debut songs: "Attention", "Hype Boy", "Cookie"\n4. Features:\n - A modern reinterpretation of Y2K and 90s sensibility\n - Pursuit of a natural and comfortable image\n - Strong vocal and performance skills\n5. Key achievements:\n - Debut album entered the Billboard 200 chart\n - Multiple No. 1s on music broadcasts\n - Various rookie awards\n6. Popularity factors:\n - Unique marketing strategy\n - Addictive songs and choreography\n\nNewJeans quickly gained popularity after debut and has established itself as a representative 4th-generation K-pop group.', 'additional_kwargs': {}, 'response_metadata': {...}, 'type': 'ai', 'name': None, 'id': 'run-d53613a7-3c9c-45e6-9cef-7f2208e61408-0', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], ...}
# Initialize the language model and set the temperature to 0.
llm = ChatOpenAI(temperature=0)
prompt = PromptTemplate.from_template(
"Where is the capital of {country}?"  # Default prompt template
).configurable_alternatives(
# Give this field an id.
ConfigurableField(id="prompt"),
# Set the primary key.
default_key="capital",
# Adds a new option called 'area'.
area=PromptTemplate.from_template("What is the area of {country}?"),
# Adds a new option called 'population'.
population=PromptTemplate.from_template("What is the population of {country}?"),
# Adds a new option called 'eng'.
eng=PromptTemplate.from_template("Please translate the following into English: {input}"),
# You can add more configuration options here.
)
# Create a chain by connecting prompts and language models.
chain = prompt | llm
# Call the chain without changing config.
chain.invoke({"country": "korea"})
AIMessage(content='The capital of Korea is Seoul.', additional_kwargs={'refusal': None}, response_metadata={...})
# Call with_config to change the settings of the chain.
chain.with_config(configurable={"prompt": "area"}).invoke({"country": "korea"})
AIMessage(content='The total area of Korea is about 100,363 km².', additional_kwargs={'refusal': None}, response_metadata={...})
# Call with_config to change the chain's settings.
chain.with_config(configurable={"prompt": "population"}).invoke({"country": "korea"})
AIMessage(content='As of September 2021, the population of Korea is about 50 million.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {..., 'total_tokens': 56}, ...})
# Call with_config to change the settings of the chain.
chain.with_config(configurable={"prompt": "eng"}).invoke({"input": "Apples are delicious!"})
AIMessage(content='Apples are delicious!', additional_kwargs={'refusal': None}, response_metadata={...})
llm = ChatAnthropic(
temperature=0, model="claude-3-5-sonnet-20240620"
).configurable_alternatives(
# Give this field an id.
# When constructing the final executable object, you can use this id to construct this field.
ConfigurableField(id="llm"),
# Set the primary key.
# If this key is specified, the default LLM (ChatAnthropic) initialized above will be used.
default_key="anthropic",
# Adds a new option named 'openai', which is equivalent to `ChatOpenAI(model="gpt-4o-mini")`.
openai=ChatOpenAI(model="gpt-4o-mini"),
# Adds a new option named 'gpt4', which is equivalent to `ChatOpenAI(model="gpt-4o")`.
gpt4=ChatOpenAI(model="gpt-4o"),
# You can add more configuration options here.
)
prompt = PromptTemplate.from_template(
"Describe {company} in 20 characters or less."  # Default prompt template
).configurable_alternatives(
# Give this field an id.
ConfigurableField(id="prompt"),
# Set the primary key.
default_key="description",
# Adds a new option called 'founder'.
founder=PromptTemplate.from_template("Who is the founder of {company}?"),
# Adds a new option called 'competitor'.
competitor=PromptTemplate.from_template("Who are the competitors of {company}?"),
# You can add more configuration options here.
)
chain = prompt | llm
# You can configure it by specifying the setting values with with_config.
chain.with_config(configurable={"prompt": "founder", "llm": "openai"}).invoke(
# Request processing for the company you provided.
{"company": "apple"}
).__dict__
{'content': 'The founders of Apple are Steve Jobs, Steve Wozniak, and Ronald Wayne. The three founded the Apple Computer Company in 1976.', 'additional_kwargs': {...}, 'response_metadata': {...}, 'type': 'ai', 'id': 'run-513cf68d-c626-4ec2-a8c6-f468f6d33e7f-0', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': {'input_tokens': 17, 'output_tokens': 117, 'total_tokens': 134}}
# If you only want to configure one
chain.with_config(configurable={"llm": "anthropic"}).invoke(
{"company": "apple"}
).__dict__
# If you only want to configure one
chain.with_config(configurable={"prompt": "competitor"}).invoke(
{"company": "apple"}
).__dict__
{'content': "Apple's main competitors are:\n\n1. Samsung: smartphones, tablets, wearable devices\n2. Google: mobile operating systems (Android), cloud services, AI\n3. Microsoft: computer operating systems, cloud services, tablets\n4. Amazon: cloud services, smart speakers, digital content\n5. Huawei: smartphones, telecom equipment\n6. Sony: audio equipment, game consoles\n7. Dell, HP, Lenovo: computer hardware\n8. Spotify, Netflix: streaming services\n\nThese companies compete with Apple in their respective fields.", 'additional_kwargs': {...}, 'response_metadata': {...}, 'type': 'ai', 'id': 'run-81eace51-776f-4ba0-bfd4-64fe08db9f03-0', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': {...}}
# If you only want to configure one
chain.invoke({"company": "apple"}).__dict__
# Save the generated chain to a separate variable by changing the settings with with_config.
gpt4_competitor_chain = chain.with_config(
configurable={"llm": "gpt4", "prompt": "competitor"}
)
# Call the chain.
gpt4_competitor_chain.invoke({"company": "apple"}).__dict__
{'content': "Apple's main competitors span a variety of industries:\n\n1. **Smartphones and tablets**: Samsung (Galaxy series vs. iPhone and iPad), Huawei (a leading Chinese smartphone maker competing globally), Google (Pixel smartphones)\n2. **Computers and laptops**: Microsoft (Surface series vs. MacBook and iPad), Dell, HP, Lenovo (PCs and laptops vs. MacBook)\n3. **Operating systems and software**: Google (Android vs. iOS), Microsoft (Windows vs. macOS)\n4. **Services and content**: Netflix, Amazon Prime Video, Disney Plus (vs. Apple TV+), Spotify (vs. Apple Music)\n5. **Wearable devices**: Samsung (Galaxy Watch vs. Apple Watch), Fitbit (fitness trackers and smartwatches)\n6. **Cloud services**: Amazon Web Services (AWS), ...", 'additional_kwargs': {...}, 'response_metadata': {'token_usage': {'completion_tokens': 476, 'prompt_tokens': 16, 'total_tokens': 492}, 'model_name': 'gpt-4o-2024-05-13', ...}, 'type': 'ai', 'name': None, 'id': 'run-b95ae3ba-54c8-4203-9fba-b9f26df0e7c4-0', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], ...}