10. Multi-agent for research introducing the STORM concept
The purpose of this tutorial is to cover how to build research automation systems using LangGraph.
Research is a labor-intensive task that is often delegated to analysts. AI has considerable potential to support this research process. This tutorial covers how to build custom AI-based research and report generation workflows.
In this tutorial, we aim to customize the research process by building a lightweight multi-agent system. Users provide research topics, and the system creates a team of AI analysts focusing on each subtopic.
In this process, human-in-the-loop is used to refine the sub-topics before research begins.
According to the STORM paper, looking up related topics and simulating conversations from various perspectives increases the use of reference sources and the density of information.
Main topics covered:
- LangGraph's main themes: Memory, Human-in-the-loop, Controllability
- Goal of research automation: build a custom research process
- Source selection: choose the input sources for the research
- Planning: provide a topic and create a team of AI analysts
- LLM utilization: conduct in-depth interviews with expert AIs
- Research phase: collect information and conduct interviews in parallel
- Output format: insights consolidated into a final report
- Settings: preferences and API key setup
- Analyst creation: create and review analysts through human-in-the-loop
- Conducting an interview: generate questions and collect answers
- Parallel interviews: parallelize interviews via map-reduce
- Final report preparation: write the introduction and conclusion of the report

This tutorial covers three themes:
Memory
Human-in-the-loop
Controllability
Now we will combine these concepts to cover research automation, one of AI's most popular applications.
However, research needs customization: raw LLM outputs are often not suitable for real decision-making workflows.
Custom AI-based research and report generation workflows are a promising way to solve this.
Settings
```python
# Configuration file for managing API keys as environment variables
from dotenv import load_dotenv

# Load API key information
load_dotenv()
```

Analyst Generation: Human-In-The-Loop
Utilize human-in-the-loop to create and review the analysts.
The following uses the Analyst class to define the state that tracks the set of generated analysts.
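As a rough sketch (not necessarily the exact code), the Analyst model and the generation state could be defined as follows; the specific field names (affiliation, name, role, description) and the GenerateAnalystsState name are assumptions:

```python
from typing import List
from typing_extensions import TypedDict
from pydantic import BaseModel, Field


class Analyst(BaseModel):
    """A single AI analyst persona."""
    affiliation: str = Field(description="Primary affiliation of the analyst.")
    name: str = Field(description="Name of the analyst.")
    role: str = Field(description="Role of the analyst in the context of the topic.")
    description: str = Field(description="Focus, concerns, and motives of the analyst.")

    @property
    def persona(self) -> str:
        return (
            f"Name: {self.name}\nRole: {self.role}\n"
            f"Affiliation: {self.affiliation}\nDescription: {self.description}"
        )


class GenerateAnalystsState(TypedDict):
    topic: str                    # Research topic provided by the user
    max_analysts: int             # Number of analysts to generate
    human_analyst_feedback: str   # Feedback injected via human-in-the-loop
    analysts: List[Analyst]       # Generated analyst personas
```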
Analyst creation node definition
Next, we will define the Analyst creation node.
The code below implements the logic that creates a diverse set of analysts for a given research topic. Each analyst has a unique role and affiliation, and provides a professional perspective on the subject.
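A possible implementation of this node is sketched below. It reuses the Analyst model and state defined above; the Perspectives wrapper schema, the prompt wording, and the model choice are illustrative assumptions. The no-op human_feedback node is included here because the graph will interrupt on it later:

```python
from typing import List
from pydantic import BaseModel, Field
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI


class Perspectives(BaseModel):
    analysts: List[Analyst] = Field(description="List of analysts with roles and affiliations.")


llm = ChatOpenAI(model="gpt-4o", temperature=0)

ANALYST_INSTRUCTIONS = """You are creating a team of AI analyst personas.
Research topic: {topic}
Editorial feedback (may be empty): {human_analyst_feedback}
Create exactly {max_analysts} analysts, each with a distinct role and affiliation."""


def create_analysts(state: GenerateAnalystsState):
    """Generate a set of analyst personas for the given research topic."""
    structured_llm = llm.with_structured_output(Perspectives)
    system_message = ANALYST_INSTRUCTIONS.format(
        topic=state["topic"],
        human_analyst_feedback=state.get("human_analyst_feedback", ""),
        max_analysts=state["max_analysts"],
    )
    perspectives = structured_llm.invoke(
        [SystemMessage(content=system_message),
         HumanMessage(content="Generate the set of analysts.")]
    )
    return {"analysts": perspectives.analysts}


def human_feedback(state: GenerateAnalystsState):
    """No-op node: the graph interrupts here so a human can review the analysts."""
    pass
```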
Graph generation
Now create an analyst creation graph.
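A sketch of the graph assembly, assuming the create_analysts and human_feedback nodes defined above; the conditional routing back to create_analysts when feedback is present reflects the review loop described here:

```python
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver


def should_continue(state: GenerateAnalystsState):
    """Loop back to analyst creation if feedback was given; otherwise finish."""
    if state.get("human_analyst_feedback"):
        return "create_analysts"
    return END


builder = StateGraph(GenerateAnalystsState)
builder.add_node("create_analysts", create_analysts)
builder.add_node("human_feedback", human_feedback)

builder.add_edge(START, "create_analysts")
builder.add_edge("create_analysts", "human_feedback")
builder.add_conditional_edges("human_feedback", should_continue, ["create_analysts", END])

# Compile with a checkpointer and interrupt before the human_feedback node
memory = MemorySaver()
graph = builder.compile(interrupt_before=["human_feedback"], checkpointer=memory)
```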
Graph execution for analyst creation
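Running the compiled graph might look like the following; the topic, thread id, and number of analysts are illustrative values:

```python
# A thread id is required because a checkpointer is attached
thread = {"configurable": {"thread_id": "1"}}

inputs = {
    "topic": "The benefits of adopting LangGraph as an agent framework",
    "max_analysts": 3,
}

# Stream until the graph pauses at the interrupt before human_feedback
for event in graph.stream(inputs, thread, stream_mode="values"):
    for analyst in event.get("analysts", []):
        print(f"{analyst.name} ({analyst.affiliation}) - {analyst.role}")
```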
When __interrupt__ is output, the graph is ready to receive human feedback.
Now fetch the current state below so that human feedback can be provided.
Inject human feedback through update_state(). The feedback is saved under the human_analyst_feedback key.
The as_node argument specifies the node at which the feedback is applied.
When None is passed as the input value, the graph then proceeds.
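A sketch of this feedback step, assuming the graph and thread from above; the feedback text is illustrative:

```python
# Inspect the paused state; .next shows the node waiting to run
state = graph.get_state(thread)
print(state.next)  # e.g. ('human_feedback',)

# Inject feedback as if it came from the human_feedback node
graph.update_state(
    thread,
    {"human_analyst_feedback": "Add an analyst focused on open-source adoption."},
    as_node="human_feedback",
)

# Resume the graph by passing None as the input
for event in graph.stream(None, thread, stream_mode="values"):
    for analyst in event.get("analysts", []):
        print(f"{analyst.name} - {analyst.role}")
```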
Reference
When you want to resume, resume the graph by passing None as the input value.
When __interrupt__ is output again, the graph is ready to receive human feedback once more.
It is also possible to revise the generated analyst personas by providing human feedback again, in the same way as before.
If there is no additional feedback, you can end the analyst creation task by assigning None.
Output the final result.
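For example, ending the loop and reading the final state might look like this (the empty feedback value routes the conditional edge to END):

```python
# No further feedback: clearing the key ends the review loop
graph.update_state(thread, {"human_analyst_feedback": None}, as_node="human_feedback")

# Resume to completion
for _ in graph.stream(None, thread, stream_mode="values"):
    pass

final_state = graph.get_state(thread)
print(final_state.next)  # () - nothing left to run
for analyst in final_state.values["analysts"]:
    print(analyst.persona)
```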
final_state.next indicates the node to run next in the graph. Since all the work is done at this point, it outputs an empty tuple.
Conduct an interview
Create question
Analysts ask questions to experts.
The following defines the node that creates the interview question.
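A sketch of an interview state and the question-generation node, reusing the llm and Analyst defined earlier; the InterviewState fields and the prompt wording are assumptions:

```python
import operator
from typing import Annotated, List
from langgraph.graph import MessagesState
from langchain_core.messages import SystemMessage


class InterviewState(MessagesState):
    max_num_turns: int                            # Maximum question/answer turns
    context: Annotated[List[str], operator.add]   # Retrieved source documents
    analyst: Analyst                              # Analyst asking the questions
    interview: str                                # Saved interview transcript
    sections: Annotated[List[str], operator.add]  # Report sections (used later in map-reduce)


QUESTION_INSTRUCTIONS = """You are an analyst interviewing an expert.
Your persona: {persona}
Ask one insightful, specific question at a time.
Say "Thank you so much for your help!" when you have what you need."""


def generate_question(state: InterviewState):
    """Have the analyst generate the next interview question."""
    system_message = QUESTION_INSTRUCTIONS.format(persona=state["analyst"].persona)
    question = llm.invoke([SystemMessage(content=system_message)] + state["messages"])
    return {"messages": [question]}
```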
Tool definition
Experts answer questions by collecting information in parallel from multiple sources.
Various tools are available, including web document scraping, VectorDB, web browsing, and Wikipedia search.
This tutorial uses Arxiv and Tavily search.
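Setting up these two tools might look like the following (Tavily requires a TAVILY_API_KEY environment variable; the result-count settings are illustrative):

```python
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.retrievers import ArxivRetriever

# Web search tool (requires TAVILY_API_KEY in the environment)
tavily_search = TavilySearchResults(max_results=3)

# Arxiv retriever for academic papers
arxiv_retriever = ArxivRetriever(load_max_docs=2)
```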
Format and output document search results.
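A simple helper along these lines can be used to join retrieved documents into one context string; the <Document> tagging is an illustrative convention:

```python
def format_docs(docs) -> str:
    """Join retrieved documents into one string, tagging each with its source."""
    return "\n\n---\n\n".join(
        f'<Document source="{doc.metadata.get("source", "unknown")}">\n'
        f"{doc.page_content}\n</Document>"
        for doc in docs
    )
```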
Node generation
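As a sketch, the search and answer nodes might be defined as follows, reusing the retrievers, format_docs, llm, and InterviewState from above; the prompt wording and the "expert" message name are assumptions:

```python
from langchain_core.messages import SystemMessage


def search_arxiv(state: InterviewState):
    """Retrieve Arxiv documents relevant to the latest question."""
    query = state["messages"][-1].content
    docs = arxiv_retriever.invoke(query)
    return {"context": [format_docs(docs)]}


def search_web(state: InterviewState):
    """Search the web with Tavily for the latest question."""
    query = state["messages"][-1].content
    results = tavily_search.invoke(query)
    formatted = "\n\n---\n\n".join(r["content"] for r in results)
    return {"context": [formatted]}


ANSWER_INSTRUCTIONS = """You are an expert being interviewed by an analyst.
Answer the question using only the context below.
Context:
{context}"""


def generate_answer(state: InterviewState):
    """Have the expert answer using the retrieved context."""
    system_message = ANSWER_INSTRUCTIONS.format(context="\n\n".join(state["context"]))
    answer = llm.invoke([SystemMessage(content=system_message)] + state["messages"])
    answer.name = "expert"  # Tag the message so routing can count expert answers
    return {"messages": [answer]}
```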
Create an interview graph
Define and run the graph that conducts the interview.
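One way to wire the interview graph is sketched below; the node names, the two-turn limit, and the "Thank you so much for your help" stop phrase are assumptions:

```python
from langgraph.graph import StateGraph, START, END
from langchain_core.messages import AIMessage, get_buffer_string


def route_messages(state: InterviewState, name: str = "expert"):
    """Keep asking questions until the turn limit or the analyst signs off."""
    messages = state["messages"]
    max_num_turns = state.get("max_num_turns", 2)
    num_answers = len([m for m in messages if isinstance(m, AIMessage) and m.name == name])
    if num_answers >= max_num_turns:
        return "save_interview"
    if "Thank you so much for your help" in messages[-2].content:
        return "save_interview"
    return "ask_question"


def save_interview(state: InterviewState):
    """Store the full transcript of the conversation."""
    return {"interview": get_buffer_string(state["messages"])}


interview_builder = StateGraph(InterviewState)
interview_builder.add_node("ask_question", generate_question)
interview_builder.add_node("search_web", search_web)
interview_builder.add_node("search_arxiv", search_arxiv)
interview_builder.add_node("answer_question", generate_answer)
interview_builder.add_node("save_interview", save_interview)

interview_builder.add_edge(START, "ask_question")
# Both searches run in parallel after each question
interview_builder.add_edge("ask_question", "search_web")
interview_builder.add_edge("ask_question", "search_arxiv")
interview_builder.add_edge("search_web", "answer_question")
interview_builder.add_edge("search_arxiv", "answer_question")
interview_builder.add_conditional_edges(
    "answer_question", route_messages, ["ask_question", "save_interview"]
)
interview_builder.add_edge("save_interview", END)

interview_graph = interview_builder.compile()
```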
Graph execution
Now run the graph and output the results.
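For example (the opening message and the choice of analyst are illustrative):

```python
from langchain_core.messages import HumanMessage

# Pick one of the generated analysts and start the interview
analyst = final_state.values["analysts"][0]
messages = [HumanMessage(content="So you said you were researching the benefits of LangGraph?")]

interview = interview_graph.invoke(
    {"analyst": analyst, "messages": messages, "max_num_turns": 2}
)
print(interview["interview"])
```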
In this example, the interview is conducted on the topic of LangGraph.
Interview in parallel (map-reduce)
Interviews are parallelized using the Send() function, which corresponds to the map step. The interview results are incorporated into the body of the report at the reduce step. As a final step, an introduction and conclusion are written for the final report.
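A sketch of the map step with Send(), assuming a conduct_interview node that wraps the interview sub-graph; the ResearchGraphState fields are illustrative, and the sections key annotated with operator.add is what accumulates the results in the reduce step:

```python
import operator
from typing import Annotated, List
from typing_extensions import TypedDict
from langgraph.constants import Send
from langchain_core.messages import HumanMessage


class ResearchGraphState(TypedDict):
    topic: str
    max_analysts: int
    human_analyst_feedback: str
    analysts: List[Analyst]
    sections: Annotated[List[str], operator.add]  # Reduce: each interview appends its section here
    introduction: str
    content: str
    conclusion: str
    final_report: str


def initiate_all_interviews(state: ResearchGraphState):
    """Map step: dispatch one interview sub-graph run per analyst via Send()."""
    topic = state["topic"]
    return [
        Send(
            "conduct_interview",
            {
                "analyst": analyst,
                "messages": [HumanMessage(content=f"So you said you were researching {topic}?")],
                "max_num_turns": 2,
            },
        )
        for analyst in state["analysts"]
    ]

# Wired into the outer graph as a conditional edge, e.g.:
# builder.add_conditional_edges("human_feedback", initiate_all_interviews, ["conduct_interview"])
```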
Output results in markdown format.
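For example, in a notebook the report can be rendered with IPython's Markdown display; the research_graph and final_report key names here are assumptions:

```python
from IPython.display import Markdown

# research_graph is the outer graph that assembles the report; names are illustrative
report_state = research_graph.get_state(thread)
Markdown(report_state.values["final_report"])
```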