Complex tasks often exceed the capabilities of a single agent. LangGraph supports multi-agent architectures through subgraph composition, supervisor patterns, and shared state. This section shows you how to build teams of specialized agents that collaborate within a single graph, communicate through well-defined interfaces, and scale to production using LangGraph Platform.
1. Why Multi-Agent Systems?
A single agent with many tools can become unwieldy. As the tool set grows, the LLM's ability to select the right tool degrades, and the system prompt becomes a sprawling instruction manual. Multi-agent architectures solve this by decomposing complex tasks into specialized sub-agents, each with a focused tool set and system prompt. The result is better tool selection, clearer reasoning, and more maintainable code.
LangGraph provides two main patterns for multi-agent systems: subgraph composition (embedding one graph inside another) and the supervisor pattern (a coordinating agent that delegates to specialized workers).
2. Subgraph Composition
A subgraph is a compiled LangGraph graph used as a node inside a parent graph. The parent graph invokes the subgraph like any other node, passing in state and receiving the subgraph's output. This enables modular design: each subgraph can be developed, tested, and debugged independently.
from typing import TypedDict, Annotated

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

# Shared state type
class TeamState(TypedDict):
    messages: Annotated[list, add_messages]
    research_notes: str
    draft: str

llm = ChatOpenAI(model="gpt-4o-mini")

# --- Research subgraph ---
class ResearchState(TypedDict):
    messages: Annotated[list, add_messages]
    research_notes: str

def gather_sources(state: ResearchState) -> dict:
    sys_msg = SystemMessage(content="Find relevant sources for the user's topic.")
    resp = llm.invoke([sys_msg] + state["messages"])
    return {"research_notes": resp.content}

research_builder = StateGraph(ResearchState)
research_builder.add_node("gather", gather_sources)
research_builder.add_edge(START, "gather")
research_builder.add_edge("gather", END)
research_subgraph = research_builder.compile()

# --- Writing subgraph ---
class WritingState(TypedDict):
    messages: Annotated[list, add_messages]
    research_notes: str
    draft: str

def write_draft(state: WritingState) -> dict:
    notes = state.get("research_notes", "No notes available.")
    sys_msg = SystemMessage(
        content=f"Write a short article using these research notes:\n{notes}"
    )
    resp = llm.invoke([sys_msg] + state["messages"])
    return {"draft": resp.content}

writing_builder = StateGraph(WritingState)
writing_builder.add_node("write", write_draft)
writing_builder.add_edge(START, "write")
writing_builder.add_edge("write", END)
writing_subgraph = writing_builder.compile()

# --- Parent graph ---
parent_builder = StateGraph(TeamState)
parent_builder.add_node("research", research_subgraph)
parent_builder.add_node("writing", writing_subgraph)
parent_builder.add_edge(START, "research")
parent_builder.add_edge("research", "writing")
parent_builder.add_edge("writing", END)
team_graph = parent_builder.compile()

result = team_graph.invoke({
    "messages": [HumanMessage(content="Write about quantum computing advances.")]
})
print("Research:", result["research_notes"][:200])
print("Draft:", result["draft"][:200])
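Each node returns a partial update rather than the full state; LangGraph merges the update into the shared state using the channel's reducer, so add_messages appends to the message list while plain channels like research_notes are overwritten. Here is a framework-free sketch of that merge behavior; apply_update is a hypothetical stand-in for illustration, not a LangGraph API:

```python
from typing import Any, Callable

# Hypothetical stand-in for channel-update logic: each channel may have a
# reducer; channels without one are simply overwritten by the new value.
def apply_update(state: dict, update: dict,
                 reducers: dict[str, Callable[[Any, Any], Any]]) -> dict:
    merged = dict(state)
    for key, value in update.items():
        if key in reducers and key in merged:
            merged[key] = reducers[key](merged[key], value)
        else:
            merged[key] = value
    return merged

append = lambda old, new: old + new  # list-append, in the spirit of add_messages

state = {"messages": ["user: hi"], "research_notes": ""}
state = apply_update(state, {"messages": ["assistant: hello"]},
                     {"messages": append})
state = apply_update(state, {"research_notes": "Note A"},
                     {"messages": append})
print(state["messages"])        # both messages are kept
print(state["research_notes"])  # plain channel: overwritten
```

This is why gather_sources can return only {"research_notes": ...} without wiping out the conversation history.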
Prebuilt agents compose the same way: create_react_agent (covered in Section 5) returns a compiled graph, so each specialized agent can be dropped into a parent graph as a node.

from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression."""
    # NOTE: eval is unsafe on untrusted input; restrict builtins for this demo.
    return str(eval(expression, {"__builtins__": {}}, {}))

# Create specialized agents with one line each
search_agent = create_react_agent(
    ChatOpenAI(model="gpt-4o-mini"),
    tools=[search_web],
    state_modifier="You are a web research specialist."
)
math_agent = create_react_agent(
    ChatOpenAI(model="gpt-4o-mini"),
    tools=[calculate],
    state_modifier="You are a math specialist."
)

# Use them as subgraphs in a parent graph
parent = StateGraph(TeamState)
parent.add_node("search_team", search_agent)
parent.add_node("math_team", math_agent)
# ... add edges and routing as needed
The research_notes channel flows from the research subgraph to the writing subgraph through the shared parent state. Composed this way, the parent graph runs research first and then writing:
┌───────┐    ┌────────────────────┐    ┌────────────────────┐    ┌─────┐
│ START │───▶│   research (sub)   │───▶│   writing (sub)    │───▶│ END │
└───────┘    │  ┌──────────────┐  │    │  ┌──────────────┐  │    └─────┘
             │  │    gather    │  │    │  │    write     │  │
             │  └──────────────┘  │    │  └──────────────┘  │
             └────────────────────┘    └────────────────────┘
                        │
             research_notes flows through
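The parent and subgraphs above share key names, so state flows between them automatically. When the names differ, a wrapper node can translate between schemas. Here is a framework-free sketch of that wrapper idea; summary_subgraph and summary_node are illustrative plain functions, not LangGraph APIs:

```python
def summary_subgraph(sub_state: dict) -> dict:
    # Subgraph with its own schema: expects "notes", returns "summary".
    return {"summary": f"Summary of: {sub_state['notes']}"}

def summary_node(parent_state: dict) -> dict:
    # Wrapper node: map parent keys into the subgraph's schema and back.
    sub_out = summary_subgraph({"notes": parent_state["research_notes"]})
    return {"draft": sub_out["summary"]}

update = summary_node({"research_notes": "Qubit counts doubled."})
print(update["draft"])
```

The wrapper keeps the subgraph's schema private: the parent only ever sees its own research_notes and draft channels.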
When a subgraph's state type shares key names with the parent state, LangGraph maps them automatically: matching channels (here messages and research_notes) flow between parent and subgraph without extra code. If the key names differ, you can wrap the subgraph in a regular function node that performs the mapping manually. This keeps subgraphs decoupled from the parent's state schema.
3. The Supervisor Pattern
In the supervisor pattern, a coordinating agent (the supervisor) examines the current task and delegates work to specialized worker agents. The supervisor runs in a loop: it reads the current state, decides which worker to call next (or whether to finish), and routes execution accordingly.
                   ┌─────────────┐
              ┌───▶│ researcher  │───┐
              │    └─────────────┘   │
┌───────────┐ │                      │   ┌─────┐
│ supervisor│◀┼──────────────────────┼──▶│ END │
└───────────┘ │                      │   └─────┘
              │    ┌─────────────┐   │
              └───▶│   writer    │───┘
                   └─────────────┘
from typing import TypedDict, Annotated, Literal

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

class SupervisorState(TypedDict):
    messages: Annotated[list, add_messages]
    next_worker: str

llm = ChatOpenAI(model="gpt-4o-mini")

def supervisor(state: SupervisorState) -> dict:
    """Decide which worker to call next, or finish."""
    sys_msg = SystemMessage(
        content="You are a supervisor managing a research team. "
                "Based on the conversation, decide the next step. "
                "Reply with exactly one of: 'researcher', 'writer', 'FINISH'."
    )
    resp = llm.invoke([sys_msg] + state["messages"])
    return {"next_worker": resp.content.strip()}

def researcher(state: SupervisorState) -> dict:
    sys_msg = SystemMessage(content="You are a research specialist. Provide facts.")
    resp = llm.invoke([sys_msg] + state["messages"])
    return {"messages": [resp]}

def writer(state: SupervisorState) -> dict:
    sys_msg = SystemMessage(content="You are a writing specialist. Draft content.")
    resp = llm.invoke([sys_msg] + state["messages"])
    return {"messages": [resp]}

def route_supervisor(state: SupervisorState) -> Literal["researcher", "writer", "__end__"]:
    next_step = state.get("next_worker", "FINISH")
    if "researcher" in next_step.lower():
        return "researcher"
    elif "writer" in next_step.lower():
        return "writer"
    return END

builder = StateGraph(SupervisorState)
builder.add_node("supervisor", supervisor)
builder.add_node("researcher", researcher)
builder.add_node("writer", writer)
builder.add_edge(START, "supervisor")
builder.add_conditional_edges("supervisor", route_supervisor)
builder.add_edge("researcher", "supervisor")  # report back
builder.add_edge("writer", "supervisor")      # report back
supervisor_graph = builder.compile()

result = supervisor_graph.invoke(
    {"messages": [HumanMessage(content="Write a blog post about climate change.")]},
    config={"recursion_limit": 20}
)
print(result["messages"][-1].content)
The supervisor pattern differs from flat conditional routing (covered in Section M.2) in a critical way: the supervisor can invoke workers multiple times and in any order. A flat router makes a single decision and terminates. The supervisor loop enables iterative refinement, where the researcher gathers more data, the writer revises, and the supervisor coordinates until the output meets quality standards.
4. Shared State and Cross-Agent Communication
Agents in a multi-agent graph communicate through shared state channels. The simplest approach is a shared message list, but you can also define dedicated channels for structured data exchange.

class CollaborativeState(TypedDict):
    messages: Annotated[list, add_messages]
    research_findings: list[str]  # structured data channel
    review_comments: list[str]    # another structured channel
    final_output: str

def researcher_v2(state: CollaborativeState) -> dict:
    # Researcher writes to research_findings
    return {
        "research_findings": ["Finding 1: ...", "Finding 2: ..."],
        "messages": [("assistant", "Research complete. 2 findings added.")]
    }

def writer_v2(state: CollaborativeState) -> dict:
    # Writer reads research_findings and produces a draft
    findings = state.get("research_findings", [])
    draft = f"Article based on {len(findings)} findings: ..."
    return {
        "final_output": draft,
        "messages": [("assistant", "Draft written.")]
    }

def reviewer(state: CollaborativeState) -> dict:
    # Reviewer reads the draft and provides comments
    draft = state.get("final_output", "")
    return {
        "review_comments": ["Consider adding more detail to section 2."],
        "messages": [("assistant", "Review complete.")]
    }
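A framework-free sketch makes the dedicated-channel idea concrete: plain functions stand in for graph nodes, each returning a partial update keyed by channel, so downstream workers read exactly the fields they need instead of parsing prose out of messages (all names here are illustrative):

```python
def researcher_step(state: dict) -> dict:
    # Writes structured findings to a dedicated channel.
    return {"research_findings": ["Finding 1: ...", "Finding 2: ..."]}

def writer_step(state: dict) -> dict:
    # Reads the findings channel directly; no message parsing needed.
    findings = state.get("research_findings", [])
    return {"final_output": f"Article based on {len(findings)} findings: ..."}

def reviewer_step(state: dict) -> dict:
    # Reads the draft channel and writes comments to its own channel.
    comments = []
    if "..." in state.get("final_output", ""):
        comments.append("Consider adding more detail to section 2.")
    return {"review_comments": comments}

state: dict = {"messages": []}
for step in (researcher_step, writer_step, reviewer_step):
    state.update(step(state))  # merge each partial update into shared state

print(state["final_output"])
```

If the workers instead had to extract findings from free-form chat messages, any wording change in one agent could silently break the next; dedicated channels make the contract explicit.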
5. Prebuilt Multi-Agent Helpers
LangGraph provides prebuilt utilities that simplify common multi-agent patterns. The create_react_agent function, for instance, builds a complete tool-calling agent in a single line, which you can then embed as a subgraph.
Use create_react_agent to build specialized agents quickly, as shown earlier with search_agent and math_agent. Each prebuilt agent is a compiled graph that can be embedded directly as a subgraph node. If you are evaluating multi-agent frameworks, Appendix N covers CrewAI, which provides a higher-level API for agent teams with built-in role definitions, task delegation, and collaboration protocols. LangGraph gives you more control over the execution graph at the cost of more boilerplate, while CrewAI prioritizes ease of setup for common team patterns.
6. LangGraph Platform Deployment
LangGraph Platform (formerly LangGraph Cloud) provides a managed runtime for deploying LangGraph agents as HTTP services. It handles checkpointing, streaming, background execution, and horizontal scaling so you can focus on agent logic rather than infrastructure.
To deploy a graph to LangGraph Platform, you define a langgraph.json configuration file that specifies your graph entry points.

# langgraph.json
{
  "dependencies": ["langchain-openai", "langgraph"],
  "graphs": {
    "supervisor": "./agents/supervisor.py:supervisor_graph",
    "research": "./agents/research.py:research_subgraph"
  },
  "env": ".env"
}

Each entry in graphs maps a name to a Python module path and the compiled graph variable; the platform uses this to expose each graph as an API endpoint. Key features of LangGraph Platform include:
- Managed checkpointing. The platform handles all persistence automatically using a built-in PostgreSQL backend.
- Streaming API. Clients receive real-time updates as nodes execute, with support for both server-sent events and WebSocket protocols.
- Background runs. Long-running graphs execute asynchronously. Clients poll for results or receive webhooks on completion.
- Cron scheduling. Graphs can be triggered on a schedule for recurring tasks such as daily report generation.
- Studio integration. LangGraph Studio provides a visual debugger that connects to the platform, letting you inspect state, replay checkpoints, and step through execution in a graphical interface.
You can develop and test your graphs locally using MemorySaver and then deploy the same code to LangGraph Platform without changes. The platform automatically provisions the persistence layer, API endpoints, and scaling infrastructure. The langgraph dev CLI command runs a local development server that mirrors the platform's API for testing.
7. Design Guidelines for Multi-Agent Systems
Building effective multi-agent systems requires careful architectural decisions. Keep these guidelines in mind:
- Single responsibility. Each agent should have a focused role with a small, relevant tool set. An agent that does everything is just a monolithic agent with extra overhead.
- Explicit contracts. Define clear state channels for inter-agent communication. Avoid relying on message parsing for structured data exchange.
- Bounded loops. Always set recursion limits on supervisor patterns. Without them, a supervisor can delegate indefinitely.
- Independent testing. Build and test each subgraph in isolation before composing them. This makes debugging far easier when issues arise in the composed system.
- Graceful degradation. If a worker agent fails, the supervisor should handle the failure (retry, skip, or escalate) rather than letting the entire graph crash.
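Two of these guidelines, bounded loops and graceful degradation, can be sketched in plain Python. run_supervised below is an illustrative stand-in for a supervisor loop, not a LangGraph API: it caps the number of delegation steps and retries a failing worker once before skipping it instead of crashing the run.

```python
def run_supervised(workers: dict, plan, state: dict,
                   max_steps: int = 10, max_retries: int = 1) -> dict:
    """Run workers chosen by `plan` until it returns None or max_steps is hit."""
    for _ in range(max_steps):          # bounded loop: never delegate forever
        name = plan(state)
        if name is None:                # supervisor decided to finish
            break
        attempts = 0
        while True:
            try:
                state = {**state, **workers[name](state)}
                break
            except Exception:
                attempts += 1
                if attempts > max_retries:
                    state.setdefault("failures", []).append(name)
                    break               # graceful degradation: skip this worker
    return state

calls = {"count": 0}

def flaky_researcher(state):
    # Fails on the first call, succeeds on the retry.
    calls["count"] += 1
    if calls["count"] == 1:
        raise RuntimeError("transient error")
    return {"notes": "ok"}

def plan(state):
    return None if "notes" in state else "researcher"

final = run_supervised({"researcher": flaky_researcher}, plan, {})
print(final["notes"])
```

In a real graph the recursion_limit config plays the role of max_steps, and retry or escalation logic lives in the worker nodes or the supervisor itself.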
8. Summary
This section covered multi-agent architectures in LangGraph. You learned how to compose subgraphs into parent graphs, build supervisor-based teams that delegate iteratively, define structured communication channels between agents, use prebuilt helpers for rapid agent creation, and deploy multi-agent systems to LangGraph Platform. Combined with the graph fundamentals from Section M.1, conditional routing from Section M.2, human oversight from Section M.3, and persistence from Section M.4, you now have a complete toolkit for building production-grade agent systems with LangGraph.