Building Conversational AI with LLMs and Agents
Appendix M: LangGraph: Stateful Agent Workflows

Multi-Agent Graphs and Subgraphs

Big Picture

Complex tasks often exceed the capabilities of a single agent. LangGraph supports multi-agent architectures through subgraph composition, supervisor patterns, and shared state. This section shows you how to build teams of specialized agents that collaborate within a single graph, communicate through well-defined interfaces, and scale to production using LangGraph Platform.

1. Why Multi-Agent Systems?

A single agent with many tools can become unwieldy. As the tool set grows, the LLM's ability to select the right tool degrades, and the system prompt becomes a sprawling instruction manual. Multi-agent architectures solve this by decomposing complex tasks into specialized sub-agents, each with a focused tool set and system prompt. The result is better tool selection, clearer reasoning, and more maintainable code.

LangGraph provides two main patterns for multi-agent systems: subgraph composition (embedding one graph inside another) and the supervisor pattern (a coordinating agent that delegates to specialized workers).

2. Subgraph Composition

A subgraph is a compiled LangGraph graph used as a node inside a parent graph. The parent graph invokes the subgraph like any other node, passing in state and receiving the subgraph's output. This enables modular design: each subgraph can be developed, tested, and debugged independently.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

# Shared state type
class TeamState(TypedDict):
    messages: Annotated[list, add_messages]
    research_notes: str
    draft: str

llm = ChatOpenAI(model="gpt-4o-mini")

# --- Research subgraph ---
class ResearchState(TypedDict):
    messages: Annotated[list, add_messages]
    research_notes: str

def gather_sources(state: ResearchState) -> dict:
    sys_msg = SystemMessage(content="Find relevant sources for the user's topic.")
    resp = llm.invoke([sys_msg] + state["messages"])
    return {"research_notes": resp.content}

research_builder = StateGraph(ResearchState)
research_builder.add_node("gather", gather_sources)
research_builder.add_edge(START, "gather")
research_builder.add_edge("gather", END)
research_subgraph = research_builder.compile()

# --- Writing subgraph ---
class WritingState(TypedDict):
    messages: Annotated[list, add_messages]
    research_notes: str
    draft: str

def write_draft(state: WritingState) -> dict:
    notes = state.get("research_notes", "No notes available.")
    sys_msg = SystemMessage(
        content=f"Write a short article using these research notes:\n{notes}"
    )
    resp = llm.invoke([sys_msg] + state["messages"])
    return {"draft": resp.content}

writing_builder = StateGraph(WritingState)
writing_builder.add_node("write", write_draft)
writing_builder.add_edge(START, "write")
writing_builder.add_edge("write", END)
writing_subgraph = writing_builder.compile()

Code Fragment M.5.1: Two independent subgraphs: one for research and one for writing. Each has its own state type, nodes, and edges. They can be tested in isolation before being composed into a parent graph.

Now compose these subgraphs into a parent graph that runs research first and then writing.

# --- Parent graph ---
parent_builder = StateGraph(TeamState)
parent_builder.add_node("research", research_subgraph)
parent_builder.add_node("writing", writing_subgraph)
parent_builder.add_edge(START, "research")
parent_builder.add_edge("research", "writing")
parent_builder.add_edge("writing", END)

team_graph = parent_builder.compile()

result = team_graph.invoke({
    "messages": [HumanMessage(content="Write about quantum computing advances.")]
})
print("Research:", result["research_notes"][:200])
print("Draft:", result["draft"][:200])
Code Fragment M.5.2: Composing the research and writing subgraphs into a parent graph. The research_notes channel flows from the research subgraph to the writing subgraph through the shared parent state.

Running the composed graph produces output like the following (truncated by the [:200] slices):

Research: Quantum computing has seen significant advances in 2025. Google's Willow chip achieved 105 qubits with error correction below the fault-tolerant threshold. IBM's Condor processor ...
Draft: # Quantum Computing: A New Era of Computation Quantum computing is rapidly transitioning from laboratory curiosity to practical tool. In 2025, several breakthroughs have brought us closer ...
┌───────┐    ┌────────────────────┐    ┌────────────────────┐    ┌─────┐
│ START │───▶│  research (sub)    │───▶│  writing (sub)     │───▶│ END │
└───────┘    │  ┌──────────────┐  │    │  ┌──────────────┐  │    └─────┘
             │  │   gather     │  │    │  │    write     │  │
             │  └──────────────┘  │    │  └──────────────┘  │
             └────────────────────┘    └────────────────────┘
                                  │
                       research_notes flows through

Figure M.5.1: Subgraph composition. Each box with "(sub)" is an entire compiled graph embedded as a single node in the parent. State channels shared between parent and child (such as research_notes) flow automatically.
        
Note: State Mapping Between Parent and Subgraph

When a subgraph's state type shares key names with the parent state, LangGraph maps them automatically. If the key names differ, you can wrap the subgraph in a regular function node that performs the mapping manually. This keeps subgraphs decoupled from the parent's state schema.
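As a concrete sketch of the manual-mapping approach, suppose the parent stores notes under a notes key while the subgraph expects research_notes. The key names and the research_node wrapper below are illustrative, not LangGraph APIs, and the subgraph is stubbed as a plain callable so only the mapping logic is shown; in a real graph the wrapper would call research_subgraph.invoke(...) and be registered with parent_builder.add_node("research", research_node).

```python
def stub_research_subgraph(state: dict) -> dict:
    # Stands in for research_subgraph.invoke(...); uses the subgraph's key names.
    return {"research_notes": f"notes on: {state['messages'][-1]}"}

def research_node(parent_state: dict) -> dict:
    # Map parent keys -> subgraph keys on the way in...
    sub_input = {"messages": parent_state["messages"]}
    sub_output = stub_research_subgraph(sub_input)
    # ...and subgraph keys -> parent keys on the way out.
    return {"notes": sub_output["research_notes"]}

result = research_node({"messages": ["quantum computing"]})
print(result)  # {'notes': 'notes on: quantum computing'}
```

Because the mapping lives entirely inside the wrapper, the subgraph never needs to know what the parent calls its channels.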

3. The Supervisor Pattern

In the supervisor pattern, a coordinating agent (the supervisor) examines the current task and delegates work to specialized worker agents. The supervisor runs in a loop: it reads the current state, decides which worker to call next (or whether to finish), and routes execution accordingly.

from typing import TypedDict, Literal, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

class SupervisorState(TypedDict):
    messages: Annotated[list, add_messages]
    next_worker: str

llm = ChatOpenAI(model="gpt-4o-mini")

def supervisor(state: SupervisorState) -> dict:
    """Decide which worker to call next, or finish."""
    sys_msg = SystemMessage(
        content="You are a supervisor managing a research team. "
                "Based on the conversation, decide the next step. "
                "Reply with exactly one of: 'researcher', 'writer', 'FINISH'."
    )
    resp = llm.invoke([sys_msg] + state["messages"])
    return {"next_worker": resp.content.strip()}

def researcher(state: SupervisorState) -> dict:
    sys_msg = SystemMessage(content="You are a research specialist. Provide facts.")
    resp = llm.invoke([sys_msg] + state["messages"])
    return {"messages": [resp]}

def writer(state: SupervisorState) -> dict:
    sys_msg = SystemMessage(content="You are a writing specialist. Draft content.")
    resp = llm.invoke([sys_msg] + state["messages"])
    return {"messages": [resp]}

def route_supervisor(state: SupervisorState) -> Literal["researcher", "writer", "__end__"]:
    next_step = state.get("next_worker", "FINISH")
    if "researcher" in next_step.lower():
        return "researcher"
    elif "writer" in next_step.lower():
        return "writer"
    return END

builder = StateGraph(SupervisorState)
builder.add_node("supervisor", supervisor)
builder.add_node("researcher", researcher)
builder.add_node("writer", writer)

builder.add_edge(START, "supervisor")
builder.add_conditional_edges("supervisor", route_supervisor)
builder.add_edge("researcher", "supervisor")  # report back
builder.add_edge("writer", "supervisor")      # report back

supervisor_graph = builder.compile()

result = supervisor_graph.invoke(
    {"messages": [HumanMessage(content="Write a blog post about climate change.")]},
    config={"recursion_limit": 20}
)
print(result["messages"][-1].content)
Code Fragment M.5.3: The supervisor pattern. The supervisor node decides which worker to invoke. Each worker reports back to the supervisor, which can delegate further work or end the workflow. The recursion limit prevents infinite delegation loops.

Based on the research and writing, here is the completed blog post about climate change. The article covers recent scientific findings, policy developments, and actionable steps for organizations to reduce their carbon footprint...
              ┌─────────────┐
         ┌───▶│ researcher  │───┐
         │    └─────────────┘   │
┌───────────┐                   │   ┌─────┐
│ supervisor│◀──────────────────┤──▶│ END │
└───────────┘                   │   └─────┘
         │    ┌─────────────┐   │
         └───▶│   writer    │───┘
              └─────────────┘

Figure M.5.2: The supervisor pattern. Workers cycle back to the supervisor after completing their tasks. The supervisor decides when to terminate the loop.
        
Key Insight: Supervisor versus Flat Routing

The supervisor pattern differs from flat conditional routing (covered in Section M.2) in a critical way: the supervisor can invoke workers multiple times and in any order. A flat router makes a single decision and terminates. The supervisor loop enables iterative refinement where the researcher gathers more data, the writer revises, and the supervisor coordinates until the output meets quality standards.
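The control-flow difference is easy to see in an LLM-free toy. The function below is an illustrative stand-in, not part of LangGraph: it replays a scripted sequence of supervisor decisions, showing that the loop can visit the same worker repeatedly and in any order before finishing, whereas a flat router would make exactly one such decision.

```python
def run_supervisor_loop(decisions, recursion_limit=20):
    """Follow a scripted sequence of supervisor decisions until FINISH."""
    visited = []
    for step, decision in enumerate(decisions):
        if step >= recursion_limit:
            raise RuntimeError("recursion limit hit")  # mirrors LangGraph's guard
        if decision == "FINISH":
            break
        visited.append(decision)  # delegate; control then returns to the supervisor
    return visited

# The supervisor may call each worker more than once, in any order:
order = run_supervisor_loop(["researcher", "writer", "researcher", "writer", "FINISH"])
print(order)  # ['researcher', 'writer', 'researcher', 'writer']
```

A flat router corresponds to a decisions list of length one; anything longer is what the supervisor loop adds.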

4. Shared State and Cross-Agent Communication

Agents in a multi-agent graph communicate through shared state channels. The simplest approach is a shared message list, but you can also define dedicated channels for structured data exchange.

from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class CollaborativeState(TypedDict):
    messages: Annotated[list, add_messages]
    research_findings: list[str]   # structured data channel
    review_comments: list[str]     # another structured channel
    final_output: str

def researcher_v2(state: CollaborativeState) -> dict:
    # Researcher writes to research_findings
    return {
        "research_findings": ["Finding 1: ...", "Finding 2: ..."],
        "messages": [("assistant", "Research complete. 2 findings added.")]
    }

def writer_v2(state: CollaborativeState) -> dict:
    # Writer reads research_findings and produces a draft
    findings = state.get("research_findings", [])
    draft = f"Article based on {len(findings)} findings: ..."
    return {
        "final_output": draft,
        "messages": [("assistant", "Draft written.")]
    }

def reviewer(state: CollaborativeState) -> dict:
    # Reviewer reads the draft and provides comments
    draft = state.get("final_output", "")
    return {
        "review_comments": ["Consider adding more detail to section 2."],
        "messages": [("assistant", "Review complete.")]
    }

Code Fragment M.5.4: Structured cross-agent communication through dedicated state channels. Each agent reads from and writes to specific channels, creating a clear data contract between agents.

5. Prebuilt Multi-Agent Helpers

LangGraph provides prebuilt utilities that simplify common multi-agent patterns. The create_react_agent function, for instance, builds a complete tool-calling agent in a single line, which you can then embed as a subgraph.

from langgraph.graph import StateGraph
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression."""
    # Note: eval is unsafe on untrusted input; used here for brevity only.
    return str(eval(expression))

# Create specialized agents with one line each
search_agent = create_react_agent(
    ChatOpenAI(model="gpt-4o-mini"),
    tools=[search_web],
    state_modifier="You are a web research specialist."
)

math_agent = create_react_agent(
    ChatOpenAI(model="gpt-4o-mini"),
    tools=[calculate],
    state_modifier="You are a math specialist."
)

# Use them as subgraphs in a parent graph
parent = StateGraph(TeamState)
parent.add_node("search_team", search_agent)
parent.add_node("math_team", math_agent)
# ... add edges and routing as needed

Code Fragment M.5.5: Using create_react_agent to build specialized agents quickly. Each prebuilt agent is a compiled graph that can be embedded directly as a subgraph node.
Note: Comparing with CrewAI

If you are evaluating multi-agent frameworks, Appendix N covers CrewAI, which provides a higher-level API for agent teams with built-in role definitions, task delegation, and collaboration protocols. LangGraph gives you more control over the execution graph at the cost of more boilerplate, while CrewAI prioritizes ease of setup for common team patterns.

6. LangGraph Platform Deployment

LangGraph Platform (formerly LangGraph Cloud) provides a managed runtime for deploying LangGraph agents as HTTP services. It handles checkpointing, streaming, background execution, and horizontal scaling so you can focus on agent logic rather than infrastructure.

To deploy a graph to LangGraph Platform, you define a langgraph.json configuration file that specifies your graph entry points.

# langgraph.json
{
    "dependencies": ["langchain-openai", "langgraph"],
    "graphs": {
        "supervisor": "./agents/supervisor.py:supervisor_graph",
        "research": "./agents/research.py:research_subgraph"
    },
    "env": ".env"
}

Code Fragment M.5.6: A langgraph.json configuration file for LangGraph Platform. Each entry in graphs maps a name to a Python module path and the compiled graph variable. The platform uses this to expose each graph as an API endpoint.

Key features of LangGraph Platform include:

- Managed persistence: checkpoints are stored for you, with no checkpointer setup in your code.
- Streaming: token and event streams are exposed over the HTTP API.
- Background execution: long-running agent jobs run asynchronously.
- Horizontal scaling: the runtime scales out as request volume grows.

Key Insight: Local Development, Cloud Deployment

You can develop and test your graphs locally using MemorySaver and then deploy the same code to LangGraph Platform without changes. The platform automatically provisions the persistence layer, API endpoints, and scaling infrastructure. The langgraph dev CLI command runs a local development server that mirrors the platform's API for testing.

7. Design Guidelines for Multi-Agent Systems

Building effective multi-agent systems requires careful architectural decisions. Keep these guidelines in mind:

- Keep each agent's tool set small and its system prompt focused; that is the main reason to split agents in the first place.
- Define explicit state channels as the data contract between agents, rather than relying solely on the shared message list.
- Develop and test each subgraph in isolation before composing it into the parent graph.
- Set a recursion limit on supervisor loops so delegation cannot cycle indefinitely.
- Prefer flat conditional routing when a single delegation decision suffices; reserve the supervisor loop for iterative refinement.

8. Summary

This section covered multi-agent architectures in LangGraph. You learned how to compose subgraphs into parent graphs, build supervisor-based teams that delegate iteratively, define structured communication channels between agents, use prebuilt helpers for rapid agent creation, and deploy multi-agent systems to LangGraph Platform. Combined with the graph fundamentals from Section M.1, conditional routing from Section M.2, human oversight from Section M.3, and persistence from Section M.4, you now have a complete toolkit for building production-grade agent systems with LangGraph.