Building Conversational AI with LLMs and Agents
Appendix M: LangGraph: Stateful Agent Workflows

Graph Fundamentals: Nodes, Edges, and State

Big Picture

LangGraph models agentic workflows as directed graphs. Every graph consists of nodes (functions that transform state), edges (transitions between nodes), and a shared state object that flows through the entire execution. Understanding these three primitives is the foundation for building reliable, observable agent systems. This section walks you through the StateGraph API, typed state definitions, reducers, and basic graph compilation and execution.

1. Why Graphs for Agents?

Traditional chain-based orchestration treats LLM pipelines as linear sequences. This works well for simple tasks, but real-world agents need to branch, loop, and coordinate multiple steps. LangGraph addresses this by exposing a graph abstraction where each step is a node and transitions are explicit edges. The result is a workflow that is easy to visualize, debug, and extend.

If you have already worked through the LangChain appendix (Appendix L), you will notice that LangGraph builds on top of LangChain's model and tool abstractions while replacing the implicit chain execution with an explicit, inspectable graph.

Key Insight: Graphs versus Chains

A chain is a special case of a graph: a linear sequence with no branches or cycles. LangGraph generalizes this so you can express retry loops, conditional routing, parallel fan-out, and human-in-the-loop patterns within a single, unified API.
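To make the special-case relationship concrete, here is a toy sketch in plain Python (an illustration only, not LangGraph's internals) that views a workflow as an adjacency list; a chain is simply the case where every node has exactly one successor:

```python
# Toy adjacency-list view of a workflow (not LangGraph's internals).
chain = {"START": ["a"], "a": ["b"], "b": ["END"]}

# A graph may branch and loop: 'retry' routes back to 'classify'.
agent_graph = {
    "START": ["classify"],
    "classify": ["respond", "retry"],
    "retry": ["classify"],
    "respond": ["END"],
}

def is_chain(workflow: dict) -> bool:
    """A chain is the special case with exactly one outgoing edge per node."""
    return all(len(successors) == 1 for successors in workflow.values())

print(is_chain(chain))        # True
print(is_chain(agent_graph))  # False
```

Anything expressible as a chain is therefore expressible as a graph, but not vice versa.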

2. Installing LangGraph

Before writing your first graph, install the core package and the OpenAI integration used in this appendix.

pip install langgraph langchain-openai
Code Fragment M.1.1: Installing LangGraph and the OpenAI provider. You can substitute langchain-anthropic or another provider if you prefer a different model backend.

LangGraph requires Python 3.9 or later. If you need help setting up a virtual environment, see Section D.2.

3. Defining State with TypedDict

Every LangGraph graph operates on a shared state object. The recommended way to define this state is with Python's TypedDict, which provides both documentation and type-checking support. Each key in the TypedDict represents a channel of data that nodes can read from and write to.

The following example defines a minimal conversational state containing a list of messages and a summary string.

from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    """State that flows through every node in our graph."""
    messages: Annotated[list, add_messages]  # append-only message list
    summary: str                              # latest conversation summary
Code Fragment M.1.2: A typed state definition using TypedDict. The Annotated wrapper on messages attaches a reducer, which controls how updates are merged.

3.1 State Channels

Each key in the state is called a channel. When a node returns a dictionary, LangGraph matches returned keys to channels and applies the update. Keys that the node does not return remain unchanged. This selective update model means nodes only need to produce the data they are responsible for.
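As a small sketch of this selective-update model (the state and node names here are illustrative, not part of the LangGraph API), a node responsible only for the summary channel returns just that key:

```python
from typing import TypedDict

class SimpleState(TypedDict):
    messages: list
    summary: str

def summarize(state: SimpleState) -> dict:
    """Writes only the 'summary' channel; 'messages' is left for other nodes."""
    text = " ".join(str(m) for m in state["messages"])
    return {"summary": text}

update = summarize({"messages": ["hello", "world"], "summary": ""})
print(update)  # {'summary': 'hello world'} -- no 'messages' key returned
```

Because the update omits messages, that channel is left exactly as it was.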

3.2 Reducers

By default, a returned value for a channel overwrites whatever was there before. Reducers change this behavior. The built-in add_messages reducer, for example, appends new messages to the existing list instead of replacing it. You can write custom reducers for any merge logic you need.
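Conceptually, applying a node's update works like the following sketch (an illustration of the semantics, not LangGraph's actual implementation):

```python
import operator

def apply_update(state: dict, update: dict, reducers: dict) -> dict:
    """Merge a node's partial update into the state, channel by channel."""
    merged = dict(state)
    for channel, value in update.items():
        reducer = reducers.get(channel)
        if reducer is None:
            merged[channel] = value  # default: overwrite
        else:
            merged[channel] = reducer(merged[channel], value)  # reducer merges
    return merged

state = {"messages": ["hi"], "status": "idle"}
state = apply_update(
    state,
    {"messages": ["hello"], "status": "done"},
    # List concatenation, loosely analogous to add_messages (which also
    # handles message IDs and deduplication).
    reducers={"messages": operator.add},
)
print(state)  # {'messages': ['hi', 'hello'], 'status': 'done'}
```

The channel with a reducer accumulates; the channel without one is simply replaced.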

The following code shows how to create a custom reducer that keeps only the last N items.

from typing import TypedDict, Annotated

def keep_last_n(existing: list, new: list, *, n: int = 10) -> list:
    """Reducer that appends new items, then trims to the last n."""
    combined = existing + new
    return combined[-n:]

class WindowedState(TypedDict):
    events: Annotated[list, keep_last_n]  # sliding window of events
    status: str                            # overwrite semantics (no reducer)
Code Fragment M.1.3: A custom reducer that implements a sliding window. When a node returns new events, keep_last_n appends them and trims the list to the most recent 10 items.
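A quick demonstration of the reducer's behavior, repeating the definition so the snippet is self-contained:

```python
def keep_last_n(existing: list, new: list, *, n: int = 10) -> list:
    """Reducer that appends new items, then trims to the last n."""
    combined = existing + new
    return combined[-n:]

window: list = []
window = keep_last_n(window, [1, 2, 3])           # -> [1, 2, 3]
window = keep_last_n(window, list(range(4, 12)))  # 11 items total, trimmed
print(window)  # [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

Once the combined list exceeds ten items, the oldest entries fall off the front of the window.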
Warning: Mutable Default State

Never use mutable default values (such as [] or {}) directly in your TypedDict. Python shares mutable defaults across instances, which can cause state to leak between graph invocations. Instead, let LangGraph initialize each channel from the input you pass at invocation time.
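The underlying Python hazard is easiest to see with a default argument, where the same shared-mutable trap applies (illustrative function names):

```python
def log_event_buggy(event, buffer=[]):   # one shared list for every call!
    buffer.append(event)
    return buffer

def log_event_safe(event, buffer=None):  # fresh list per call
    buffer = [] if buffer is None else buffer
    buffer.append(event)
    return buffer

print(log_event_buggy("a"))  # ['a']
print(log_event_buggy("b"))  # ['a', 'b']  -- state leaked between calls
print(log_event_safe("a"))   # ['a']
print(log_event_safe("b"))   # ['b']       -- no leakage
```

The default list is created once, at function definition time, so every call to the buggy version mutates the same object.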

4. Building Your First Graph

With the state defined, you can construct a graph using the StateGraph class. The workflow has three steps: add nodes, add edges, and compile.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI

# 1. Define the state
class ChatState(TypedDict):
    messages: Annotated[list, add_messages]

# 2. Create node functions
llm = ChatOpenAI(model="gpt-4o-mini")

def chatbot(state: ChatState) -> dict:
    """Call the LLM with the current message history."""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# 3. Build the graph
graph_builder = StateGraph(ChatState)
graph_builder.add_node("chatbot", chatbot)

# 4. Wire up the edges
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)

# 5. Compile into a runnable
graph = graph_builder.compile()
Code Fragment M.1.4: A minimal single-node graph. The START sentinel marks the entry point and END marks the exit. After compilation, the graph is a standard LangChain Runnable.

4.1 Nodes

A node is any Python callable that receives the current state and returns a partial state update (a dictionary). Nodes can call LLMs, run tools, query databases, or perform arbitrary computation. The only requirement is that the returned dictionary keys must match channels declared in the state.
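For instance, a node need not call an LLM at all. This sketch (illustrative names, assuming a state with messages and summary channels) performs plain computation and returns a partial update:

```python
def count_words(state: dict) -> dict:
    """A pure-Python node: summarize the size of the message history."""
    total = sum(len(str(m).split()) for m in state["messages"])
    return {"summary": f"{total} words so far"}

print(count_words({"messages": ["hello world", "how are you"]}))
# {'summary': '5 words so far'}
```

Because it is an ordinary callable, a node like this can be unit-tested in isolation, without building a graph at all.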

4.2 Edges

Edges define the transitions between nodes. The simplest form, add_edge(a, b), creates an unconditional transition: after node a finishes, node b runs. The special constants START and END represent the entry and exit points of the graph. Conditional edges, covered in Section M.2, allow dynamic routing based on state.

4.3 Compilation

Calling compile() validates the graph structure, checks for unreachable nodes, and returns an executable CompiledGraph. This compiled object implements the LangChain Runnable interface, so you can call invoke, stream, batch, and ainvoke on it.

5. Running and Streaming a Graph

Once compiled, a graph can be invoked synchronously or streamed token by token. The following example demonstrates both patterns.

from langchain_core.messages import HumanMessage

# Synchronous invocation
result = graph.invoke({
    "messages": [HumanMessage(content="What is LangGraph?")]
})
print(result["messages"][-1].content)

# Streaming: yields state updates after each node
for event in graph.stream({
    "messages": [HumanMessage(content="Explain graph theory briefly.")]
}):
    for node_name, node_output in event.items():
        print(f"[{node_name}]", node_output["messages"][-1].content)
Code Fragment M.1.5: Invoking and streaming a compiled graph. The stream method yields one event per node execution, making it straightforward to display incremental progress in a UI.

A typical run prints output along these lines:

LangGraph is a library for building stateful, multi-step agent workflows as directed graphs. It extends LangChain by providing explicit control flow through nodes and edges, with support for cycles, branching, and persistence.

[chatbot] Graph theory is a branch of mathematics that studies the properties and relationships of graphs, which are structures made up of vertices (nodes) connected by edges (links). It has applications in computer science, network analysis, social networks, and optimization problems.

Note: Async Support

Every compiled graph also exposes ainvoke and astream for use with Python's asyncio. If your application already uses an async web framework (FastAPI, for example), prefer the async variants to avoid blocking the event loop.
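For example, several independent graph runs can be awaited concurrently with asyncio.gather. In this self-contained sketch, fake_ainvoke stands in for a compiled graph's ainvoke method:

```python
import asyncio

async def fake_ainvoke(inputs: dict) -> dict:
    """Stand-in for compiled_graph.ainvoke."""
    await asyncio.sleep(0)  # yield control, as a real LLM call would
    return {"echo": inputs["text"]}

async def main() -> list:
    # Fan out three invocations concurrently instead of awaiting one by one.
    return await asyncio.gather(*(
        fake_ainvoke({"text": t}) for t in ("a", "b", "c")
    ))

results = asyncio.run(main())
print(results)  # [{'echo': 'a'}, {'echo': 'b'}, {'echo': 'c'}]
```

gather preserves input order, so results line up with the requests regardless of which invocation finishes first.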

6. A Multi-Node Example

Real graphs have more than one node. The example below builds a two-step pipeline: the first node classifies user intent, and the second node generates a response based on that classification.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

class IntentState(TypedDict):
    messages: Annotated[list, add_messages]
    intent: str

llm = ChatOpenAI(model="gpt-4o-mini")

def classify_intent(state: IntentState) -> dict:
    """Classify the user's message as 'question' or 'command'."""
    sys_msg = SystemMessage(
        content="Classify the user message as 'question' or 'command'. "
                "Reply with one word only."
    )
    response = llm.invoke([sys_msg] + state["messages"])
    return {"intent": response.content.strip().lower()}

def generate_response(state: IntentState) -> dict:
    """Generate a response based on the classified intent."""
    intent = state["intent"]
    sys_msg = SystemMessage(
        content=f"The user's intent is '{intent}'. Respond helpfully."
    )
    response = llm.invoke([sys_msg] + state["messages"])
    return {"messages": [response]}

# Build the graph
builder = StateGraph(IntentState)
builder.add_node("classify", classify_intent)
builder.add_node("respond", generate_response)
builder.add_edge(START, "classify")
builder.add_edge("classify", "respond")
builder.add_edge("respond", END)

intent_graph = builder.compile()

# Run it
output = intent_graph.invoke({
    "messages": [HumanMessage(content="Summarize the last quarter's report.")]
})
print("Intent:", output["intent"])
print("Response:", output["messages"][-1].content)
Code Fragment M.1.6: A two-node graph that first classifies intent and then generates a tailored response. The intent channel, written by the first node, is read by the second.

A typical run prints:

Intent: command
Response: I'd be happy to help summarize the last quarter's report. Could you provide the report or point me to its location so I can generate a summary?

┌───────┐     ┌──────────┐     ┌──────────┐     ┌─────┐
│ START │────▶│ classify  │────▶│ respond  │────▶│ END │
└───────┘     └──────────┘     └──────────┘     └─────┘
                  │                   ▲
                  │  writes: intent   │  reads: intent
                  └───────────────────┘

Figure M.1.1: Dataflow in the two-node intent classification graph. The intent channel acts as a communication bridge between the classify and respond nodes.
Key Insight: Graphs Are Runnables

Because compiled graphs implement the Runnable interface, they compose naturally with other LangChain components. You can embed a graph inside a chain, use it as a tool, or nest one graph inside another (see Section M.5 on subgraphs).
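The composition idea can be sketched in plain Python (an illustration of the pattern, not LangChain's implementation; Step and its pipe operator are hypothetical stand-ins for Runnables):

```python
class Step:
    """Minimal stand-in for a Runnable: anything with .invoke composes."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):  # step_a | step_b, echoing LCEL's pipe syntax
        return Step(lambda x: other.invoke(self.invoke(x)))

pipeline = Step(lambda x: x + 1) | Step(lambda x: x * 2)
print(pipeline.invoke(3))  # 8
```

Because a compiled graph exposes the same invoke interface, it slots into a larger pipeline exactly like any other step.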

7. Visualizing Your Graph

LangGraph provides built-in visualization that renders the graph structure as a Mermaid diagram or a PNG image. This is invaluable for debugging and documentation.

# Print a Mermaid diagram (paste into mermaid.live to render)
print(intent_graph.get_graph().draw_mermaid())

# Or save as a PNG (requires graphviz installed)
from IPython.display import Image, display
display(Image(intent_graph.get_graph().draw_mermaid_png()))
Code Fragment M.1.7: Visualizing the graph structure. The Mermaid output can be rendered in any Markdown viewer that supports Mermaid diagrams; the PNG variant is useful in Jupyter notebooks.

The Mermaid source looks like this:

%%{init: {'flowchart': {'curve': 'linear'}}}%%
graph TD;
    __start__([__start__]) --> classify;
    classify --> respond;
    respond --> __end__([__end__]);

8. Summary

This section introduced the core building blocks of LangGraph. You learned how to define typed state with channels and reducers, create node functions that read and write state, connect nodes with edges, compile a graph into an executable runnable, and stream results incrementally. In Section M.2, you will see how conditional edges and cycles turn these simple graphs into powerful agentic loops.