Building Conversational AI with LLMs and Agents
Appendix M: LangGraph: Stateful Agent Workflows

Conditional Routing and Cycles

Big Picture

Static edges move execution along a fixed path, but real agents need to make decisions at runtime. LangGraph's conditional edges let you route execution based on state, while cycles allow nodes to loop back for retries, iterative refinement, or tool-calling loops. This section covers routing functions, branching logic, cycle construction, and safeguards against infinite loops.

1. Conditional Edges

A conditional edge replaces a fixed transition with a routing function. Instead of always going from node A to node B, the graph calls your function to decide which node runs next. The routing function receives the current state and returns the name of the target node (or a list of names for parallel fan-out).

The following example routes a user request to either a search tool or a calculator tool based on the LLM's classification.

from typing import TypedDict, Annotated, Literal
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

class RouterState(TypedDict):
    messages: Annotated[list, add_messages]
    route: str

llm = ChatOpenAI(model="gpt-4o-mini")

def classify(state: RouterState) -> dict:
    sys_msg = SystemMessage(
        content="Classify the request as 'search' or 'calculate'. "
                "Reply with one word."
    )
    resp = llm.invoke([sys_msg] + state["messages"])
    return {"route": resp.content.strip().lower()}

def search_tool(state: RouterState) -> dict:
    return {"messages": [("assistant", "Searching the web...")]}

def calculator_tool(state: RouterState) -> dict:
    return {"messages": [("assistant", "Running calculation...")]}

# Routing function: inspects state, returns the next node name
def pick_tool(state: RouterState) -> Literal["search", "calculator"]:
    if state["route"] == "search":
        return "search"
    return "calculator"

builder = StateGraph(RouterState)
builder.add_node("classify", classify)
builder.add_node("search", search_tool)
builder.add_node("calculator", calculator_tool)

builder.add_edge(START, "classify")
builder.add_conditional_edges("classify", pick_tool)
builder.add_edge("search", END)
builder.add_edge("calculator", END)

router_graph = builder.compile()
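Under the hood, a conditional edge is just a function call followed by a lookup: run the routing function on the current state, optionally translate its return value through a path map, and execute the node it names. The following pure-Python sketch illustrates those dispatch semantics; it is a simplified illustration, not LangGraph's actual internals.

```python
# Simplified sketch of conditional-edge dispatch (illustrative only,
# not LangGraph's real implementation).

def dispatch(state, path, nodes, path_map=None):
    """Pick and run the next node based on the routing function."""
    target = path(state)              # e.g. "search"
    if path_map is not None:
        target = path_map[target]     # translate to an actual node name
    return nodes[target](state)       # run the chosen node

nodes = {
    "search": lambda s: {"messages": ["Searching the web..."]},
    "calculator": lambda s: {"messages": ["Running calculation..."]},
}

update = dispatch({"route": "search"}, lambda s: s["route"], nodes)
print(update)  # {'messages': ['Searching the web...']}
```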
Code Fragment M.2.1: A conditional routing graph. The pick_tool function reads the route channel and returns the name of the node to execute next. LangGraph calls this function automatically after the classify node finishes.

                          ┌──────────┐
                     ┌───▶│  search   │───┐
┌───────┐  ┌──────────┐  └──────────┘   │  ┌─────┐
│ START │─▶│ classify  │                 ├─▶│ END │
└───────┘  └──────────┘  ┌──────────┐   │  └─────┘
                     └───▶│calculator│───┘
                          └──────────┘

Figure M.2.1: A conditional routing graph with two possible paths after classification. The routing function determines which branch executes.

1.1 The add_conditional_edges API

The full signature is add_conditional_edges(source, path, path_map=None). The path argument is your routing function. An optional path_map dictionary maps return values to node names when they differ. For example, if your function returns "web" but the node is named "search", you can write:

def pick_tool(state: RouterState) -> Literal["web", "math"]:
    return "web" if state["route"] == "search" else "math"

builder.add_conditional_edges(
    "classify",
    pick_tool,
    path_map={"web": "search", "math": "calculator"}
)
Code Fragment M.2.2: Using path_map to decouple routing function return values from node names. This is useful when routing logic is reused across graphs that name their nodes differently.
Note: Type Hints Help Validation

Adding a Literal return type to your routing function (e.g., Literal["search", "calculator"]) allows LangGraph to validate the graph at compile time, catching typos and missing nodes before the graph ever runs.

2. Cycles: The Agent Loop Pattern

The most powerful feature of LangGraph is its ability to express cycles. A cycle occurs when an edge points back to an earlier node, creating a loop. This is the foundation of the ReAct agent loop: the LLM decides whether to call a tool, the tool runs, and control returns to the LLM to decide again.

The following example builds a tool-calling agent that loops until the LLM decides it has a final answer.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    # Simulated response
    return f"The weather in {city} is 22°C and sunny."

tools = [get_weather]
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

def call_model(state: AgentState) -> dict:
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: AgentState) -> str:
    """Route back to tools if the LLM made tool calls, otherwise end."""
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return END

# Build the cyclic graph
builder = StateGraph(AgentState)
builder.add_node("agent", call_model)
builder.add_node("tools", ToolNode(tools))

builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", should_continue)
builder.add_edge("tools", "agent")  # cycle back

agent = builder.compile()
Code Fragment M.2.3: A ReAct-style agent loop. The should_continue function checks whether the LLM's last response contains tool calls. If so, the graph cycles back through the tools node; otherwise it exits.

┌───────┐     ┌────────┐  tool_calls?  ┌────────┐
│ START │────▶│ agent  │──────────────▶│ tools  │
└───────┘     └────────┘               └────────┘
                  │  ▲                      │
                  │  └──────────────────────┘
                  │  no tool_calls
                  ▼
              ┌─────┐
              │ END │
              └─────┘

Figure M.2.2: The ReAct agent loop. Execution cycles between agent and tools until the LLM stops requesting tool calls.

3. Loop Detection and Recursion Limits

Cycles introduce the risk of infinite loops. If the LLM keeps requesting tools without converging on a final answer, the graph will run indefinitely. LangGraph protects against this with a configurable recursion limit.

from langchain_core.messages import HumanMessage

# Set a recursion limit to prevent infinite loops
result = agent.invoke(
    {"messages": [HumanMessage(content="What is the weather in Paris?")]},
    config={"recursion_limit": 25}
)
print(result["messages"][-1].content)

The weather in Paris is currently 22°C and sunny. Enjoy the beautiful day!
Code Fragment M.2.4: Invoking the agent with a recursion limit of 25 steps. If the graph exceeds this limit, LangGraph raises a GraphRecursionError instead of looping forever.
Warning: Setting Recursion Limits

The default recursion limit is 25. For complex agents with many tool calls, you may need to increase this value. However, a very high limit can lead to runaway costs and long execution times. Monitor your agent's loop count during development and set the limit to a reasonable multiple of the expected maximum.

4. Retry Patterns

Cycles are also useful for implementing retry logic. If a node fails or produces unsatisfactory output, you can route back to it for another attempt. The following pattern retries an LLM call when the output fails validation.

import json

class RetryState(TypedDict):
    messages: Annotated[list, add_messages]
    parsed_output: dict
    attempts: int

def generate_json(state: RetryState) -> dict:
    """Ask the LLM to produce structured JSON."""
    response = llm.invoke(state["messages"])
    attempts = state.get("attempts", 0) + 1
    try:
        parsed = json.loads(response.content)
        return {"messages": [response], "parsed_output": parsed, "attempts": attempts}
    except json.JSONDecodeError:
        error_msg = ("assistant", "Output was not valid JSON. Trying again...")
        return {"messages": [response, error_msg], "parsed_output": {}, "attempts": attempts}

def check_output(state: RetryState) -> str:
    """Retry if parsing failed and we have attempts remaining."""
    if state["parsed_output"]:
        return END
    if state["attempts"] >= 3:
        return END  # give up after 3 tries
    return "generate"

builder = StateGraph(RetryState)
builder.add_node("generate", generate_json)
builder.add_edge(START, "generate")
builder.add_conditional_edges("generate", check_output)

retry_graph = builder.compile()
Code Fragment M.2.5: A retry loop that re-invokes the LLM up to three times if the response is not valid JSON. The attempts counter prevents infinite retries.

5. Parallel Branching

A routing function can return a list of node names to trigger parallel execution. LangGraph runs all target nodes concurrently and merges their state updates using the configured reducers before continuing.

def fan_out(state: RouterState) -> list[str]:
    """Run both tools in parallel."""
    return ["search", "calculator"]

builder = StateGraph(RouterState)
builder.add_node("classify", classify)
builder.add_node("search", search_tool)
builder.add_node("calculator", calculator_tool)
builder.add_node("merge", lambda state: {})  # no-op merge point

builder.add_edge(START, "classify")
builder.add_conditional_edges("classify", fan_out)
builder.add_edge("search", "merge")
builder.add_edge("calculator", "merge")
builder.add_edge("merge", END)

parallel_graph = builder.compile()
Code Fragment M.2.6: Parallel fan-out. The routing function returns both node names, causing them to run concurrently. A merge node collects results before the graph exits.
Key Insight: Reducers Matter for Parallel Branches

When parallel branches write to the same state channel, the reducer determines how the values are combined. Without a reducer, the last branch to finish overwrites earlier results. Use an append-style reducer (like add_messages) when you want to keep outputs from all branches.
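The overwrite-versus-append distinction can be demonstrated in a few lines of plain Python. This is a sketch of the reducer semantics only, assuming a simplified fold over branch updates, not LangGraph's actual channel implementation:

```python
import operator

# Simulate how a state channel combines updates from parallel branches.
# With no reducer, the last write wins; with an append-style reducer
# (here operator.add on lists), every branch's output is kept.

def apply_updates(current, updates, reducer=None):
    """Fold a sequence of branch updates into a single channel value."""
    for value in updates:
        if reducer is None:
            current = value                    # last writer overwrites
        else:
            current = reducer(current, value)  # values are combined
    return current

branch_updates = [["search: 3 results"], ["calculator: 42"]]

overwritten = apply_updates([], branch_updates)             # no reducer
merged = apply_updates([], branch_updates, operator.add)    # append reducer

print(overwritten)  # ['calculator: 42']
print(merged)       # ['search: 3 results', 'calculator: 42']
```

add_messages plays the role of the append reducer here: it keeps messages from every branch rather than letting the last-finishing branch clobber the channel.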

6. Summary

Conditional edges and cycles transform LangGraph from a simple pipeline tool into a full agent orchestrator. You learned how to route execution dynamically with add_conditional_edges, build ReAct-style tool-calling loops, guard against infinite cycles with recursion limits, implement retry patterns, and fan out to parallel branches. In Section M.3, you will see how to pause graph execution and inject human decisions into the loop.