Fully autonomous agents are powerful, but many production workflows require human oversight at critical decision points. LangGraph's interrupt mechanism lets you pause graph execution before or after a node, present intermediate results to a human, and resume execution with the human's input. This section covers interrupt configuration, approval workflows, tool-call confirmation, and techniques for injecting dynamic human input into a running graph.
1. Why Human-in-the-Loop?
Even the best LLMs make mistakes. When an agent is about to send an email, execute a database query, or make a purchase, a human checkpoint can prevent costly errors. LangGraph supports this natively through its interrupt system, which works hand-in-hand with the checkpointing mechanism described in Section M.4.
Common use cases include approving tool calls before execution, reviewing generated content before sending, correcting agent reasoning mid-workflow, and providing additional context that the agent cannot access on its own.
The interrupt mechanism relies on checkpointing to persist the graph's state at the moment of interruption. When the graph resumes, it loads state from the checkpoint rather than re-executing all previous nodes. You must provide a checkpointer when compiling the graph. See Section M.4 for detailed coverage of checkpointing backends.
2. Using interrupt_before
The simplest way to add human oversight is to interrupt before a sensitive node runs. The graph pauses, you inspect the state, and then resume to let the node execute.
```python
from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver
from langchain_core.messages import HumanMessage

class ApprovalState(TypedDict):
    messages: Annotated[list, add_messages]
    approved: bool

def draft_email(state: ApprovalState) -> dict:
    return {"messages": [("assistant", "Draft: Dear client, here is the report...")]}

def send_email(state: ApprovalState) -> dict:
    # In production, this would call an email API
    return {"messages": [("assistant", "Email sent successfully.")]}

builder = StateGraph(ApprovalState)
builder.add_node("draft", draft_email)
builder.add_node("send", send_email)
builder.add_edge(START, "draft")
builder.add_edge("draft", "send")
builder.add_edge("send", END)

# Compile with interrupt_before on the send node
checkpointer = MemorySaver()
graph = builder.compile(
    checkpointer=checkpointer,
    interrupt_before=["send"]
)
```
```python
thread_config = {"configurable": {"thread_id": "email-review-1"}}

# First invocation: runs "draft", then pauses before "send"
result = graph.invoke(
    {"messages": [HumanMessage(content="Send the quarterly report.")]},
    config=thread_config
)
print("Paused state:", result["messages"][-1])

# Inspect the state
snapshot = graph.get_state(thread_config)
print("Next node to run:", snapshot.next)

# Resume execution (the "send" node will now run)
final = graph.invoke(None, config=thread_config)
print("Final:", final["messages"][-1])
```
```python
# Variant: pause after the draft node instead of before the send node
graph_after = builder.compile(
    checkpointer=checkpointer,
    interrupt_after=["draft"]
)
```
With interrupt_before=["send"], execution pauses after the draft node completes and before send begins, giving a human the chance to review the draft; the graph_after variant uses interrupt_after=["draft"] to reach the same pause point. When you invoke the graph, it runs the draft node and then stops. The invocation code above shows how to inspect the paused state and resume execution.
```python
from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage

@tool
def delete_file(path: str) -> str:
    """Delete a file from the filesystem."""
    # Dangerous operation: requires approval
    return f"Deleted {path}"

@tool
def read_file(path: str) -> str:
    """Read a file from the filesystem."""
    return f"Contents of {path}: ..."

tools = [delete_file, read_file]
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)

class ToolApprovalState(TypedDict):
    messages: Annotated[list, add_messages]

def agent(state: ToolApprovalState) -> dict:
    return {"messages": [llm.invoke(state["messages"])]}

def should_continue(state: ToolApprovalState) -> str:
    if state["messages"][-1].tool_calls:
        return "tools"
    return END

builder = StateGraph(ToolApprovalState)
builder.add_node("agent", agent)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", should_continue)
builder.add_edge("tools", "agent")

# Interrupt before any tool execution
approved_agent = builder.compile(
    checkpointer=MemorySaver(),
    interrupt_before=["tools"]
)
```
To drive this graph, the first invoke runs until the interrupt; a second invoke with None resumes from the checkpoint.

3. Using interrupt_after
Sometimes you want the sensitive node to run first and then pause so the human can review its output. The interrupt_after parameter works identically to interrupt_before but pauses after the specified node completes.
```python
thread_config = {"configurable": {"thread_id": "approval-demo-1"}}

# Run the agent (it will pause before tools)
result = approved_agent.invoke(
    {"messages": [HumanMessage(content="Delete the file /tmp/old_data.csv")]},
    config=thread_config
)

# Inspect what the agent wants to do
snapshot = approved_agent.get_state(thread_config)
pending_calls = snapshot.values["messages"][-1].tool_calls
print("Pending tool calls:", pending_calls)

# Option A: Approve by resuming as-is
# final = approved_agent.invoke(None, config=thread_config)

# Option B: Modify the tool arguments before resuming
from langchain_core.messages import AIMessage

# Replace the last message with corrected tool calls; reusing the original
# message id makes the add_messages reducer replace it instead of appending
corrected_msg = AIMessage(
    content="",
    id=snapshot.values["messages"][-1].id,
    tool_calls=[{
        "id": pending_calls[0]["id"],
        "name": "read_file",  # downgrade to read instead of delete
        "args": {"path": "/tmp/old_data.csv"}
    }]
)
approved_agent.update_state(
    thread_config,
    {"messages": [corrected_msg]},
    as_node="agent"  # pretend this came from the agent node
)

# Resume with the corrected state
final = approved_agent.invoke(None, config=thread_config)
print(final["messages"][-1].content)
```
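The correction step in Option B can be factored into a plain function, which is easier to unit-test than inline graph surgery. The helper below is a hypothetical sketch, not a LangGraph API: it downgrades any pending delete_file call to a read_file call on the same path.

```python
# Hypothetical helper (not part of LangGraph): given the pending tool calls
# from a paused agent, downgrade destructive delete_file calls to safe
# read_file calls while leaving everything else untouched.
def downgrade_deletes(tool_calls: list[dict]) -> list[dict]:
    corrected = []
    for tc in tool_calls:
        if tc["name"] == "delete_file":
            corrected.append({
                "id": tc["id"],                       # keep the original call id
                "name": "read_file",                  # swap the tool
                "args": {"path": tc["args"]["path"]}  # keep the target path
            })
        else:
            corrected.append(tc)
    return corrected

pending = [{"id": "call_1", "name": "delete_file",
            "args": {"path": "/tmp/old_data.csv"}}]
print(downgrade_deletes(pending))
# [{'id': 'call_1', 'name': 'read_file', 'args': {'path': '/tmp/old_data.csv'}}]
```

The corrected list can then be wrapped in an AIMessage and written back with update_state, exactly as in Option B above.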
The graph_after variant compiled earlier uses interrupt_after to pause after the draft node. This produces the same pause point as the interrupt_before example but makes the intent clearer: the draft is complete and ready for review.

4. Tool Call Confirmation
A common pattern in agent systems is to let the LLM decide which tools to call, but require human approval before any tool actually executes. This combines the agent loop from Section M.2 with an interrupt on the tool node.
```python
from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command

class InteractiveState(TypedDict):
    messages: Annotated[list, add_messages]
    user_feedback: str

def generate_plan(state: InteractiveState) -> dict:
    return {"messages": [("assistant", "Plan: Step 1, Step 2, Step 3")]}

def review_plan(state: InteractiveState) -> dict:
    """Pause and ask the human for feedback on the plan."""
    feedback = interrupt(
        "Please review the plan above and provide feedback or type 'approve'."
    )
    return {"user_feedback": feedback}

def execute_plan(state: InteractiveState) -> dict:
    fb = state.get("user_feedback", "")
    if "approve" in fb.lower():
        return {"messages": [("assistant", "Executing the plan now.")]}
    return {"messages": [("assistant", f"Revising plan based on: {fb}")]}

builder = StateGraph(InteractiveState)
builder.add_node("generate", generate_plan)
builder.add_node("review", review_plan)
builder.add_node("execute", execute_plan)
builder.add_edge(START, "generate")
builder.add_edge("generate", "review")
builder.add_edge("review", "execute")
builder.add_edge("execute", END)

interactive_graph = builder.compile(checkpointer=MemorySaver())
```
When a graph pauses at an interrupt, you must either resume it (approve) or update the state to reject the action and redirect flow. Leaving a paused graph without resuming it will consume memory in the checkpointer. In production systems, implement a timeout that automatically rejects pending approvals after a configurable period.
5. Modifying State Before Resuming
The human reviewing an interrupted graph is not limited to a binary approve/reject decision. You can update the state before resuming, which lets the human correct tool arguments, add context messages, or change any channel value.
In the approval demo above, the human replaced a dangerous delete_file call with a safer read_file call. The as_node parameter tells LangGraph which node "produced" the updated state, so the graph resumes from the correct point.

6. Dynamic Human Input with the interrupt Function
For even more flexibility, you can call the interrupt() function directly inside a node. This pauses execution at that exact point and returns a value to the caller. When the graph resumes, the interrupt() call receives the human's response.
The review_plan node above calls the interrupt() function inline: the value passed to interrupt() is displayed to the human, and execution pauses at that exact point. When resuming, you pass a Command(resume=value) to the graph's invoke method; that value becomes the return value of the interrupt() call inside the paused node, allowing seamless two-way communication between the human and the graph.
┌──────────┐      ┌──────────┐      ┌──────────┐
│ generate │─────▶│  review  │─────▶│ execute  │
└──────────┘      └──────────┘      └──────────┘
                        │
                  ⏸ interrupt()
                        │
                 Human provides
                    feedback
                        │
                        ▼ resumes
The graph pauses inside the review node at the interrupt() call, waits for human feedback, and then continues to the execute node.

7. Best Practices for Human-in-the-Loop Workflows
When designing systems with human checkpoints, consider the following guidelines:
- Minimize interrupt frequency. Too many pauses create a frustrating user experience. Reserve interrupts for high-stakes decisions where errors are costly or irreversible.
- Provide clear context. The value you pass to interrupt() or the state visible at the pause point should give the human everything they need to make a decision.
- Implement timeouts. In production, set a maximum wait time for human responses. If the timeout expires, either reject the action or escalate to a different handler.
- Log all decisions. Record whether the human approved, rejected, or modified the action. This audit trail is valuable for debugging and compliance.
- Use persistent checkpointers. In-memory checkpointing (MemorySaver) loses state when the process restarts. For production workflows that may wait hours for human input, use a database-backed checkpointer such as PostgresSaver (see Section M.4).
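The logging guideline can be as simple as appending structured records to an audit store. A minimal sketch follows; the record fields are illustrative assumptions, not a LangGraph or compliance standard.

```python
# Minimal audit-log sketch for approval decisions (field names are assumed).
import time

def log_decision(log: list, thread_id: str, tool_call: dict,
                 decision: str, reviewer: str) -> None:
    """Append one decision record ('approved', 'rejected', or 'modified')."""
    log.append({
        "ts": time.time(),
        "thread_id": thread_id,
        "tool": tool_call.get("name"),
        "args": tool_call.get("args"),
        "decision": decision,
        "reviewer": reviewer,
    })

audit: list = []
log_decision(audit, "email-review-1",
             {"name": "delete_file", "args": {"path": "/tmp/old_data.csv"}},
             "rejected", "reviewer-42")
print(audit[-1]["decision"])  # rejected
```

In production the list would be replaced by a durable sink (database table, append-only log), but the record shape stays the same.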
Human oversight exists on a spectrum. At one end, every action requires approval. At the other end, the agent runs fully autonomously. Most production systems fall somewhere in between: the agent handles routine tasks independently but escalates risky or ambiguous situations to a human. LangGraph's interrupt system makes it easy to move along this spectrum by adding or removing interrupt points without restructuring the graph.
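One way to move along this spectrum is to gate escalation on risk rather than interrupting every tool call. A sketch, assuming a hand-maintained set of risky tool names (real systems might score risk instead); combined with the dynamic interrupt() function, a gate node could call interrupt() only when this check returns True:

```python
# Sketch: escalate to a human only for risky tool calls.
# RISKY_TOOLS is an assumption for illustration, not a LangGraph construct.
RISKY_TOOLS = {"delete_file", "send_email", "execute_sql"}

def needs_approval(tool_calls: list[dict]) -> bool:
    """True if any pending call touches a tool on the risky list."""
    return any(tc["name"] in RISKY_TOOLS for tc in tool_calls)

print(needs_approval([{"name": "read_file", "args": {}}]))    # False
print(needs_approval([{"name": "delete_file", "args": {}}]))  # True
```

This keeps routine reads fully autonomous while still forcing a pause on destructive actions, and tightening or loosening oversight is just an edit to the risky set.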
8. Summary
This section showed how to weave human decisions into LangGraph workflows. You learned to use interrupt_before and interrupt_after for pause points around nodes, build tool-call approval workflows, modify graph state before resuming, and use the inline interrupt() function for two-way human interaction. In Section M.4, you will explore the checkpointing system that makes all of this possible, along with persistence backends for production deployment.