Human-in-the-Loop
Pause execution for human input. Resume when ready.
@interrupt - Decorator to create a pause point (like @node, but pauses on None)
PauseInfo - Metadata about the pause (what value is surfaced, where to resume)
Handlers - Auto-resolve interrupts programmatically (testing, automation)
The Problem
Many workflows need human judgment at key points: approving a draft, confirming a destructive action, providing feedback on generated content. You need a way to pause the graph, surface a value to the user, and resume with their response.
This page focuses on the graph-level pause/resume pattern:
a run pauses at an interrupt
the caller inspects the pause payload
the caller later resumes the workflow with a response
That makes interrupts a strong fit for:
interactive apps
review UIs
multi-step assistants
application-managed human approval flows
For longer-lived event-driven orchestration, checkpointing gives you the persistence foundation, but more of the surrounding runtime shell still lives in your app today.
Basic Pause and Resume
The @interrupt decorator creates a pause point. Inputs come from the function signature, outputs from output_name. When the handler returns None, execution pauses.
The key flow:
Run the graph. Execution pauses at the interrupt.
Inspect result.pause.value to see what the user needs to review.
Resume by passing the response via result.pause.response_key.
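In code, that flow looks roughly like the following. The import path, graph construction, and run signature here are assumptions for illustration, not confirmed API; adapt them to your project.

```python
import asyncio

# Illustrative sketch: the import path and Graph construction are assumptions.
from hypergraph import node, interrupt, Graph, AsyncRunner

@node(output_name="draft")
def write_draft(topic: str) -> str:
    return f"A draft about {topic}"

@interrupt(output_name="approval")
def review(draft: str):
    return None  # always pause, surfacing `draft` to the caller

async def main():
    runner = AsyncRunner(Graph([write_draft, review]))

    values = {"topic": "pricing"}
    result = await runner.run(values)
    assert result.paused
    print(result.pause.value)  # the draft the user needs to review

    # Resume: replay with the response stored under result.pause.response_key
    values[result.pause.response_key] = "approved"
    result = await runner.run(values)

asyncio.run(main())
```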
This is intentionally different from event-driven workflow systems like Inngest, DBOS, or Restate, where the runtime often owns external event delivery directly. In hypergraph, the pause/resume primitive is already there; the application typically decides how the later response gets routed back into the run.
RunResult Properties
When paused, the RunResult has:
result.paused: True when execution is paused
result.pause.value: The input value surfaced to the caller
result.pause.node_name: Name of the interrupt node that paused
result.pause.output_param: Output parameter name
result.pause.response_key: Key to use in the values dict when resuming
Multi-Turn Chat with Human Input
Combine @interrupt with agentic loops for a multi-turn conversation where the user provides input each turn.
Key patterns:
.bind(messages=[]) pre-fills the seed input so .run({}) works with no values
Interrupt as first step: the graph pauses immediately, asking the user for input
emit="turn_done" + wait_for="turn_done": ensures should_continue sees the fully updated messages
Each resume replays the graph, providing all previous responses
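A sketch of that turn wiring, with decorator signatures and node names inferred from the parameters mentioned above (they are assumptions, not confirmed API):

```python
# Sketch only: decorator signatures are inferred from this page.
from hypergraph import node, interrupt

@interrupt(output_name="user_input")
def wait_for_user(messages: list):
    return None  # first step: pause immediately and ask the user

@node(output_name="messages", emit="turn_done")
def add_response(messages: list, user_input: str) -> list:
    return messages + [user_input]  # placeholder for the real turn logic

@node(output_name="decision", wait_for="turn_done")
def should_continue(messages: list) -> bool:
    # wait_for="turn_done" guarantees this sees the fully updated messages
    return len(messages) < 5

# graph.bind(messages=[]) then pre-fills the seed input so .run({}) works
```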
Alternative: Explicit Edges
The same chat loop without emit/wait_for signals. When multiple nodes produce messages, explicit edges declare the topology directly:
No ordering signals needed — the edge list makes execution order unambiguous. Both add_query and add_response produce messages, but the edges declare which runs first.
Persistent Multi-Turn with Checkpointer
The examples above are stateless — each resume replays from scratch, passing all previous responses. For production multi-turn workflows, use a checkpointer to persist state between calls. Each .run() only needs the new user input.
Key differences from the stateless pattern:
workflow_id identifies the conversation — same ID resumes, different ID starts fresh
shared=["messages"] accumulates the message list across the cycle
entrypoint="add_user_message" skips wait_for_user on the first call (no need to pause before the user has spoken)
Each .run() only needs the new user_input, not all previous messages
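Roughly, the persistent variant can look like this. The checkpointer class name and keyword placement below are hypothetical; only workflow_id, shared, and entrypoint come from the description above.

```python
# Sketch: FileCheckpointer and its import path are hypothetical stand-ins.
from hypergraph import AsyncRunner
from hypergraph.checkpoint import FileCheckpointer  # hypothetical import path

runner = AsyncRunner(
    chat_graph,
    checkpointer=FileCheckpointer("./state"),
    shared=["messages"],            # accumulate messages across the cycle
    entrypoint="add_user_message",  # skip wait_for_user on the first call
)

async def chat_turn(text: str):
    # Same workflow_id resumes the conversation; only the new input is passed.
    return await runner.run({"user_input": text}, workflow_id="conv-42")
```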
When the Conversation Ends
When should_continue routes to END, the workflow completes. Further .run() calls with the same workflow_id raise WorkflowAlreadyCompletedError:
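An application wrapper can translate that into its own error handling; the exception's import path here is assumed.

```python
# Sketch: the exception's import path is an assumption.
from hypergraph import WorkflowAlreadyCompletedError

async def send(runner, text: str, conversation_id: str):
    try:
        return await runner.run({"user_input": text}, workflow_id=conversation_id)
    except WorkflowAlreadyCompletedError:
        # This conversation already routed to END; callers must start a
        # new one under a fresh workflow_id.
        raise
```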
Inspecting Checkpoint History
The checkpointer records every node execution. You can inspect the full step log:
A two-turn conversation produces steps like:
Notice the interrupt appears twice per turn: first as paused (waiting), then as completed (resolved with the user's input on the next .run() call).
Full example: See examples/chat_app.py for a complete FastAPI integration with durable multi-turn chat, error handling, and a checkpoint inspection endpoint.
Auto-Resolve with Handlers
For testing or automation, the handler function can resolve the interrupt without human input: return a value to auto-resolve, or return None to pause.
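In a test, for example, the interrupt's handler can approve everything so the run never pauses (decorator signature assumed from this page):

```python
# Sketch: decorator signature inferred from this page.
from hypergraph import interrupt

@interrupt(output_name="approval")
def review(draft: str):
    return "approved"  # non-None: auto-resolved, execution never pauses
```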
Conditional Pause
Return None to pause, return a value to auto-resolve:
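For instance, a handler can auto-approve small requests and pause only for large ones. The handler body is plain Python; the refund scenario and threshold are illustrative.

```python
def approve_refund(amount: float):
    """Interrupt handler: auto-approve small refunds, pause for human
    review on anything larger (threshold is illustrative)."""
    if amount <= 50:
        return "auto-approved"  # non-None: interrupt resolves immediately
    return None                 # None: execution pauses here
```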
Async Handlers
Handlers can be async:
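An async handler follows the same None-to-pause convention; the policy check below is a simulated stand-in.

```python
import asyncio

async def approve_async(amount: float):
    # e.g. consult an external policy service before deciding (simulated here)
    await asyncio.sleep(0)
    return "auto-approved" if amount <= 50 else None
```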
Multiple Sequential Interrupts
A graph can have multiple interrupts. Execution pauses at each one in topological order.
Each resume call replays the graph from the start, providing previously-collected responses as input values. The interrupt detects that its output is already in the state and skips the pause.
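The replay loop can be driven generically from the application. The helper below is illustrative (ask_human is a placeholder for however your app collects answers):

```python
async def run_to_completion(runner, seed: dict, ask_human):
    # Replays the graph after each pause; interrupts whose outputs are
    # already present in `values` skip their pause on the next replay.
    values = dict(seed)
    result = await runner.run(values)
    while result.paused:
        values[result.pause.response_key] = ask_human(result.pause.value)
        result = await runner.run(values)
    return result
```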
Nested Graph Interrupts
Interrupts inside nested graphs propagate the pause to the outer graph. The node_name is prefixed with the nested graph's name.
The response_key uses dot-separated paths for nested interrupts: "inner.y" means the output y inside the graph node inner.
Think of response_key as a resume slot identifier. It is precise and stable, but it is primarily a runtime-facing detail. In user-facing applications, you will often wrap it behind your own inbox, form, or webhook layer.
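When building such a layer, the dot-separated convention can be unpacked with ordinary string handling. This helper is illustrative, not part of the library:

```python
def split_response_key(key: str):
    """Split a response_key like "inner.y" into the nested-graph path and
    the output name; top-level keys have an empty path."""
    *path, output = key.split(".")
    return path, output
```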
Checkpointed Pauses
For durable pause/resume across process restarts, pair interrupts with a checkpointer and a workflow_id:
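The shape across restarts is: pause in one process, persist, resume in another. A hedged sketch, with the checkpointer class name hypothetical:

```python
# Process A runs until the interrupt pauses; state is checkpointed.
# Process B (possibly after a restart) resumes by workflow_id.
from hypergraph import AsyncRunner
from hypergraph.checkpoint import FileCheckpointer  # hypothetical import path

runner = AsyncRunner(approval_graph, checkpointer=FileCheckpointer("./state"))

async def start():
    result = await runner.run({"doc": "contract.pdf"}, workflow_id="approval-7")
    return result.pause.response_key if result.paused else None

async def resume(response_key: str, answer: str):
    # With a checkpointer, only the new response is needed, not all inputs.
    return await runner.run({response_key: answer}, workflow_id="approval-7")
```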
This is the current durable HITL story:
checkpoint state stores the paused execution
workflow_id identifies the workflow instance
response_key identifies the waiting output slot
If your application needs approval inboxes, event matching, or webhook-driven resume, build those on top of this pause primitive today.
Runner Compatibility
Only AsyncRunner supports interrupts. SyncRunner raises IncompatibleRunnerError at runtime if the graph contains interrupt nodes.
Similarly, AsyncRunner.map() does not support interrupts — a graph with interrupts cannot be used with map().
The same restriction applies to GraphNode.map_over(...): a nested graph that contains interrupts cannot be wrapped in map_over(). If you need batched human-in-the-loop processing today, orchestrate the batch at the application layer rather than nesting an interrupting graph inside map_over().
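One way to orchestrate such a batch at the application layer is one pause/resume cycle per item, each under its own workflow_id. The names below are illustrative:

```python
async def review_batch(runner, items, ask_human):
    # One run (and one pause/resume cycle) per item, instead of nesting
    # an interrupting graph inside map_over().
    results = []
    for i, item in enumerate(items):
        values = {"item": item}
        result = await runner.run(values, workflow_id=f"review-{i}")
        while result.paused:
            values[result.pause.response_key] = ask_human(result.pause.value)
            result = await runner.run(values, workflow_id=f"review-{i}")
        results.append(result)
    return results
```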
With emit/wait_for
InterruptNode supports ordering signals like FunctionNode:
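For example (using the same emit/wait_for keywords this page shows for @node; the signal names are illustrative):

```python
# Sketch: keyword names taken from the @node parameters on this page.
from hypergraph import interrupt

@interrupt(output_name="approval", emit="approved", wait_for="draft_ready")
def review(draft: str):
    # runs only after the ordering-only "draft_ready" signal fires,
    # and emits "approved" once the interrupt is resolved
    return None
```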
API Reference
@interrupt Decorator
The @interrupt decorator is the preferred way to create an interrupt node. Like @node, inputs come from the function signature, outputs from output_name, and types from annotations.
output_name (str | tuple[str, ...]): Name(s) for output value(s) — required
rename_inputs (dict[str, str] | None): Mapping to rename inputs {old: new}
cache (bool): Enable result caching (default: False)
emit (str | tuple[str, ...] | None): Ordering-only output name(s)
wait_for (str | tuple[str, ...] | None): Ordering-only input name(s)
hide (bool): Whether to hide from visualization
InterruptNode Constructor
InterruptNode can also be created directly (like FunctionNode):
output_name is required — InterruptNode(func) without it raises TypeError.
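For example (constructor shape inferred from this page):

```python
# Sketch: constructor shape inferred from this page.
from hypergraph import InterruptNode

def review(draft: str):
    return None

approval = InterruptNode(review, output_name="approval")
# InterruptNode(review)  # TypeError: output_name is required
```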
Properties:
inputs (tuple[str, ...]): Input parameter names (from function signature)
outputs (tuple[str, ...]): All output names (data + emit)
data_outputs (tuple[str, ...]): Data-only outputs (excluding emit)
is_interrupt (bool): Always True
cache (bool): Whether caching is enabled (default: False)
hide (bool): Whether hidden from visualization
wait_for (tuple[str, ...]): Ordering-only inputs
is_async (bool): True if handler is async
is_generator (bool): True if handler yields
definition_hash (str): SHA256 hash of function source
Methods:
with_name(name) -> InterruptNode: New instance with a different name
with_inputs(**kwargs) -> InterruptNode: New instance with renamed inputs
with_outputs(**kwargs) -> InterruptNode: New instance with renamed outputs
PauseInfo
Properties:
response_key (str): Key to use when resuming (first output). Top-level: output_param. Nested: dot-separated path (e.g., "inner.decision")
response_keys (dict[str, str]): Map of all output names to resume keys (for multi-output interrupts)