Agentic Loops

Agentic workflows iterate until a goal is achieved. The graph cycles back to earlier nodes based on runtime conditions.

When to Use

  • Multi-turn conversations: User asks, system responds, user follows up

  • Iterative refinement: Generate, evaluate, improve until quality threshold

  • Tool-using agents: Call tools, observe results, decide next action

  • Retry patterns: Attempt, check result, retry if needed

The Core Pattern

Use @route to decide whether to continue or stop:

from hypergraph import Graph, node, route, END, SyncRunner

@node(output_name="draft")
def generate(prompt: str, feedback: str = "") -> str:
    """Generate content, incorporating any feedback."""
    full_prompt = f"{prompt}\n\nFeedback to address: {feedback}" if feedback else prompt
    return llm.generate(full_prompt)

@node(output_name="score")
def evaluate(draft: str) -> float:
    """Score the draft quality (0-1)."""
    return quality_model.score(draft)

@node(output_name="feedback")
def critique(draft: str, score: float) -> str:
    """Generate feedback for improvement."""
    if score >= 0.8:
        return ""  # Good enough
    return critic_model.generate(f"Critique this draft:\n{draft}")

@route(targets=["generate", END])
def should_continue(score: float, attempts: int) -> str:
    """Decide whether to continue refining."""
    if score >= 0.8:
        return END  # Quality achieved
    if attempts >= 5:
        return END  # Max attempts reached
    return "generate"  # Keep refining

@node(output_name="attempts")
def count_attempts(attempts: int = 0) -> int:
    """Track iteration count."""
    return attempts + 1

# Build the loop
refinement_loop = Graph([
    generate,
    evaluate,
    critique,
    count_attempts,
    should_continue,
])

# Run until done
runner = SyncRunner()
result = runner.run(refinement_loop, {"prompt": "Write a haiku about Python"})

print(f"Final draft: {result['draft']}")
print(f"Final score: {result['score']}")
print(f"Attempts: {result['attempts']}")

How It Works

  1. generate creates a draft

  2. evaluate scores it

  3. critique provides feedback

  4. should_continue decides:

    • Return END → graph completes

    • Return "generate" → loop back

The END Sentinel

END is a special value that terminates execution:
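For instance, a minimal counter loop (a sketch; tick and check are illustrative names) that iterates three times and then stops:

```python
from hypergraph import Graph, node, route, END, SyncRunner

@node(output_name="count")
def tick(count: int = 0) -> int:
    return count + 1

@route(targets=["tick", END])
def check(count: int) -> str:
    # Returning END terminates the graph; returning a node name loops back
    return END if count >= 3 else "tick"

loop = Graph([tick, check])
result = SyncRunner().run(loop, {"count": 0})
```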

Important: Always include END in your targets when you want the option to stop.

Multi-Turn Conversation

A conversation loop that continues until the user says goodbye:
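One way to wire this, sketched with the same placeholder llm client used above (node names and the goodbye check are illustrative):

```python
from hypergraph import Graph, node, route, END, SyncRunner

@node(output_name="user_input")
def get_input(history: list) -> str:
    return input("You: ")

@node(output_name="response")
def respond(user_input: str, history: list) -> str:
    # llm is a placeholder client, as in the examples above
    return llm.chat(history + [{"role": "user", "content": user_input}])

@node(output_name="history")
def accumulate(history: list, user_input: str, response: str) -> list:
    return history + [
        {"role": "user", "content": user_input},
        {"role": "assistant", "content": response},
    ]

@route(targets=["get_input", END])
def check_done(user_input: str) -> str:
    # Loop back for another turn until the user says goodbye
    return END if "goodbye" in user_input.lower() else "get_input"

chat_loop = Graph([get_input, respond, accumulate, check_done])
result = SyncRunner().run(chat_loop, {"history": []})
```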

Shared State for Chat Loops

When multiple nodes read and write messages, auto-wiring may struggle with ambiguity. Use shared to declare messages as shared state — auto-wiring handles everything else, and you only add ordering edges for the gaps.

See Shared State for details.

Tool-Using Agent

An agent that decides which tool to call:
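A sketch, assuming a TOOLS dict mapping tool names to callables and the same placeholder llm client (the choose_tool helper and the "finish" convention are illustrative):

```python
from hypergraph import Graph, node, route, END, SyncRunner

@node(output_name="action")
def decide(task: str, observations: list) -> dict:
    # Ask the model for the next step, e.g. {"tool": "search", "args": {...}}
    return llm.choose_tool(task, observations)

@node(output_name="observations")
def act(action: dict, observations: list) -> list:
    # Call the chosen tool and record what it returned
    result = TOOLS[action["tool"]](**action["args"])
    return observations + [result]

@route(targets=["decide", END])
def next_step(action: dict, observations: list) -> str:
    if action["tool"] == "finish" or len(observations) >= 10:
        return END  # Done, or safety cap reached
    return "decide"

agent = Graph([decide, act, next_step])
result = SyncRunner().run(agent, {"observations": [], "task": "..."})
```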

Quality Gate Pattern

Ensure output meets quality standards before proceeding:
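A sketch that gates a downstream publish step on the evaluator's score, reusing the generate/evaluate nodes from the core pattern (publish and the cms client are illustrative):

```python
from hypergraph import node, route

@node(output_name="published")
def publish(draft: str) -> str:
    # Downstream step that should only ever see vetted drafts
    return cms.publish(draft)

@route(targets=["publish", "generate"])
def quality_gate(score: float, attempts: int) -> str:
    # Block the downstream step until quality clears the bar,
    # with a hard cap so the gate cannot loop forever
    if score >= 0.9 or attempts >= 5:
        return "publish"
    return "generate"
```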

Ordering with emit/wait_for

In cyclic graphs, you sometimes need a node to wait for another node to finish — even when there's no direct data dependency. Use emit and wait_for to enforce execution order.

The problem: In a chat loop, should_continue reads messages. But accumulate also reads messages and produces the updated version. Without ordering, should_continue might see stale messages from the previous turn.

The fix: accumulate emits a signal when it finishes. should_continue waits for that signal.
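As a sketch, assuming emit and wait_for are keyword arguments on the decorators:

```python
from hypergraph import node, route, END

@node(output_name="messages", emit="turn_done")
def accumulate(messages: list, user_input: str, response: str) -> list:
    # Produces the updated messages and fires the "turn_done" signal
    return messages + [
        {"role": "user", "content": user_input},
        {"role": "assistant", "content": response},
    ]

@route(targets=["get_input", END], wait_for="turn_done")
def should_continue(messages: list) -> str:
    # Runs only after accumulate has finished this turn, so messages is fresh
    last = messages[-1]["content"].lower() if messages else ""
    return END if "goodbye" in last else "get_input"
```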

How it works:

  • emit="turn_done" declares an ordering-only output. A sentinel value is auto-produced when accumulate runs — your function doesn't return it.

  • wait_for="turn_done" declares an ordering-only input. should_continue won't run until turn_done exists and is fresh (produced since should_continue last ran).

  • emit names appear in node.outputs but not in node.data_outputs. They're filtered from the final result.

When to use emit/wait_for vs data edges:

  • If node B needs node A's output value → use a data edge (parameter matching)

  • If node B just needs to run after node A → use emit/wait_for

Validation: emit names must not overlap with output_name, and wait_for names must not overlap with function parameters. Referencing a nonexistent emit/output in wait_for raises GraphConfigError at build time.

Entry Points

When a parameter is both an input and output of a cycle (like history or iteration), it becomes an entrypoint parameter — an initial value needed to start the first iteration. Provide these in the values dict when calling runner.run():
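For example (a sketch; chat_loop is an illustrative graph whose cycle carries history, run with the SyncRunner from above):

```python
# history is both an input and an output of the cycle, so the first
# iteration needs an initial value supplied in the values dict
result = runner.run(chat_loop, {"history": []})
```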

You can check what entrypoints a graph has via graph.inputs.entrypoints. This returns a dict mapping node names to the cycle parameters they need:
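For example (chat_loop is an illustrative graph with a history cycle):

```python
# Maps node names to the cycle parameters they need as initial values
print(chat_loop.inputs.entrypoints)
```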

Pick ONE entrypoint per cycle and provide its parameters. For full details, see InputSpec.

Tracking State Across Iterations

Use a node to accumulate state:
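A sketch (record and the shape of the history entries are illustrative):

```python
from hypergraph import node

@node(output_name="history")
def record(history: list, draft: str, score: float) -> list:
    # Append this iteration's result to the running history
    return history + [{"draft": draft, "score": score}]
```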

Provide initial values when running:
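A sketch, assuming loop is a graph whose cycle carries history and runner is a SyncRunner as above:

```python
# Seed the cycle parameter so the first iteration can run
result = runner.run(loop, {
    "prompt": "Write a haiku about Python",
    "history": [],
})
```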

Shared Outputs in a Cycle

When multiple nodes produce the same output name in a cycle — like two nodes both producing messages — the graph needs to know their execution order. There are two approaches: ordering signals (emit/wait_for) and explicit edges.

With emit/wait_for

Use emit/wait_for to prove ordering between same-name producers. This keeps auto-inference for edge wiring while adding ordering constraints:
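A sketch, assuming emit and wait_for are keyword arguments on @node (node names are illustrative):

```python
from hypergraph import node

# Both nodes produce "messages"; the signal proves add_response runs second
@node(output_name="messages", emit="user_added")
def add_user_turn(messages: list, user_input: str) -> list:
    return messages + [{"role": "user", "content": user_input}]

@node(output_name="messages", wait_for="user_added")
def add_response(messages: list, response: str) -> list:
    return messages + [{"role": "assistant", "content": response}]
```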

With Explicit Edges

When cycles share common output names like messages, df, or state, explicit edges let you declare the topology directly instead of inventing signal names. Pass edges to Graph() to disable auto-inference and wire edges manually:
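A sketch, assuming nodes are referenced by name in edge tuples (the node names are illustrative):

```python
from hypergraph import Graph

# Passing edges disables auto-inference; the topology is declared directly
chat_loop = Graph(
    [get_input, respond, add_user_turn, add_response, should_continue],
    edges=[
        ("get_input", "respond"),                       # carries user_input
        ("get_input", "add_user_turn"),                 # carries user_input
        ("respond", "add_response"),                    # carries response
        ("add_user_turn", "add_response", "messages"),  # explicit value
        ("add_response", "should_continue", "messages"),
    ],
)
```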

Each edge is a (source, target) tuple. Values are auto-detected from the intersection of source outputs and target inputs. When there's no overlap, the edge becomes ordering-only (structural dependency, no data flow).

You can also use 3-tuples to specify values explicitly: (source, target, "messages") or (source, target, ["messages", "response"]).

When to use which approach:

  • DAGs (no cycles) → Auto-inference (no edges needed)

  • Cycles with unique output names → Auto-inference + emit/wait_for for ordering

  • Cycles with shared output names (messages, state) → Explicit edges

For more on the edges parameter, see the Graph API reference.

Preventing Infinite Loops

Hypergraph detects potential infinite loops at runtime.

Best practices:

  1. Always have a termination condition (max attempts, quality threshold)

  2. Include END in your route targets

  3. Track iteration count and bail out if needed
