Integrate with LLMs

Patterns for using OpenAI, Anthropic, and other LLM providers with hypergraph.

Anthropic Claude

Setup

```bash
pip install anthropic
```

```python
import os
from anthropic import Anthropic

client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))
```

Basic Message

```python
from hypergraph import node

@node(output_name="response")
def generate(prompt: str, system: str = "") -> str:
    """Generate a response using Claude Sonnet 4.5."""
    message = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        system=system,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```

Streaming
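
For long responses, stream tokens as they arrive instead of waiting for the full completion. A minimal sketch using the Anthropic SDK's `messages.stream()` helper; the node name is illustrative:

```python
import os
from anthropic import Anthropic
from hypergraph import node

client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))

@node(output_name="response")
def generate_streaming(prompt: str) -> str:
    """Stream a Claude response, emitting tokens as they arrive."""
    chunks = []
    with client.messages.stream(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    ) as stream:
        for text in stream.text_stream:
            print(text, end="", flush=True)  # incremental output
            chunks.append(text)
    return "".join(chunks)
```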

Multi-Turn Conversation
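
The Messages API is stateless, so carry the conversation history yourself. One sketch is a node that takes the history in and returns it extended by one exchange (`client` is the Anthropic client from Setup; names are illustrative):

```python
from hypergraph import node

@node(output_name="history")
def chat_turn(history: list, user_input: str) -> list:
    """Append the user's turn, call Claude with the full history,
    and return the history extended with the assistant's reply."""
    messages = history + [{"role": "user", "content": user_input}]
    reply = client.messages.create(  # `client` from Setup above
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=messages,
    )
    return messages + [{"role": "assistant", "content": reply.content[0].text}]
```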

Model Options

| Model | Use Case |
| --- | --- |
| `claude-opus-4-5-20251101` | Complex reasoning, analysis, coding |
| `claude-sonnet-4-5-20250929` | Balanced performance and cost |
| `claude-haiku-4-5` | Fast, cost-efficient for simple tasks |


OpenAI GPT

Setup
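
Install the SDK and create a client, mirroring the Anthropic setup above:

```python
# pip install openai
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
```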

Basic Response (Responses API)
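
A minimal node using the Responses API, analogous to the Claude example above (`client` is the OpenAI client from Setup):

```python
from hypergraph import node

@node(output_name="response")
def generate(prompt: str, instructions: str = "") -> str:
    """Generate a response using the OpenAI Responses API."""
    response = client.responses.create(
        model="gpt-5.2",
        instructions=instructions or None,  # system-style guidance
        input=prompt,
    )
    return response.output_text
```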

Streaming
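
With `stream=True`, the Responses API yields typed events; the text arrives as `response.output_text.delta` events. A sketch, reusing the `client` from Setup:

```python
from hypergraph import node

@node(output_name="response")
def generate_streaming(prompt: str) -> str:
    """Stream output text deltas from the Responses API."""
    chunks = []
    stream = client.responses.create(model="gpt-5.2", input=prompt, stream=True)
    for event in stream:
        if event.type == "response.output_text.delta":
            print(event.delta, end="", flush=True)  # incremental output
            chunks.append(event.delta)
    return "".join(chunks)
```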

Multi-Turn with State

The Responses API supports stateful conversations:
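
Rather than resending the whole history, pass the previous response's id and let OpenAI store the conversation server-side. A sketch (node and output names are illustrative):

```python
from hypergraph import node

@node(output_name="turn")
def continue_conversation(prompt: str, previous_id: str | None = None) -> dict:
    """Chain turns server-side: pass the previous response id and
    return the new id so the next turn can continue the thread."""
    response = client.responses.create(
        model="gpt-5.2",
        input=prompt,
        previous_response_id=previous_id,  # None starts a fresh conversation
    )
    return {"text": response.output_text, "response_id": response.id}
```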

With Tools
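
Function tools are declared with a JSON schema; when the model decides to call one, the response output contains a `function_call` item to dispatch yourself. A sketch with a hypothetical `get_weather` tool:

```python
tools = [{
    "type": "function",
    "name": "get_weather",  # hypothetical tool -- wire up your own implementation
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = client.responses.create(
    model="gpt-5.2",
    input="What's the weather in Paris?",
    tools=tools,
)

for item in response.output:
    if item.type == "function_call":
        print(item.name, item.arguments)  # dispatch, then send the result back
```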

Model Options

| Model | Use Case |
| --- | --- |
| `gpt-5.2` | Latest, best for coding and agentic tasks |
| `gpt-5-mini` | Faster, cost-efficient |
| `o3` | Reasoning model for complex problems |


RAG Pattern

Combine retrieval with LLM generation:
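
A two-node sketch: a retrieval node produces `context`, which flows into a generation node when the two are wired into a graph. `index.search` is a hypothetical stand-in for your vector store's query method, and `client` is the Anthropic client from Setup:

```python
from hypergraph import node

@node(output_name="context")
def retrieve(query: str, index) -> str:
    """Fetch the top-k passages for the query."""
    docs = index.search(query, k=3)  # hypothetical vector-store call
    return "\n\n".join(doc.text for doc in docs)

@node(output_name="answer")
def generate(query: str, context: str) -> str:
    """Ground the answer in the retrieved context."""
    message = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        system="Answer using only the provided context.",
        messages=[{
            "role": "user",
            "content": f"Context:\n{context}\n\nQuestion: {query}",
        }],
    )
    return message.content[0].text
```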


Structured Outputs

With Anthropic
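
A common pattern for structured output from Claude is to declare a tool whose `input_schema` is your target shape and force it with `tool_choice`; the model then "calls" the tool with schema-conforming arguments. A sketch with a hypothetical `record_summary` schema:

```python
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    tools=[{
        "name": "record_summary",  # hypothetical schema for illustration
        "description": "Record a structured summary of the text.",
        "input_schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "sentiment": {
                    "type": "string",
                    "enum": ["positive", "negative", "neutral"],
                },
            },
            "required": ["title", "sentiment"],
        },
    }],
    tool_choice={"type": "tool", "name": "record_summary"},  # force the tool
    messages=[{"role": "user", "content": "Summarize: the launch went well."}],
)

data = message.content[0].input  # dict conforming to the schema
```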

With OpenAI
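
The OpenAI SDK can parse responses directly into a Pydantic model via `responses.parse`. A sketch with a hypothetical `Summary` model (`client` from Setup):

```python
from pydantic import BaseModel

class Summary(BaseModel):
    title: str
    sentiment: str

response = client.responses.parse(
    model="gpt-5.2",
    input="Summarize: the launch went well.",
    text_format=Summary,
)

summary = response.output_parsed  # a validated Summary instance
```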


Error Handling
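
LLM APIs fail transiently (rate limits, connection drops), so retry those errors with exponential backoff and let everything else propagate. A minimal, SDK-agnostic sketch; the `with_retries` helper and its parameters are illustrative:

```python
import random
import time

def with_retries(fn, *, retriable=(Exception,), max_attempts=3, base_delay=1.0):
    """Call fn(), retrying retriable errors with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retriable:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error
            # 1s, 2s, 4s, ... scaled by random jitter to avoid thundering herds
            time.sleep(base_delay * 2 ** (attempt - 1) * (1 + random.random()))

# Usage with the Anthropic SDK (exception classes from the anthropic package):
# result = with_retries(
#     lambda: client.messages.create(...),
#     retriable=(anthropic.RateLimitError, anthropic.APIConnectionError),
# )
```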


Dependency Injection with .bind()

Best practice: Use .bind() to provide shared LLM clients at the graph level instead of global variables or function defaults.

Why use .bind() instead of function defaults?

  1. Shared state — Bound values are intentionally shared across runs (no deep-copy)

  2. Non-copyable objects — Many clients use thread locks internally and can't be deep-copied

  3. Testability — Easy to swap in mock clients for testing

  4. Lifecycle control — You manage when the client is created and destroyed
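
A sketch of the pattern: the node takes `client` as an ordinary input, and the graph binds one shared instance. The `Graph` constructor shown here is an assumption; adjust to your hypergraph version:

```python
from anthropic import Anthropic
from hypergraph import Graph, node  # `Graph` name assumed

@node(output_name="response")
def generate(prompt: str, client) -> str:
    """The client arrives as a bound input rather than a global."""
    message = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

# Bind the shared client once, at graph construction time.
graph = Graph([generate]).bind(client=Anthropic())
```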

Testing LLM Nodes

With .bind(), testing is straightforward:
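
Because the node body is a plain function and the client is just an input, you can exercise the generation logic with a mock client and no network calls. A sketch using `unittest.mock`; the function mirrors the Basic Message node above with `client` as a parameter:

```python
from unittest.mock import MagicMock

def generate(prompt: str, client) -> str:
    """Node body from Basic Message, with the client injected."""
    message = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

# Stub the Anthropic response shape: .content[0].text
mock_client = MagicMock()
mock_client.messages.create.return_value.content = [MagicMock(text="stubbed reply")]

assert generate("hello", client=mock_client) == "stubbed reply"
mock_client.messages.create.assert_called_once()
```

In a full graph, bind the mock instead of the real client (e.g. `graph.bind(client=mock_client)`) and run the graph as usual.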

What's Next?
