Batch Processing
The Pattern: Think Singular, Scale with Map
```python
# 1. Write logic for ONE item
@node(output_name="features")
def extract(document: str) -> dict:
    return analyze(document)

# 2. Build a graph
graph = Graph([extract])

# 3. Scale to many items
results = runner.map(graph, {"document": documents}, map_over="document")
```

Basic Usage
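The `runner.map` call above is this library's API; as a minimal sketch of the behavior it describes, a batch run is equivalent to calling the single-item function once per element and collecting the outputs in input order. Here `analyze` is a hypothetical stand-in for real per-document logic:

```python
# Hypothetical per-item logic standing in for real analysis.
def analyze(document: str) -> dict:
    return {"length": len(document), "words": len(document.split())}

def extract(document: str) -> dict:
    return analyze(document)

documents = ["first doc", "a second, longer document"]

# The batch run behaves like a plain comprehension over one-item calls,
# preserving input order in the results.
results = [extract(doc) for doc in documents]
```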
Map Over Multiple Parameters
Zip Mode (Default)
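As a sketch of zip-mode semantics (names here are illustrative, not the library's API): when several parameters are mapped at once, they are paired elementwise, exactly like Python's built-in `zip()`, so all mapped lists must have the same length:

```python
# Zip mode pairs mapped parameters elementwise: one call per aligned pair
# (documents[i], languages[i]).
def summarize(document: str, language: str) -> str:
    return f"{language}:{len(document)}"

documents = ["alpha", "beta gamma"]
languages = ["en", "de"]

# Two inputs of length 2 -> exactly two invocations.
results = [summarize(d, l) for d, l in zip(documents, languages)]
```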
Product Mode
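Product mode, by contrast, runs every combination of the mapped parameters. A plain-Python sketch of the semantics using `itertools.product` (the function names are illustrative):

```python
from itertools import product

def run(model: str, prompt: str) -> tuple:
    return (model, prompt)

models = ["small", "large"]
prompts = ["p1", "p2", "p3"]

# Product mode: len(models) * len(prompts) invocations, one per combination.
results = [run(m, p) for m, p in product(models, prompts)]
```

Use product mode for sweeps (every model against every prompt); use zip mode when the lists are already aligned pairs.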
Fixed Parameters
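Fixed parameters are passed unchanged to every invocation while the mapped parameter varies per item. A minimal sketch, with illustrative names:

```python
def translate(document: str, target_language: str) -> str:
    return f"{target_language}:{document}"

documents = ["hello", "world"]

# 'document' is mapped over; 'target_language' is fixed and shared by
# every invocation in the batch.
results = [translate(doc, target_language="fr") for doc in documents]
```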
Async Batch Processing
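For async nodes, batch items can run concurrently rather than one after another. A self-contained sketch of the idea using `asyncio.gather`, which also preserves input order in the output:

```python
import asyncio

async def extract(document: str) -> dict:
    await asyncio.sleep(0)  # placeholder for real async I/O (API call, DB read)
    return {"length": len(document)}

async def main() -> list:
    documents = ["a", "bb", "ccc"]
    # All items are scheduled concurrently; gather returns results in
    # the same order as the inputs.
    return await asyncio.gather(*(extract(d) for d in documents))

results = asyncio.run(main())
```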
Nested Graphs with Map
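Mapping over a nested graph means the inner pipeline runs as a unit for each batch item. A plain-Python sketch in which the "inner graph" is just a composition of per-item steps (all names illustrative):

```python
# Two "nodes" of an inner pipeline.
def clean(text: str) -> str:
    return text.strip().lower()

def count_words(text: str) -> int:
    return len(text.split())

def inner_graph(text: str) -> int:
    # The nested graph runs both steps, in order, for one item.
    return count_words(clean(text))

batch = ["  Hello World ", "one two three"]

# Mapping the nested graph: one full inner run per batch element.
results = [inner_graph(t) for t in batch]
```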
Working with MapResult
Error Handling
Fail-Fast (Default)
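Under fail-fast semantics, the first item that raises aborts the whole batch and the exception propagates; later items never run. A minimal sketch of that behavior:

```python
def risky(x: int) -> int:
    if x < 0:
        raise ValueError(f"negative input: {x}")
    return x * 2

items = [1, 2, -3, 4]
processed = []

# Fail-fast: the exception from the third item stops the batch;
# the fourth item is never attempted.
try:
    for x in items:
        processed.append(risky(x))
except ValueError:
    pass
```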
Continue on Error
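In continue-on-error mode, each failure is recorded against its input index and the remaining items still run, so one bad item does not discard the rest of the batch. A sketch of the semantics:

```python
def risky(x: int) -> int:
    if x < 0:
        raise ValueError(f"negative input: {x}")
    return x * 2

items = [1, -2, 3]
outputs, errors = [], {}

# Continue-on-error: record the failure for index 1 and keep going.
for i, x in enumerate(items):
    try:
        outputs.append(risky(x))
    except ValueError as exc:
        outputs.append(None)   # placeholder keeps outputs aligned with inputs
        errors[i] = exc
```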
Error Handling in Nested Graphs
runner.map() vs map_over
When to use runner.map()
When to use map_over
Checkpointing with map()
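The core idea of checkpointed batch execution is that completed item indices are persisted, so a rerun only processes what is still missing. A sketch using a plain dict as a stand-in for the persistence layer (the real store is library-specific):

```python
# Stand-in for a persisted checkpoint: item 0 finished in a previous run.
completed = {0: "done-a"}
items = ["a", "b", "c"]

for i, item in enumerate(items):
    if i in completed:
        continue                    # skip work already checkpointed
    completed[i] = f"done-{item}"   # process and record the new result
```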
Run Lineage: Resume vs Fork
Resume (same workflow_id, no new values)
Fork (new workflow_id, optional overrides)
Resuming Batches
CLI batch execution
When to Use Map vs Loop
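A rule-of-thumb sketch of the distinction: map fits items that are independent of each other, while a loop fits iterations where each step consumes the previous result and therefore cannot be parallelized:

```python
items = [1, 2, 3]

# Independent per-item work -> map-style: any item could run in any order.
doubled = [x * 2 for x in items]

# Each iteration depends on the running value -> sequential loop.
total = 0
for x in items:
    total = total + x  # uses the result of the previous iteration
```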
What's Next?