# Core Concepts
The mental model behind Kitaru's durable execution primitives
Kitaru gives you durable execution for AI agent workflows using a small set of Python primitives. You write normal Python — Kitaru handles persistence, observability, and recovery underneath.
This section explains the ideas you need to understand before using Kitaru effectively.
## Core ideas
| Concept | What it is |
|---|---|
| Flow | The outer durable boundary around your workflow |
| Checkpoint | A unit of work inside a flow whose output is persisted |
| Execution | A single run of a flow, identified by a unique ID |
| Structured metadata | Key-value data you attach to executions and checkpoints with `kitaru.log()` |
| Runtime log storage | Where runtime logs are sent (configured separately from structured metadata) |
| Active stack | The default execution target used when no per-run `stack=...` override is passed |
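To make the flow/checkpoint/execution relationship concrete, here is a minimal pure-Python sketch of the mental model, not Kitaru itself: every name in it (`checkpoint`, `flow`, the on-disk store) is a hypothetical stand-in. The key idea is that a checkpoint's output is persisted under its execution ID, so re-running the same execution skips work that already completed.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical stand-ins for @flow / @checkpoint. Checkpoint results are
# persisted per (execution_id, checkpoint_name); replaying an execution
# loads persisted outputs instead of redoing the work.
STORE = Path(tempfile.mkdtemp())
CALLS = []  # tracks which checkpoints actually executed

def checkpoint(fn):
    def wrapper(execution_id, *args):
        path = STORE / f"{execution_id}.{fn.__name__}.json"
        if path.exists():                    # output already persisted: skip
            return json.loads(path.read_text())
        CALLS.append(fn.__name__)            # real work happens at most once
        result = fn(*args)
        path.write_text(json.dumps(result))  # persist before returning
        return result
    return wrapper

@checkpoint
def fetch(n):
    return list(range(n))

@checkpoint
def summarize(items):
    return {"total": sum(items)}

def flow(execution_id):
    items = fetch(execution_id, 5)
    return summarize(execution_id, items)

first = flow("exec-1")   # both checkpoints run and persist their outputs
second = flow("exec-1")  # replay: both results are loaded from storage
```

This is why a crashed flow can resume cheaply: on re-run, every checkpoint that already finished resolves from storage, and execution picks up at the first checkpoint with no persisted output.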
## What you can use today
Kitaru's current release includes:
- `@flow` — mark a function as a durable workflow
- `@checkpoint` — mark a function as a persisted work unit
- `kitaru.log()` — attach structured metadata to the current scope
- `kitaru.wait()` — pause a flow until external input is supplied
- `kitaru.llm()` — make tracked model calls with prompt/response capture
- `kitaru.connect()` — connect to a Kitaru server
- `kitaru.configure()` — set process-local runtime defaults
- `kitaru.save()` / `kitaru.load()` — persist and load named artifacts in checkpoints
- `kitaru.list_stacks()` / `kitaru.current_stack()` / `kitaru.use_stack()` — manage the default stack
- `KitaruClient` — inspect executions, fetch logs, resolve waits, retry, replay, and browse artifacts
- `FlowHandle` — interact with a running or finished execution
All of the primitives listed here ship today. Some capabilities are backend-dependent — for example, runtime log retrieval requires a server-backed connection — but they are part of the supported Kitaru surface.
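The idea behind attaching structured metadata "to the current scope" can also be sketched in plain Python. This is an illustration of the concept, not Kitaru's implementation; `scope`, `log`, and `records` are all hypothetical names. Metadata attaches to the innermost active scope, mirroring how `kitaru.log()` targets the current execution or checkpoint.

```python
import contextvars
from contextlib import contextmanager

# Illustrative only: scoped key-value metadata in the spirit of kitaru.log().
_scope = contextvars.ContextVar("scope", default=None)
records = []  # what a backend would persist alongside the execution

@contextmanager
def scope(name):
    entry = {"scope": name, "metadata": {}}
    token = _scope.set(entry)     # this scope becomes "current"
    try:
        yield
        records.append(entry)     # flush on scope exit
    finally:
        _scope.reset(token)       # restore the enclosing scope

def log(**kv):
    entry = _scope.get()
    if entry is None:
        raise RuntimeError("log() called outside a scope")
    entry["metadata"].update(kv)  # attach to the innermost scope only

with scope("my-flow"):
    log(user="alice")             # lands on the flow
    with scope("my-checkpoint"):
        log(tokens=128)           # lands on the checkpoint, not the flow
```

Using a context variable rather than a global keeps the "current scope" correct even when flows run concurrently, which is presumably why the structured-metadata and runtime-log channels are configured separately.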