
Examples

Runnable Kitaru examples and what each one demonstrates

The Kitaru repo includes runnable examples that demonstrate each primitive. Each example is a standalone project — clone the repo, cd into the example you want, and run it directly.

Get the examples

git clone https://github.com/zenml-io/kitaru.git
cd kitaru/examples/features/basic_flow
kitaru init
python first_working_flow.py

Run kitaru init once before your first example to set up the local execution environment.

Every example uses imports relative to its own directory, so always cd into the example folder before running.

Connection context

Examples use whatever Kitaru connection context is already active.

  • If you are just trying Kitaru locally, run them as-is.
  • If you already have a deployed Kitaru server and want the examples to use it, connect first and verify the active context before running the example.
kitaru login https://my-server.example.com
kitaru status

Browse by goal

| I want to... | Example | Run |
| --- | --- | --- |
| Start with the smallest runnable flow | features/basic_flow/ | python first_working_flow.py |
| See structured metadata logging | features/basic_flow/ | python flow_with_logging.py |
| Persist and reload data across executions | features/basic_flow/ | python flow_with_artifacts.py |
| Seed, inspect, compact, and purge durable memory | features/memory/ | python flow_with_memory.py |
| Set runtime configuration defaults | features/basic_flow/ | python flow_with_configuration.py |
| Inspect and manage past executions | features/execution_management/ | python client_execution_management.py |
| Pause for human input and resume later | features/execution_management/ | python wait_and_resume.py |
| Replay from a checkpoint with overrides | features/replay/ | python replay_with_overrides.py |
| Track a model call inside a flow | features/llm/ | python flow_with_llm.py |
| Wrap a PydanticAI agent | integrations/pydantic_ai_agent/ | python pydantic_ai_adapter.py |
| Wrap an OpenAI Agents SDK agent | integrations/openai_agents_agent/ | python openai_agents_adapter.py |
| Run a multi-agent OpenAI research bot | end_to_end/openai_research_bot/ | python research_bot.py "Your query" --max-searches 2 |
| Query executions from an MCP client | features/mcp/ | python mcp_query_tools.py |

Each Run command assumes you are already inside the listed example directory. For example:

cd examples/features/basic_flow
python first_working_flow.py

Core workflow basics

| Example | Demonstrates | Related docs |
| --- | --- | --- |
| features/basic_flow/first_working_flow.py | Smallest @flow + @checkpoint example | Quickstart |
| features/basic_flow/flow_with_logging.py | kitaru.log() metadata at flow and checkpoint scope | Logging |
| features/basic_flow/flow_with_artifacts.py | kitaru.save() and kitaru.load() across executions | Artifacts |
| features/basic_flow/flow_with_configuration.py | kitaru.configure() defaults, overrides, and frozen specs | Configuration |
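Before running first_working_flow.py, it can help to see the idea behind @checkpoint in miniature. The following is a hypothetical, Kitaru-free sketch: each step's result is persisted under a key, so a rerun skips steps that already completed. The decorator, store, and names here are illustrative only, not the Kitaru API.

```python
import functools

# Illustrative in-memory "checkpoint store"; Kitaru persists results durably.
_store: dict = {}

def checkpoint(fn):
    """Skip the wrapped step if a result for it is already stored."""
    @functools.wraps(fn)
    def wrapper(*args):
        key = f"{fn.__name__}:{args!r}"
        if key not in _store:       # first run: execute and persist
            _store[key] = fn(*args)
        return _store[key]          # later runs: reuse the stored result
    return wrapper

calls = []

@checkpoint
def fetch(n):
    calls.append(n)                 # track how often the step really runs
    return n * 2

def flow():
    return fetch(3) + fetch(4)

first = flow()                      # executes both steps
second = flow()                     # both steps replayed from the store
assert first == second == 14
assert calls == [3, 4]              # each step executed exactly once
```

This is the property that makes resume and replay cheap: completed work is never redone.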

Durable shared state

| Example | Demonstrates | Related docs |
| --- | --- | --- |
| features/memory/flow_with_memory.py | Outside-flow seeding, in-flow kitaru.memory, detached post-run execution-scope writes, explicit-scope inspection, and post-run maintenance (compact, purge, audit log) via KitaruClient.memories | Use Memory |
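The operations this example exercises (seed, read, compact, purge) can be pictured with a toy scoped store. This is a minimal sketch of the concept only; the class, method names, and scope strings are invented for illustration and are not Kitaru's memory API.

```python
from collections import defaultdict

class ToyMemory:
    """Toy scoped memory: an append log of entries per (scope, key)."""
    def __init__(self):
        self._log = defaultdict(list)

    def write(self, scope, key, value):
        self._log[(scope, key)].append(value)

    def read(self, scope, key):
        entries = self._log.get((scope, key), [])
        return entries[-1] if entries else None

    def compact(self, scope, key):
        """Keep only the latest entry, discarding older history."""
        if (scope, key) in self._log:
            self._log[(scope, key)] = self._log[(scope, key)][-1:]

    def purge(self, scope):
        """Drop everything written under a scope."""
        for k in [k for k in self._log if k[0] == scope]:
            del self._log[k]

mem = ToyMemory()
mem.write("execution:42", "notes", "draft")   # seed before the flow runs
mem.write("execution:42", "notes", "final")   # overwrite during the flow
assert mem.read("execution:42", "notes") == "final"
mem.compact("execution:42", "notes")          # history collapsed to one entry
assert len(mem._log[("execution:42", "notes")]) == 1
mem.purge("execution:42")                     # post-run maintenance
assert mem.read("execution:42", "notes") is None
```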

Execution lifecycle and recovery

| Example | Demonstrates | Related docs |
| --- | --- | --- |
| features/execution_management/client_execution_management.py | KitaruClient for listing runs, reading details, and loading data | Execution Management |
| features/execution_management/wait_and_resume.py | kitaru.wait() with inline prompt or CLI input/resume | Wait, Input, and Resume |
| features/replay/replay_with_overrides.py | Replay from a checkpoint with overridden inputs | Replay and Overrides |
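The pause-and-resume shape that wait_and_resume.py demonstrates can be sketched with a plain Python generator: the yield marks the wait point, and .send() delivers the human's answer when the run resumes. This is a conceptual analogy only, not how kitaru.wait() is implemented or called.

```python
def flow():
    """A flow that pauses for human input, modeled as a generator:
    yield is the wait point; .send() resumes with the supplied value."""
    draft = "quarterly summary"
    approval = yield f"Approve '{draft}'?"   # execution pauses here
    return "published" if approval else "discarded"

run = flow()
prompt = next(run)                 # run until the wait point
assert prompt == "Approve 'quarterly summary'?"

outcome = None
try:
    run.send(True)                 # resume later with the human's answer
except StopIteration as done:
    outcome = done.value           # the flow's final result
assert outcome == "published"
```

In the real example the pause survives process restarts because the execution state is durable; a generator only survives within one process.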

LLMs and agent integrations

| Example | Demonstrates | Related docs |
| --- | --- | --- |
| features/llm/flow_with_llm.py | kitaru.llm() prompt-response tracking with usage metadata | Tracked LLM Calls |
| integrations/pydantic_ai_agent/pydantic_ai_adapter.py | Wrap a PydanticAI agent with Kitaru replay boundaries | PydanticAI Adapter |
| integrations/openai_agents_agent/openai_agents_adapter.py | Wrap an OpenAI Agents SDK agent with Kitaru call-level or runner-call durability in a real API-backed support flow (OPENAI_API_KEY required, default model gpt-5-nano) | OpenAI Agents Adapter |
| end_to_end/openai_research_bot/research_bot.py | Run a multi-agent OpenAI research bot with planner/writer runner checkpoints, submitted search fan-out, and a published report (OPENAI_API_KEY required, default model gpt-5-nano) | Research bot section |
| features/mcp/mcp_query_tools.py | Query executions and data through the Kitaru MCP server | MCP Server |
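The "tracked call" idea behind kitaru.llm() is simply a wrapper that records the prompt, response, and usage metadata alongside the model call. The sketch below uses a stand-in model function and an in-memory trace list; everything here is hypothetical illustration, not the kitaru.llm() signature.

```python
def fake_model(prompt):
    """Stand-in for a real model call; returns text plus token usage."""
    return {
        "text": prompt.upper(),
        "usage": {"prompt_tokens": len(prompt.split()), "completion_tokens": 1},
    }

trace = []  # in Kitaru this metadata would be persisted with the execution

def tracked_llm(prompt, model=fake_model):
    """Call the model and record prompt, response, and usage metadata."""
    result = model(prompt)
    trace.append({
        "prompt": prompt,
        "response": result["text"],
        "usage": result["usage"],
    })
    return result["text"]

answer = tracked_llm("hello world")
assert answer == "HELLO WORLD"
assert trace[0]["usage"]["prompt_tokens"] == 2
```

Recording usage at the call site is what lets later tooling aggregate cost and token counts per execution.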

The LLM and adapter examples require additional dependencies — check each example's README for setup instructions.

If you are learning Kitaru from scratch, work through the examples in this order:

  1. features/basic_flow/first_working_flow.py (Quickstart)
  2. features/basic_flow/flow_with_logging.py (Logging)
  3. features/basic_flow/flow_with_artifacts.py (Artifacts)
  4. features/memory/flow_with_memory.py (Use Memory)
  5. features/execution_management/client_execution_management.py (Execution Management)
  6. features/execution_management/wait_and_resume.py (Wait, Input, and Resume)
  7. features/replay/replay_with_overrides.py (Replay and Overrides)
  8. features/llm/flow_with_llm.py (Tracked LLM Calls)
  9. integrations/pydantic_ai_agent/pydantic_ai_adapter.py (PydanticAI Adapter)
  10. integrations/openai_agents_agent/openai_agents_adapter.py (OpenAI Agents Adapter)
  11. end_to_end/openai_research_bot/research_bot.py (Research bot section)
  12. features/mcp/mcp_query_tools.py (MCP Server)
