# OpenAI Agents Adapter

Wrap an OpenAI Agents SDK `Agent` with `KitaruRunner` so calls are durable and replayable inside Kitaru flows.
Kitaru's OpenAI Agents adapter lets you keep your existing OpenAI Agents SDK agent logic while adding Kitaru durability around it.
```python
from agents import Agent
from kitaru.adapters.openai_agents import KitaruRunner

agent = Agent(name="researcher", model=your_model)
runner = KitaruRunner(agent)
```

You run the agent through `runner.run(...)` or `runner.run_sync(...)` with an `OpenAIRunRequest`.
## Install

```bash
uv add "kitaru[openai-agents,local]"
```

Then initialize/login as usual:

```bash
kitaru init
kitaru login
kitaru status
```

## Minimal flow
```python
from kitaru import flow
from kitaru.adapters.openai_agents import KitaruRunner, OpenAIRunRequest

runner = KitaruRunner(agent, checkpoint_strategy="calls")

@flow
def research(prompt: str) -> str:
    result = runner.run_sync(OpenAIRunRequest.start(prompt))
    return str(result.final_output)
```

## Checkpoint strategy choices
You choose how Kitaru places checkpoints with the `checkpoint_strategy=` argument.

### `checkpoint_strategy="calls"` (default)

Kitaru checkpoints supported model/tool calls individually.
Use this when you want finer replay units (for example: if call 6 fails, calls 1–5 can come from cache).

### `checkpoint_strategy="runner_call"`

Kitaru places one checkpoint around the outer `Runner.run(...)` call.
Use this when you want one coarse replay boundary for the whole agent run.
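The difference between the two strategies can be sketched with a toy replay cache. This is a hypothetical illustration, not the Kitaru API: `checkpointed`, `cache`, and `model_call` are invented stand-ins for the adapter's real checkpoint machinery.

```python
cache = {}          # persisted checkpoint results, keyed by checkpoint id
calls_made = []     # every real (non-cached) model call

def checkpointed(key, fn):
    # Replay rule: a finished checkpoint is never re-executed.
    if key not in cache:
        cache[key] = fn()
    return cache[key]

def model_call(i):
    calls_made.append(i)
    return f"answer-{i}"

def run_calls_strategy():
    # "calls": every model call is its own replay unit, so if call 2
    # failed, calls 0-1 would be served from cache on the next attempt.
    return [checkpointed(f"call-{i}", lambda i=i: model_call(i)) for i in range(3)]

def run_runner_call_strategy():
    # "runner_call": the whole run is one coarse replay unit.
    return checkpointed("runner", lambda: [model_call(i) for i in range(3)])
```

Under `run_calls_strategy`, a second run makes zero new model calls because each call replays from its own cache entry; under `run_runner_call_strategy`, a failure partway through would discard the whole run.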
## Important guardrail

`checkpoint_strategy="calls"` must run from flow scope (not from inside another `@checkpoint`), because the adapter needs room to open inner checkpoints for model/tool calls.
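The shape of this guardrail can be sketched in plain Python. This is a hypothetical simulation, not Kitaru's implementation: `checkpoint`, `open_checkpoints`, and `run_agent` are invented names that model why per-call checkpoints cannot open inside an already-sealed checkpoint.

```python
from contextlib import contextmanager

open_checkpoints = []  # stack of currently open checkpoints

@contextmanager
def checkpoint(name):
    # A checkpoint is a sealed replay unit while it is open.
    open_checkpoints.append(name)
    try:
        yield
    finally:
        open_checkpoints.pop()

def run_agent(strategy):
    if strategy == "calls" and open_checkpoints:
        # Models the guardrail: "calls" would have to open child
        # checkpoints inside an existing sealed unit, which is refused.
        raise RuntimeError("'calls' strategy requires flow scope")
    if strategy == "calls":
        # Flow scope: free to open one inner checkpoint per model/tool call.
        with checkpoint("model-call"):
            pass
    # "runner_call" records nothing inner; the enclosing checkpoint
    # (if any) remains the single replay unit.
    return "ok"
```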
## Runnable example

This example uses the real OpenAI API (not a stub model), so set your key:

```bash
uv sync --extra local --extra openai-agents
export OPENAI_API_KEY='sk-...'
# default model in the example is gpt-5-nano
# optional override: any OpenAI model you have access to
# export OPENAI_AGENTS_MODEL='<another-openai-model>'
uv run examples/integrations/openai_agents_agent/openai_agents_adapter.py
```

## End-to-end research bot example
For a larger example, run the OpenAI research bot:

```bash
cd examples/end_to_end/openai_research_bot
uv sync --extra local --extra openai-agents
uv run kitaru init
export OPENAI_API_KEY='sk-...'
uv run python research_bot.py "AI agent durability" --max-searches 2
```

The workflow keeps the original research-bot shape:

planner → parallel searches → writer report

The planner and writer run from flow scope, so their default strategy is `checkpoint_strategy="calls"`. That gives Kitaru room to show supported inner OpenAI model/tool calls as child checkpoints.
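The research-bot shape above can be sketched in plain Python. This is a hypothetical outline, not the example's real code: `plan`, `search`, `write_report`, and `research_bot` are invented names standing in for the planner, the parallel search checkpoints, and the writer.

```python
from concurrent.futures import ThreadPoolExecutor

def plan(topic):
    # Planner: runs from flow scope in the real example.
    return [f"{topic} query {i}" for i in range(2)]

def search(query):
    # Each search is its own unit of work (a Kitaru checkpoint in the example).
    return f"summary of {query}"

def write_report(summaries):
    # Writer: also runs from flow scope.
    return " | ".join(summaries)

def research_bot(topic):
    queries = plan(topic)
    with ThreadPoolExecutor() as pool:   # parallel searches
        summaries = list(pool.map(search, queries))
    return write_report(summaries)
```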
Each planned search is submitted as its own Kitaru checkpoint. Inside those search checkpoints, the example intentionally uses `checkpoint_strategy="runner_call"`, because call-level checkpoints cannot be nested inside an existing Kitaru checkpoint. In concrete terms: the search checkpoint is already the sealed replay unit, so the OpenAI runner inside it is saved as one runner call.
The example also uses a local OpenAI Agents SDK `@function_tool` named `search_web` instead of the hosted `WebSearchTool`. The local tool calls the OpenAI Responses API with `web_search`, which makes the checkpoint trace clearer with the adapter's current public behavior.
Look for these artifacts in the Kitaru UI:

- `research_plan`
- `search_summaries`
- `durability_drill`
- `final_report`
- `research_report_metadata`
To test the durable-retry story directly, set `KITARU_RESEARCH_BOT_FAIL_AFTER_SEARCHES=1` before running the example. It will fail after the parallel searches complete. Unset the flag and run:

```bash
kitaru executions replay <EXECUTION_ID> --from durability_drill_gate
```

The replay should reuse the completed planner/search checkpoints and continue into the writer. `retry` tries to restart the same failed execution and may be unavailable on server-backed stacks after a run has concluded.
See also: Replay and overrides.