PydanticAI Adapter

Wrap existing PydanticAI agents so model/tool activity is tracked inside Kitaru checkpoints

Kitaru's PydanticAI adapter lets you reuse an existing PydanticAI agent with Kitaru durability.

Use it when you want:

  • an explicit outer @checkpoint replay boundary
  • child-event visibility for agent model calls and tool calls
  • optional human input from adapter tools via flow-level kitaru.wait(...)
  • run-level summary metadata (pydantic_ai_run_summaries)

1) Install optional adapter dependency

uv sync --extra local --extra pydantic-ai

2) Wrap an agent

from kitaru import flow, checkpoint
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel
from kitaru.adapters import pydantic_ai as kp

researcher = kp.wrap(
    Agent(TestModel(), name="researcher"),
    tool_capture_config={"mode": "full"},
    tool_capture_config_by_name={
        "quick_check": {"mode": "metadata_only"},
        "noop": {"mode": "off"},
    },
)

@checkpoint(type="llm_call")
def run_research(topic: str) -> str:
    return researcher.run_sync(f"Research {topic}").output

@flow
def research_flow(topic: str) -> str:
    return run_research(topic)

The outer checkpoint remains the replay boundary. Adapter-internal model/tool calls are tracked as child events under that checkpoint.
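That nesting can be pictured with an illustrative record. The field names below (`child_events`, `kind`, and so on) are hypothetical and not Kitaru's actual storage schema; the point is only that adapter activity hangs off the one outer checkpoint rather than creating new replay boundaries:

```python
# Illustrative only: hypothetical field names, not Kitaru's real schema.
# Adapter-internal model/tool activity nests as child events under the
# single outer checkpoint; no nested replay boundaries are introduced.
checkpoint_record = {
    "name": "run_research",
    "type": "llm_call",
    "child_events": [
        {"kind": "model_request", "agent": "researcher"},
        {"kind": "tool_call", "tool": "quick_check"},
    ],
}

# Only the outer record is a replay boundary; its children are plain events.
replay_boundaries = [checkpoint_record["name"]]
print(replay_boundaries, len(checkpoint_record["child_events"]))
```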

3) Tool capture modes

Capture policy is observability-only (it does not change tool execution):

  • full (default): metadata + args/result artifacts + timings
  • metadata_only: metadata + timings, no args/result artifacts
  • off: no adapter child event or artifacts for that tool

You can set a global default with tool_capture_config and override individual tools with tool_capture_config_by_name.
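As a rough sketch of that precedence (`resolve_capture_mode` is a made-up helper for illustration, not part of the adapter), the per-name table wins over the global default, which in turn defaults to `full`:

```python
# Hypothetical helper sketching how per-tool overrides could take
# precedence over the global default. Not the adapter's real internals.
def resolve_capture_mode(tool_name, default=None, by_name=None):
    by_name = by_name or {}
    cfg = by_name.get(tool_name, default or {"mode": "full"})
    return cfg["mode"]

# Mirrors the tool_capture_config_by_name table from step 2.
modes = {
    "quick_check": {"mode": "metadata_only"},
    "noop": {"mode": "off"},
}
print(resolve_capture_mode("quick_check", by_name=modes))  # metadata_only
print(resolve_capture_mode("some_other_tool", by_name=modes))  # full
```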

4) Add adapter-level HITL tools (optional)

@kp.hitl_tool(question="Approve publish?", schema=bool)
def approve_publish(summary: str) -> bool:
    # Body is skipped in HITL mode.
    return False

When the agent invokes this tool, the adapter translates it to a flow-level kitaru.wait(...) under the hood.
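That translation can be approximated in plain Python. The mock `hitl_tool` below is purely illustrative (the real adapter routes the question through flow-level `kitaru.wait(...)`, not `input()`): the decorated body is skipped, and the return value comes from the human's answer coerced to the declared schema:

```python
# Illustrative mock, not Kitaru's implementation: a HITL-marked tool whose
# body is skipped in favor of a human-supplied answer matching the schema.
def hitl_tool(question, schema):
    def decorator(fn):
        def wrapper(*args, wait=input, **kwargs):
            # The original tool body never runs; the answer is external.
            raw = wait(question)
            if schema is bool:
                return str(raw).strip().lower() in {"y", "yes", "true", "1"}
            return schema(raw)
        return wrapper
    return decorator

@hitl_tool(question="Approve publish?", schema=bool)
def approve_publish(summary: str) -> bool:
    return False  # skipped; kept only as documentation of the tool's shape

# Simulate a human answering instead of blocking on real input:
print(approve_publish("draft v1", wait=lambda q: "yes"))  # True
```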

Runtime behavior and guardrails

  • No nested checkpoint replay boundaries are introduced by adapter internals.
  • Adapter child events stay child events; they do not become standalone durable calls.
  • run() / run_sync() at flow scope (outside checkpoints) use one synthetic checkpoint so tracking still works.
  • Outside a flow, wrapped agents run as plain PydanticAI calls (no Kitaru tracking).
  • Streamed model requests record a transcript artifact (*_stream_transcript) for replay/inspection.

Example in this repository

uv sync --extra local --extra pydantic-ai
uv run examples/pydantic_ai_agent/pydantic_ai_adapter.py
uv run pytest tests/test_phase17_pydantic_ai_example.py

The example prints:

  • execution ID
  • final result
  • child-event count captured under checkpoint metadata
  • run summary count for wrapped agent runs

For the broader catalog, see Examples.
