MCP Server

Query and manage Kitaru executions through Model Context Protocol tools

Kitaru ships an MCP server so assistants can query and manage executions with structured tool calls instead of parsing CLI text output.

Install MCP support

uv add kitaru --extra mcp
pip install "kitaru[mcp]"

If you also want agents to start and stop the local Kitaru server, install the local extra too:

uv add kitaru --extra mcp --extra local
pip install "kitaru[mcp,local]"

Start the server

kitaru-mcp

The server uses stdio transport by default.
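Under the hood, MCP clients exchange JSON-RPC 2.0 messages over the server's stdin/stdout. As a rough sketch (the envelope shape comes from the MCP specification, not from Kitaru itself), a tool call for kitaru_status would be serialized like this:

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call request.
# The "tools/call" method and params shape are defined by the MCP
# specification; "kitaru_status" is one of the tools listed below.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "kitaru_status",
        "arguments": {},
    },
}

# Each message travels as serialized JSON on the server's stdin.
wire = json.dumps(request)
print(wire)
```

Your MCP client (Claude Code, below) handles this framing for you; the sketch only shows what a structured tool call looks like compared to parsing CLI text.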

Configure in Claude Code

Add this to .mcp.json in your project root:

{
  "mcpServers": {
    "kitaru": {
      "command": "kitaru-mcp",
      "args": []
    }
  }
}

Tool set

Execution tools:

  • kitaru_executions_list
  • kitaru_executions_get
  • kitaru_executions_latest
  • get_execution_logs
  • kitaru_executions_run
  • kitaru_executions_cancel
  • kitaru_executions_input
  • kitaru_executions_retry
  • kitaru_executions_replay

Artifact tools:

  • kitaru_artifacts_list
  • kitaru_artifacts_get

Connection tools:

  • kitaru_start_local_server
  • kitaru_stop_local_server
  • kitaru_status
  • kitaru_stacks_list
  • manage_stack

Starting executions with kitaru_executions_run

The kitaru_executions_run tool requires a target string in the format:

<module_or_file>:<flow_name>

The left side can be an importable module path or a .py filesystem path. The right side is the flow attribute name in that module.

Examples:

examples/basic_flow/first_working_flow.py:my_agent
./examples/basic_flow/first_working_flow.py:my_agent
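The target format can be illustrated with a small parser. This is a hypothetical helper, not Kitaru's actual implementation; it only demonstrates how the two halves of the string are interpreted:

```python
def parse_target(target: str) -> tuple[str, str, bool]:
    """Split a run target into (location, flow_name, is_file).

    Hypothetical helper shown only to illustrate the
    <module_or_file>:<flow_name> format described above.
    """
    # Split on the last colon so the flow name never contains one.
    location, sep, flow_name = target.rpartition(":")
    if not sep or not location or not flow_name:
        raise ValueError(
            f"expected '<module_or_file>:<flow_name>', got {target!r}"
        )
    # A .py suffix marks a filesystem path; anything else is treated
    # as an importable module path.
    is_file = location.endswith(".py")
    return location, flow_name, is_file

print(parse_target("examples/basic_flow/first_working_flow.py:my_agent"))
print(parse_target("my_app.flows:research_flow"))
```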

Pass flow inputs as args (a JSON object) and optionally specify a stack:

{
  "target": "my_app.flows:research_flow",
  "args": {"topic": "durable execution"},
  "stack": "prod-k8s"
}

When stack is provided, the tool passes it to .run(stack=...) so the execution targets that stack.
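Assembling the tool arguments can be sketched as a plain dict, omitting the optional keys when they are not supplied. The builder function below is illustrative, not part of Kitaru's API:

```python
import json

def build_run_arguments(target, args=None, stack=None):
    """Assemble arguments for a kitaru_executions_run tool call.

    Illustrative only: target is required; args (a JSON object of
    flow inputs) and stack are optional and omitted when unset.
    """
    payload = {"target": target}
    if args is not None:
        payload["args"] = args
    if stack is not None:
        # Forwarded by the tool to .run(stack=...).
        payload["stack"] = stack
    return payload

print(json.dumps(build_run_arguments(
    "my_app.flows:research_flow",
    args={"topic": "durable execution"},
    stack="prod-k8s",
)))
```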

Example query flow

  1. Call kitaru_executions_list(status="waiting")
  2. Ask the user to confirm an action for a pending wait
  3. Call kitaru_executions_input(exec_id=..., wait=..., value=...) (MCP requires an explicit wait name; the CLI auto-detects it)
  4. Re-check state via kitaru_executions_get(exec_id)

To provision or clean up a local stack, use manage_stack(action="create", name="local-dev") or manage_stack(action="delete", name="local-dev", force=True).
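The steps above can be sketched against a hypothetical call_tool helper standing in for whatever MCP client you use; the canned responses and IDs here are made up for illustration:

```python
def call_tool(name, **arguments):
    """Stand-in for an MCP client's tool-call method (hypothetical).

    Returns canned data so the example is self-contained; a real
    client would dispatch to the kitaru-mcp server.
    """
    canned = {
        "kitaru_executions_list": [
            {"exec_id": "exec-123", "status": "waiting", "wait": "approve_step"},
        ],
        "kitaru_executions_input": {"ok": True},
        "kitaru_executions_get": {"exec_id": "exec-123", "status": "running"},
    }
    return canned[name]

# 1. Find executions blocked on a wait.
waiting = call_tool("kitaru_executions_list", status="waiting")

# 2-3. After the user confirms, resolve each wait explicitly
#      (MCP requires the wait name; the CLI auto-detects it).
for execution in waiting:
    call_tool(
        "kitaru_executions_input",
        exec_id=execution["exec_id"],
        wait=execution["wait"],
        value={"approved": True},
    )

# 4. Re-check state.
state = call_tool("kitaru_executions_get", exec_id="exec-123")
print(state["status"])
```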

Authentication and context

The MCP server reuses the same configuration and auth context as the kitaru CLI and SDK. To target a local server, start one first with a bare kitaru login or via kitaru_start_local_server(...). To target a deployed Kitaru server, connect with kitaru login <server> before starting kitaru-mcp, or set the KITARU_* connection variables in the MCP server's environment. If kitaru status works in your shell, the MCP tools use that same connection.

Replay behavior

kitaru_executions_replay starts a new execution and returns:

  • available: true
  • operation: "replay"
  • the serialized replayed execution payload

Use from_ to select the checkpoint to replay from, the optional flow_inputs to override flow parameters, and the optional overrides to patch checkpoint.* values.
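Putting those parameters together, a replay call's arguments might look like the dict below. The execution ID, checkpoint name, and override key are all invented for the example; only the parameter names (from_, flow_inputs, overrides) come from the tool description above:

```python
import json

# Illustrative arguments for a kitaru_executions_replay call:
#   from_       selects the checkpoint to replay from
#   flow_inputs overrides flow parameters
#   overrides   patches checkpoint.* values
# "exec-123", "checkpoint-7", and "checkpoint.retries" are made up.
replay_args = {
    "exec_id": "exec-123",
    "from_": "checkpoint-7",
    "flow_inputs": {"topic": "durable execution, revisited"},
    "overrides": {"checkpoint.retries": 0},
}
print(json.dumps(replay_args))
```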

Replay does not support wait.* overrides. If the replayed execution reaches a wait, resolve it through the normal input flow afterward.

MCP currently exposes kitaru_executions_input but not a separate resume tool. If your backend requires an explicit resume step after input resolution, use the CLI or SDK resume(...) surface.
