MCP Server

Query and manage Kitaru executions, deployments, artifacts, memory, and secret creation through Model Context Protocol tools

Kitaru ships an MCP server so assistants can query and manage executions, deployments, memory, artifacts, and project context with structured tool calls instead of parsing CLI text output.

Install MCP support

# with uv
uv add kitaru --extra mcp

# or with pip
pip install "kitaru[mcp]"

If you also want agents to start and stop the local Kitaru server, install the local extra too:

# with uv
uv add kitaru --extra mcp --extra local

# or with pip
pip install "kitaru[mcp,local]"

Start the server

kitaru-mcp

The server uses stdio transport by default.

Configure in Claude Code

kitaru-mcp has to resolve to the Python environment where you installed kitaru[mcp]. Claude Code inherits the PATH of the shell that launched it, not whatever virtualenv you activate later, so either activate your venv before starting Claude, or point the command field at the absolute path to kitaru-mcp inside that venv (e.g. /path/to/project/.venv/bin/kitaru-mcp). The absolute-path form is the most reliable.

Option 1: project .mcp.json

Add this to .mcp.json in your project root (committed to the repo, so the whole team picks it up):

{
  "mcpServers": {
    "kitaru": {
      "command": "kitaru-mcp",
      "args": []
    }
  }
}

Or, using an absolute venv path:

{
  "mcpServers": {
    "kitaru": {
      "command": "/absolute/path/to/.venv/bin/kitaru-mcp",
      "args": []
    }
  }
}

Option 2: claude mcp add CLI

Claude Code can register the server for you. Scope controls where the registration lives:

# Just you, just this project (default scope: local)
claude mcp add kitaru -- kitaru-mcp

# Shared with the team via .mcp.json in this repo
claude mcp add -s project kitaru -- kitaru-mcp

# Available in every project on your machine
claude mcp add -s user kitaru -- kitaru-mcp

Verify with claude mcp list. If kitaru-mcp isn't on PATH, pass the absolute venv path instead:

claude mcp add -s project kitaru -- /absolute/path/to/.venv/bin/kitaru-mcp

You can also just ask Claude: "add the Kitaru MCP server to this project" — it will run claude mcp add for you.

Tool set

Execution tools:

  • kitaru_executions_list
  • kitaru_executions_get
  • kitaru_executions_latest
  • get_execution_logs
  • kitaru_executions_run
  • kitaru_executions_cancel
  • kitaru_executions_input
  • kitaru_executions_retry
  • kitaru_executions_replay

Deployment tools:

  • kitaru_deployments_deploy
  • kitaru_deployments_invoke
  • kitaru_deployments_list
  • kitaru_deployments_get
  • kitaru_deployments_delete
  • kitaru_deployments_tag
  • kitaru_deployments_untag

Artifact tools:

  • kitaru_artifacts_list
  • kitaru_artifacts_get

Memory tools:

  • kitaru_memory_list
  • kitaru_memory_get
  • kitaru_memory_set
  • kitaru_memory_delete
  • kitaru_memory_history
  • kitaru_memory_compact
  • kitaru_memory_purge
  • kitaru_memory_purge_scope
  • kitaru_memory_compaction_log

Secret tools:

  • kitaru_secrets_create

kitaru_secrets_create returns metadata only: secret ID, name, visibility, key names, and missing-value status. The MCP server intentionally does not expose a secret delete tool; use the CLI or Python SDK for deletion.

Connection tools:

  • kitaru_start_local_server
  • kitaru_stop_local_server
  • kitaru_status
  • kitaru_stacks_list
  • manage_stack

Copy-paste prompts

Use prompts like these in an MCP-capable assistant after you configure the Kitaru MCP server.

Read-only status check:

Check my Kitaru status and list the five latest executions. Summarize anything waiting for input.

Start and watch a flow:

Run `examples/features/basic_flow/first_working_flow.py:research_agent` with topic="durable execution", then watch the execution until it finishes.

Resolve a waiting execution safely:

Find executions waiting for input. If exactly one is waiting, show me the question and ask me for the value before calling the input tool.

Plan and run a replay:

Replay the latest failed execution from the checkpoint before the failing one. Explain the replay plan before running it.

Inspect results from a completed execution:

Get the latest completed execution and show me its response artifacts.

Read durable memory safely:

Read `style/release_notes` from Kitaru memory in scope `repo_docs`. If the value is unavailable, explain the diagnostics instead of overwriting it.

Manage a local stack:

Create a local Kitaru stack named local-dev if it does not already exist, then show me the current Kitaru status.

Deploy and invoke a shared flow route:

Deploy `flows/research.py:research_agent` with topic="durable execution" as a canary deployment, then invoke the canary route and show me the started execution ID.

Starting executions with kitaru_executions_run

The kitaru_executions_run tool requires a target string in the format:

<module_or_file>:<flow_name>

The left side can be an importable module path or a .py filesystem path. The right side is the flow attribute name in that module.

Examples:

examples/features/basic_flow/first_working_flow.py:research_agent
./examples/features/basic_flow/first_working_flow.py:research_agent
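The target format can be split mechanically at the final colon, which keeps paths containing dots or separators intact. The helper below is a hypothetical illustration of that parsing rule, not an SDK function:

```python
def parse_target(target: str) -> tuple[str, str]:
    """Split a '<module_or_file>:<flow_name>' target into its two parts.

    Only the final colon separates the module or file path from the
    flow attribute name, so paths with dots or slashes parse cleanly.
    """
    module_or_file, sep, flow_name = target.rpartition(":")
    if not sep or not module_or_file or not flow_name:
        raise ValueError(f"expected '<module_or_file>:<flow_name>', got {target!r}")
    return module_or_file, flow_name

print(parse_target("examples/features/basic_flow/first_working_flow.py:research_agent"))
# ('examples/features/basic_flow/first_working_flow.py', 'research_agent')
```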

Pass flow inputs as args (a JSON object) and optionally specify a stack:

{
  "target": "my_app.flows:research_flow",
  "args": {"topic": "durable execution"},
  "stack": "prod-k8s"
}

When stack is provided, the tool passes it to .run(stack=...) so the execution targets that stack.

Deployment tools

The deployment tools let assistants publish and invoke versioned flow routes without shelling out to kitaru deploy or kitaru invoke.

  • kitaru_deployments_deploy: Create a new deployment version from <module_or_file>:<flow_name>
  • kitaru_deployments_invoke: Start a new execution from a deployed flow by default, tag, or version
  • kitaru_deployments_list: List all deployment versions, optionally filtered to one flow
  • kitaru_deployments_get: Inspect one deployment by version or tag
  • kitaru_deployments_delete: Delete one version when no exclusive tag protects it
  • kitaru_deployments_tag: Attach or move a public tag to a version
  • kitaru_deployments_untag: Remove a non-reserved public tag from a version

kitaru_deployments_deploy accepts deployment-time flow inputs plus optional deployment controls:

{
  "target": "flows/research.py:research_agent",
  "inputs": {"topic": "durable execution"},
  "tag": "canary",
  "exclusive": true,
  "stack": "production",
  "image": {
    "requirements": ["kitaru[openai]"],
    "secret_environment_from": ["openai-creds"]
  },
  "cache": false,
  "retries": 1
}

image accepts either a base image string or an object matching kitaru.ImageSettings.

That deploy-time image config is saved into the deployment snapshot. Later kitaru_deployments_invoke calls can override flow inputs, but they do not rewrite the deployment image.

The first deployment of a flow gets the reserved default tag automatically. default is always exclusive and cannot be removed. Non-default tags are shared by default; pass exclusive=true when the tag should move to exactly one version, such as canary, stable, or prod.
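The tag semantics above can be modeled in a few lines. This is a hypothetical in-memory sketch of the rules (the real server enforces them); it only illustrates how exclusive tags move while shared tags accumulate:

```python
class TagRegistry:
    """Toy model of deployment tag semantics: 'default' is reserved and
    exclusive; exclusive tags point at exactly one version; shared tags
    may point at many."""

    def __init__(self):
        self.tags: dict[str, set[int]] = {}      # tag -> versions carrying it
        self.exclusive: set[str] = {"default"}   # 'default' is always exclusive

    def tag(self, name: str, version: int, exclusive: bool = False):
        if exclusive or name in self.exclusive:
            self.exclusive.add(name)
            self.tags[name] = {version}          # exclusive tags move, not accumulate
        else:
            self.tags.setdefault(name, set()).add(version)  # shared tags accumulate

    def untag(self, name: str, version: int):
        if name == "default":
            raise ValueError("the reserved 'default' tag cannot be removed")
        self.tags.get(name, set()).discard(version)

registry = TagRegistry()
registry.tag("default", 1)              # first deployment gets 'default'
registry.tag("canary", 2, exclusive=True)
registry.tag("canary", 3, exclusive=True)
print(registry.tags["canary"])          # {3} -- the exclusive tag moved
```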

kitaru_deployments_invoke is the MCP equivalent of the primary CLI command kitaru invoke. If neither version nor tag is provided, it invokes the reserved default route:

{
  "flow": "research_agent",
  "inputs": {"topic": "serverless routing"}
}

Pin a version or named route when needed:

{
  "flow": "research_agent",
  "tag": "stable",
  "inputs": {"topic": "consumer request"}
}
{
  "flow": "research_agent",
  "version": 2,
  "inputs": {"topic": "reproducible request"}
}
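The routing choice among the three payload shapes can be sketched as a small resolver. Treating version and tag as mutually exclusive is an assumption of this sketch (the doc only specifies the no-argument case); the real tool's precedence may differ:

```python
def resolve_route(payload: dict) -> str:
    """Pick which deployment route an invoke payload targets.

    Assumption: 'version' and 'tag' are mutually exclusive here;
    the real kitaru_deployments_invoke tool may define a precedence.
    """
    version, tag = payload.get("version"), payload.get("tag")
    if version is not None and tag is not None:
        raise ValueError("pass either 'version' or 'tag', not both")
    if version is not None:
        return f"version {version}"
    if tag is not None:
        return f"tag '{tag}'"
    return "tag 'default'"  # reserved default route when neither is given

print(resolve_route({"flow": "research_agent"}))                   # tag 'default'
print(resolve_route({"flow": "research_agent", "tag": "stable"}))  # tag 'stable'
print(resolve_route({"flow": "research_agent", "version": 2}))     # version 2
```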

Use the list/get/tag tools for the producer side of a shared flow:

{"flow": "research_agent"}
{"flow": "research_agent", "tag": "stable", "version": 2, "exclusive": true}

Then consumers can invoke by flow name and tag; they do not need the producer's source file path.

For the full deployment model, including auto-versioning, tag exclusivity, serverless routing, and auth context, see Deployments.

Example query flow

  1. Call kitaru_executions_list(status="waiting")
  2. Ask the user to confirm an action for a pending wait
  3. Call kitaru_executions_input(exec_id=..., wait=..., value=...) (MCP requires explicit wait; CLI auto-detects)
  4. Re-check state via kitaru_executions_get(exec_id)
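The four steps above can be sketched end to end. `call_tool` below is a stand-in for however your MCP client dispatches tool calls, and the canned payload shapes are illustrative, not the server's exact response format:

```python
def call_tool(name: str, **kwargs):
    # Stand-in dispatcher; a real MCP client would forward the call
    # to kitaru-mcp. Canned data keeps the sketch runnable offline.
    canned = {
        "kitaru_executions_list": [{"exec_id": "exec-1", "wait": "approve_step"}],
        "kitaru_executions_input": {"ok": True},
        "kitaru_executions_get": {"exec_id": "exec-1", "status": "running"},
    }
    return canned[name]

waiting = call_tool("kitaru_executions_list", status="waiting")
if len(waiting) == 1:
    pending = waiting[0]
    # A real assistant would confirm the value with the user here (step 2).
    call_tool("kitaru_executions_input",
              exec_id=pending["exec_id"], wait=pending["wait"], value="approved")
    state = call_tool("kitaru_executions_get", exec_id=pending["exec_id"])
    print(state["status"])  # running
```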

To provision or clean up a local stack, use manage_stack(action="create", name="local-dev") or manage_stack(action="delete", name="local-dev", force=True).

Memory tools

The memory tools give assistants direct structured access to Kitaru's durable key-value memory store.

  • scope and scope_type are required on every scoped memory tool
  • version and strict are available only on kitaru_memory_get

Typical memory query/update flow:

  1. kitaru_memory_list(scope="repo_docs", scope_type="namespace")
  2. kitaru_memory_get(key="style/release_notes", scope="repo_docs", scope_type="namespace")
  3. kitaru_memory_set(key="style/release_notes", value={"tone": "concise"}, scope="repo_docs", scope_type="namespace")
  4. kitaru_memory_history(key="style/release_notes", scope="repo_docs", scope_type="namespace")

Use strict=True when the assistant should fail the tool call if the memory entry exists but its value artifact cannot be loaded from the MCP server's environment:

kitaru_memory_get(key="style/release_notes", scope="repo_docs", scope_type="namespace", strict=True)

Use these tools when an assistant needs durable shared state without parsing CLI output or inventing its own scratchpad format.

Missing entries vs unavailable values

kitaru_memory_get can return three useful shapes:

  • None means no active entry exists for that key and scope.
  • A payload with value_available: true includes the loaded value.
  • A payload with value_available: false means metadata exists, but the value artifact cannot be loaded from the MCP server's environment.

For an unavailable value, the payload includes diagnostics alongside the memory metadata:

{
  "key": "style/release_notes",
  "scope": "repo_docs",
  "scope_type": "namespace",
  "version": 3,
  "artifact_id": "...",
  "value_available": false,
  "value_unavailable": {
    "error_type": "KitaruMemoryArtifactUnavailableError",
    "cause_type": "FileNotFoundError",
    "message": "..."
  }
}

This is different from a missing key: the assistant still has enough metadata to tell the user which memory entry exists and why the value cannot be read here. Set strict=True to raise the typed error instead of returning the unavailable payload.

Memory maintenance tools

The maintenance tools let assistants manage memory growth:

  • kitaru_memory_compact — summarize memory values with an LLM and write the result. Use key for the default single-key current-value workflow, key plus source_mode="history" to summarize one key's full non-deleted history, or keys (a list) with target_key for multi-key merging. Source entries are not deleted.
  • kitaru_memory_purge — physically delete old versions of one key. Set keep to retain the newest N versions, or omit it to delete everything.
  • kitaru_memory_purge_scope — purge old versions across all keys in a scope. Set include_deleted to also remove tombstoned keys entirely.
  • kitaru_memory_compaction_log — read the audit trail of all compact and purge operations for one scope (newest first).

Recommended maintenance sequence:

  1. kitaru_memory_compact(scope="repo_docs", scope_type="namespace", key="notes/preferences")
  2. Inspect the new summary if needed.
  3. kitaru_memory_purge(key="notes/preferences", scope="repo_docs", scope_type="namespace", keep=1)
  4. kitaru_memory_compaction_log(scope="repo_docs", scope_type="namespace")

For a complete memory walkthrough including seeding, flow usage, and cross-surface inspection, see examples/features/memory/flow_with_memory.py and its demo playbook for detailed MCP tool-call sequences.

Authentication and context

The MCP server reuses the same config/auth context as kitaru CLI and SDK. If you want MCP tools to target a local server, start one first with bare kitaru login or via kitaru_start_local_server(...). If you want MCP tools to target a deployed Kitaru server or managed workspace, connect first with kitaru login <server-or-workspace> --api-key <workspace-api-key> before starting kitaru-mcp, or set KITARU_SERVER_URL, KITARU_AUTH_TOKEN, and KITARU_PROJECT in the MCP server environment. If you can run kitaru status, MCP tools use that same connection.

Deployment MCP calls do not use per-deployment tokens. kitaru_deployments_deploy, kitaru_deployments_invoke, and the deployment management tools authorize through the active workspace/project context, just like kitaru deploy, kitaru invoke, and KitaruClient().deployments.invoke(...).

Replay behavior

kitaru_executions_replay starts a new execution and returns:

  • available: true
  • operation: "replay"
  • the serialized replayed execution payload

Use from_ to select the checkpoint, optional flow_inputs to override flow parameters, and optional overrides to adjust checkpoint.* settings.

Replay does not support wait.* overrides. If the replayed execution reaches a wait, resolve it through the normal input flow afterward.

MCP currently exposes kitaru_executions_input but not a separate resume tool. If your backend requires an explicit resume step after input resolution, use the CLI or SDK resume(...) surface.
