# MCP Server

Query and manage Kitaru executions, deployments, artifacts, memory, and secret creation through Model Context Protocol tools.

Kitaru ships an MCP server so assistants can query and manage executions, deployments, memory, artifacts, and project context with structured tool calls instead of parsing CLI text output.
Install MCP support
uv add kitaru --extra mcppip install "kitaru[mcp]"If you also want agents to start and stop the local Kitaru server, install the
local extra too:
uv add kitaru --extra mcp --extra localpip install "kitaru[mcp,local]"Start the server
kitaru-mcpThe server uses stdio transport by default.
## Configure in Claude Code

`kitaru-mcp` has to resolve to the Python environment where you installed
`kitaru[mcp]`. Claude Code inherits the `PATH` of the shell that launched it,
not whatever virtualenv you activate later — so either activate your venv
before starting Claude, or point `command` at the absolute path to
`kitaru-mcp` inside that venv (e.g. `/path/to/project/.venv/bin/kitaru-mcp`).
The absolute-path form is the most reliable.
### Option 1: project `.mcp.json`

Add this to `.mcp.json` in your project root (committed to the repo, so the
whole team picks it up):

```json
{
  "mcpServers": {
    "kitaru": {
      "command": "kitaru-mcp",
      "args": []
    }
  }
}
```

Or, using an absolute venv path:

```json
{
  "mcpServers": {
    "kitaru": {
      "command": "/absolute/path/to/.venv/bin/kitaru-mcp",
      "args": []
    }
  }
}
```

### Option 2: `claude mcp add` CLI
Claude Code can register the server for you. Scope controls where the registration lives:

```shell
# Just you, just this project (default scope: local)
claude mcp add kitaru -- kitaru-mcp

# Shared with the team via .mcp.json in this repo
claude mcp add -s project kitaru -- kitaru-mcp

# Available in every project on your machine
claude mcp add -s user kitaru -- kitaru-mcp
```

Verify with `claude mcp list`. If `kitaru-mcp` isn't on `PATH`, pass the
absolute venv path instead:

```shell
claude mcp add -s project kitaru -- /absolute/path/to/.venv/bin/kitaru-mcp
```

You can also just ask Claude: "add the Kitaru MCP server to this project" —
it will run `claude mcp add` for you.
## Tool set

Execution tools:

- `kitaru_executions_list`
- `kitaru_executions_get`
- `kitaru_executions_latest`
- `get_execution_logs`
- `kitaru_executions_run`
- `kitaru_executions_cancel`
- `kitaru_executions_input`
- `kitaru_executions_retry`
- `kitaru_executions_replay`

Deployment tools:

- `kitaru_deployments_deploy`
- `kitaru_deployments_invoke`
- `kitaru_deployments_list`
- `kitaru_deployments_get`
- `kitaru_deployments_delete`
- `kitaru_deployments_tag`
- `kitaru_deployments_untag`

Artifact tools:

- `kitaru_artifacts_list`
- `kitaru_artifacts_get`

Memory tools:

- `kitaru_memory_list`
- `kitaru_memory_get`
- `kitaru_memory_set`
- `kitaru_memory_delete`
- `kitaru_memory_history`
- `kitaru_memory_compact`
- `kitaru_memory_purge`
- `kitaru_memory_purge_scope`
- `kitaru_memory_compaction_log`

Secret tools:

- `kitaru_secrets_create`

`kitaru_secrets_create` returns metadata only: secret ID, name, visibility, key
names, and missing-value status. The MCP server intentionally does not expose a
secret delete tool; use the CLI or Python SDK for deletion.

Connection tools:

- `kitaru_start_local_server`
- `kitaru_stop_local_server`
- `kitaru_status`
- `kitaru_stacks_list`
- `manage_stack`
## Copy-paste prompts

Use prompts like these in an MCP-capable assistant after you configure the Kitaru MCP server.

Read-only status check:

> Check my Kitaru status and list the five latest executions. Summarize anything waiting for input.

Start and watch a flow:

> Run `examples/features/basic_flow/first_working_flow.py:research_agent` with topic="durable execution", then watch the execution until it finishes.

Resolve a waiting execution safely:

> Find executions waiting for input. If exactly one is waiting, show me the question and ask me for the value before calling the input tool.

Plan and run a replay:

> Replay the latest failed execution from the checkpoint before the failing one. Explain the replay plan before running it.

Inspect results from a completed execution:

> Get the latest completed execution and show me its response artifacts.

Read durable memory safely:

> Read `style/release_notes` from Kitaru memory in scope `repo_docs`. If the value is unavailable, explain the diagnostics instead of overwriting it.

Manage a local stack:

> Create a local Kitaru stack named local-dev if it does not already exist, then show me the current Kitaru status.

Deploy and invoke a shared flow route:

> Deploy `flows/research.py:research_agent` with topic="durable execution" as a canary deployment, then invoke the canary route and show me the started execution ID.

## Starting executions with `kitaru_executions_run`
The `kitaru_executions_run` tool requires a `target` string in the format:

```text
<module_or_file>:<flow_name>
```

The left side can be an importable module path or a `.py` filesystem path.
The right side is the flow attribute name in that module.

Examples:

```text
examples/features/basic_flow/first_working_flow.py:research_agent
./examples/features/basic_flow/first_working_flow.py:research_agent
```

Pass flow inputs as `args` (a JSON object) and optionally specify a `stack`:
```json
{
  "target": "my_app.flows:research_flow",
  "args": {"topic": "durable execution"},
  "stack": "prod-k8s"
}
```

When `stack` is provided, the tool passes it to `.run(stack=...)` so the
execution targets that stack.
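As a quick illustration of the target format, the helper below (not part of the Kitaru SDK; purely illustrative) splits a target string on its last colon, so dots in module paths and slashes in file paths are preserved:

```python
def parse_target(target: str) -> tuple[str, str]:
    """Split '<module_or_file>:<flow_name>' on the last colon."""
    module_or_file, sep, flow_name = target.rpartition(":")
    if not sep or not module_or_file or not flow_name:
        raise ValueError(
            f"expected '<module_or_file>:<flow_name>', got {target!r}"
        )
    return module_or_file, flow_name

print(parse_target("my_app.flows:research_flow"))
# ('my_app.flows', 'research_flow')
print(parse_target("./examples/features/basic_flow/first_working_flow.py:research_agent"))
# ('./examples/features/basic_flow/first_working_flow.py', 'research_agent')
```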
## Deployment tools

The deployment tools let assistants publish and invoke versioned flow routes
without shelling out to `kitaru deploy` or `kitaru invoke`.

| Tool | Use it for |
|---|---|
| `kitaru_deployments_deploy` | Create a new deployment version from `<module_or_file>:<flow_name>` |
| `kitaru_deployments_invoke` | Start a new execution from a deployed flow by default, tag, or version |
| `kitaru_deployments_list` | List all deployment versions, optionally filtered to one flow |
| `kitaru_deployments_get` | Inspect one deployment by version or tag |
| `kitaru_deployments_delete` | Delete one version when no exclusive tag protects it |
| `kitaru_deployments_tag` | Attach or move a public tag to a version |
| `kitaru_deployments_untag` | Remove a non-reserved public tag from a version |
`kitaru_deployments_deploy` accepts deployment-time flow inputs plus optional
deployment controls:

```json
{
  "target": "flows/research.py:research_agent",
  "inputs": {"topic": "durable execution"},
  "tag": "canary",
  "exclusive": true,
  "stack": "production",
  "image": {
    "requirements": ["kitaru[openai]"],
    "secret_environment_from": ["openai-creds"]
  },
  "cache": false,
  "retries": 1
}
```

`image` accepts either a base image string or an object matching
`kitaru.ImageSettings`.

That deploy-time image config is saved into the deployment snapshot. Later
`kitaru_deployments_invoke` calls can override flow inputs, but they do not
rewrite the deployment image.
The first deployment of a flow gets the reserved `default` tag automatically.
`default` is always exclusive and cannot be removed. Non-`default` tags are shared
by default; pass `exclusive=true` when the tag should move to exactly one
version, such as `canary`, `stable`, or `prod`.
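The tag rules above can be sketched as a toy model. Everything here is illustrative (the dict-of-sets store and the `tag_version` helper are not Kitaru's implementation), but it shows how an exclusive tag moves while a shared tag accumulates versions:

```python
# Toy model of deployment tag semantics; illustrative only.
tags: dict[str, set[int]] = {}   # tag name -> versions carrying it
exclusive_tags = {"default"}     # 'default' is always exclusive

def tag_version(tag: str, version: int, exclusive: bool = False) -> None:
    if exclusive:
        exclusive_tags.add(tag)
    versions = tags.setdefault(tag, set())
    if tag in exclusive_tags:
        versions.clear()         # exclusive: the tag moves to exactly one version
    versions.add(version)

tag_version("default", 1, exclusive=True)  # first deploy gets the reserved tag
tag_version("canary", 2, exclusive=True)
tag_version("canary", 3)        # moves: 'canary' stays exclusive once marked
tag_version("experimental", 2)
tag_version("experimental", 3)  # shared tags can sit on several versions
```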
`kitaru_deployments_invoke` is the MCP equivalent of the primary CLI command
`kitaru invoke`. If neither `version` nor `tag` is provided, it invokes the
reserved `default` route:

```json
{
  "flow": "research_agent",
  "inputs": {"topic": "serverless routing"}
}
```

Pin a version or named route when needed:

```json
{
  "flow": "research_agent",
  "tag": "stable",
  "inputs": {"topic": "consumer request"}
}
```

```json
{
  "flow": "research_agent",
  "version": 2,
  "inputs": {"topic": "reproducible request"}
}
```

Use the list/get/tag tools for the producer side of a shared flow:

```json
{"flow": "research_agent"}
```

```json
{"flow": "research_agent", "tag": "stable", "version": 2, "exclusive": true}
```

Then consumers can invoke by flow name and tag; they do not need the producer's source file path.

For the full deployment model, including auto-versioning, tag exclusivity, serverless routing, and auth context, see Deployments.
## Example query flow

1. Call `kitaru_executions_list(status="waiting")`.
2. Ask the user to confirm an action for a pending wait.
3. Call `kitaru_executions_input(exec_id=..., wait=..., value=...)` (MCP requires an explicit `wait`; the CLI auto-detects it).
4. Re-check state via `kitaru_executions_get(exec_id)`.

To provision or clean up a local stack, use `manage_stack(action="create", name="local-dev")`
or `manage_stack(action="delete", name="local-dev", force=True)`.
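The query flow can be sketched in Python with a stand-in dispatcher. The `call_tool` function and its canned return shapes are assumptions for illustration; a real MCP client routes these calls through the protocol:

```python
# Stand-in MCP tool dispatcher with canned responses; the return shapes
# are assumptions, not Kitaru's documented payloads.
def call_tool(name: str, **args):
    canned = {
        "kitaru_executions_list": [
            {"exec_id": "e-1", "status": "waiting", "wait": "approve_publish"}
        ],
        "kitaru_executions_input": {"ok": True},
        "kitaru_executions_get": {"exec_id": "e-1", "status": "running"},
    }
    return canned[name]

waiting = call_tool("kitaru_executions_list", status="waiting")
if len(waiting) == 1:  # act only when exactly one execution is waiting
    pending = waiting[0]
    # In practice, show the question and confirm the value with the user first.
    call_tool(
        "kitaru_executions_input",
        exec_id=pending["exec_id"],
        wait=pending["wait"],   # MCP requires the explicit wait name
        value="yes",
    )
    state = call_tool("kitaru_executions_get", exec_id=pending["exec_id"])
    print(state["status"])
```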
## Memory tools

The memory tools give assistants direct structured access to Kitaru's durable key-value memory store.

- `scope` and `scope_type` are required on every scoped memory tool.
- `version` and `strict` are available only on `kitaru_memory_get`.

Typical memory query/update flow:

1. `kitaru_memory_list(scope="repo_docs", scope_type="namespace")`
2. `kitaru_memory_get(key="style/release_notes", scope="repo_docs", scope_type="namespace")`
3. `kitaru_memory_set(key="style/release_notes", value={"tone": "concise"}, scope="repo_docs", scope_type="namespace")`
4. `kitaru_memory_history(key="style/release_notes", scope="repo_docs", scope_type="namespace")`

Use `strict=True` when the assistant should fail the tool call if the memory
entry exists but its value artifact cannot be loaded from the MCP server's
environment:

```python
kitaru_memory_get(key="style/release_notes", scope="repo_docs", scope_type="namespace", strict=True)
```

Use these tools when an assistant needs durable shared state without parsing CLI output or inventing its own scratchpad format.
## Missing entries vs unavailable values

`kitaru_memory_get` can return three useful shapes:

- `None` means no active entry exists for that key and scope.
- A payload with `value_available: true` includes the loaded `value`.
- A payload with `value_available: false` means metadata exists, but the value artifact cannot be loaded from the MCP server's environment.

For an unavailable value, the payload includes diagnostics alongside the memory metadata:

```json
{
  "key": "style/release_notes",
  "scope": "repo_docs",
  "scope_type": "namespace",
  "version": 3,
  "artifact_id": "...",
  "value_available": false,
  "value_unavailable": {
    "error_type": "KitaruMemoryArtifactUnavailableError",
    "cause_type": "FileNotFoundError",
    "message": "..."
  }
}
```

This is different from a missing key: the assistant still has enough metadata to
tell the user which memory entry exists and why the value cannot be read here.
Set `strict=True` to raise the typed error instead of returning the unavailable
payload.
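A consumer of `kitaru_memory_get` results can branch on those three shapes like this (the `describe_memory` helper and the mock payload are illustrative, shaped after the example above):

```python
def describe_memory(payload) -> str:
    """Summarize a memory-get result without assuming the value loaded."""
    if payload is None:
        return "missing: no active entry for this key/scope"
    if payload.get("value_available"):
        return f"value: {payload['value']!r}"
    diag = payload.get("value_unavailable", {})
    return (
        f"unavailable (v{payload.get('version')}): "
        f"{diag.get('error_type')} caused by {diag.get('cause_type')}"
    )

# Mock shaped like the unavailable-value payload shown above.
mock = {
    "key": "style/release_notes",
    "scope": "repo_docs",
    "scope_type": "namespace",
    "version": 3,
    "artifact_id": "a-123",
    "value_available": False,
    "value_unavailable": {
        "error_type": "KitaruMemoryArtifactUnavailableError",
        "cause_type": "FileNotFoundError",
        "message": "artifact file not found",
    },
}
print(describe_memory(None))
print(describe_memory(mock))
```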
## Memory maintenance tools

The maintenance tools let assistants manage memory growth:

- `kitaru_memory_compact` — summarize memory values with an LLM and write the result. Use `key` for the default single-key current-value workflow, `key` plus `source_mode="history"` to summarize one key's full non-deleted history, or `keys` (a list) with `target_key` for multi-key merging. Source entries are not deleted.
- `kitaru_memory_purge` — physically delete old versions of one key. Set `keep` to retain the newest N versions, or omit it to delete everything.
- `kitaru_memory_purge_scope` — purge old versions across all keys in a scope. Set `include_deleted` to also remove tombstoned keys entirely.
- `kitaru_memory_compaction_log` — read the audit trail of all compact and purge operations for one scope (newest first).

Recommended maintenance sequence:

1. `kitaru_memory_compact(scope="repo_docs", scope_type="namespace", key="notes/preferences")`
2. Inspect the new summary if needed.
3. `kitaru_memory_purge(key="notes/preferences", scope="repo_docs", scope_type="namespace", keep=1)`
4. `kitaru_memory_compaction_log(scope="repo_docs", scope_type="namespace")`

For a complete memory walkthrough including seeding, flow usage, and
cross-surface inspection, see `examples/features/memory/flow_with_memory.py` and
its demo playbook for detailed MCP tool-call sequences.
## Authentication and context

The MCP server reuses the same config/auth context as the `kitaru` CLI and SDK.
If you want MCP tools to target a local server, start one first with bare
`kitaru login` or via `kitaru_start_local_server(...)`. If you want MCP tools
to target a deployed Kitaru server or managed workspace, connect first with
`kitaru login <server-or-workspace> --api-key <workspace-api-key>` before
starting `kitaru-mcp`, or set `KITARU_SERVER_URL`, `KITARU_AUTH_TOKEN`, and
`KITARU_PROJECT` in the MCP server environment. If you can run `kitaru status`,
MCP tools use that same connection.

Deployment MCP calls do not use per-deployment tokens. `kitaru_deployments_deploy`,
`kitaru_deployments_invoke`, and the deployment management tools authorize through the active
workspace/project context, just like `kitaru deploy`, `kitaru invoke`, and
`KitaruClient().deployments.invoke(...)`.
## Replay behavior

`kitaru_executions_replay` starts a new execution and returns:

- `available: true`
- `operation: "replay"`
- the serialized replayed execution payload

Use `from_` for checkpoint selection, optional `flow_inputs` for flow
parameter overrides, and optional `overrides` for `checkpoint.*` overrides.
Replay does not support `wait.*` overrides. If the replayed execution reaches a
wait, resolve it through the normal input flow afterward.
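Put together, a replay call might look like the sketch below. Only `from_`, `flow_inputs`, and `overrides` come from the description above; the execution-id field name and the specific override key are assumptions for illustration:

```json
{
  "exec_id": "...",
  "from_": "...",
  "flow_inputs": {"topic": "durable execution"},
  "overrides": {"checkpoint.retries": 2}
}
```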
MCP currently exposes `kitaru_executions_input` but not a separate resume tool.
If your backend requires an explicit resume step after input resolution, use the
CLI or SDK `resume(...)` surface.