Use Memory
Seed, read, update, inspect, and delete durable memory across Python, client, CLI, and MCP
Memory is Kitaru's durable key-value store for long-lived agent state.
If artifacts are boxes you fetch from a specific past run, memory is the shared cabinet where you keep the latest durable facts under stable keys.
Choose the right surface
All memory surfaces talk to the same stored data, but they do not expose the exact same controls.
| Surface | Scope input | Version reads | Admin ops (purge/compact/log) | Notes |
|---|---|---|---|---|
| `kitaru.memory` | Implicit via `memory.configure(...)` or active flow | `get(version=...)` | No | Best inside Python flows and scripts |
| `KitaruClient.memories` | Explicit `scope=` + `scope_type=` on scoped calls | Yes | Yes | Best for admin/inspection code |
| CLI | Explicit `--scope` + `--scope-type` on scoped commands | No | Yes | Best for shell inspection and maintenance |
| MCP | Explicit `scope` + `scope_type` on scoped tools | Yes on `get` | Yes | Best for assistant/tool use |
The module-level `kitaru.memory` API does not take per-call `scope=` arguments. You configure the active scope once, then call `set()`, `get()`, `list()`, `history()`, and `delete()` against that scope.
Use memory inside a flow
Inside a flow, Kitaru defaults to the flow name as the active memory scope when you have not configured one explicitly.
from kitaru import checkpoint, flow, memory

@checkpoint
def increment_runs(previous_runs: int | None) -> int:
    return (previous_runs or 0) + 1

@flow
def research_agent(topic: str) -> None:
    previous_runs = memory.get("stats/run_count")
    updated_runs = increment_runs(previous_runs)
    memory.set("stats/run_count", updated_runs)
    memory.set("last_topic", topic)

Notice the shape: read memory first, pass the values into a checkpoint, then write memory after the checkpoint produces its result. This is the pattern to follow. Reads before checkpoints mean the agent starts with the latest knowledge. Writes after checkpoints mean memory only records outcomes that the durable work has already verified: no speculative state.
In-flow `memory.get()`, `memory.list()`, `memory.history()`, and
`memory.delete()` behave like runtime step outputs, not eager Python values. Keep
the memory calls in the flow body, then pass those results into checkpoints when
you need normal Python logic.
If you want a different scope, configure it first:
from kitaru import flow, memory

@flow
def support_agent(customer_id: str, issue: str) -> None:
    memory.configure(scope=f"customer/{customer_id}", scope_type="namespace")
    memory.set("latest_issue", issue)

Seed memory outside a flow
Outside a flow, you must configure a scope before using the module-level API.
from kitaru import memory
memory.configure(scope="repo_docs", scope_type="namespace")
memory.set("style/release_notes", {
    "tone": "concise",
    "format": "bullets",
})

style = memory.get("style/release_notes")
print(style)

This is useful for:
- seeding memory from scripts
- patching stored state by hand
- writing migration or setup utilities
Inspect memory with KitaruClient
Use KitaruClient.memories when you want explicit-scope programmatic control.
from kitaru import KitaruClient
client = KitaruClient()
client.memories.set(
    "team/default_model",
    {"alias": "fast"},
    scope="repo_docs",
    scope_type="namespace",
)

entries = client.memories.list(scope="repo_docs", scope_type="namespace")
latest = client.memories.get(
    "team/default_model",
    scope="repo_docs",
    scope_type="namespace",
)
history = client.memories.history(
    "team/default_model",
    scope="repo_docs",
    scope_type="namespace",
)
scopes = client.memories.scopes()

The client API is the best fit for admin utilities because every call makes the typed scope explicit.
That explicit scope matters most for execution-scoped memory. For example, a script can write into one execution bucket after the run has finished:
client.memories.set(
    "execution/notes",
    {"status": "reviewed"},
    scope="execution-123",
    scope_type="execution",
)

entry = client.memories.get(
    "execution/notes",
    scope="execution-123",
    scope_type="execution",
)

print(entry.scope)         # "execution-123" -> membership bucket
print(entry.execution_id)  # None for a detached write
print(entry.flow_id)       # owning flow, when Kitaru can resolve it

The story here is simple:

- `scope` tells you which execution bucket the entry belongs to
- `execution_id` tells you whether that specific version was produced during a live run
- detached post-run writes are valid execution-scope memory; they are just not live-run provenance
Read and update memory from the CLI
The CLI surface mirrors the same storage, but scoped commands require both
`--scope` and `--scope-type`.
kitaru memory scopes
kitaru memory list --scope repo_docs --scope-type namespace
kitaru memory get style/release_notes --scope repo_docs --scope-type namespace
kitaru memory set style/release_notes '{"tone":"concise"}' --scope repo_docs --scope-type namespace
kitaru memory history style/release_notes --scope repo_docs --scope-type namespace
kitaru memory delete style/release_notes --scope repo_docs --scope-type namespace

Two details matter here:

- `kitaru memory set` parses the value as JSON when possible
- the CLI `get` command reads the latest value only; it does not expose a version flag
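The JSON-when-possible rule can be sketched in a few lines. This is an illustrative model of the behavior, not Kitaru's actual CLI parser:

```python
import json

def parse_cli_value(raw: str):
    # Treat the value as JSON when it parses; otherwise fall back to
    # storing the raw string as-is.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return raw

assert parse_cli_value('{"tone":"concise"}') == {"tone": "concise"}
assert parse_cli_value("plain text") == "plain text"
```

Numbers, booleans, arrays, and objects all round-trip as structured values; anything that is not valid JSON is stored as a string.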
Read and update memory from MCP
Assistants can use the MCP server's structured memory tools:
- `kitaru_memory_list(scope, scope_type, prefix=None)`
- `kitaru_memory_get(key, scope, scope_type, version=None)`
- `kitaru_memory_set(key, value, scope, scope_type)`
- `kitaru_memory_delete(key, scope, scope_type)`
- `kitaru_memory_history(key, scope, scope_type)`
A typical sequence looks like this:
- `kitaru_memory_list(scope="repo_docs", scope_type="namespace")`
- `kitaru_memory_get(key="style/release_notes", scope="repo_docs", scope_type="namespace")`
- `kitaru_memory_set(key="style/release_notes", value={"tone": "concise"}, scope="repo_docs", scope_type="namespace")`
Use MCP when an assistant needs to inspect or update durable state without shelling out to the CLI.
Delete a key and inspect history
Deletes are soft deletes, so you can still inspect what happened.
from kitaru import memory
memory.configure(scope="repo_docs", scope_type="namespace")
memory.set("draft/status", "draft")
memory.set("draft/status", "approved")
memory.delete("draft/status")
print(memory.get("draft/status")) # None
print(memory.history("draft/status"))  # includes all versions + deletion marker

That sequence tells a concrete story:
- the key existed
- it changed over time
- the latest version is a deletion marker, called a tombstone: a version that records "this key was deleted" rather than erasing the data
- history still preserves the earlier states
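The soft-delete semantics can be modeled in a few lines of plain Python. This is an illustrative sketch of the behavior, not Kitaru's implementation:

```python
from typing import Any

TOMBSTONE = object()  # sentinel version meaning "this key was deleted"

class VersionedStore:
    """Toy versioned store: every write appends, deletes append a tombstone."""

    def __init__(self) -> None:
        self._versions: dict[str, list[Any]] = {}

    def set(self, key: str, value: Any) -> None:
        self._versions.setdefault(key, []).append(value)

    def delete(self, key: str) -> None:
        # Append a tombstone instead of erasing the data
        self._versions.setdefault(key, []).append(TOMBSTONE)

    def get(self, key: str) -> Any:
        versions = self._versions.get(key, [])
        if not versions or versions[-1] is TOMBSTONE:
            return None
        return versions[-1]

    def history(self, key: str) -> list[Any]:
        return list(self._versions.get(key, []))

store = VersionedStore()
store.set("draft/status", "draft")
store.set("draft/status", "approved")
store.delete("draft/status")
assert store.get("draft/status") is None           # reads see the deletion
assert len(store.history("draft/status")) == 3     # two values + tombstone
```

The key point is that `delete` is just another version, so `history` keeps the full story even after `get` starts returning `None`.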
Compact memory with an LLM summary
Over time, a memory key can accumulate many versions, or several related keys
can hold overlapping information. `compact` sends selected memory values to an
LLM and writes the returned summary as a new memory version.
The default single-key workflow is the common one: the current value of one key has gotten too large, so you compact just that latest live value back down.
from kitaru import KitaruClient
client = KitaruClient()
# Single-key mode: compact the current value of one key
result = client.memories.compact(
    scope="repo_docs",
    scope_type="namespace",
    key="conventions/test_runner",
)

print(result.entry.key)     # "conventions/test_runner"
print(result.sources_read)  # 1

If you explicitly want to summarize the full non-deleted version history of one
key instead, set `source_mode="history"`:
result = client.memories.compact(
    scope="repo_docs",
    scope_type="namespace",
    key="conventions/test_runner",
    source_mode="history",
)

Multi-key mode still summarizes the current value of each listed key into a separate target key:
result = client.memories.compact(
    scope="repo_docs",
    scope_type="namespace",
    keys=["conventions/test_runner", "conventions/python"],
    target_key="summaries/conventions",
    instruction="Summarize these repo conventions in 2-3 concise bullets.",
)

print(result.entry.key)     # "summaries/conventions"
print(result.sources_read)  # 2

In multi-key mode, `target_key` is required. The original source keys are left
untouched.
Compact does not delete source entries. The recommended storage-reduction
workflow is: compact, inspect the summary if you want, then run purge
separately.
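Conceptually, multi-key compaction reads the current value of each source key, writes a single summary version to the target key, and leaves the sources alone. A plain-Python sketch of that shape, where the `summarize` callable stands in for the LLM call:

```python
def compact_multi(store, keys, target_key, summarize):
    # Read each source key's current value, write one summary version to
    # target_key, and leave the source keys untouched.
    sources = [store[k] for k in keys]
    store[target_key] = summarize(sources)
    return len(sources)  # analogous to sources_read

store = {
    "conventions/test_runner": "use pytest",
    "conventions/python": "use 3.12",
}
n = compact_multi(
    store,
    ["conventions/test_runner", "conventions/python"],
    "summaries/conventions",
    lambda values: " | ".join(values),  # stand-in for the LLM summary
)
assert n == 2
assert "conventions/test_runner" in store  # sources left untouched
```

Because compaction only adds a new version, storage does not shrink until you follow up with `purge`.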
Purge old versions
Soft deletes preserve history, but sometimes you need to reclaim space. `purge`
physically deletes artifact versions.
from kitaru import KitaruClient
client = KitaruClient()
# Keep the newest version, delete the rest
result = client.memories.purge(
    "conventions/test_runner",
    scope="repo_docs",
    scope_type="namespace",
    keep=1,
)
print(result.versions_deleted)  # number of old versions removed

# Purge across an entire scope
result = client.memories.purge_scope(
    scope="repo_docs",
    scope_type="namespace",
    keep=1,
    include_deleted=True,  # also fully remove tombstoned keys
)

Omitting `keep` deletes all versions, including the current value.

`purge_scope` skips the internal compaction audit log so the maintenance
trail survives cleanup.
Inspect the compaction audit log
Every `compact` and `purge` operation writes an audit record. Read the
maintenance trail with `compaction_log`:
records = client.memories.compaction_log(
    scope="repo_docs",
    scope_type="namespace",
)

for record in records:  # newest first
    print(record.operation, record.scope, record.timestamp)
    if record.operation == "compact":
        print(f"  wrote {record.target_key} v{record.target_version}")
    else:
        print(f"  deleted {record.versions_deleted} versions")

Memory maintenance from the CLI
The CLI exposes the same admin operations:
# Compact the current value of one key
kitaru memory compact --scope repo_docs \
  --scope-type namespace \
  --key conventions/test_runner

# Summarize the full history of one key instead
kitaru memory compact --scope repo_docs \
  --scope-type namespace \
  --key conventions/test_runner \
  --source-mode history

# Compact multiple current values into one summary key
kitaru memory compact --scope repo_docs \
  --scope-type namespace \
  --keys conventions/test_runner --keys conventions/python \
  --target-key summaries/conventions \
  --instruction "Summarize in 2-3 bullets."
# Purge old versions of one key, keeping the newest
kitaru memory purge conventions/test_runner --scope repo_docs --scope-type namespace --keep 1
# Purge an entire scope, including tombstoned keys
kitaru memory purge-scope --scope repo_docs --scope-type namespace --keep 1 --include-deleted
# View the compaction audit log
kitaru memory compaction-log --scope repo_docs --scope-type namespace

Reindex older execution-scope memory
Most users do not need to do anything here.
If you are starting on a current Kitaru release, new execution-scoped memory is indexed automatically when it is written. That is the normal happy path, and it is the one to optimize for in day-to-day use.
The reindex command is mainly for the smaller group of users who already had execution-scoped memory stored before flow-level discovery tags were added.
When should you run it?
Run `kitaru memory reindex` if all of the following are true:
- you already have older execution-scoped memory in a project
- you want the UI to discover execution-scope memories by flow membership
- that older data was written before the new indexing tags shipped
Dry run first
Start with the preview:
kitaru memory reindex

This scans memory artifact versions in the active project and reports:
- how many versions were scanned
- how many already have the new tags
- how many historical versions need tag updates
- whether any versions could not be matched back to a flow
Apply the backfill
If the dry run looks good, rerun with `--apply`:
kitaru memory reindex --apply

This adds any missing:

- `kitaru:memory:scope_type:*` tags
- `kitaru:memory:flow_id:*` tags for execution-scoped memory when the flow can be resolved
Important limitations
- Reindexing is project-scoped. Run it separately in each project that has historical memory data.
- The operation is additive only. It does not rewrite memory values, delete anything, or remove existing tags.
- This phase backfills tags only, not `flow_id`/`flow_name` metadata. Older entries may therefore become discoverable in the UI while still showing `not indexed` for flow metadata in detailed inspection views.
- Reindexing improves discovery. It does not retroactively turn detached writes into live-run provenance, so `execution_id` stays `None` for those historical detached versions.
- Rerunning the command is safe. It recomputes which tags are still missing and only adds those.
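The additive, rerun-safe behavior amounts to a set difference over tags. An illustrative sketch (the tag values below are hypothetical examples):

```python
def missing_tags(existing: set[str], required: set[str]) -> set[str]:
    # Additive only: compute what is absent and add just that; never
    # remove or rewrite tags that are already present.
    return required - existing

existing = {"kitaru:memory:scope_type:execution"}
required = existing | {"kitaru:memory:flow_id:research_agent"}  # hypothetical flow tag

existing |= missing_tags(existing, required)      # first --apply run backfills
assert missing_tags(existing, required) == set()  # rerun finds nothing to add
```

Since the operation only ever adds the difference, applying it twice is a no-op, which is why rerunning the command is safe.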
Memory maintenance from MCP
Assistants can use the MCP memory maintenance tools:
- `kitaru_memory_compact(scope, scope_type, key=None, keys=None, source_mode="current", target_key=None, instruction=None, model=None, max_tokens=None)`
- `kitaru_memory_purge(key, scope, scope_type, keep=None)`
- `kitaru_memory_purge_scope(scope, scope_type, keep=None, include_deleted=False)`
- `kitaru_memory_compaction_log(scope, scope_type)`
These follow the same semantics as the Python client and CLI surfaces.
Restrictions to remember
- Memory is allowed in flow bodies
- Memory is forbidden inside `@checkpoint`: checkpoints can be skipped by caching or replay, which would silently lose writes and serve stale reads. See Why memory is forbidden in checkpoints for the full explanation.
- Outside a flow, configure the scope first with `memory.configure(...)`
- Replay may observe newer values, and replays of writes create new versions
Runnable example
If you want one script that tells the whole story, run
`examples/memory/flow_with_memory.py`:

uv run examples/memory/flow_with_memory.py

It demonstrates:

- outside-flow namespace seeding with `kitaru.memory`
- in-flow reads, writes, deletes, history, and scope switching
- explicit namespace, flow, and execution inspection with `KitaruClient.memories`
- detached post-run writes into an execution scope, including the membership-vs-provenance distinction
- post-run memory maintenance: multi-key compaction, purge, and audit log inspection
By default the example renders narrated text output. Use `--output json` for
the raw structured snapshot.
The example auto-detects whether a default model is configured. If one is available, the walkthrough includes LLM-powered compaction, purge, and audit log inspection. Without a model, those sections are skipped with guidance on how to enable them. See the example README for setup details and the demo playbook for recording recipes.