Kitaru

Logging and Metadata

Attach structured data to executions and checkpoints

Kitaru has two separate observability channels. Understanding the difference between them avoids confusion:

Channel              What it does                                        How you use it
-------------------  --------------------------------------------------  ------------------------------------
Structured metadata  Key-value data attached to a specific execution     kitaru.log(key=value) in Python
                     or checkpoint
Runtime logs         Execution/checkpoint stdout/stderr retrieval +      kitaru executions logs ...,
                     backend destination configuration                   KitaruClient.executions.logs(...),
                                                                         and kitaru log-store ...

This page focuses on structured metadata via kitaru.log(). For runtime log retrieval and storage backend configuration, see View Execution Runtime Logs and Configure Runtime Log Storage.

Attaching metadata with kitaru.log()

Call kitaru.log() with keyword arguments to attach structured metadata:

from kitaru import checkpoint
import kitaru

@checkpoint
def call_model(prompt: str) -> str:
    response = model.generate(prompt)  # `model` is a placeholder LLM client
    kitaru.log(
        tokens=response.usage.total_tokens,
        cost=response.usage.cost,
        model=response.model,
    )
    return response.text

You can call kitaru.log() multiple times — metadata accumulates rather than replacing previous entries.
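The accumulation behavior can be modeled in a few lines of plain Python. This is an illustrative sketch, not Kitaru internals: each kitaru.log()-style call adds its keyword arguments to one growing metadata record.

```python
# Plain-Python model of accumulation: each call adds its keyword
# arguments to a single growing metadata record.
record = {}

def log(**kwargs):
    record.update(kwargs)

log(tokens=512)
log(cost=0.004)
log(model="demo-model")
assert record == {"tokens": 512, "cost": 0.004, "model": "demo-model"}
```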

How targeting works

kitaru.log() is scope-sensitive. It automatically detects where it is called and attaches metadata to the right target:

inside a checkpoint  →  metadata attaches to that checkpoint
inside a flow only   →  metadata attaches to the execution
outside a flow       →  raises KitaruContextError

This means you can log at both levels in the same workflow:

from kitaru import flow, checkpoint
import kitaru

@checkpoint
def write_draft(topic: str) -> str:
    draft = f"Draft about {topic}."
    kitaru.log(draft_cost={"usd": 0.001})       # → checkpoint metadata
    kitaru.log(model="demo-model", latency_ms=120)  # → checkpoint metadata
    return draft

@flow
def writing_agent(topic: str) -> str:
    kitaru.log(topic=topic, stage="started")     # → execution metadata
    draft = write_draft(topic)
    kitaru.log(stage="completed")                # → execution metadata
    return draft

Execution-level and checkpoint-level metadata remain separate — they do not mix together.
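The targeting rules above can be modeled with a context stack: metadata attaches to the innermost active scope, and logging with no active scope is an error. This is an illustrative plain-Python sketch, not Kitaru's implementation; the names `ContextError`, `enter_scope`, and `log` are hypothetical.

```python
# Illustrative model of scope-sensitive log targeting, not Kitaru internals.
# Metadata attaches to the innermost active scope; logging with no active
# scope raises, mirroring KitaruContextError.
from contextlib import contextmanager

class ContextError(RuntimeError):
    pass

_scope_stack = []  # innermost scope is last

@contextmanager
def enter_scope(metadata: dict):
    _scope_stack.append(metadata)
    try:
        yield
    finally:
        _scope_stack.pop()

def log(**kwargs):
    if not _scope_stack:
        raise ContextError("log() called outside a flow")
    _scope_stack[-1].update(kwargs)  # attach to the innermost scope

execution_md, checkpoint_md = {}, {}
with enter_scope(execution_md):          # flow scope
    log(stage="started")                 # → execution metadata
    with enter_scope(checkpoint_md):     # checkpoint scope
        log(latency_ms=120)              # → checkpoint metadata
    log(stage="completed")               # → execution metadata

# execution_md == {"stage": "completed"}; checkpoint_md == {"latency_ms": 120}
```

Note how the two dictionaries stay separate, just as execution-level and checkpoint-level metadata do.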

Repeated keys and merging behavior

When you log the same key multiple times in the same scope, the behavior depends on the value type:

Value type                     Behavior
-----------------------------  -----------------------------------------------------
Dictionary                     Values are merged (keys from both calls are combined)
Scalar (string, number, etc.)  The latest value wins

For example:

from kitaru import checkpoint
import kitaru

@checkpoint
def tracked_step() -> str:
    kitaru.log(cost={"usd": 0.001})
    kitaru.log(cost={"tokens": 42})
    # cost metadata is merged: {"usd": 0.001, "tokens": 42}

    kitaru.log(status="draft")
    kitaru.log(status="final")
    # status metadata resolves to "final" (latest scalar wins)
    return "done"
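The table above can be expressed as a small merge function. A minimal sketch in plain Python (illustrative only, not Kitaru's implementation; `merge_entry` and `log_into` are hypothetical names):

```python
# Illustrative merge rule: dict values merge key-by-key, anything else
# is overwritten by the latest value. Not Kitaru internals.
def merge_entry(existing, new):
    if isinstance(existing, dict) and isinstance(new, dict):
        return {**existing, **new}  # dictionaries merge
    return new                      # latest scalar (or non-dict) wins

def log_into(store: dict, **kwargs):
    for key, value in kwargs.items():
        store[key] = merge_entry(store[key], value) if key in store else value

md = {}
log_into(md, cost={"usd": 0.001})
log_into(md, cost={"tokens": 42})
log_into(md, status="draft")
log_into(md, status="final")
# md == {"cost": {"usd": 0.001, "tokens": 42}, "status": "final"}
```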

What values are accepted

Metadata values should be JSON-serializable:

  • Strings, numbers, booleans
  • Lists and dictionaries
  • Nested combinations of the above

Standard keys like cost, tokens, latency, and model are common conventions, but you can use any key name that makes sense for your workflow.
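One way to check a value before logging it is to round-trip it through the standard json module. An illustrative helper (hypothetical, not part of Kitaru's API):

```python
import json

def is_json_serializable(value) -> bool:
    """Return True if json.dumps() accepts the value."""
    try:
        json.dumps(value)
        return True
    except (TypeError, ValueError):
        return False

assert is_json_serializable({"cost": {"usd": 0.001}, "tags": ["a", "b"]})
assert not is_json_serializable({1, 2, 3})  # sets are not JSON-serializable
```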

Runtime logs (separate system)

Runtime logs are a different system from structured metadata. They cover captured execution/checkpoint log lines and where those lines are stored.

Retrieve logs with:

  • kitaru executions logs <exec_id>
  • KitaruClient().executions.logs(exec_id, ...)
  • MCP get_execution_logs

Configure the preferred backend destination with kitaru log-store ....

The precedence for Kitaru's preferred runtime log backend is:

  1. Environment variables (highest priority)
  2. Global user config (set via kitaru log-store set)
  3. Built-in default (the artifact-store backend)
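The precedence order can be sketched as a small resolver. Illustrative plain Python only: the environment variable name `KITARU_LOG_STORE` and the `log_store` config key are assumptions for the sketch, not documented names.

```python
import os

# Illustrative resolver for the documented precedence:
# env var > global user config > built-in default.
# "KITARU_LOG_STORE" is an assumed name, used here for illustration.
def resolve_log_store(user_config: dict) -> str:
    env_value = os.environ.get("KITARU_LOG_STORE")  # 1. highest priority
    if env_value:
        return env_value
    if "log_store" in user_config:                  # 2. kitaru log-store set
        return user_config["log_store"]
    return "artifact-store"                         # 3. built-in default

os.environ.pop("KITARU_LOG_STORE", None)
assert resolve_log_store({}) == "artifact-store"
assert resolve_log_store({"log_store": "s3"}) == "s3"
```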

The active stack's log store controls where runtime logs are actually emitted at execution time. If the stack's log store differs from Kitaru's saved preference, kitaru status and kitaru log-store show will warn about the mismatch.

For full details, see View Execution Runtime Logs and Configure Runtime Log Storage.
