Secrets + Model Registration
Store provider credentials, register a model alias, and use kitaru.llm() inside a flow
This walkthrough shows the full setup path for tracked LLM calls in Kitaru:
- store provider credentials in a secret
- register a model alias that points at that secret
- call `kitaru.llm()` inside a flow using the alias
This is the most reusable setup because your flow code can stay stable while you change credentials or swap the underlying model later.
1) Store provider credentials in a secret
Create a secret with real environment-variable-style key names:
```shell
kitaru secrets set openai-creds --OPENAI_API_KEY=sk-...
```

What this does:
- creates the secret if it does not exist yet
- updates the provided keys if the secret already exists
- stores it as a private secret by default
Use the exact key names your provider expects, for example:
- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- `AZURE_OPENAI_API_KEY`
2) Register a model alias
Now register a reusable alias that points to a concrete LiteLLM model string and links the secret by name:
```shell
kitaru model register fast --model openai/gpt-4o-mini --secret openai-creds
```

You can inspect the aliases you have registered with:

```shell
kitaru model list
```

What Kitaru stores here:
- the alias name (`fast`)
- the real model string (`openai/gpt-4o-mini`)
- a reference to the secret (`openai-creds`)
`kitaru model register` stores a secret reference, not the secret value itself. The first alias you register also becomes your default alias automatically. Submitted and replayed runs automatically receive the current registry snapshot through `KITARU_MODEL_REGISTRY`, so remote executions can still resolve the alias.
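As an illustration of how a transported snapshot could work (the actual `KITARU_MODEL_REGISTRY` encoding is an internal detail of Kitaru, and the JSON shape below is an assumption for this sketch), think of it as a small alias-to-record mapping carried in an environment variable:

```python
import json
import os

# Hypothetical snapshot shape: alias -> {model string, linked secret name}.
# The real KITARU_MODEL_REGISTRY format may differ.
snapshot = {
    "fast": {"model": "openai/gpt-4o-mini", "secret": "openai-creds"},
}
os.environ["KITARU_MODEL_REGISTRY"] = json.dumps(snapshot)

# A remote worker can then recover the alias mapping from its environment
# and resolve "fast" without access to your local registry file.
registry = json.loads(os.environ["KITARU_MODEL_REGISTRY"])
print(registry["fast"]["model"])  # openai/gpt-4o-mini
```

Note that only the secret *name* travels in the snapshot; the secret value itself is resolved separately at runtime.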
3) Use the alias inside a flow
Once the alias exists, your flow code can stay simple:
```python
import kitaru
from kitaru import checkpoint, flow


@checkpoint
def write_draft(topic: str, outline: str) -> str:
    return kitaru.llm(
        f"Write a short paragraph about {topic} using this outline:\n{outline}",
        model="fast",
        name="draft_call",
    )


@flow
def llm_writer(topic: str) -> str:
    outline = kitaru.llm(
        f"Create a 3-bullet outline about {topic}.",
        model="fast",
        name="outline_call",
    )
    return write_draft(topic, outline)
```

That code uses the alias name `fast`, not the raw provider/model string.
4) What happens at runtime
When `kitaru.llm()` runs, Kitaru does the following:
- resolves the model you asked for
- checks whether that value is an alias in the effective registry visible to the runtime
- if the alias has a linked secret, tries to resolve credentials at runtime
- calls LiteLLM with the resolved model and credentials
- saves prompt/response artifacts and logs usage, latency, and cost metadata
For known providers such as OpenAI, Anthropic, and Gemini, credential lookup is environment-first:
- if the provider's env var is already set, Kitaru uses the environment
- otherwise, if the alias has a linked secret, Kitaru loads that secret
- if neither is available, the call fails with setup guidance
Model selection precedence for `kitaru.llm()` is: the explicit `model=` argument first, then `KITARU_DEFAULT_MODEL`, then the effective default alias visible in the current environment. If `KITARU_DEFAULT_MODEL` matches a registered alias, Kitaru resolves it as an alias; otherwise it is treated as a raw LiteLLM model string.
5) Environment-only shortcut
If you do not want to link a secret, you can keep credentials in the runtime environment instead:
```shell
kitaru model register fast --model openai/gpt-4o-mini
export OPENAI_API_KEY=sk-...
```

This is a useful shortcut for local development.
For advanced custom environments, you can also set `KITARU_MODEL_REGISTRY` explicitly to add aliases or override matching transported aliases.
For known providers, environment variables still win if both are present. In other words: if your alias links `openai-creds` but `OPENAI_API_KEY` is already exported in the process, Kitaru uses the environment value.
6) Non-LLM secrets
The walkthrough above covers the fully user-facing LLM path that Kitaru supports today.
If you need a secret for some other external service inside a checkpoint, see Manage Secrets for the current advanced low-level pattern.