Kitaru

llm

LLM call primitive for tracked model interactions.

kitaru.llm() wraps one LiteLLM completion call with Kitaru tracking.

func _normalize_call_name(name) -> str

Normalize optional user call names into ID-safe call names.

param name: str | None

Returns

str

func _provider_name(model) -> str | None

Extract the provider prefix from a LiteLLM model identifier.

param model: str

Returns

str | None

func _provider_credential_keys(model) -> tuple[str, ...] | None

Return known environment-variable credential keys for a model provider.

param model: str

Returns

tuple[str, ...] | None

func _read_secret_values(secret_name) -> dict[str, str]

Read secret key/value pairs from ZenML for env injection.

param secret_name: str

Returns

dict[str, str]

func _resolve_credential_overlay(selection) -> tuple[dict[str, str], str]

Resolve env-first credentials with optional ZenML secret fallback.

param selection: ResolvedModelSelection

Returns

tuple[dict[str, str], str]
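
"Env-first" presumably means: if the process environment already provides every expected credential key, no overlay is needed; otherwise fall back to the configured ZenML secret. A hedged sketch of that precedence (the source labels, parameter shape, and return convention here are assumptions, not Kitaru's actual API):

```python
import os
from collections.abc import Callable


def resolve_credential_overlay(
    keys: tuple[str, ...],
    read_secret: Callable[[], dict[str, str]],
) -> tuple[dict[str, str], str]:
    """Sketch: empty overlay if env already has every key, else the secret values."""
    if all(key in os.environ for key in keys):
        return {}, "environment"
    return read_secret(), "zenml-secret"
```
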

func _normalize_messages(prompt, *, system) -> list[dict[str, Any]]

Normalize string/chat prompt input into LiteLLM message format.

param prompt: str | list[dict[str, Any]]
param system: str | None

Returns

list[dict[str, typing.Any]]

func _extract_response_text(raw_response) -> str

Extract the text response from a LiteLLM completion response.

param raw_response: Any

Returns

str
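
LiteLLM completion responses follow the OpenAI shape, with the text at `choices[0].message.content`. A sketch of the extraction (Kitaru's version likely adds more defensive handling):

```python
from types import SimpleNamespace
from typing import Any


def extract_response_text(raw_response: Any) -> str:
    """Read choices[0].message.content; fall back to '' for non-string content."""
    content = raw_response.choices[0].message.content
    return content if isinstance(content, str) else ""
```
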

func _extract_usage(raw_response) -> _LLMUsage

Extract usage/cost values from a LiteLLM completion response.

param raw_response: Any

Returns

kitaru.llm._LLMUsage
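
`_LLMUsage` is private, so its fields are not documented here; a stand-in sketch reading the OpenAI-style `usage` object that LiteLLM responses carry, defaulting to zero when it is absent (field names on the dataclass are assumptions):

```python
from dataclasses import dataclass
from types import SimpleNamespace
from typing import Any


@dataclass
class LLMUsage:  # stand-in for the private _LLMUsage; real fields may differ
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int


def extract_usage(raw_response: Any) -> LLMUsage:
    """Read token counts from the response's `usage` object, defaulting to 0."""
    usage = getattr(raw_response, "usage", None)
    return LLMUsage(
        prompt_tokens=getattr(usage, "prompt_tokens", 0) or 0,
        completion_tokens=getattr(usage, "completion_tokens", 0) or 0,
        total_tokens=getattr(usage, "total_tokens", 0) or 0,
    )
```
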

func _temporary_env(additions) -> Any

Temporarily add/override environment variables for one call.

param additions: Mapping[str, str]

Returns

typing.Any
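
This is the standard set-then-restore pattern for scoping env vars (such as the credential overlay above) to a single call. A sketch as a context manager:

```python
import os
from collections.abc import Iterator, Mapping
from contextlib import contextmanager


@contextmanager
def temporary_env(additions: Mapping[str, str]) -> Iterator[None]:
    """Set/override env vars for the enclosed block, restoring them on exit."""
    saved = {key: os.environ.get(key) for key in additions}
    os.environ.update(additions)
    try:
        yield
    finally:
        for key, old in saved.items():
            if old is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = old
```

Restoring in a `finally` block guarantees cleanup even if the wrapped call raises.
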

func _execute_llm_call(request) -> str

Execute one normalized LLM call and persist artifacts/metadata.

param request: _LLMRequest

Returns

str

func _llm_checkpoint_call(request) -> str

Synthetic checkpoint used for flow-body kitaru.llm() calls.

param request: _LLMRequest

Returns

str

func llm(prompt, *, model=None, system=None, temperature=None, max_tokens=None, name=None) -> str

Make a tracked LLM call.

param prompt: str | list[dict[str, Any]]

User prompt text or a chat-style message list.

param model: str | None = None

Model alias or concrete LiteLLM model identifier.

param system: str | None = None

Optional system prompt.

param temperature: float | None = None

Optional sampling temperature.

param max_tokens: int | None = None

Optional maximum response tokens.

param name: str | None = None

Optional display name for this call.

Returns

str

The model response text.