Stacks
Create, inspect, switch, and delete the stacks Kitaru uses for execution
A stack is the execution environment for your flows. It bundles three things:
- Compute — where your flow code runs (locally, on Kubernetes, on Vertex AI, etc.)
- Storage — where checkpoint outputs and saved data are persisted
- Container registry — where container images are pushed for remote execution
When you run a flow, Kitaru uses the active stack to decide where to execute it and where to store results. Locally, everything runs on your machine with local file storage. For production, you point the stack at cloud infrastructure.
The default stack
After kitaru init, you get a default stack that runs everything locally:
```shell
kitaru stack current
```
This is enough to develop and test flows on your machine. No cloud accounts or containers required.
List available stacks
```shell
kitaru stack list
```
The table view shows each stack ID and marks the active one.
If you need machine-readable output, use JSON:
```shell
kitaru stack list --output json
```
Each list item includes:
- id
- name
- is_active
- is_managed
is_managed is true for stacks created by Kitaru's stack create command.
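As a quick sketch of consuming that JSON output, assuming a payload shaped like the four fields above (the IDs and names here are made up for illustration):

```python
import json

# Hypothetical sample of `kitaru stack list --output json`; real IDs will differ.
sample = """
[
  {"id": "a1", "name": "default", "is_active": false, "is_managed": false},
  {"id": "b2", "name": "dev", "is_active": true, "is_managed": true}
]
"""

stacks = json.loads(sample)
active = next(s["name"] for s in stacks if s["is_active"])
managed = [s["name"] for s in stacks if s["is_managed"]]
print(active)   # dev
print(managed)  # ['dev']
```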
Switching stacks
```shell
kitaru stack use prod-k8s
```
You can pass either a stack name or a stack ID. The selected stack is persisted as your default until you switch it again.
Now every .run() call uses that stack. You can also override per-run:
```python
my_agent.run(topic="...", stack="prod-k8s")
```
Note that kitaru stack use changes only the fallback stack Kitaru will use when no higher-precedence override is present. It does not rewrite any per-flow or per-run overrides.
Create a local stack
```shell
kitaru stack create dev
```
By default, Kitaru creates:
- a local orchestrator named dev
- a local artifact store named dev
- a stack named dev
Then it automatically activates the new stack.
You will see output like:
```
Created stack: dev
Active stack: default → dev
```
If you want to create the stack without switching to it yet:
```shell
kitaru stack create dev --no-activate
```
Create a remote stack
Today, the CLI and MCP server can provision five shipped stack types:
- local
- kubernetes
- vertex
- sagemaker
- azureml
These remote stack commands assume you are already connected to the Kitaru
server that should own the stack. If you already have a deployed server,
connect first with kitaru login ... and verify with kitaru status.
In story form: Kitaru can assemble the stack definition and cloud connector for you, but it still expects the bucket, registry, and any cluster you point at to already exist.
Kubernetes example
```shell
kitaru stack create prod-k8s \
  --type kubernetes \
  --artifact-store s3://my-bucket/kitaru \
  --container-registry 123456789012.dkr.ecr.eu-west-1.amazonaws.com \
  --cluster prod-cluster \
  --region eu-west-1
```
For the end-to-end Kubernetes setup, see Kubernetes.
For all available orchestrator fields (useful with --extra), see the ZenML Kubernetes orchestrator reference.
Vertex example
```shell
kitaru stack create prod-vertex \
  --type vertex \
  --artifact-store gs://my-bucket/kitaru \
  --container-registry us-central1-docker.pkg.dev/my-project/my-repo \
  --region us-central1
```
Vertex uses a managed runner, so there is no --cluster or --namespace flag. kitaru stack show prod-vertex will report the runner location that ZenML stores for the Vertex orchestrator. For all available orchestrator fields (useful with --extra), see the ZenML Vertex orchestrator reference.
SageMaker example
```shell
kitaru stack create prod-sagemaker \
  --type sagemaker \
  --artifact-store s3://my-bucket/kitaru \
  --container-registry 123456789012.dkr.ecr.eu-west-1.amazonaws.com \
  --region eu-west-1 \
  --execution-role arn:aws:iam::123456789012:role/SageMakerExecutionRole
```
SageMaker is also a managed-runner path, so there is no --cluster or --namespace flag. kitaru stack show prod-sagemaker will report the runner region and execution role. For all available orchestrator fields (useful with --extra), see the ZenML SageMaker orchestrator reference.
AzureML example
```shell
kitaru stack create prod-azureml \
  --type azureml \
  --artifact-store az://my-container/kitaru \
  --container-registry demo.azurecr.io/my-team/my-image \
  --subscription-id 00000000-0000-0000-0000-000000000123 \
  --resource-group ml-platform \
  --workspace team-ml \
  --region westeurope
```
AzureML is another managed-runner path, so there is no --cluster, --namespace, or --execution-role flag. kitaru stack show prod-azureml will report the runner subscription, resource group, workspace, and location that ZenML stores for the AzureML orchestrator. For all available orchestrator fields (useful with --extra), see the ZenML AzureML orchestrator reference.
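Taken together, the four examples imply a different flag set per stack type. Here is an illustrative summary in Python, under the assumption that the flags shown in each example above are exactly the required ones (this table is not part of Kitaru itself):

```python
# Assumed required flags per shipped stack type, derived from the examples above.
REQUIRED_FLAGS = {
    "kubernetes": {"artifact-store", "container-registry", "cluster", "region"},
    "vertex": {"artifact-store", "container-registry", "region"},
    "sagemaker": {"artifact-store", "container-registry", "region", "execution-role"},
    "azureml": {"artifact-store", "container-registry",
                "subscription-id", "resource-group", "workspace", "region"},
}

def missing_flags(stack_type, provided):
    """Return the flags still needed before `kitaru stack create` could succeed."""
    return sorted(REQUIRED_FLAGS[stack_type] - set(provided))

print(missing_flags("vertex", {"artifact-store", "region"}))  # ['container-registry']
```

Notice that only kubernetes takes --cluster: the three managed-runner types delegate scheduling to the cloud service instead.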
You can also keep the same inputs in a YAML file and create the stack with:
```shell
kitaru stack create -f stack.yaml
```
CLI flags still override YAML values when both are provided.
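One way to picture the flag-over-YAML rule is a plain dictionary merge where CLI keys win on conflict. This is a sketch of the behaviour, not Kitaru's actual implementation:

```python
# Sketch of "CLI flags override YAML values"; not the real Kitaru code.
yaml_values = {"name": "prod-vertex", "type": "vertex", "region": "us-central1"}
cli_flags = {"region": "europe-west4"}  # e.g. a hypothetical --region europe-west4

merged = {**yaml_values, **cli_flags}  # CLI keys win on conflict
print(merged["region"])  # europe-west4
```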
Advanced stack defaults with --extra and --async
The named stack flags cover the common story: where artifacts live, which registry to use, which cluster or cloud region to target.
Sometimes you need one layer deeper. That is what --extra is for.
Think of it like this:
- the named flags are the front desk
- --extra is the side door into the underlying stack component defaults
You pass overrides as TARGET.FIELD=VALUE, where TARGET is one of:
- orchestrator
- artifact_store
- container_registry
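The TARGET.FIELD=VALUE shape splits cleanly on the first dot and the first equals sign. The parser below is an illustrative sketch, not Kitaru's own:

```python
# Illustrative parser for --extra TARGET.FIELD=VALUE overrides.
def parse_extra(item: str) -> tuple[str, str, str]:
    key, _, value = item.partition("=")
    target, _, field = key.partition(".")
    if not (target and field and value):
        raise ValueError(f"expected TARGET.FIELD=VALUE, got {item!r}")
    return target, field, value

def collect_extras(items):
    """Group repeated --extra overrides into a nested {target: {field: value}} dict."""
    extras: dict[str, dict[str, str]] = {}
    for item in items:
        target, field, value = parse_extra(item)
        extras.setdefault(target, {})[field] = value
    return extras

print(collect_extras(["orchestrator.pipeline_root=gs://my-bucket/vertex-root"]))
# {'orchestrator': {'pipeline_root': 'gs://my-bucket/vertex-root'}}
```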
For example, this Vertex stack sets a pipeline root and leaves the orchestrator asynchronous by default:
```shell
kitaru stack create prod-vertex \
  --type vertex \
  --artifact-store gs://my-bucket/kitaru \
  --container-registry us-central1-docker.pkg.dev/my-project/my-repo \
  --region us-central1 \
  --async \
  --extra orchestrator.pipeline_root=gs://my-bucket/vertex-root
```
--async is just a convenience flag for the common case orchestrator.synchronous=false.
If you need the explicit setting instead, --extra wins:
```shell
kitaru stack create prod-vertex \
  --type vertex \
  --artifact-store gs://my-bucket/kitaru \
  --container-registry us-central1-docker.pkg.dev/my-project/my-repo \
  --region us-central1 \
  --async \
  --extra orchestrator.synchronous=true
```
You can also keep the same advanced defaults in YAML:
```yaml
name: prod-vertex
type: vertex
artifact_store: gs://my-bucket/kitaru
container_registry: us-central1-docker.pkg.dev/my-project/my-repo
region: us-central1
async: true
extra:
  orchestrator:
    pipeline_root: gs://my-bucket/vertex-root
  container_registry:
    default_repository: agents
```
CLI --extra values merge on top of YAML extra: values instead of replacing the whole object. In story form: the YAML file is your saved blueprint, and the CLI extras are the sticky notes you add for this one build.
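The merge-not-replace behaviour can be sketched as a per-component merge: each CLI field lands inside the matching component's dictionary rather than wiping it out. This is a simplified model of the rule described above, not Kitaru's internals:

```python
# Sketch of merging CLI --extra values on top of YAML extra: values.
def merge_extras(yaml_extra: dict, cli_extra: dict) -> dict:
    merged = {target: dict(fields) for target, fields in yaml_extra.items()}
    for target, fields in cli_extra.items():
        merged.setdefault(target, {}).update(fields)  # CLI fields win per key
    return merged

yaml_extra = {"orchestrator": {"pipeline_root": "gs://my-bucket/vertex-root"},
              "container_registry": {"default_repository": "agents"}}
cli_extra = {"orchestrator": {"synchronous": "false"}}

merged = merge_extras(yaml_extra, cli_extra)
print(merged["orchestrator"])
# {'pipeline_root': 'gs://my-bucket/vertex-root', 'synchronous': 'false'}
```

The YAML-only container_registry entry survives untouched, which is the "sticky notes on a blueprint" behaviour in code form.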
Kitaru does not try to duplicate every underlying field in its own docs. For full field inventories, see the ZenML component reference for your orchestrator type: Kubernetes, Vertex, SageMaker, AzureML.
Delete a stack
To delete only the stack record and keep its components:
```shell
kitaru stack delete dev
```
To also remove Kitaru-managed components that are not shared with other stacks:
```shell
kitaru stack delete dev --recursive
```
If the stack you are deleting is currently active, Kitaru protects you by default. Use --force to switch back to the default stack first and then continue:
```shell
kitaru stack delete dev --recursive --force
```
Use the Python SDK
```python
import kitaru

print(kitaru.current_stack())
print(kitaru.list_stacks())

kitaru.create_stack("dev")
kitaru.use_stack("production")
kitaru.delete_stack("dev", recursive=True, force=True)
```
The SDK keeps StackInfo intentionally small: id, name, and is_active.
That means is_managed is part of structured list output, not part of StackInfo itself.
One important scope note: the public Python SDK kitaru.create_stack(...) currently provisions local stacks only. Creation of Kubernetes, Vertex, SageMaker, and AzureML stacks is exposed through the CLI and MCP surfaces.
Precedence with flow-level stack overrides
The active stack is only one layer in the execution precedence chain. Higher layers can override it:
- my_flow.run(..., stack="gpu-cluster")
- @flow(stack="gpu-cluster")
- kitaru.configure(stack="gpu-cluster")
- KITARU_STACK
- pyproject.toml ([tool.kitaru].stack)
- currently active stack
In story form:
- kitaru stack use prod changes your persisted default stack
- kitaru.configure(stack="gpu-cluster") changes the default only for the current Python process
- @flow(stack="gpu-cluster") binds a default to one specific flow definition
- my_flow.run(stack="gpu-cluster") overrides everything else for that one execution
Those higher-precedence overrides do not change the active stack you see in kitaru stack current; they are temporary execution-time bindings.
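The chain reads top-down: the first layer that specifies a stack wins. A minimal resolver sketch makes that concrete; the function name and parameters here are illustrative, not part of the Kitaru API:

```python
import os

# Illustrative precedence resolver; names are hypothetical, not Kitaru API.
def resolve_stack(
    run_stack=None,         # my_flow.run(..., stack=...)
    flow_stack=None,        # @flow(stack=...)
    configured_stack=None,  # kitaru.configure(stack=...)
    pyproject_stack=None,   # [tool.kitaru].stack in pyproject.toml
    active_stack="default", # persisted via kitaru stack use
):
    env_stack = os.environ.get("KITARU_STACK")
    for candidate in (run_stack, flow_stack, configured_stack,
                      env_stack, pyproject_stack):
        if candidate is not None:
            return candidate
    return active_stack

print(resolve_stack(flow_stack="gpu-cluster"))  # gpu-cluster
print(resolve_stack(run_stack="one-off", flow_stack="gpu-cluster"))  # one-off
```

Only the last fallback, active_stack, is what kitaru stack current reports; everything above it is a temporary binding, matching the note above.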