# Containerization

How Kitaru builds and configures container images for remote execution.
When you run a flow on a remote stack (Kubernetes, Vertex AI, SageMaker, Azure ML),
Kitaru packages your code into a container image automatically. The `image`
parameter on `@flow` controls how that image is built.
## Default behavior

With no image configuration, Kitaru:

- Uses a default Python base image
- Installs `kitaru` into the container
- Packages your project source code (detected from the `.kitaru/` project root)

This is enough for simple flows with no extra dependencies.
## The image parameter

Pass `image` to `@flow`, `.run()`, or `kitaru.configure()`. It accepts an
`ImageSettings` object or a plain dictionary:

```python
from kitaru import flow
import kitaru

@flow(
    image=kitaru.ImageSettings(
        base_image="python:3.12-slim",
        requirements=["httpx", "pydantic-ai"],
        apt_packages=["git"],
        environment={"MY_VAR": "value"},
    ),
)
def my_agent(topic: str) -> str:
    ...
```

Or as a dictionary:
```python
@flow(
    image={
        "base_image": "python:3.12-slim",
        "requirements": ["httpx", "pydantic-ai"],
    },
)
def my_agent(topic: str) -> str:
    ...
```

## Available fields
| Field | Type | Description |
|---|---|---|
| `base_image` | `str` | Docker image to start from (e.g. `python:3.12-slim`) |
| `requirements` | `list[str]` | Python packages to install (pip format) |
| `dockerfile` | `str` | Path to a custom Dockerfile instead of auto-building |
| `environment` | `dict[str, str]` | Environment variables set inside the container |
| `apt_packages` | `list[str]` | System packages to install via apt |
| `replicate_local_python_environment` | `bool` | Mirror your local `pip freeze` into the container |
## Automatic Kitaru injection

Kitaru automatically adds itself to the container requirements so your flow code
can import and run `kitaru` at execution time. You do not need to add `kitaru` to
your requirements list manually.

If you already include `kitaru` (with or without a version pin), it is not
duplicated:
```python
@flow(
    image=kitaru.ImageSettings(
        requirements=["kitaru>=0.2.0", "httpx"],
    ),
)
def my_agent(topic: str) -> str:
    ...

# Container installs: kitaru>=0.2.0, httpx (no duplicate)
```

If you provide a custom `base_image` or `dockerfile`, Kitaru does not
auto-inject the SDK. Your image must already include `kitaru`.
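The deduplication behavior described above can be sketched in a few lines: compare each requirement's distribution name (the part before any version operator) against the SDK name, and only append when it is absent. This is an illustrative sketch, not Kitaru's actual injection code:

```python
import re

def inject_sdk(requirements: list[str], sdk: str = "kitaru") -> list[str]:
    """Append the SDK unless some requirement already names it.

    A requirement's distribution name is everything before the first
    version/extras marker, e.g. "kitaru>=0.2.0" -> "kitaru".
    """
    def dist_name(req: str) -> str:
        return re.split(r"[<>=!~\[;@ ]", req.strip(), maxsplit=1)[0].lower()

    if any(dist_name(r) == sdk for r in requirements):
        return list(requirements)  # already present, pinned or not
    return [*requirements, sdk]

print(inject_sdk(["kitaru>=0.2.0", "httpx"]))  # ['kitaru>=0.2.0', 'httpx']
print(inject_sdk(["httpx"]))                   # ['httpx', 'kitaru']
```

The name comparison (rather than a plain string match) is what lets a version-pinned entry like `kitaru>=0.2.0` suppress the injection.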
## Replicating your local environment

During development, you can mirror your entire local Python environment into the
container. This runs `pip freeze` and installs everything so the remote container
matches your dev setup exactly:

```python
@flow(image={"replicate_local_python_environment": True})
def my_agent(topic: str) -> str:
    ...
```

This is convenient for quick iteration but produces less reproducible builds.
For production, pin explicit requirements instead.
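One caveat worth understanding: raw `pip freeze` output can contain entries that are not installable in a fresh container, such as editable installs of local checkouts. A sketch of the kind of filtering involved (illustrative only; the helper name `freeze_to_requirements` is an assumption, not a Kitaru API):

```python
def freeze_to_requirements(freeze_output: str) -> list[str]:
    """Turn `pip freeze`-style output into an installable requirements list.

    Skips blank lines, comments, and editable installs (`-e ...`), which
    point at local paths that do not exist inside a fresh container image.
    """
    reqs = []
    for line in freeze_output.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("-e "):
            continue
        reqs.append(line)
    return reqs

sample = """\
httpx==0.27.0
# local checkout, not reproducible remotely
-e git+https://github.com/me/mytool.git#egg=mytool
pydantic==2.7.1
"""
print(freeze_to_requirements(sample))  # ['httpx==0.27.0', 'pydantic==2.7.1']
```

Editable installs of local code are one of the reasons replicated environments are less reproducible than an explicit, pinned requirements list.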
## Custom Dockerfile

For full control, point to your own Dockerfile:

```python
@flow(
    image=kitaru.ImageSettings(
        dockerfile="docker/Dockerfile.agent",
    ),
)
def my_agent(topic: str) -> str:
    ...
```

When using a custom Dockerfile, you are responsible for installing Python,
`kitaru`, and any other dependencies your flow needs.
## System packages

If your flow needs OS-level tools (e.g., `git`, `ffmpeg`, `poppler-utils`),
use `apt_packages`:

```python
@flow(
    image=kitaru.ImageSettings(
        apt_packages=["git", "ffmpeg"],
        requirements=["httpx"],
    ),
)
def my_agent(topic: str) -> str:
    ...
```

## Setting image config at different levels
The `image` parameter follows the same precedence rules as other execution
settings. From highest to lowest priority:

```python
# 1. Per-run override (highest)
my_agent.run("topic", image={"requirements": ["httpx"]})

# 2. Flow decorator default
@flow(image={"requirements": ["httpx"]})
def my_agent(topic: str) -> str: ...

# 3. Process-level default
kitaru.configure(image=kitaru.ImageSettings(requirements=["httpx"]))

# 4. Environment variable
# export KITARU_IMAGE='{"requirements": ["httpx"]}'

# 5. pyproject.toml
# [tool.kitaru.image]
# requirements = ["httpx"]
```

## Environment variables inside the container
Use `environment` to inject env vars into the running container. This is useful
for API keys, feature flags, or runtime configuration:

```python
@flow(
    image=kitaru.ImageSettings(
        environment={
            "OPENAI_API_KEY": "{{ OPENAI_KEY }}",
            "LOG_LEVEL": "DEBUG",
        },
    ),
)
def my_agent(topic: str) -> str:
    ...
```

For sensitive values like API keys, prefer Kitaru secrets over hardcoded
environment variables. `kitaru.llm()` resolves alias-linked secrets
automatically.
## How source code is packaged

Kitaru detects your project root from the `.kitaru/` directory created by
`kitaru init`. Everything under that root is packaged into the container image
so your flow code, local modules, and utility files are available at runtime.

Make sure you have run `kitaru init` in your project directory before running
flows on remote stacks.
## Related pages

- Configuration — full config precedence and env vars
- Stacks — execution environments and stack management
- Kubernetes