From zero to production agents in 4 weeks — on us.
Our engineers pair with your team to deploy Kitaru, migrate your agent code, and prove it works — on your infrastructure, with your data. No cost, no commitment.
Talk to us
What you get in 4 weeks
Powered by 5 years of ZenML MLOps experience, hundreds of production deployments, and a team that's shipped LLMOps and agent infrastructure for enterprise teams.
Week 1: Deploy on your infra
We deploy Kitaru on your cloud (AWS, GCP, Azure) or on-prem. We handle Kubernetes, networking, and storage config so your team doesn't have to context-switch.
Week 2: Build a durable agent
We pair-program a real agent workflow with your team: checkpoints, human-in-the-loop waits, LLM calls, and replay. Not a toy demo; something you'll actually ship.
Week 3: Migrate your agents
We take one of your production agents (LangChain, CrewAI, PydanticAI, or custom) and make it durable. No rewrites; thin decorators around your existing Python.
Week 4: Benchmark
Compare reliability, cost, and latency before and after. You get a written report showing exactly what durable execution changed for your agents.
Talk to us
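To make "checkpoints and replay" concrete, here is a minimal sketch of the idea in plain Python. This is not Kitaru's actual API; the `Workflow` class and `step` method are hypothetical stand-ins that show how checkpointing each step lets a crashed run resume without redoing completed work.

```python
import json
import tempfile
from pathlib import Path

# Illustrative sketch only (NOT Kitaru's real API): a tiny workflow
# runner that checkpoints each step's result so a failed run can be
# replayed from where it stopped instead of from scratch.

class Workflow:
    def __init__(self, state_dir):
        self.state_dir = Path(state_dir)
        self.state_dir.mkdir(exist_ok=True)
        self.executed = []  # for demonstration: which steps actually ran

    def step(self, name, fn, payload):
        checkpoint = self.state_dir / f"{name}.json"
        if checkpoint.exists():
            # Replay path: reuse the saved result, skip the work.
            return json.loads(checkpoint.read_text())
        result = fn(payload)
        self.executed.append(name)
        checkpoint.write_text(json.dumps(result))
        return result

def run(wf, crash=False):
    notes = wf.step("research", lambda q: q.upper(), "find sources")
    if crash:
        raise RuntimeError("simulated crash")  # e.g. a pod eviction
    return wf.step("draft", lambda n: n + "!", notes)

wf = Workflow(tempfile.mkdtemp())
try:
    run(wf, crash=True)   # first attempt dies after "research"
except RuntimeError:
    pass
report = run(wf)          # replay: "research" is skipped, "draft" runs
```

On replay, the expensive first step never re-executes; this is the reliability and cost difference the benchmark report measures.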
You're in!
We'll be in touch within one business day to kick off your onboarding.
Frequently asked questions
Is this really free? What's the catch?
There is no catch. Kitaru is open source and we don't charge for this program. We do it because every real-world deployment teaches us something, and teams that go through onboarding tend to become long-term users and contributors. If you eventually need managed hosting or enterprise support, we offer that too — but there's zero obligation.
What kind of agents does Kitaru work with?
Any Python-based agent — LangChain, CrewAI, PydanticAI, custom code, or raw API calls. Kitaru is framework-agnostic. It wraps your existing control flow with durable execution primitives. If your agent runs in Python, it works with Kitaru.
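To illustrate what a "thin decorator around your existing Python" means in spirit, here is a hedged sketch. The `durable` decorator below is hypothetical, not Kitaru's real interface; it just shows how existing agent code can gain checkpointing without a rewrite.

```python
import functools
import json
import tempfile
from pathlib import Path

# Hypothetical sketch (NOT Kitaru's real API): a decorator that
# persists a step's result so a restarted run replays the saved
# value instead of re-running the step.

CHECKPOINT_DIR = Path(tempfile.mkdtemp())

def durable(step_name):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            checkpoint = CHECKPOINT_DIR / f"{step_name}.json"
            if checkpoint.exists():
                return json.loads(checkpoint.read_text())
            result = fn(*args, **kwargs)
            checkpoint.write_text(json.dumps(result))
            return result
        return wrapper
    return decorator

calls = []

@durable("summarize")
def summarize(text):
    calls.append(text)  # stand-in for an expensive LLM call
    return {"summary": text[:24]}

first = summarize("Durable agents survive crashes and restarts.")
second = summarize("Durable agents survive crashes and restarts.")
```

The agent function itself is untouched; that is the sense in which a framework-agnostic wrapper avoids rewrites.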
How much engineering time does our team need to invest?
Roughly 2-4 hours per week for one engineer. We do the heavy lifting — infra setup, code migration, debugging. Your engineer joins pair-programming sessions and reviews PRs so the knowledge transfers to your team.
What clouds and infrastructure do you support?
AWS, GCP, and Azure with Kubernetes. We also support on-prem Kubernetes clusters. Kitaru runs anywhere you can run a container — we just need cluster access and a storage backend (S3, GCS, or Azure Blob).
What happens after the 4 weeks?
You have a working production deployment, migrated agent code, and a benchmark report. Your team keeps everything — it's your infrastructure, your code, your Kitaru instance. We offer optional ongoing support and managed hosting if you want it, but you're fully self-sufficient after onboarding.
Do you need access to our data or models?
No. Kitaru runs on your infrastructure and your data never leaves your environment. We need cluster access to deploy and debug, but we don't need to see your training data, prompts, or model outputs. We can work under NDA if needed.
Ready to make your agents durable?
Open source (Apache 2.0). Free onboarding program.