From bd103e17fed9a3f3522d4bb168cac2073f59c2a2 Mon Sep 17 00:00:00 2001
From: Tobias Brunner
Date: Mon, 8 Dec 2025 15:42:55 +0100
Subject: [PATCH 1/3] adr discussing the choice of kubernetes distro

---
 docs/ADRs/adr002.md | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)
 create mode 100644 docs/ADRs/adr002.md

diff --git a/docs/ADRs/adr002.md b/docs/ADRs/adr002.md
new file mode 100644
index 0000000..c34d6be
--- /dev/null
+++ b/docs/ADRs/adr002.md
@@ -0,0 +1,38 @@
+---
+status: draft
+date: 2025-12-05
+author: Marco De Luca and Tobias Brunner
+---
+
+# ADR 002 Kubernetes Distribution
+
+## Context and Problem Statement
+
+TODO
+
+## Considered Options
+
+We evaluated two options for running Kubernetes on each CSP: using the CSP's managed Kubernetes offering or deploying our own Kubernetes distribution on top of their compute layer (e.g., Talos Linux). We explicitly decided not to use OpenShift or OKE, mainly due to their high cost, added complexity, and the amount of bloat they introduce for a platform that only needs to run hosted AppCat workloads. Lower infrastructure cost directly benefits Servala and allows us to offer more competitive pricing.
+
+For Servala, consistency across CSPs and predictable behavior for AppCat are essential. Managed Kubernetes offerings differ significantly between providers, resulting in fragmentation and making AppCat development, testing, and support more difficult. A BYO Kubernetes approach gives us full control over versions, components, and security defaults, enabling a standardized setup across all CSPs.
+
+The table below summarizes the main differences we identified.
+
+| BYO Kubernetes | Cloud Kubernetes |
+| --- | --- |
+| :lucide-check: We already have a lot in place (existing concepts) | :lucide-check: Works out of the box, clusters come pre-initialized |
+| :lucide-check: Standardized infrastructure across CSPs | :lucide-check: Potentially better integrated with CSP services |
+| :lucide-check: Easier for AppCat to develop and test | :lucide-check: Pre-defined upgrade paths |
+| :lucide-check: Full control over versions, upgrade cadence, and feature gates | :lucide-check: Support from the CSP |
+| :lucide-check: Freedom of choice of cluster components | :lucide-x: Limited flexibility |
+| :lucide-check: Potentially better security | :lucide-x: Inconsistency across CSPs (different Kubernetes flavors, versions, CRDs, API feature gates) |
+| :lucide-check: Predictable cluster behavior across CSPs | :lucide-x: Harder for AppCat to test across different environments |
+| :lucide-check: Easier to implement in a GitOps-first pattern | :lucide-x: Opinionated software and constraints |
+| :lucide-check: Potentially cheaper: a scalable cost model tied to raw compute, not per-cluster service fees | :lucide-x: Unpredictable behavior (e.g., noisy neighbors) |
+| :lucide-check: Streamlined support and troubleshooting model | |
+| :lucide-x: We have to operate the underlying infrastructure ourselves | |
+| :lucide-x: We have to manage the whole stack | |
+
+## Decision
+
+### Consequences
From f5edfc53649c74f4ead1b2ce9a8c4a3bc928f0cd Mon Sep 17 00:00:00 2001
From: Tobias Brunner
Date: Mon, 8 Dec 2025 17:05:39 +0100
Subject: [PATCH 2/3] document decision for talos

---
 docs/ADRs/adr002.md | 35 ++++++++++++++++++++++++++++++++---
 1 file changed, 32 insertions(+), 3 deletions(-)

diff --git a/docs/ADRs/adr002.md b/docs/ADRs/adr002.md
index c34d6be..b684ed0 100644
--- a/docs/ADRs/adr002.md
+++ b/docs/ADRs/adr002.md
@@ -8,13 +8,16 @@ author: Marco De Luca and Tobias Brunner

 ## Context and Problem Statement

-TODO
+We're building a multi-CSP platform that runs VSHN AppCat workloads across different cloud providers. We need to decide how to provision and manage Kubernetes clusters on each CSP. The choice affects operational complexity, cost structure, consistency across environments, and our ability to support AppCat reliably.
+
+Each CSP offers its own managed Kubernetes service (EKS, AKS, GKE, SKS, etc.), but these services differ significantly in behavior, versioning, available features, and upgrade paths. For a platform like Servala, where predictable behavior and consistency are essential for AppCat development and operations, these differences create friction.

 ## Considered Options

-We evaluated two options for running Kubernetes on each CSP: using the CSP's managed Kubernetes offering or deploying our own Kubernetes distribution on top of their compute layer (e.g., Talos Linux). We explicitly decided not to use OpenShift or OKE, mainly due to their high cost, added complexity, and the amount of bloat they introduce for a platform that only needs to run hosted AppCat workloads. Lower infrastructure cost directly benefits Servala and allows us to offer more competitive pricing.
+We evaluated two main options for running Kubernetes on each CSP:

-For Servala, consistency across CSPs and predictable behavior for AppCat are essential. Managed Kubernetes offerings differ significantly between providers, resulting in fragmentation and making AppCat development, testing, and support more difficult. A BYO Kubernetes approach gives us full control over versions, components, and security defaults, enabling a standardized setup across all CSPs.
+1. using the CSP's managed Kubernetes offering
+1. deploying our own Kubernetes distribution on top of the CSP's compute layer (e.g., Talos Linux)

 The table below summarizes the main differences we identified.

@@ -33,6 +36,32 @@ The table below summarizes the main differences we identified.
 | :lucide-x: We have to operate the underlying infrastructure ourselves | |
 | :lucide-x: We have to manage the whole stack | |

+Based on this comparison, BYO Kubernetes is the clear winner.
+
+### Kubernetes distributions
+
+We evaluated the following Kubernetes distributions:
+
+**OpenShift**: Too expensive due to subscription costs and high infrastructure requirements. Adds significant complexity and bloat for a platform that only needs to run hosted AppCat workloads. The overhead doesn't justify the benefits for our use case.
+
+**Rancher**: Past experience with Rancher has been negative. The additional management layer introduces complexity and potential points of failure that we want to avoid.
+
+**k3s**: Lightweight and easy to deploy, but lacks full integration with the underlying operating system. We would still need to manage a traditional Linux distribution separately, which adds operational burden.
+
+**Talos Linux**: Purpose-built for Kubernetes with an immutable, API-driven design. No SSH, no shell, minimal attack surface. The OS and Kubernetes are managed as a single unit with declarative configuration. Produces consistent behavior across all environments.
+
 ## Decision

+We're choosing a BYO Kubernetes approach using Talos Linux across all CSPs.
+
+Talos Linux provides an immutable, API-driven operating system purpose-built for Kubernetes. It eliminates SSH access, uses a declarative configuration model, and produces identical cluster behavior regardless of where it runs. This gives us the consistency and control we need without the operational burden of managing traditional Linux distributions.
+
+Each CSP will run Talos Linux on its compute layer (usually virtual machines). We control the Kubernetes version, component configuration, security defaults, and upgrade cadence. The same cluster configuration works everywhere, which simplifies AppCat development, testing, and support.
+
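+To make "the same cluster configuration works everywhere" concrete, the sketch below shows the layering pattern we have in mind: one shared base cluster definition plus a small per-CSP override. It is illustrative only: the real machine configs are Talos YAML generated and applied with `talosctl`, and the keys, CSP names, and values shown here are hypothetical.
+
+```python
+# Illustrative sketch (not part of the current tooling): layering a shared
+# cluster definition with small per-CSP overrides. Real Talos machine configs
+# are YAML generated via talosctl; all keys and values below are hypothetical.
+import json
+from copy import deepcopy
+
+BASE = {
+    "kubernetesVersion": "1.31.1",   # example value: one version across all CSPs
+    "cni": "cilium",                 # we pick the cluster components ourselves
+    "gitops": True,                  # placeholder flag for the GitOps-first setup
+}
+
+# Per-CSP deltas stay deliberately small; names and sizing keys are made up.
+CSP_OVERRIDES = {
+    "exoscale": {"machineType": "standard.large"},
+    "cloudscale": {"machineType": "flex-16-4"},
+}
+
+def render(csp: str) -> dict:
+    """Merge the shared base with the (small) CSP-specific override."""
+    cfg = deepcopy(BASE)
+    cfg.update(CSP_OVERRIDES[csp])
+    cfg["csp"] = csp
+    return cfg
+
+if __name__ == "__main__":
+    for csp in CSP_OVERRIDES:
+        print(csp, json.dumps(render(csp), indent=2))
+```
+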
 ### Consequences
+
+The operational model shifts from consuming managed services to managing infrastructure. We take on responsibility for Kubernetes upgrades, security patches, and cluster lifecycle management. This requires investment in automation and tooling, but we already have experience and concepts in place from previous work.
+
+In return, we get full control over our platform. AppCat can be developed and tested against a single, predictable Kubernetes environment instead of accounting for the differences between Kubernetes flavors. Troubleshooting becomes easier because cluster behavior is consistent. Cost becomes predictable and tied to compute resources rather than per-cluster service fees.
+
+The security posture improves. Talos Linux has no SSH, no shell, and a minimal attack surface. We define exactly what runs on the nodes. There are no surprises from CSP-specific components or automatic updates we don't control.

From f6aa543dfcd5d708c3126c1a9c9f468d5b699259 Mon Sep 17 00:00:00 2001
From: Tobias Brunner
Date: Tue, 16 Dec 2025 09:20:10 +0100
Subject: [PATCH 3/3] add note about flatcar linux

---
 docs/ADRs/adr002.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/ADRs/adr002.md b/docs/ADRs/adr002.md
index b684ed0..6a419ea 100644
--- a/docs/ADRs/adr002.md
+++ b/docs/ADRs/adr002.md
@@ -48,6 +48,8 @@ We evaluated the following Kubernetes distributions:

 **k3s**: Lightweight and easy to deploy, but lacks full integration with the underlying operating system. We would still need to manage a traditional Linux distribution separately, which adds operational burden.

+**Flatcar Container Linux**: A container-optimized OS forked from CoreOS Container Linux. It provides automatic updates and immutable infrastructure patterns and is designed for running containers. However, it still requires a separate Kubernetes distribution to be installed on top (such as k3s or vanilla Kubernetes via kubeadm), adding another layer to manage. While more secure than traditional Linux distributions, it retains SSH access and a shell, which increases the attack surface compared to Talos.
+
 **Talos Linux**: Purpose-built for Kubernetes with an immutable, API-driven design. No SSH, no shell, minimal attack surface. The OS and Kubernetes are managed as a single unit with declarative configuration. Produces consistent behavior across all environments.

 ## Decision