Merge pull request 'ADR 002 Kubernetes Distribution' (#2) from adr/k8s-distro into main
Reviewed-on: #2
Reviewed-by: Marco De Luca <marco.deluca@noreply.servala.app.codey.ch>
commit e723d36f49
1 changed file with 69 additions and 0 deletions

docs/ADRs/adr002.md (new file)

@@ -0,0 +1,69 @@
---
status: draft
date: 2025-12-05
author: Marco De Luca and Tobias Brunner
---

# ADR 002 Kubernetes Distribution

## Context and Problem Statement

We're building a multi-CSP platform that runs VSHN AppCat workloads across different cloud providers. We need to decide how to provision and manage Kubernetes clusters on each CSP. The choice affects operational complexity, cost structure, consistency across environments, and our ability to support AppCat reliably.

Each CSP offers its own managed Kubernetes service (EKS, AKS, GKE, SKS, etc.), but these services differ significantly in behavior, versioning, available features, and upgrade paths. For a platform like Servala, where predictable behavior and consistency are essential for AppCat development and operations, these differences create friction.

## Considered Options

We evaluated two main options for running Kubernetes on each CSP:

1. Using the CSP's managed Kubernetes offering.
2. Deploying our own Kubernetes distribution on top of the CSP's compute layer (e.g., Talos Linux).

The table below summarizes the main differences we identified.

| BYO Kubernetes | Cloud Kubernetes |
| --- | --- |
| :lucide-check: We have a lot already in place (concept) | :lucide-check: Pre-initialized out of the box |
| :lucide-check: Standardized infrastructure across CSPs | :lucide-check: Potentially better integrated |
| :lucide-check: Easier for AppCat to develop and test | :lucide-check: Pre-defined upgrade paths |
| :lucide-check: Full control over versions, upgrade cadence and feature gates | :lucide-check: Support from the CSP |
| :lucide-check: Freedom of choice of cluster components | :lucide-x: Limited flexibility |
| :lucide-check: Potentially better security | :lucide-x: Inconsistency across CSPs (different Kubernetes flavors, versions, CRDs, API feature gates) |
| :lucide-check: Predictable cluster behavior across CSPs | :lucide-x: Harder for AppCat to test across different environments |
| :lucide-check: Easier to implement in a GitOps-first pattern | :lucide-x: Opinionated software and constraints |
| :lucide-check: Potentially cheaper, scalable cost model tied to raw compute rather than per-cluster service fees | :lucide-x: Unpredictable behavior (e.g., noisy neighbors) |
| :lucide-check: Streamlined support and troubleshooting model | |
| :lucide-x: We have to operate the underlying infrastructure ourselves | |
| :lucide-x: We have to manage the whole stack ourselves | |

Based on this comparison, BYO Kubernetes is the clear winner.

### Kubernetes distributions

We evaluated the following Kubernetes distributions:

**OpenShift**: Too expensive due to subscription costs and high infrastructure requirements. Adds significant complexity and bloat for a platform that only needs to run hosted AppCat workloads. The overhead doesn't justify the benefits for our use case.

**Rancher**: Past experience with Rancher has been negative. The additional management layer introduces complexity and potential points of failure that we want to avoid.

**k3s**: Lightweight and easy to deploy, but lacks full integration with the underlying operating system. We would still need to manage a traditional Linux distribution separately, which adds operational burden.

**Flatcar Container Linux**: A container-optimized OS forked from CoreOS Container Linux. Provides automatic updates, immutable infrastructure patterns, and is designed for running containers. However, it still requires a separate Kubernetes distribution to be installed on top (like k3s or kubeadm), adding another layer to manage. While more secure than traditional Linux distributions, it retains SSH access and a shell, which increases the attack surface compared to Talos.

**Talos Linux**: Purpose-built for Kubernetes with an immutable, API-driven design. No SSH, no shell, minimal attack surface. The OS and Kubernetes are managed as a single unit with declarative configuration. Produces consistent behavior across all environments.

## Decision

We're choosing a BYO Kubernetes approach using Talos Linux across all CSPs.

Talos Linux provides an immutable, API-driven operating system purpose-built for Kubernetes. It eliminates SSH access, uses a declarative configuration model, and produces identical cluster behavior regardless of where it runs. This gives us the consistency and control we need without the operational burden of managing traditional Linux distributions.
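
To illustrate the declarative model, a minimal Talos machine configuration might look like the sketch below. The cluster name, endpoint, and disk are placeholders, not our actual settings:

```yaml
# Illustrative sketch of a Talos machine configuration (placeholder values).
version: v1alpha1
machine:
  type: controlplane
  install:
    disk: /dev/vda                 # target disk on the CSP's VM
cluster:
  clusterName: servala-example     # hypothetical cluster name
  controlPlane:
    endpoint: https://203.0.113.10:6443
```

The entire node, OS and Kubernetes alike, is described by files like this, which is what makes a GitOps-first pattern straightforward.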

Each CSP will run Talos Linux on its compute layer (usually virtual machines). We control the Kubernetes version, component configuration, security defaults, and upgrade cadence. The same cluster configuration works everywhere, which simplifies AppCat development, testing, and support.
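
As a sketch of how such a cluster could be brought up with Talos's CLI (the endpoint and node IPs here are hypothetical), the same few commands work identically on any CSP's VMs:

```shell
# Generate declarative machine configs for a new cluster (illustrative values).
talosctl gen config servala-example https://203.0.113.10:6443

# Apply the generated config to a freshly booted Talos node.
talosctl apply-config --insecure --nodes 203.0.113.10 --file controlplane.yaml

# Bootstrap etcd on the first control-plane node, then fetch a kubeconfig.
talosctl bootstrap --nodes 203.0.113.10
talosctl kubeconfig --nodes 203.0.113.10
```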

### Consequences

The operational model shifts from consuming managed services to managing infrastructure. We take on responsibility for Kubernetes upgrades, security patches, and cluster lifecycle management. This requires investment in automation and tooling, but we already have experience and concepts in place from previous work.
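
Those upgrade responsibilities map onto Talos's API rather than SSH sessions; a sketch with hypothetical versions and node IPs:

```shell
# Upgrade the OS image on a node; Talos swaps the immutable root image atomically.
talosctl upgrade --nodes 203.0.113.10 \
  --image ghcr.io/siderolabs/installer:v1.8.0

# Upgrade Kubernetes itself across the cluster.
talosctl upgrade-k8s --to 1.31.0
```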

In return, we get full control over our platform. AppCat can be developed and tested against a single, predictable Kubernetes environment instead of accounting for differences among Kubernetes flavors. Troubleshooting becomes easier because cluster behavior is consistent. Cost becomes predictable and tied to compute resources rather than per-cluster service fees.

The security posture improves. Talos Linux has no SSH, no shell, and a minimal attack surface. We define exactly what runs on the nodes. There are no surprises from CSP-specific components or automatic updates we don't control.