add adr001
parent 35260a218d
commit cf24cec7b7
2 changed files with 64 additions and 4 deletions

@@ -1,20 +1,80 @@
---
status: draft
date: 2025-12-05
author: max
author: Marco De Luca
---

# ADR 001 Test
# ADR 001 Control Plane Architecture

## Context and Problem Statement

TODO

Servala currently uses AppCat's split control plane architecture. The split control plane consists of a control cluster and service clusters. In this model, Crossplane resources are created on the control cluster, where Crossplane is installed, and service clusters act as executors where plain Kubernetes resources get scheduled.
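
For illustration only (all names here are hypothetical), a service instance in the split model is typically represented by Crossplane managed resources on the control cluster, for example a provider-helm `Release` whose provider config points at the target service cluster:

```yaml
# Hypothetical sketch of the split model: this object lives on the control
# cluster, while the resulting Helm release runs on a service cluster.
apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: my-postgresql-instance
spec:
  providerConfigRef:
    name: service-cluster-1   # ProviderConfig with that service cluster's kubeconfig
  forProvider:
    namespace: my-instance-namespace
    chart:
      repository: https://charts.example.com
      name: postgresql
      version: "1.2.3"
```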

The control cluster needs kubeconfig credentials to every service cluster, and service clusters need kubeconfig credentials back to the control cluster. This creates two-way cluster dependencies, which makes development, operations, and architecture increasingly complex.
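
As a rough sketch of what that means in practice (names are hypothetical): for every service cluster, the control cluster has to carry a manually generated kubeconfig Secret plus a provider config that references it, along these lines:

```yaml
# Hypothetical: one such pair exists on the control cluster per service cluster.
apiVersion: v1
kind: Secret
metadata:
  name: service-cluster-1-kubeconfig
  namespace: crossplane-system
type: Opaque
stringData:
  kubeconfig: |
    # manually generated kubeconfig for service-cluster-1, synced from Vault
---
apiVersion: kubernetes.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: service-cluster-1
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: service-cluster-1-kubeconfig
      key: kubeconfig
```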

We're building a sovereign, multi-CSP platform. The control model needs to be simpler and more predictable. We need to decide: keep the split architecture, or move orchestration to the Portal?

## Considered Options

TODO

### Option 1: Keep the Split Control Plane Architecture

The split control plane idea comes from [AppCat](https://kb.vshn.ch/app-catalog/control-plane/split-architecture.html).

The control plane needs kubeconfigs to N service clusters, and each service cluster needs a kubeconfig back to the control plane. Due to how [Project Syn](https://syn.tools/) works, we can't create these kubeconfigs at compile time; they have to be generated manually and put into Vault every time we add a new service cluster.

Managing provider configs for Helm and Kubernetes gets complicated because they have to connect to service clusters from the control cluster. Connection details only exist on the control cluster and don't propagate to service clusters, so instead of writing connection details directly into instance namespaces, we have to wrap them into separate secrets and deploy them via Crossplane provider-kubernetes.
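
Roughly, instead of creating the connection Secret directly in the instance namespace, the control cluster has to wrap it in a provider-kubernetes `Object` so it gets applied on the service cluster; a minimal sketch with hypothetical names:

```yaml
# Hypothetical: lives on the control cluster, materializes a Secret on the service cluster.
apiVersion: kubernetes.crossplane.io/v1alpha2
kind: Object
metadata:
  name: my-instance-connection
spec:
  providerConfigRef:
    name: service-cluster-1        # ProviderConfig holding that cluster's kubeconfig
  forProvider:
    manifest:
      apiVersion: v1
      kind: Secret
      metadata:
        name: my-instance-connection
        namespace: my-instance-namespace
      stringData:
        POSTGRESQL_PASSWORD: placeholder   # copied from the connection details on the control cluster
```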

The deletion protection webhooks need to check against composites on the control cluster, which means service clusters need reverse connectivity back to the control plane for webhook lookups. On top of that, developers have to constantly remember whether they're dealing with Crossplane Managed Resources (control cluster) or plain Kubernetes resources (service cluster). Some objects, like `ProviderConfigs` and `Usages`, don't implement the `resource.Managed` interface, which makes this even harder.

Pros:

- Clean separation between orchestration and execution
- Flexible in concept

Cons:

- Manual kubeconfig generation and Vault management for every new cluster
- Complex provider config management across clusters
- Connection details require wrapping through provider-kubernetes
- Service clusters need reverse connectivity for webhooks
- Resource placement is confusing and inconsistent
- No real operational benefit for Servala's use cases

### Option 2: Move Orchestration to the Servala Portal

In this model, the Portal becomes the single orchestration point. Each workload cluster has Crossplane installed and operates independently as a complete execution environment. The Portal handles all scheduling, CSP selection, and environment management, and decides which cluster should provision resources. Clusters only need to be reachable by the Portal.

This simplifies the model: users interact with the Portal, the Portal decides where and how to provision resources, and clusters execute those decisions with their own Crossplane installation. No bidirectional dependencies, no credential management between clusters, no confusion about where resources belong.
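
As a rough contrast (hypothetical names again), a converged cluster's providers can authenticate with their own in-cluster service account, so no externally managed kubeconfigs are needed at all:

```yaml
# Hypothetical: on a converged cluster, provider-kubernetes talks to its own API server.
apiVersion: kubernetes.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: in-cluster
spec:
  credentials:
    source: InjectedIdentity   # use the provider pod's service account, no kubeconfig Secret
```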

Pros:

- Single orchestration authority
- Drastically reduced infrastructure and operational complexity
- No bidirectional cluster dependencies
- Simpler AppCat integration with predictable behavior
- Easier cluster onboarding across multiple CSPs
- Fewer moving parts and failure points
- Each cluster is self-contained with its own Crossplane

Cons:

- Portal becomes critical infrastructure for all orchestration
- We lose some theoretical flexibility (though we're not actually using it)
- Per-cluster resource footprint is bigger

## Decision

We're abandoning the split control plane architecture in favor of a converged model.

The Portal will continue doing what it already does. It decides where to create resources based on CSP selection and environment requirements. What changes is the cluster architecture:

- Each cluster has Crossplane installed and operates as a converged control and execution plane
- Clusters are self-contained, with no split between control and service clusters
- No bidirectional dependencies between clusters
- No reverse connectivity needed

![Control plane architecture](images/control-plane-architecture.png)

### Consequences

The architecture gets way simpler. We eliminate an entire class of problems around bidirectional control plane integration. All clusters behave consistently regardless of which CSP they're running on. Onboarding becomes straightforward. AppCat doesn't need conditional logic for different control plane models anymore. Fewer moving parts means less operational overhead and fewer failure points. The mental model becomes clear: Portal orchestrates, clusters execute.

BIN docs/ADRs/images/control-plane-architecture.png: new binary file, 94 KiB (not shown)