
Decentralised Platform

A decentralised platform architecture distributes control plane components across workload clusters rather than consolidating them in a central management cluster. While I recommend centralised platforms for most production cases, decentralised architectures excel when team autonomy, fault isolation, or regulatory boundaries outweigh the operational overhead of duplication. This approach gives teams local control and reduces blast radius but fragments policy enforcement and increases the cost of maintaining consistency.

Key Characteristics

  • Control plane components (Argo CD, Crossplane, policies) run on each workload cluster, or on a small group of clusters.
  • Each cluster independently reconciles its own state and manages its resources.
  • Teams bootstrap and operate their own GitOps and infrastructure tooling.
  • No central registration or approval step for cluster onboarding.
  • IAM roles and provider configs scoped to each cluster or account.
  • Core cloud guardrails still managed centrally via Terraform or native IaC.

Security Posture and Access Model

Each workload cluster runs its own Crossplane and/or Argo CD instance with scoped IAM roles. This limits the blast radius of a compromise to a single cluster or account. A misconfigured policy or runaway controller does not cascade to other teams, and an attacker who gains access to one cluster controls only that environment, not your entire fleet.
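
On EKS, for example, that scoping is typically done with IRSA: the provider's pod identity maps to an IAM role that exists only in the team's account. A minimal sketch, assuming EKS and a hypothetical team-a-crossplane role (in a packaged Crossplane install the annotation is usually injected through the provider's runtime configuration rather than a hand-authored ServiceAccount):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: provider-aws              # illustrative; the package manager normally names this
  namespace: crossplane-system
  annotations:
    # Hypothetical role scoped to team-a's account and nothing else.
    # Compromising this cluster yields only the permissions of this one role.
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/team-a-crossplane
```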

Policy enforcement becomes fragmented across clusters. You rely on each cluster correctly applying admission controllers and Kyverno rules. Drift detection, audit trails, and compliance evidence are harder to aggregate because there is no single source of truth. The platform team trades central visibility for stronger isolation boundaries. This trade-off makes sense when regulatory or trust boundaries align with cluster boundaries.

Pros and Cons

Pros:

  • Promotes developer flexibility
  • Blast radius for Argo CD or Crossplane incidents is limited to a single account
  • Failure isolation: a bad policy does not impact other teams
  • Local autonomy speeds up niche use cases (edge, GPU, regulated enclaves)
  • High team autonomy and faster iteration
  • Tailored controller versions per cluster (if really needed)
  • Easier to pilot emerging tech without risking the core platform

Cons:

  • Fragmented policy enforcement and weaker drift detection
  • Duplicated tooling (many Argo CD, Crossplane, and observability stacks)
  • Higher operational overhead (patching and upgrades everywhere)
  • Harder to produce a unified audit trail and inventory
  • Inconsistent developer experience across teams
  • Version skew risks and uneven rollout of fixes
  • More TLS endpoints, backups, and upgrades to manage

Best for: Team autonomy and fault isolation

Choose this pattern when strict fault isolation is required (regulated data domains, air-gapped edge fleets). It works well when team autonomy and local iteration speed matter more than consistency. A decentralised approach makes sense when you have strong governance elsewhere (landing zones, cloud policies, network boundaries) and the operational cost of duplicating tooling is acceptable for your team size. It is also a good fit when experimenting with platform patterns in small teams before standardising centrally as your team grows.

Crossplane Pattern (Local Control with Policy Guardrails)

Decentralised Crossplane Architecture

In a decentralised setup, each workload cluster runs its own Crossplane instance. This gives teams local control over their infrastructure provisioning without depending on a central management cluster. The trade-off is that you need stronger policy enforcement at each cluster to prevent teams from creating non-compliant resources.

Local Crossplane Deployment

The Crossplane operator runs directly on each workload cluster. Provider credentials are scoped to that cluster's service account; the underlying IAM resources are typically created by the platform's IaC module. If teams only provision resources within their own cloud account, a single ProviderConfig per cluster is enough.
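
A minimal sketch of that per-cluster wiring, assuming the Upbound AWS provider with IRSA-based credentials (names are illustrative):

```yaml
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    # The provider pod assumes the IAM role bound to its service account,
    # so no long-lived cloud secrets are stored in the cluster.
    source: IRSA
```

Because the role behind the service account lives in the team's own account, this single ProviderConfig is both the default and the boundary.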

Policy-Based Guardrails

Without central RBAC limiting what gets applied, you need admission controllers to enforce compliance. Use Kyverno or similar tools to restrict cloud resource creation to approved XRD claims or specific native cloud CRDs. This prevents teams from bypassing compositions and creating unmanaged resources directly.
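
As a sketch of what such a guardrail can look like with Kyverno, the policy below rejects managed resources that were not created through a composition, keying off the crossplane.io/composite label that Crossplane sets on composed resources. The kinds listed are illustrative for the Upbound AWS provider; extend the list to match the providers you install:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-composition-ownership
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: block-direct-managed-resources
      match:
        any:
          - resources:
              kinds:
                # Illustrative: list the managed-resource kinds your providers install
                - s3.aws.upbound.io/v1beta1/Bucket
                - rds.aws.upbound.io/v1beta1/Instance
      validate:
        message: >-
          Create cloud resources through approved claims;
          direct managed resources are not allowed.
        pattern:
          metadata:
            labels:
              # Set by Crossplane on resources composed via an XR; "?*" means non-empty
              crossplane.io/composite: "?*"
```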

The policy layer becomes critical here. Each cluster must correctly apply and maintain these rules, or teams could drift from your standards.

The Result

Teams get infrastructure on-demand with full local control. The platform maintains governance through compositions and admission policies rather than centralized RBAC. If one cluster is compromised or misconfigured, the blast radius stays contained to that account.

Argo CD Pattern (Distributed GitOps Control)

Decentralised Argo CD Architecture

Each workload cluster runs its own Argo CD instance, giving teams complete control over their GitOps workflow. There's no central registration or approval process. Teams bootstrap and operate their own instances according to shared guidelines.

Local Deployment Model

Run one Argo CD instance per workload cluster or per team cluster. Teams own the full lifecycle: installation, upgrades, backup, and recovery. Disaster recovery becomes a per-cluster concern since there's no single source of truth. You'll need documented bootstrap scripts and runbooks for each team to follow.
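
A common bootstrap shape is a single app-of-apps Application applied right after installing Argo CD, after which everything else syncs from the team's repository. A sketch, with a hypothetical repository URL:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-bootstrap
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/team-a-gitops.git  # hypothetical
    targetRevision: main
    path: bootstrap
  destination:
    # Each decentralised instance deploys only to its own cluster.
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```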

Advantages of Local Control

Teams can tune Argo CD to their specific needs without waiting on a central platform team. Want to test a new sync strategy or adjust health checks for your stack? Just do it. This reduces bottlenecks and lets the platform team focus on shared guardrails rather than fielding Argo CD configuration requests.
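
For example, registering a custom health check is a local edit to the argocd-cm ConfigMap instead of a request to a central team. A simplified sketch of the documented Lua-based health check pattern, here for cert-manager Certificates:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Key format: resource.customizations.health.<group>_<Kind>
  resource.customizations.health.cert-manager.io_Certificate: |
    hs = {}
    hs.status = "Progressing"
    hs.message = "Waiting for certificate"
    if obj.status ~= nil and obj.status.conditions ~= nil then
      for _, condition in ipairs(obj.status.conditions) do
        if condition.type == "Ready" and condition.status == "True" then
          hs.status = "Healthy"
          hs.message = "Certificate is ready"
        end
      end
    end
    return hs
```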

The Trade-Offs

You'll see inconsistent RBAC and fragmented policies across instances, making audits harder. Version skew becomes a real problem as teams upgrade on different schedules. Every instance adds more operational overhead: more TLS certs, backups, monitoring endpoints, and upgrade paths to manage.

Global changes like adopting a new repository credential standard require coordination across all teams. The operational burden shifts left to development teams, so they need platform-level skills to run this well.

Reducing the Pain

Publish a standard Argo CD base Kustomize overlay that teams can import and customize. Maintain a shared repository with reusable Application manifests and health checks. Document GitHub App or OIDC integration patterns once, then let teams copy the approach.
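
With a published base, a team's overlay can stay small. A sketch, assuming the platform team hosts a versioned base at a hypothetical URL:

```yaml
# kustomization.yaml in the team's cluster repository
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
  # Hypothetical shared base, pinned to a release tag
  - https://github.com/example-org/argocd-base//overlays/default?ref=v2.3.0
patches:
  # Team-specific tweaks layered over the shared defaults
  - path: argocd-cm-patch.yaml
```

Pinning to a tag keeps upgrades deliberate; teams pull new base versions on their own schedule, which is exactly the version-skew trade-off described above.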

These shared resources won't eliminate the duplication, but they'll reduce how much each team reinvents.

When This Makes Sense

I only recommend decentralised Argo CD when team autonomy gains outweigh the cost of duplication. It's the right choice when strict isolation is required (regulated data domains, air-gapped edge fleets). Development teams running this pattern need to be comfortable with platform engineering tasks, not just application development.