# Argo CD Agent for Onboarding ARO Clusters into a GitOps Control Plane
ARO doesn't support the declarative `argocd-k8s-auth` pattern that works with AKS. This is a fundamental authentication difference, not a missing feature. The new Argo CD agent solves this cleanly by flipping the connection model from push to pull.
## Why AKS's `execProvider` Doesn't Work with ARO
When onboarding AKS clusters to Argo CD, you can use `argocd-k8s-auth azure` as an `execProvider` in the cluster secret. This works because the AKS API server natively validates Entra ID (AAD) tokens. Argo CD presents an AAD token, AKS accepts it, done.
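For contrast, this is roughly what the AKS pattern looks like — a cluster secret sketch with a placeholder name and API server URL, a redacted CA bundle, and a trimmed `env` block (the exact variables depend on your login method; see the Argo CD docs for the full set):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aks-prod
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: aks-prod
  server: https://<aks-api-server>:443   # placeholder
  config: |
    {
      "execProviderConfig": {
        "command": "argocd-k8s-auth",
        "args": ["azure"],
        "apiVersion": "client.authentication.k8s.io/v1beta1",
        "env": { "AAD_LOGIN_METHOD": "workloadidentity" }
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded CA bundle>"
      }
    }
```

No long-lived kube credential is stored anywhere; the AAD token is fetched at request time. Point this same pattern at ARO and the API server rejects the token.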
ARO doesn't work this way.
OpenShift's API server authenticates through the OpenShift OAuth server, not directly against Entra ID. While you can configure OpenShift OAuth to use Entra ID as an identity provider for human logins, that doesn't help with non-interactive GitOps automation. The API server expects an OpenShift OAuth token (typically from a ServiceAccount) or a client certificate, not a raw AAD token.
This isn't a limitation of ARO. It's how OpenShift authentication is designed.
## The Azure RBAC Trap
You might think assigning Azure RBAC roles on the ARO resource would solve this:
resource "azurerm_role_assignment" "argo_to_aro" {
scope = azurerm_redhat_openshift_cluster.aro.id
role_definition_name = "Contributor"
principal_id = azurerm_user_assigned_identity.argocd.principal_id
}
This doesn't grant Kubernetes API access.
That role assignment lets your workload identity call Azure ARM APIs like `az aro show` or `az aro list-credentials`. It doesn't authenticate you to the OpenShift/Kubernetes API server running inside ARO. Those are two separate authentication domains.
## Classic Workaround: ServiceAccount Tokens + External Secrets
Before the agent, the standard pattern was:

- Create a ServiceAccount on the ARO cluster with appropriate RBAC (see the sketch after this list)
- Mint a token using `oc create token` or the TokenRequest API
- Store `serverUrl`, `caCert`, and `saToken` in Key Vault or Vault
- Use an ExternalSecret to materialize the Argo CD cluster secret with `bearerToken` auth
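A minimal sketch of the first step, with illustrative names (`argocd-manager` in `kube-system`) and a deliberately broad binding you would scope down in practice:

```yaml
# ServiceAccount the central Argo CD acts as on the ARO cluster
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-manager
  namespace: kube-system
---
# cluster-admin is for illustration only; restrict this to the
# namespaces and resources Argo CD actually manages
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: argocd-manager
    namespace: kube-system
```

`oc create token argocd-manager -n kube-system --duration=8760h` then mints the token for the second step.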
The ExternalSecret for the final step:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: aro-cluster
  namespace: argocd
spec:
  secretStoreRef:
    name: vault
  target:
    template:
      type: Opaque
      metadata:
        labels:
          argocd.argoproj.io/secret-type: cluster
      data:
        name: aro-prod
        server: "{{ .serverUrl }}"
        config: |
          {
            "bearerToken": "{{ .saToken }}",
            "tlsClientConfig": {
              "insecure": false,
              "caData": "{{ .caCert }}"
            }
          }
  data:
    - secretKey: serverUrl
      remoteRef: { key: aro-prod/serverUrl }
    - secretKey: caCert
      remoteRef: { key: aro-prod/caCert }
    - secretKey: saToken
      remoteRef: { key: aro-prod/saToken }
```
This works. It's fully declarative. But you're still managing credentials for the Kubernetes API, and you need rotation logic for those tokens (one possible shape of that logic is sketched below).
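Rotation can live on the ARO cluster itself. A sketch, assuming a rotator ServiceAccount with RBAC to create tokens, an image containing the `oc` and `az` CLIs, and pod identity already authorized against Key Vault — image, vault name, and secret naming are all placeholders that must match your store's `remoteRef` keys:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: rotate-argocd-token
  namespace: kube-system
spec:
  schedule: "0 3 * * 0"                      # weekly, Sunday 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: token-rotator  # needs "create" on serviceaccounts/token
          restartPolicy: OnFailure
          containers:
            - name: rotate
              image: registry.example.com/oc-az-cli:latest  # assumed image with oc + az
              command:
                - /bin/sh
                - -c
                - |
                  set -eu
                  # Mint a token valid for two rotation periods so there's overlap
                  TOKEN=$(oc create token argocd-manager -n kube-system --duration=336h)
                  # Push to Key Vault; the ExternalSecret above picks it up on refresh
                  az keyvault secret set --vault-name my-kv \
                    --name aro-prod-saToken --value "$TOKEN" >/dev/null
```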
## The Agent Approach: Pull Instead of Push
The Argo CD Cluster Agent (available in Argo CD v2.10+ and OpenShift GitOps 1.17+) changes the connection model entirely.
| Mode | Connection Direction | Authentication Requirements |
|---|---|---|
| Push (classic) | Central Argo CD → remote kube-API | Needs kubeconfig/SA token per cluster |
| Pull (agent) | Agent → central Argo CD | No kube-API credentials stored centrally |
Instead of Argo CD connecting to each cluster's API server, a lightweight agent runs inside the target cluster and opens an outbound HTTPS/gRPC connection to the central Argo CD instance.
## Why This Solves the ARO Problem
The agent eliminates the need for Argo CD to authenticate to the OpenShift API at all.
The agent already runs inside ARO with cluster-local ServiceAccount credentials. It handles all Kubernetes operations locally. The central Argo CD instance just tells it what to do over the secure tunnel.
This means:
- No cluster secrets in the central Argo CD namespace
- No token rotation logic to maintain
- Works through firewalls and private networks (outbound only)
- Scales better for multi-cluster setups (10s to 100s of clusters)
## Setting Up the Agent
**On the central Argo CD instance (AKS):**
Enable the cluster agent feature in the `argocd-cm` ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  cluster.agent.enabled: "true"
```
Make sure Argo CD is reachable from ARO, either via a public endpoint or Azure Private Link (one option is sketched below).
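If you take the Private Link route, one option is to have the Azure cloud provider create a Private Link Service in front of an internal load balancer for `argocd-server` — a sketch assuming the standard Argo CD labels and the cloud-provider-azure PLS annotations:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server-pls
  namespace: argocd
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-pls-create: "true"
    service.beta.kubernetes.io/azure-pls-name: "argocd-pls"   # placeholder name
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: argocd-server
  ports:
    - name: https
      port: 443
      targetPort: 8080   # argocd-server's default listener
```

The ARO side then reaches Argo CD through a Private Endpoint in its own VNet pointed at that Private Link Service.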
**On the ARO cluster:**
Deploy the agent with a simple manifest:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ClusterAgent
metadata:
  name: aro-agent
  namespace: argocd
spec:
  serverAddr: https://argocd.yourdomain.com
  authToken: "<token-from-central-argocd>"
  clusterName: aro-prod
```
The `authToken` is generated by the central Argo CD instance and authenticates the agent's outbound connection. Once applied, the ARO cluster appears in `argocd cluster list` automatically.
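One caveat before committing that manifest anywhere: a literal `authToken` in Git is a secret leak. If your agent version supports reading the token from a Secret reference (an assumption here — verify against your version's CRD), you can reuse the External Secrets pattern from earlier to keep the token out of the repo:

```yaml
# Hypothetical: materialize the agent token from the vault store so the
# ClusterAgent manifest in Git can reference a Secret instead of a literal.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: aro-agent-token
  namespace: argocd
spec:
  secretStoreRef:
    name: vault
  target:
    name: aro-agent-token
  data:
    - secretKey: authToken
      remoteRef: { key: aro-prod/agentToken }   # placeholder key
```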
## Making It Fully Declarative
Store that ClusterAgent manifest in Git alongside your other cluster configuration. You can bootstrap it using:
- A separate GitOps operator on ARO, like Red Hat OpenShift GitOps (sketched below)
- A one-time `oc apply` during cluster provisioning
- An init Job in your cluster bootstrap pipeline
Since the manifest is in Git and only needs to be applied once during cluster setup, it fits naturally into infrastructure-as-code workflows.
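For the first option, a minimal bootstrap can be an Application on the ARO-local OpenShift GitOps instance that syncs the directory holding the ClusterAgent manifest. Repo URL, path, and namespaces below are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: aro-agent-bootstrap
  namespace: openshift-gitops       # default OpenShift GitOps namespace
spec:
  project: default
  source:
    repoURL: https://github.com/yourorg/cluster-config.git
    targetRevision: main
    path: clusters/aro-prod/agent   # directory containing the ClusterAgent manifest
  destination:
    server: https://kubernetes.default.svc   # apply to the local (ARO) cluster
    namespace: argocd
  syncPolicy:
    automated:
      selfHeal: true
```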
## When to Use Each Approach
**Use the agent when:**
- You're managing multiple ARO/OpenShift clusters
- You want to minimize stored credentials
- Your clusters are behind firewalls or in private networks
- You're comfortable with Tech Preview features (as of late 2025)
**Use ServiceAccount tokens when:**
- You need production-stable features only
- You have a small number of clusters to manage
- You already have robust secret rotation infrastructure
- Your security model requires central credential storage/auditing