# EKS Pod Identity for Crossplane AWS Providers
Crossplane offers two flavors of the AWS provider:

- The native provider, maintained by the community: `provider-aws`
  - a single provider that hosts many different AWS services; learn more via the API reference
- The upjet provider, maintained by Upbound: `provider-family-aws`
  - this provider uses a provider-family concept where each cloud service gets a separate provider; learn more here
Both providers support Pod Identity; however, the setup varies slightly. In this blog post, we will show you how to configure EKS Pod Identity using Terraform. The same concept can be applied to other providers that support it.
## Terraform Configuration
> **INFO**
>
> To use EKS Pod Identity, the Pod Identity Agent must be installed on the EKS cluster.
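If the agent is not yet installed, it can be enabled as a managed EKS add-on. A minimal sketch, assuming your cluster is defined elsewhere as `aws_eks_cluster.example`:

```hcl
# Installs the EKS Pod Identity Agent as a managed EKS add-on.
# Assumes the cluster resource is named aws_eks_cluster.example.
resource "aws_eks_addon" "pod_identity_agent" {
  cluster_name = aws_eks_cluster.example.name
  addon_name   = "eks-pod-identity-agent"
}
```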
The service account requires IAM permissions to access AWS resources. The following Terraform snippet creates an IAM policy, an IAM role, and the attachment between them so the service account can access S3.
```hcl
resource "aws_iam_policy" "crossplane_policy" {
  name        = "crossplane-policy"
  description = "Policy for crossplane controller to manage S3 buckets and objects"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:*"
        ]
        Resource = "*" # unrestricted for this demo because Crossplane needs full control; scope this down in production
      }
    ]
  })
}
```
```hcl
data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["pods.eks.amazonaws.com"]
    }

    actions = [
      "sts:AssumeRole",
      "sts:TagSession"
    ]
  }
}

resource "aws_iam_role" "iam_for_s3" {
  name               = "crossplane-s3-role"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

resource "aws_iam_role_policy_attachment" "additional_policies" {
  policy_arn = aws_iam_policy.crossplane_policy.arn
  role       = aws_iam_role.iam_for_s3.name
}
```
Afterward, the following Terraform snippet creates the necessary `aws_eks_pod_identity_association` resources. You'll need to provide the name of your EKS cluster (`aws_eks_cluster.example.name`) and the ARN of the IAM role you want to associate with the service account (`aws_iam_role.iam_for_s3.arn`).
```hcl
# Association for the Upbound S3 provider's service account
resource "aws_eks_pod_identity_association" "crossplane_s3" {
  cluster_name    = aws_eks_cluster.example.name
  namespace       = "crossplane-system"
  service_account = "provider-aws-s3"
  role_arn        = aws_iam_role.iam_for_s3.arn
}

# Association for the native provider's service account
resource "aws_eks_pod_identity_association" "crossplane_native" {
  cluster_name    = aws_eks_cluster.example.name
  namespace       = "crossplane-system"
  service_account = "provider-aws"
  role_arn        = aws_iam_role.iam_for_s3.arn
}
```
You can verify the associations were created by running the following command:

```shell
aws eks list-pod-identity-associations --cluster-name $ClusterName
```
## Crossplane Provider Configuration
> **WARNING**
>
> - If you create the provider before the pod identity association exists, the provider has to be restarted before the configuration takes effect.
> - Since Crossplane v2, Providers are no longer namespaced resources; only ProviderConfigs are.
Below we show how to configure the providers with EKS Pod Identity. The manifests also carry ArgoCD annotations. If you are not using ArgoCD you can skip them, but with ArgoCD they are important: they make sure ArgoCD applies the objects in the right order so your application syncs without issues.
### Upbound
First, create the `Provider` for `provider-family-aws`, which provides the base resources (`ProviderConfig`, ...) for all Upbound AWS providers.

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-family-aws
  annotations:
    argocd.argoproj.io/sync-wave: "1"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  package: xpkg.upbound.io/upbound/provider-family-aws:v1
```
Next, create the `DeploymentRuntimeConfig` for the provider, which specifies the service account name to use. This service account must match the one referenced in the Terraform configuration above.

```yaml
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: provider-aws-pod-id-drc
  annotations:
    argocd.argoproj.io/sync-wave: "2"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  serviceAccountTemplate:
    metadata:
      name: provider-aws-s3
```
Then create the AWS S3 `Provider` and `ProviderConfig`; the `Provider` references the `DeploymentRuntimeConfig`.

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws-s3
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
    argocd.argoproj.io/health-check-timeout: "600s"
spec:
  package: xpkg.upbound.io/upbound/provider-aws-s3:v1
  runtimeConfigRef:
    name: provider-aws-pod-id-drc
---
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: provider-aws-s3
  annotations:
    argocd.argoproj.io/sync-wave: "4"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
    argocd.argoproj.io/health-check-timeout: "600s"
spec:
  credentials:
    source: PodIdentity
```
| Object | Purpose |
|---|---|
| `DeploymentRuntimeConfig` | Allows you to specify values for the provider deployment (node selector, service account name, etc.). |
| `ProviderConfig` | Supplies the credentials source (IRSA, Pod Identity, Secret, etc.) and is the object every managed resource references via `providerConfigRef`. |
| `Provider` | Defines which provider package should be used. |
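With the `ProviderConfig` in place, managed resources can reference it. A minimal sketch of an S3 bucket managed through the Upbound provider (the bucket name and region are assumptions; adjust them to your environment):

```yaml
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: my-crossplane-demo-bucket # hypothetical name; bucket names must be globally unique
spec:
  forProvider:
    region: eu-west-1 # pick your region
  providerConfigRef:
    name: provider-aws-s3
```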
### crossplane-contrib
First, create the `DeploymentRuntimeConfig` for the provider, which specifies the service account name to use. This service account must match the one referenced in the Terraform configuration above.

```yaml
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: provider-aws-native-drc
  annotations:
    argocd.argoproj.io/sync-wave: "1"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  serviceAccountTemplate:
    metadata:
      name: provider-aws
```
Afterward, create the `Provider` for `provider-aws`, which provides all the AWS resources and references the `DeploymentRuntimeConfig`.

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
    argocd.argoproj.io/health-check-timeout: "600s"
spec:
  package: xpkg.crossplane.io/crossplane-contrib/provider-aws:v0.54.2
  runtimeConfigRef:
    name: provider-aws-native-drc
```
Then create the `ProviderConfig` for the AWS provider; this is the configuration instance that managed resources will reference.

```yaml
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: provider-config-aws
  namespace: crossplane-system
  annotations:
    argocd.argoproj.io/sync-wave: "5"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
    argocd.argoproj.io/health-check-timeout: "600s"
spec:
  credentials:
    source: InjectedIdentity # InjectedIdentity instead of PodIdentity for the native provider
```
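As with the Upbound provider, managed resources reference this `ProviderConfig` by name. A minimal sketch for the native provider (the bucket name and region are assumptions; adjust them to your environment):

```yaml
apiVersion: s3.aws.crossplane.io/v1beta1
kind: Bucket
metadata:
  name: my-crossplane-native-bucket # hypothetical name; bucket names must be globally unique
spec:
  forProvider:
    locationConstraint: eu-west-1 # pick your region
    acl: private
  providerConfigRef:
    name: provider-config-aws
```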
## Conclusion
At this point you should have everything working and be able to deploy S3 buckets through Kubernetes using EKS Pod Identity. In future posts I’ll cover how to create composite resource definitions so developers can spin up cloud resources with governance built in.
## Troubleshooting
> **Under Construction**
The provider container should expose the `AWS_CONTAINER_CREDENTIALS_FULL_URI` and `AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE` environment variables, which are injected when the pod's service account matches a Pod Identity association.
The Crossplane provider container does not have a shell, so you'll need a debug container to check the environment variables:

```powershell
kubectl debug -it -n crossplane-system `
  pod/provider-aws-s3-8691ce5b9d4b-d9f586758-2sncd `
  --image=nicolaka/netshoot `
  --target=package-runtime `
  --share-processes `
  -- /bin/bash
```
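Once inside the debug container, the provider process's environment can be read from `/proc` because the process namespace is shared. A sketch (the PID of the provider process may differ; find it with `ps` first):

```shell
# Locate the provider process in the shared process namespace.
ps aux | grep provider

# Dump its environment (replace 1 with the PID found above if needed).
tr '\0' '\n' < /proc/1/environ | grep AWS_CONTAINER
```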