Self-Managed Installation with Helm

This guide shows you how to install ACK controllers yourself using Helm. This method works on any Kubernetes cluster and gives you full control over controller configuration.

Prerequisites

Before you begin, make sure you have:

  • Kubernetes cluster (any Kubernetes 1.20+, including EKS, GKE, AKS, kind, minikube)
  • kubectl installed and configured
  • Helm 3.8+ installed (installation guide)
  • AWS account with appropriate permissions
  • AWS CLI installed and configured (optional, but helpful)

Works on any Kubernetes

ACK works on any Kubernetes cluster, not just Amazon EKS. You can run it on GKE, AKS, on-premises clusters, or even local development clusters like kind or minikube.

Quick Start: Create a DynamoDB Table

We'll use the DynamoDB controller for this tutorial because it's simple, free-tier eligible, and easy to verify in the AWS Console.

Step 1: Install the DynamoDB Controller

First, find the latest version and install the controller using Helm:

# Set variables
export SERVICE=dynamodb
export RELEASE_VERSION=$(curl -sL https://api.github.com/repos/aws-controllers-k8s/${SERVICE}-controller/releases/latest | jq -r '.tag_name | ltrimstr("v")')
export ACK_SYSTEM_NAMESPACE=ack-system
export AWS_REGION=us-west-2

# Log in to ECR Public
aws ecr-public get-login-password --region us-east-1 | \
  helm registry login --username AWS --password-stdin public.ecr.aws

# Install the controller
helm install --create-namespace -n $ACK_SYSTEM_NAMESPACE \
  ack-$SERVICE-controller \
  oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart \
  --version=$RELEASE_VERSION \
  --set=aws.region=$AWS_REGION

Change the region

Replace us-west-2 with your preferred AWS region.
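The `RELEASE_VERSION` lookup above asks the GitHub API for the latest release and strips the leading `v` from the tag name. The same transformation in pure shell, using a hypothetical tag value for illustration:

```shell
# Hypothetical tag_name value as returned by the GitHub releases API
TAG='v1.5.0'

# Strip the leading "v", mirroring jq's ltrimstr("v")
RELEASE_VERSION="${TAG#v}"

echo "$RELEASE_VERSION"   # prints 1.5.0
```

Helm requires the bare version number (without the `v` prefix) when pulling the chart, which is why the tag is trimmed.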

Verify the controller is running:

kubectl get pods -n ack-system

You should see output like:

NAME                                      READY   STATUS    RESTARTS   AGE
ack-dynamodb-controller-6d8f7c4b9-xk7zp   1/1     Running   0          30s
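If you want to script this check rather than eyeball it, you can parse the `kubectl get pods` output. A minimal sketch using a canned output line (the pod name is hypothetical):

```shell
# Canned output line from `kubectl get pods -n ack-system` (hypothetical pod name)
POD_LINE='ack-dynamodb-controller-6d8f7c4b9-xk7zp 1/1 Running 0 30s'

# The third whitespace-separated column is the pod STATUS
STATUS=$(echo "$POD_LINE" | awk '{print $3}')

[ "$STATUS" = "Running" ] && echo "controller is running"
```

Against a live cluster, `kubectl wait --for=condition=Available deployment/ack-dynamodb-controller -n ack-system --timeout=120s` blocks until the deployment is ready, which is usually more robust than parsing text output.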

Step 2: Configure AWS Permissions

The controller needs AWS credentials to create and manage resources. This guide uses EKS Pod Identity; for other options, such as IAM Roles for Service Accounts (IRSA), see the ACK documentation.

Recommended for EKS clusters

EKS Pod Identity is the simplest and most secure way to provide AWS permissions to your controller.

# Gather account and cluster details
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export CLUSTER_NAME=your-cluster-name  # replace with your EKS cluster name

# Create IAM role with trust policy for EKS Pod Identity
aws iam create-role \
  --role-name ack-${SERVICE}-controller \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }]
  }'

# Attach DynamoDB policy
aws iam attach-role-policy \
  --role-name ack-${SERVICE}-controller \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess

# Create Pod Identity Association
aws eks create-pod-identity-association \
  --cluster-name $CLUSTER_NAME \
  --namespace $ACK_SYSTEM_NAMESPACE \
  --service-account ack-${SERVICE}-controller \
  --role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/ack-${SERVICE}-controller

# Restart the controller to pick up the association
kubectl rollout restart deployment -n $ACK_SYSTEM_NAMESPACE \
  ack-${SERVICE}-controller

Easiest setup

EKS Pod Identity requires no OIDC provider setup and works out of the box on EKS 1.24+.
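If the association doesn't seem to take effect, a common culprit is a malformed trust policy. A quick local sanity check with jq, using the same policy document as above inlined for illustration:

```shell
# The trust policy from the create-role call above
TRUST_POLICY='{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "pods.eks.amazonaws.com"},
    "Action": ["sts:AssumeRole", "sts:TagSession"]
  }]
}'

# EKS Pod Identity requires the pods.eks.amazonaws.com service principal
PRINCIPAL=$(echo "$TRUST_POLICY" | jq -r '.Statement[0].Principal.Service')

[ "$PRINCIPAL" = "pods.eks.amazonaws.com" ] && echo "trust policy looks correct"
```

To inspect the live role instead, fetch its document with `aws iam get-role --role-name ack-dynamodb-controller --query 'Role.AssumeRolePolicyDocument'`.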

Step 3: Create Your First AWS Resource

Now let's create a DynamoDB table! Save this manifest as my-table.yaml:

apiVersion: dynamodb.services.k8s.aws/v1alpha1
kind: Table
metadata:
  name: my-first-ack-table
spec:
  tableName: my-first-ack-table
  billingMode: PAY_PER_REQUEST
  attributeDefinitions:
    - attributeName: id
      attributeType: S
  keySchema:
    - attributeName: id
      keyType: HASH
  tags:
    - key: environment
      value: development
    - key: managed-by
      value: ack

Apply the manifest:

kubectl apply -f my-table.yaml

Step 4: Verify the Resource

Check the table status in Kubernetes:

# Get the table resource
kubectl get tables

# Describe for more details
kubectl describe table my-first-ack-table

You should see output like:

NAME                 STATUS
my-first-ack-table   ACTIVE

Verify in AWS Console or using AWS CLI:

aws dynamodb describe-table --table-name my-first-ack-table --region $AWS_REGION
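To use the CLI check in a script, extract just the status field. The sketch below runs jq against a canned, abridged `describe-table` response:

```shell
# Abridged describe-table response, for illustration only
RESPONSE='{"Table": {"TableName": "my-first-ack-table", "TableStatus": "ACTIVE"}}'

# Pull out the table status
STATUS=$(echo "$RESPONSE" | jq -r '.Table.TableStatus')

echo "$STATUS"   # prints ACTIVE
```

Against AWS directly, the equivalent one-liner is `aws dynamodb describe-table --table-name my-first-ack-table --query 'Table.TableStatus' --output text`.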

Step 5: Update the Resource

Let's add a global secondary index. Update my-table.yaml:

apiVersion: dynamodb.services.k8s.aws/v1alpha1
kind: Table
metadata:
  name: my-first-ack-table
spec:
  tableName: my-first-ack-table
  billingMode: PAY_PER_REQUEST
  attributeDefinitions:
    - attributeName: id
      attributeType: S
    - attributeName: email
      attributeType: S
  keySchema:
    - attributeName: id
      keyType: HASH
  globalSecondaryIndexes:
    - indexName: email-index
      keySchema:
        - attributeName: email
          keyType: HASH
      projection:
        projectionType: ALL
  tags:
    - key: environment
      value: development
    - key: managed-by
      value: ack

Apply the update:

kubectl apply -f my-table.yaml

Watch the update progress:

kubectl get table my-first-ack-table -w
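GSI creation can take a few minutes. If you'd rather poll than watch, the pattern looks like the sketch below; `get_status` is a stub standing in for a real status query such as `kubectl get table my-first-ack-table -o jsonpath='{.status.tableStatus}'` (assuming the controller surfaces `tableStatus` in the resource status).

```shell
# Stub: reports UPDATING for the first two polls, then ACTIVE.
# Replace with a real kubectl or aws status query against a live cluster.
get_status() {
  [ "$1" -ge 3 ] && echo ACTIVE || echo UPDATING
}

attempt=0
until [ "${STATUS:-}" = "ACTIVE" ]; do
  attempt=$((attempt + 1))
  STATUS=$(get_status "$attempt")
  # sleep 5   # add a real delay when polling a live cluster
done

echo "table became ACTIVE after $attempt polls"
```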

Step 6: Clean Up

When you're done, delete the table:

kubectl delete -f my-table.yaml

This deletes both the Kubernetes resource and the DynamoDB table in AWS. You can verify:

# Check Kubernetes
kubectl get tables

# Check AWS
aws dynamodb list-tables --region $AWS_REGION
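To assert programmatically that the table is gone, filter the `list-tables` response. A sketch against a canned response in which the table has already been deleted:

```shell
# Canned list-tables response after deletion (only an unrelated table remains)
RESPONSE='{"TableNames": ["some-other-table"]}'

# Count occurrences of our table name; 0 means it was deleted
COUNT=$(echo "$RESPONSE" | jq '[.TableNames[] | select(. == "my-first-ack-table")] | length')

[ "$COUNT" -eq 0 ] && echo "table deleted"
```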

What You Learned

Congratulations! You've successfully:

  • Installed an ACK service controller
  • Configured AWS permissions
  • Created an AWS resource using kubectl
  • Updated the resource
  • Deleted the resource

Common Issues

Here are quick solutions to common problems. For detailed troubleshooting, see the Troubleshooting Guide.

Controller pod not starting

Check the controller logs:

kubectl logs -n ack-system deployment/ack-dynamodb-controller

Common causes:

  • Helm chart version mismatch
  • Resource constraints
  • Image pull errors

Resources not being created in AWS

Check for permission issues:

kubectl describe table my-first-ack-table

Look for events like:

  • AccessDenied - IAM role missing permissions
  • InvalidClientTokenId - Credentials not configured
  • UnrecognizedClientException - Wrong region or credentials

"ACK.Terminal" condition

A terminal condition means the controller encountered an unrecoverable error. Common causes:

  • Invalid configuration (e.g., invalid attribute type)
  • AWS service limits exceeded
  • Resource name already exists in AWS

Check the kubectl describe output for the specific error message.
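You can also pull the terminal error message out of the resource status directly by filtering the conditions list. A sketch using jq against a canned status (the message shown is a hypothetical example):

```shell
# Canned resource status containing a terminal condition (hypothetical message)
STATUS_JSON='{"conditions": [{"type": "ACK.Terminal", "status": "True", "message": "AccessDenied: not authorized to perform dynamodb:CreateTable"}]}'

# Select the ACK.Terminal condition and print its message
MSG=$(echo "$STATUS_JSON" | jq -r '.conditions[] | select(.type == "ACK.Terminal") | .message')

echo "$MSG"
```

On a live cluster, the equivalent is `kubectl get table my-first-ack-table -o jsonpath='{.status.conditions[?(@.type=="ACK.Terminal")].message}'`.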

See the Troubleshooting Guide for more detailed solutions.

Installing Additional Controllers

To install another controller, repeat the process with a different service name:

export SERVICE=s3  # or rds, ec2, elasticache, etc.
export RELEASE_VERSION=$(curl -sL https://api.github.com/repos/aws-controllers-k8s/${SERVICE}-controller/releases/latest | jq -r '.tag_name | ltrimstr("v")')

helm install -n $ACK_SYSTEM_NAMESPACE \
  ack-$SERVICE-controller \
  oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart \
  --version=$RELEASE_VERSION \
  --set=aws.region=$AWS_REGION

Remember to configure appropriate IAM permissions for each controller.
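Because the per-service values are entirely mechanical, a loop works well when you manage several controllers. This sketch only constructs the chart references; the helm and IAM calls are omitted and should be run per service as shown above:

```shell
# Build the OCI chart reference for each controller you plan to install
for SERVICE in s3 rds elasticache; do
  CHART="oci://public.ecr.aws/aws-controllers-k8s/${SERVICE}-chart"
  echo "$CHART"
done
```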

Next Steps

  • Learn Core Concepts: understand CRDs, controllers, reconciliation, and more.
  • Advanced Features: learn about resource adoption, field exports, and deletion policies.
  • Explore More Services: browse the 50+ available service controllers.