Getting Started
This section provides quick start guides for using the Cloud Controller, including its core features like the cloud admission controller and cloud scanner.
Prerequisites
Before you begin, ensure you have the following prerequisites:
Amazon EKS Cluster Setup
Ensure you have an Amazon EKS cluster running. If you don’t have one, create an EKS cluster by following the steps in the EKS Documentation.
Enable Pod Identity Addon
Enable the EKS Pod Identity Addon to allow seamless access to AWS resources without using explicit AWS credentials. You can enable this addon through either the AWS Management Console or the AWS CLI.
Refer to the AWS documentation for more details.
Via AWS Management Console
- When creating a new EKS cluster, select the Pod Identity Addon during the setup process.
- If your cluster is already created, you can add the addon by navigating to the EKS Clusters section in the AWS Management Console:
- Select your cluster.
- Go to the Add-ons tab.
- Choose Add Add-on, search for Pod Identity, and select it.
- Follow the prompts to complete the setup.
Via AWS CLI
Run the following command to enable the Pod Identity Addon:
aws eks create-addon --cluster-name <EKS_CLUSTER_NAME> --addon-name eks-pod-identity-agent --addon-version v1.0.0-eksbuild.1
Replace <EKS_CLUSTER_NAME> with your cluster name and use the latest available addon version.
Ensure the worker node IAM role has permission to perform the eks-auth:AssumeRoleForPodIdentity action. If you are using the managed policy AmazonEKSWorkerNodePolicy, no additional configuration is needed.
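To confirm the addon was created successfully, you can describe it and check that its status is ACTIVE:
aws eks describe-addon --cluster-name <EKS_CLUSTER_NAME> --addon-name eks-pod-identity-agent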
Create an IAM Role for Scanner Pods
Create an IAM role in the same account as the EKS cluster. Attach the following trust policy to allow the EKS Pod Identity Agent to assume the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
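If you prefer the CLI, the following sketch creates the role, assuming the trust policy above is saved as trust-policy.json; the role name cloud-controller-scanner-role is a placeholder you can change:
aws iam create-role \
  --role-name cloud-controller-scanner-role \
  --assume-role-policy-document file://trust-policy.json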
Attach a policy to the IAM role with the permissions required for scanning AWS resources. For example, to scan all EKS, ECS, and Lambda services, provide read access to the cloud controller. You can make the permissions more granular as needed.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:Describe*",
        "ecs:List*",
        "ecs:Get*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "lambda:List*",
        "lambda:Get*",
        "lambda:Describe*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "eks:Describe*",
        "eks:List*",
        "eks:Get*"
      ],
      "Resource": "*"
    }
  ]
}
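Assuming the permissions document above is saved as scanner-permissions.json, one way to attach it as an inline policy (the role and policy names here are illustrative):
aws iam put-role-policy \
  --role-name cloud-controller-scanner-role \
  --policy-name cloud-controller-scanner-permissions \
  --policy-document file://scanner-permissions.json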
Create Pod Identity Association
Associate the IAM role created earlier with the existing service account cloud-controller-scanner in the nirmata namespace. Use the AWS CLI:
aws eks create-pod-identity-association \
--cluster-name <EKS_CLUSTER_NAME> \
--role-arn <IAM_ROLE_ARN> \
--namespace nirmata \
--service-account cloud-controller-scanner
Replace <EKS_CLUSTER_NAME> with your cluster name and <IAM_ROLE_ARN> with the ARN of the IAM role created earlier.
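To verify that the association was created, list the Pod Identity associations for the cluster:
aws eks list-pod-identity-associations --cluster-name <EKS_CLUSTER_NAME>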
Important: The Pod Identity association must be created before installing the Helm chart. If the chart is already installed, restart the pods to ensure Pod Identity works correctly.
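One way to restart them is a rollout restart of the deployments in the nirmata namespace:
kubectl rollout restart deployment -n nirmata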
Deploy the Cloud Controller Helm Chart
Deploy the cloud controller Helm chart into your EKS cluster:
Create a values.yaml file with the following information:
scanner:
  primaryAWSAccountConfig:
    accountID: "your-account-id"
    accountName: "cloud-control-demo"
    regions: ["us-west-1","us-east-1"] # insert any other regions
    services: ["EKS","ECS","Lambda"] # insert services to scan
Refer to the complete list of fields for Helm values here.
NOTE: The services listed in values.yaml must be accessible by the cloud controller. Make sure to attach the appropriate IAM policy when creating the IAM role above.
helm repo add nirmata https://nirmata.github.io/kyverno-charts
helm repo update nirmata
helm install cloud-controller nirmata/cloud-controller \
--create-namespace \
--namespace nirmata \
-f values.yaml
Verify Installation
Verify that the cloud controller pods are running in the nirmata namespace:
kubectl get pods -n nirmata
The output should display the running pods:
NAME                                                 READY   STATUS    RESTARTS   AGE
cloud-control-admission-controller-dfd7f69fd-jjhn5   1/1     Running   0          17d
cloud-control-reports-controller-7954bb477d-lb2ld    1/1     Running   0          17d
cloud-control-scanner-7756dc6ddf-qljbb               1/1     Running   0          17d
Cloud Admission Controller
This section provides a step-by-step guide on how to use the admission controller to intercept AWS requests and apply policies to them.
Setting up a Proxy
To intercept AWS requests, you need to create a proxy server that listens on a specific port. The proxy server will apply the policies to the requests and forward them to the AWS cloud if they are compliant.
In this example, we will create a proxy server that listens on port 8443. It intercepts all requests destined for AWS and checks them against the defined policies, specifically those labeled app: kyverno. Only compliant requests are forwarded to AWS.
apiVersion: nirmata.io/v1alpha1
kind: Proxy
metadata:
  name: proxy-sample
spec:
  port: 8443
  caKeySecret:
    name: cloud-controller-admission-controller-svc.nirmata.svc.tls-ca
    namespace: nirmata
  urls:
  - ".*.amazonaws.com"
  policySelectors:
  - matchLabels:
      app: kyverno
The admission controller automatically generates self-signed CA certificates. These certificates are stored as a Secret in the nirmata namespace.
To retrieve the Secret name, run the following command:
kubectl get secrets -n nirmata
The output should show the generated secret:
NAME                                                            TYPE                DATA   AGE
cloud-controller-admission-controller-svc.nirmata.svc.tls-ca   kubernetes.io/tls   2      4m28s
The cloud-controller-admission-controller-svc.nirmata.svc.tls-ca Secret contains the required CA certificate. As shown in the Proxy configuration above, the spec.caKeySecret field references this Secret.
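Save the Proxy manifest above (for example as proxy.yaml; the filename is arbitrary) and apply it to the cluster:
kubectl apply -f proxy.yaml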
The proxy server is now running within your Kubernetes cluster, listening on port 8443. To use this proxy from your local machine, you need to establish a connection between your local port 8443 and the proxy server’s port 8443 within the cluster. This is achieved using port forwarding.
kubectl port-forward svc/cloud-controller-admission-controller-svc 8443:8443 -n nirmata
By running this command, any traffic sent to localhost:8443 on your machine will be forwarded to the proxy server in the cluster.
This allows you to interact with the proxy and, consequently, enforce your policies on AWS requests as if the proxy server was running locally.
ValidatingPolicies
We will create a ValidatingPolicy to ensure that ECS clusters include the group tag. The policy is labeled app: kyverno to match the policy selector specified in the Proxy configuration. Operating in Enforce mode, it blocks non-compliant requests and prevents them from being forwarded to AWS.
apiVersion: nirmata.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: ecs-cluster
  labels:
    app: kyverno
spec:
  failureAction: Enforce
  admission: true
  rules:
  - name: check-tags
    identifier: payload.clusterName
    match:
      all:
      - (metadata.provider): "AWS"
      - (metadata.service): "ecs"
      - (metadata.action): "CreateCluster"
    assert:
      all:
      - message: A 'group' tag is required
        check:
          payload:
            (tags[?key=='group'] || `[]`):
              (length(@) > `0`): true
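Save the policy (for example as ecs-cluster-policy.yaml; the filename is arbitrary) and apply it so the Proxy’s policy selector can pick it up:
kubectl apply -f ecs-cluster-policy.yaml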
Using the AWS CLI
You need to configure your AWS CLI to route requests through the proxy server. This involves setting two environment variables:
- HTTPS_PROXY: This tells the AWS CLI to send all requests through the controller acting as a local proxy.
export HTTPS_PROXY=http://localhost:8443
- AWS_CA_BUNDLE: The controller uses a self-signed security certificate. This variable tells the AWS CLI to trust that certificate. First, download the certificate:
kubectl get secrets -n nirmata cloud-controller-admission-controller-svc.nirmata.svc.tls-ca -o jsonpath="{.data.tls\.crt}" | base64 --decode > ca.crt
Then, set the environment variable:
export AWS_CA_BUNDLE=ca.crt
This tells the AWS CLI which Certificate Authority (CA) to trust when verifying the proxy’s SSL certificate. Because the cloud admission controller uses a self-signed certificate (not issued by a publicly trusted CA), the AWS CLI won’t trust it by default. By setting AWS_CA_BUNDLE to the path of the controller’s CA certificate (ca.crt), you explicitly tell the AWS CLI that this certificate is valid and should be used to establish a secure connection with the proxy. Without it, the AWS CLI would reject the connection due to the untrusted certificate.
Once configured, your AWS CLI commands will be checked against the defined policies before being sent to AWS.
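For example, with both variables set, a read-only call such as the following should now travel through the proxy (assuming the port-forward from earlier is still running):
export HTTPS_PROXY=http://localhost:8443
export AWS_CA_BUNDLE=ca.crt
aws ecs list-clusters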
Example: Creating an ECS Cluster
The following examples demonstrate how the admission controller enforces a policy requiring all ECS clusters to have a group tag.
- Create an ECS cluster without the group tag:
aws ecs create-cluster --cluster-name bad-cluster
The output should be similar to the following:
An error occurred (406) when calling the CreateCluster operation: ecs-cluster.check-tags bad-cluster: -> A 'group' tag is required -> all[0].check.data.(tags[?key=='group'] || `[]`).(length(@) > `0`): Invalid value: false: Expected value: true
As expected, the request was blocked since it violates the ValidatingPolicy that requires all ECS clusters to have the group tag.
- Create an ECS cluster with the group tag:
aws ecs create-cluster --cluster-name good-cluster --tags key=group,value=test key=owner,value=test
The output should be similar to the following:
{
    "cluster": {
        "clusterArn": "arn:aws:ecs:us-east-1:844333597536:cluster/good-cluster",
        "clusterName": "good-cluster",
        "status": "ACTIVE",
        "registeredContainerInstancesCount": 0,
        "runningTasksCount": 0,
        "pendingTasksCount": 0,
        "activeServicesCount": 0,
        "statistics": [],
        "tags": [
            {
                "key": "owner",
                "value": "test"
            },
            {
                "key": "group",
                "value": "test"
            }
        ],
        "settings": [
            {
                "name": "containerInsights",
                "value": "disabled"
            }
        ],
        "capacityProviders": [],
        "defaultCapacityProviderStrategy": []
    }
}
The request was successful since it complies with the ValidatingPolicy that requires all ECS clusters to have the group tag.
Cloud Scanner
ECS Clusters and Task Definitions
To test the scanner, we will create ECS resources that are both compliant and non-compliant with the ValidatingPolicies that check for the group tag: ECS clusters and task definitions, some with and some without the required tag.
- Create an ECS cluster named bad-cluster without the group tag:
aws ecs create-cluster --cluster-name bad-cluster
- Register a task definition named bad-task without the group tag:
aws ecs register-task-definition \
  --family bad-task \
  --container-definitions '[{"name": "my-app", "image": "nginx:latest", "essential": true, "portMappings": [{"containerPort": 80, "hostPort": 80}]}]' \
  --requires-compatibilities FARGATE \
  --cpu 256 \
  --memory 512 \
  --network-mode awsvpc
- Create an ECS cluster named good-cluster with the group tag:
aws ecs create-cluster --cluster-name good-cluster --tags key=group,value=development
- Register a task definition named good-task with the group tag:
aws ecs register-task-definition \
  --family good-task \
  --container-definitions '[{"name": "my-app", "image": "nginx:latest", "essential": true, "portMappings": [{"containerPort": 80, "hostPort": 80}]}]' \
  --requires-compatibilities FARGATE \
  --cpu 256 \
  --memory 512 \
  --network-mode awsvpc \
  --tags '[{"key": "group", "value": "production"}]'
View Reports
In this example, the scanner will generate four ClusterPolicyReports: two for the bad-cluster and bad-task resources, and two for the good-cluster and good-task resources. The reports show the compliance status of the resources based on the ValidatingPolicies.
To view the generated reports, run the following command:
kubectl get clusterpolicyreports
The output should show the generated reports:
NAME                                                              KIND                NAME           PASS   FAIL   WARN   ERROR   SKIP   AGE
1a468eba2818db9333ede8428bf6c910d467db5d5fc1b36adc535ce32cea2c5   ECSCluster          good-cluster   1      0      0      0       0      4s
1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9   ECSCluster          bad-cluster    0      1      0      0       0      4s
91696bc8dbb327de99c4d34c579de8bd71e2ef45ad325d10d39d690ad14776c   ECSTaskDefinition   bad-task__2    0      1      0      0       0      4s
cf987d912032e51712ad73a2067a1c5ffee16d8872575166c0739ffedfc0766   ECSTaskDefinition   good-task__2   1      0      0      0       0      4s
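To inspect the details of a specific report, fetch it by name, substituting one of the report names from the output above:
kubectl get clusterpolicyreport <REPORT_NAME> -o yaml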