v4.24.0
Introduction
These release notes highlight the updates included in NDP Private Edition v4.24.0. They describe the key features and enhancements added since version 4.22, along with guidance for upgrading from v4.22 to the latest release.
New Features
The following features have been introduced since the release of 4.22:
- Cluster Policy Report Results
- Invite new users from approved domain via Share Report dialog
- Pipeline Scanning
- Repository Scan Reports
- Repository Compliance
- AI-Powered Remediation
- AI-Powered Violation Insights
- Suppress Policy Violations
- GitOps for PolicySets
- Jira Integration
Installation
Nirmata DevSecOps Platform (NDP) can be deployed using the Helm charts provided in the following repository. Follow the instructions in the repository README for more details.
To clone the repository:
git clone https://github.com/nirmata/nch-charts.git
cd nch-charts
git checkout release/4.24
make help
Upgrade from Release 4.22
To upgrade a system from 4.22.x to 4.24.0, you must render the charts provided in https://github.com/nirmata/nch-charts.
Step 1: Clone the Helm chart repository
git clone https://github.com/nirmata/nch-charts.git
cd nch-charts
git checkout release/4.24
make help
Step 2: Edit the value file
Edit the values file ./config/values/environments/prod.yml
This should be the only values file you need to modify. In it, specify the following settings for the system you are upgrading (see the sketch after this list):
- namespace
- request/limits for each service
- Replicas for each service
- Bedrock inference profile
- NDP true/false
- Image registry
- MongoDB configuration: hosted service versus local deployment, credentials, authorization, and encryption parameters
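The chart README is authoritative for the exact schema; the snippet below is only a hypothetical sketch of what ./config/values/environments/prod.yml might contain for these settings (all key names are illustrative):
# Hypothetical sketch of ./config/values/environments/prod.yml -- key names are
# illustrative; consult the chart README for the actual schema.
namespace: nirmata
imageRegistry: <your-registry>
ndp: true                                  # NDP true/false
bedrockInferenceProfile: "arn:aws:bedrock:<region>:<account>:inference-profile/<profile-name>"
mongodb:
  hosted: false                            # hosted service versus local deployment
  credentialsSecret: mongodb-credentials
  tls: true
services:
  policies:
    replicas: 2
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: "1"
        memory: 2Gi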
Step 3: Render the Kubernetes manifests
NDP setup
make render-all ENV=prod NDP=true OUT=<output-directory>
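After rendering, review the generated manifests and apply them to the cluster. A minimal sketch, assuming the rendered output can be applied directly with kubectl (your environment may require a specific ordering or a review step):
# Server-side dry run to surface validation problems before applying
kubectl apply -f <output-directory> --recursive --dry-run=server
# Apply the rendered manifests
kubectl apply -f <output-directory> --recursive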
Manifest Changes
To help you apply all the YAML changes thoroughly and safely, this section highlights the main changes since 4.22. All of these changes are already included in the files shared with Duke.
Key Changes
Zookeeper Removal: Zookeeper is no longer required and must be removed
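How Zookeeper was deployed varies by environment, so the commands below are only a hypothetical cleanup sketch; the resource names and label are assumptions, verify them before deleting anything:
# Hypothetical cleanup -- confirm the actual Zookeeper resources first
kubectl get statefulsets,deployments,services,pvc -n <namespace> | grep -i zookeeper
kubectl delete statefulset zookeeper -n <namespace>
kubectl delete service zookeeper -n <namespace>
kubectl delete pvc -l app=zookeeper -n <namespace>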
Environment Variables:
nirmata.workflow.usecurator added to the catalog and config Deployments
Policies Service Split: Split into 3 different deployments:
- Policies: exposes the policies API
- Policies-processor: implements background tasks
- Policies-event-processor: processes events coming from the clusters
The deployments share the same image. The identity of each pod is defined by one of the following environment variables: nirmata.policies.api, nirmata.policies.processor, nirmata.policies.event.processor.
New Environment Variables: Added 2 environment variables:
- nirmata.llm.apps.host
- nirmata.datapipeline.enabled
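A quick way to confirm the split after the upgrade is to list the deployments and inspect the identity variable of one of them (the deployment names below are assumptions based on the list above):
# Hypothetical verification of the three policies deployments
kubectl get deployments -n <namespace> | grep -i policies
kubectl get deployment policies-processor -n <namespace> \
  -o jsonpath='{.spec.template.spec.containers[0].env}'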
LLM Apps Service: All Nirmata services implementing AI features must send their Bedrock requests to the llm-apps service.
Environments Service Split: Split into 2 deployments:
- Environments: provides the API
- Environment-processor: implements background tasks
AI Configuration
This release introduces two new AI-powered features: Summarization & Prioritization, and Remediation.
These features are exclusively compatible with AWS Bedrock, the AI backend service validated by Nirmata. We have validated these features using two models: Anthropic Claude Sonnet 3.7 and Anthropic Claude Sonnet 4.0.
Please note that if you intend to use Claude 4.0, it is crucial to increase its model quotas to at least match the default quotas of Claude 3.7. The default quotas for Claude 4.0 are significantly lower, which can lead to frequent request throttling.
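Quota increases can be requested in the AWS console or with the Service Quotas API. A minimal sketch (quota codes vary by model and region, so look them up rather than assuming a value):
# List Bedrock quotas for your region and note the quota codes for the Claude models
aws service-quotas list-service-quotas --service-code bedrock --region <your-region> --no-cli-pager
# Request an increase for a specific quota
aws service-quotas request-service-quota-increase \
  --service-code bedrock \
  --quota-code <quota-code> \
  --desired-value <desired-value> \
  --region <your-region>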
The inference profile must be created in the customer's AWS account.
Inference Profile Creation
To create an inference profile, first locate the ARN of the system-defined inference profile for your desired model (Claude 4.0 or Claude 3.7). These ARNs follow the format shown below, and you can confirm them in your account with the list-inference-profiles command. You can then use this ARN as the copyFrom source to generate your own profile.
arn:aws:bedrock:<region>:<your-aws-account-number>:inference-profile/us.anthropic.claude-sonnet-4-20250514-v1:0
Or
arn:aws:bedrock:<region>:<your-aws-account-number>:inference-profile/us.anthropic.claude-3-7-sonnet-20250219-v1:0
aws bedrock list-inference-profiles --region <your-region> --no-cli-pager
aws bedrock create-inference-profile \
--inference-profile-name <profile-name> \
--model-source copyFrom='<model-arn>' \
--region <your-region>
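For example, a hypothetical invocation for Claude 4.0 in us-west-2 (the profile name is arbitrary):
aws bedrock create-inference-profile \
  --inference-profile-name ndp-claude-sonnet-4 \
  --model-source copyFrom='arn:aws:bedrock:us-west-2:<your-aws-account-number>:inference-profile/us.anthropic.claude-sonnet-4-20250514-v1:0' \
  --region us-west-2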
Pod Identity Configuration for EKS Clusters
To establish a trust relationship between Nirmata’s llm-apps pods and the AWS Bedrock service, you must first configure an AWS IAM role. This role will be assumed by the llm-apps pods, which are responsible for sending all Bedrock API requests. Therefore, all pods within this deployment need to be trusted by AWS.
Create an IAM role named nirmata-bedrock-role in your AWS Console. Then, attach the AmazonBedrockFullAccess policy to this role.
Then select the Trust relationships tab and click on Edit trust policy. Insert the following JSON:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "pods.eks.amazonaws.com"
},
"Action": [
"sts:AssumeRole",
"sts:TagSession"
]
}
]
}
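If you prefer the AWS CLI to the console, the same role can be created from a file containing the trust policy above (the file name here is arbitrary):
# Save the trust policy above as bedrock-trust-policy.json, then:
aws iam create-role \
  --role-name nirmata-bedrock-role \
  --assume-role-policy-document file://bedrock-trust-policy.json
aws iam attach-role-policy \
  --role-name nirmata-bedrock-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonBedrockFullAccess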
This next step links the IAM role created above to the Kubernetes service account used by the llm-apps pods: llm-apps. This association tells EKS that any pod using the llm-apps service account should be allowed to assume the nirmata-bedrock-role.
# Replace placeholders with your values
aws eks create-pod-identity-association \
--cluster-name <YOUR-CLUSTER-NAME> \
--namespace <NAMESPACE> \
--service-account llm-apps \
--role-arn arn:aws:iam::<AWS-ACCOUNT-ID>:role/nirmata-bedrock-role
The EKS Pod Identity Agent will automatically handle injecting the necessary AWS credential environment variables into the pod.
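To confirm the association exists, and to restart the pods so they pick up the injected credentials (the deployment name llm-apps is assumed to match the service account):
aws eks list-pod-identity-associations \
  --cluster-name <YOUR-CLUSTER-NAME> \
  --namespace <NAMESPACE>
kubectl rollout restart deployment llm-apps -n <NAMESPACE>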
Pod Identity Configuration for Non-EKS Clusters
For a non-EKS Kubernetes cluster, you would configure IAM Roles for Service Accounts (IRSA) by setting up your own OIDC provider.
The principle is the same as with EKS: your pod gets a short-lived token from its service account, and AWS exchanges that token for temporary IAM credentials. The main difference is that you are responsible for setting up the OIDC “trust bridge” that EKS normally manages for you.
The process involves three main parts:
- Kubernetes Cluster: Your cluster’s API server acts as an OIDC issuer, creating and signing JWTs (JSON Web Tokens) for your service accounts.
- Public OIDC Endpoint: You expose the cluster’s OIDC discovery documents to the internet so AWS can verify the JWTs.
- AWS IAM: You configure IAM to trust your cluster’s OIDC endpoint, allowing it to exchange the JWT for temporary role credentials.
Here is the step-by-step guide to setting this up.
Step 1: Expose the Cluster’s OIDC Endpoint
Your Kubernetes API server already has an OIDC issuer, but it’s typically not publicly accessible. You must expose the discovery endpoint (/.well-known/openid-configuration) and the JSON Web Key Set (JWKS) URL to the public internet so AWS can reach them.
This is often done using an Ingress controller or a dedicated proxy service. The public URL will be your OIDC provider URL (e.g., https://oidc.your-domain.com).
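Once exposed, AWS must be able to fetch both documents anonymously over HTTPS. A quick check (the JWKS path below is the kube-apiserver default and may differ in your setup):
# Discovery document -- the issuer and jwks_uri fields must use the public URL
curl https://oidc.your-domain.com/.well-known/openid-configuration
# JSON Web Key Set referenced by jwks_uri
curl https://oidc.your-domain.com/openid/v1/jwks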
Step 2: Create an OIDC Identity Provider in IAM
Next, you need to tell AWS to trust your cluster’s public OIDC endpoint.
Get the Root CA Thumbprint: You need the thumbprint of the certificate chain for your OIDC endpoint. You can get this with openssl.
# Replace oidc.your-domain.com with your public OIDC URL
openssl s_client -servername oidc.your-domain.com -showcerts -connect oidc.your-domain.com:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout
This will output a fingerprint such as SHA1 Fingerprint=9E:99:A4:8A:...; remove the colons to get the 40-character hash AWS expects (e.g., 9E99A48A9960D1492597E0D9C9287EE1D16652C5).
Create the IAM OIDC Provider: Use the public URL and the thumbprint to create the provider in AWS.
aws iam create-open-id-connect-provider \
--url https://oidc.your-domain.com \
--client-id-list sts.amazonaws.com \
--thumbprint-list <YOUR_THUMBPRINT>
Step 3: Create the IAM Role and Trust Policy
This step is similar to the EKS configuration, but the trust policy is different. It will trust the OIDC provider you just created instead of the EKS service.
Create a file named non-eks-trust-policy.json:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.your-domain.com"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.your-domain.com:sub": "system:serviceaccount:<NAMESPACE>:<SERVICE_ACCOUNT_NAME>"
}
}
}
]
}
Key Differences:
- Principal: The Federated principal points to the OIDC provider you created.
- Action: The action is sts:AssumeRoleWithWebIdentity.
- Condition: The condition checks the JWT’s sub (subject) claim to ensure it matches the specific namespace and service account name.
Create the role using this policy and attach the permissions your pod needs (for the llm-apps pods, Bedrock access).
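For example, with the AWS CLI (the role name matches the example in Step 4 below; the attached policy is what the llm-apps pods need for Bedrock):
aws iam create-role \
  --role-name MyNonEksRole \
  --assume-role-policy-document file://non-eks-trust-policy.json
aws iam attach-role-policy \
  --role-name MyNonEksRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonBedrockFullAccess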
Step 4: Configure the Pod
Your application’s AWS SDK needs to be told which role to assume and where to find the JWT. This is done by injecting specific environment variables into the pod.
The service account token is automatically mounted into the pod at /var/run/secrets/kubernetes.io/serviceaccount/token.
Here is an example of the environment variables for the llm-apps pods:
env:
# Tells the AWS SDK which role to assume
- name: AWS_ROLE_ARN
value: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/MyNonEksRole"
# Tells the AWS SDK where to find the token for exchange
- name: AWS_WEB_IDENTITY_TOKEN_FILE
value: /var/run/secrets/kubernetes.io/serviceaccount/token
# Standard AWS environment variables
- name: AWS_REGION
value: "us-west-2"
Known Issues
Kube Controller Upgrade: Upgrading to the latest nirmata-kube-controller doesn’t work out of the box. It requires manual YAML updates. This will be fixed in a future patch.
PolicySets Configuration: New PolicySets can only be used with the Kyverno Operator after configuring the secret to access the repository.