Architecture
Overview
The Cloud Control Point (CCP) architecture provides unified governance across cloud environments, Kubernetes clusters, and CI/CD pipelines. It leverages Kyverno-JSON policies to enforce compliance, perform audits, and generate reports on the state of cloud resources. The CCP has three main controllers:
- Admission Controller: Intercepts cloud API requests and enforces policies in real-time.
- Scanner: Periodically scans existing cloud resources for policy compliance.
- Event Handler: (Future) Reacts to cloud events and applies relevant policies.
Common Components
```mermaid
graph LR
  subgraph "Common Logic"
    A[Input Payload] --> B(Pre-processor)
    B --> C(Policy Engine)
    D[Policy Cache] --> C
    C --> E(Report Generator)
    C --> F(Event Emitter)
  end
  E --> G[PolicyReport CR]
  F --> H[Event]
```
Pre-processor
The Pre-processor transforms incoming payloads (cloud API requests, scan results, or EventBridge events) into a standardized format consumable by the Policy Engine. It extracts relevant information, such as resource type, name, and properties, to create a unified representation for policy evaluation.
Policy Cache
The Policy Cache stores Kyverno-JSON policies and makes them readily available to the Policy Engine. It is updated whenever policies are created, updated, or deleted, ensuring the engine always uses the latest policy versions. The cache is queried based on policy attributes like `spec.admission`, `spec.scan`, and `spec.events` to select policies applicable to the current context (admission control, scanning, or event handling).
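As an illustration, a single policy can opt into one or more contexts through these flags. The sketch below is hypothetical: the API group, kind, and everything other than the `spec.admission`/`spec.scan`/`spec.events` flags named above are assumptions, not the authoritative CRD schema.

```yaml
# Hypothetical CCP policy that opts into admission control and
# scanning, but not event handling. Only the three boolean flags
# are taken from the text; all other fields are illustrative.
apiVersion: nirmata.io/v1alpha1   # assumed API group/version
kind: CloudPolicy                 # assumed kind
metadata:
  name: require-s3-encryption
spec:
  admission: true   # evaluated in real time by the Admission Controller
  scan: true        # evaluated during scheduled and on-demand scans
  events: false     # event handling is a future capability
```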
Policy Engine
The Policy Engine is the core component that evaluates the processed payload against the Kyverno-JSON policies retrieved from the Policy Cache. It determines whether the resource complies with the policies and generates a result (pass/fail/warn) for each rule within a policy.
Report Generator
The Report Generator creates `PolicyReport` Custom Resources (CRs) based on the Policy Engine’s evaluation results. These reports provide a structured record of policy compliance and violations, including details about the resource, policy, and specific rule violations.
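A generated report might look like the following. This is a sketch in the `wg-policy` `PolicyReport` format; the policy name, resource reference, and messages are placeholders, and how CCP encodes non-Kubernetes cloud resources in the `resources` field is an assumption.

```yaml
# Sketch of a PolicyReport CR in the wg-policy format.
# Policy/resource names and the cloud-resource encoding are placeholders.
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: ccp-aws-scan-report
results:
  - policy: require-s3-encryption   # placeholder policy name
    rule: check-sse
    result: fail                    # pass | fail | warn
    message: "S3 bucket does not have server-side encryption enabled"
    resources:
      - apiVersion: v1              # placeholder reference to the cloud resource
        kind: S3Bucket
        name: example-bucket
summary:
  pass: 0
  fail: 1
  warn: 0
```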
Event Emitter
The Event Emitter generates events based on policy evaluation outcomes. These events can be used for notifications, triggering downstream actions, or integrating with other systems. This component provides real-time feedback on policy enforcement and compliance status.
Admission Controller
Overview
The Admission Controller acts as a gatekeeper for cloud API requests. It intercepts requests before they reach the cloud provider and evaluates them against policies. This allows CCP to enforce policies in real-time, preventing non-compliant resources from being created or modified.
Components
- Proxy Controller: Watches for `Proxy` CRs. Each `Proxy` CR defines a proxy server instance, configuring its port, TLS certificates, and the URLs it intercepts. This allows for flexible deployment and management of multiple proxy servers.
- Proxy Server: The actual proxy that intercepts cloud API requests. It pre-processes requests, evaluates them against policies using the common components, and either forwards or blocks requests based on policy results and the `spec.failureAction` (enforce/audit) setting.
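A `Proxy` CR might be declared as follows. This is a sketch based only on the fields described above (port, TLS certificates, intercepted URLs, and `spec.failureAction`); the API group and exact field names are assumptions.

```yaml
# Hypothetical Proxy CR. API group and field names are assumptions
# derived from the description above, not the authoritative schema.
apiVersion: nirmata.io/v1alpha1
kind: Proxy
metadata:
  name: aws-proxy
spec:
  port: 8443
  tls:
    certSecretRef: aws-proxy-tls    # Secret holding the TLS cert/key pair
  urls:
    - "https://*.amazonaws.com"     # cloud API endpoints to intercept
  failureAction: enforce            # enforce: block violations; audit: report only
```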
```mermaid
graph LR
  subgraph "Kubernetes Cluster"
    subgraph "Admission Controller"
      A[Proxy Controller] --> B[Proxy Server]
      C[Proxy CR] --> A
    end
  end
  D[Cloud API Request] --> B
  B --> E[Cloud Provider]
  B --> J[Common Logic]
  J --> F[PolicyReport CR]
  F --> G[Reports Controller]
  G --> H[Nirmata Kube Controller]
  H --> I[NCH]
```
Workflow
- A `Proxy` CR is created, defining the configuration for a proxy server.
- The Proxy Controller detects the `Proxy` CR and creates a corresponding Proxy Server instance.
- A client makes a cloud API request.
- The Proxy Server intercepts the request.
- The request is pre-processed and evaluated against relevant policies.
- Based on the policy evaluation and `spec.failureAction`, the request is either:
  - Forwarded: if the request is compliant or the policy is in audit mode.
  - Blocked: if the request violates a policy in enforce mode. An error is returned to the client.
- A Kubernetes `Event` is generated, regardless of whether the request was allowed or blocked.
Scanner
Overview
The Scanner periodically scans existing cloud resources for policy compliance. It supports both scheduled scans (defined by the `--scanInterval` flag) and on-demand scans triggered by the creation of new policies with `spec.scan` set to `true`.
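The scan interval would typically be set via the scanner's container arguments. This Deployment fragment is a sketch: the container name, image, and interval value are placeholders, and only the `--scanInterval` flag itself comes from the text above.

```yaml
# Sketch of setting the scheduled-scan interval via --scanInterval.
# Image name and interval value are placeholders.
containers:
  - name: scanner
    image: example.io/ccp-scanner:latest   # placeholder image
    args:
      - --scanInterval=1h   # run a full compliance scan every hour
```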
Components
- Resource Loader: An interface with provider-specific implementations (AWS, Azure, GCP) for fetching cloud resources. This abstracts the interaction with different cloud APIs, allowing the scanner to work with multiple cloud providers.
- Controller: Orchestrates the scanning process. It retrieves resources using the Resource Loader, pre-processes them, evaluates them against policies, and generates `PolicyReport` CRs. It also manages the scan schedule and triggers on-demand scans.
- `AccountConfig` CRs (e.g. `AWSAccountConfig`, `AzureAccountConfig`): Define the scope of the scan for each cloud provider, including account credentials, regions, and services to scan. These CRs provide a declarative way to configure scans for different cloud environments.
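An `AWSAccountConfig` might look like the sketch below. Only the kind name and the general contents (credentials, regions, services) come from the description above; the API group and field names are assumptions.

```yaml
# Hypothetical AWSAccountConfig CR scoping a scan to one account.
# API group and field names are assumptions, not the real schema.
apiVersion: nirmata.io/v1alpha1
kind: AWSAccountConfig
metadata:
  name: prod-account
spec:
  accountID: "123456789012"         # placeholder account ID
  credentialsSecretRef: aws-creds   # Secret with access credentials
  regions:
    - us-east-1
    - us-west-2
  services:                         # services in scope for scanning
    - s3
    - ec2
```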
Workflow
```mermaid
graph LR
  A[Controller] --> B(Resource Loader)
  C["*AccountConfig CR"] --> A
  B --> D{Cloud Provider}
  D --> E[Resource Payload]
  E --> F["Common Logic"]
  F --> G[PolicyReport CR]
  G --> H[Reports Controller]
  H --> I[Nirmata Kube Controller]
  I --> J[NCH]
```
- The Controller is triggered by a schedule or a new policy with `spec.scan: true`.
- The Controller reads the relevant `AccountConfig` CR for the target cloud provider (e.g. `AWSAccountConfig`).
- Using credentials from the `AccountConfig`, the Controller authenticates with the cloud provider.
- The Resource Loader fetches the specified resources from the cloud provider.
- Resources are pre-processed and evaluated against policies.
- `PolicyReport` CRs are generated and sent to the Reports Controller for processing by NCH.
Policy and Reporting
- Unified Policy Format: Kyverno-JSON policies are used for admission control, scanning, and event handling. A single policy can be applied across multiple controllers by setting flags within the policy specification (`spec.admission`, `spec.scan`, `spec.events`).
- Standard Reporting: `PolicyReport` CRs adhere to the `wg-policy` format, enabling integration with Kubernetes tooling and providing a consistent reporting structure across different enforcement mechanisms. These reports are compatible with the Nirmata Control Hub (NCH) reporting format.
Role-Based Access Control (RBAC)
RBAC is implemented at the cloud account level. Users or teams granted access to a cloud account can view all resources and policy violations within that account. This provides granular control over who can access and manage cloud compliance data.