Kubernetes Policy Engines
Compare and implement policies using OPA Gatekeeper, Kyverno, and Polaris for comprehensive Kubernetes governance
Contents
- Kubernetes Policy Engines 2025: OPA Gatekeeper vs Kyverno vs Polaris Comparison
- OPA Gatekeeper Tutorial: Complete Guide to Kubernetes Policy Enforcement
- Kyverno Tutorial: YAML-Based Kubernetes Policy Management Guide
- Polaris Kubernetes Scanner: Security Best Practice Validation Tutorial
- Kubernetes Multi-Policy Engine Strategy: Combining Gatekeeper, Kyverno, and Polaris
- Kubernetes Policy CI/CD Integration: Automated Security Scanning Pipeline
- Kubernetes Policy Monitoring: Prometheus Metrics and Grafana Dashboards Setup
- Kubernetes Policy Best Practices
- Troubleshooting Common Issues
- Your Achievement Summary
Prerequisites
- Kubernetes cluster with admin access
- kubectl configured and working
- Helm 3.x installed
- Basic understanding of Kubernetes resources and RBAC
- Familiarity with YAML and admission controllers
- Read: What is Policy-as-Code?
What You'll Learn
- Comparing OPA Gatekeeper, Kyverno, and Polaris
- Installing and configuring each policy engine
- Writing policies for security, compliance, and best practices
- Implementing mutation and validation rules
- Monitoring and reporting on policy violations
- Best practices for multi-engine environments
Kubernetes Policy Engines 2025: OPA Gatekeeper vs Kyverno vs Polaris Comparison
Kubernetes policy engines provide automated governance for your clusters by enforcing security policies, compliance requirements, and operational best practices. Each engine has unique strengths and approaches to policy management.
OPA Gatekeeper
Kubernetes-native policy engine using Open Policy Agent with the Rego language
Kyverno
YAML-based policy engine with no new language to learn
Polaris
Open source validation tool for Kubernetes best practices
Feature Comparison
Policy Engine Feature Matrix
| Feature | OPA Gatekeeper | Kyverno | Polaris |
|---|---|---|---|
| Policy Language | Rego | YAML | YAML Config |
| Admission Control | Full | Full | Validation only |
| Mutation | Yes | Yes | No |
| Dry Run Mode | Yes | Yes | Yes |
| External Data | Yes | Limited | No |
| Learning Curve | High | Low | Low |
OPA Gatekeeper Tutorial: Complete Guide to Kubernetes Policy Enforcement
OPA Gatekeeper brings Open Policy Agent to Kubernetes as a validating and mutating admission controller. It uses Constraint Templates and Constraints to define and enforce policies written in the Rego language.
Installation and Setup
Install OPA Gatekeeper
# Install using kubectl
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/release-3.14/deploy/gatekeeper.yaml
# Or install using Helm
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper gatekeeper/gatekeeper \
--namespace gatekeeper-system \
--create-namespace
# Verify installation
kubectl get pods -n gatekeeper-system
# Check webhook configuration
kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io \
-l gatekeeper.sh/system=yes
Creating Constraint Templates
required-labels-template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8srequiredlabels
spec:
crd:
spec:
names:
kind: K8sRequiredLabels
validation:
properties:
labels:
type: array
items:
type: string
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8srequiredlabels
violation[{"msg": msg}] {
required := input.parameters.labels
provided := input.review.object.metadata.labels
missing := required[_]
not provided[missing]
msg := sprintf("Missing required label: %v", [missing])
}
container-limits-template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8scontainerlimits
spec:
crd:
spec:
names:
kind: K8sContainerLimits
validation:
properties:
cpu:
type: string
memory:
type: string
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8scontainerlimits
missing_cpu_limits[container] {
container := input.review.object.spec.containers[_]
not container.resources.limits.cpu
}
missing_memory_limits[container] {
container := input.review.object.spec.containers[_]
not container.resources.limits.memory
}
violation[{"msg": msg}] {
container := missing_cpu_limits[_]
msg := sprintf("Container '%v' is missing CPU limits", [container.name])
}
violation[{"msg": msg}] {
container := missing_memory_limits[_]
msg := sprintf("Container '%v' is missing memory limits", [container.name])
}
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
cpu_limit := container.resources.limits.cpu
# assumes CPU limits are expressed in millicores (e.g. "500m")
cpu_limit_num := to_number(trim_suffix(cpu_limit, "m"))
cpu_limit_num > to_number(trim_suffix(input.parameters.cpu, "m"))
msg := sprintf("Container '%v' CPU limit '%v' exceeds maximum '%v'",
[container.name, cpu_limit, input.parameters.cpu])
}
Creating Constraints
require-labels-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
name: must-have-environment-label
spec:
match:
kinds:
- apiGroups: ["apps"]
kinds: ["Deployment", "ReplicaSet"]
- apiGroups: [""]
kinds: ["Pod", "Service"]
excludedNamespaces: ["kube-system", "gatekeeper-system"]
parameters:
labels: ["environment", "app", "version"]
container-limits-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sContainerLimits
metadata:
name: container-must-have-limits
spec:
match:
kinds:
- apiGroups: ["apps"]
kinds: ["Deployment"]
- apiGroups: [""]
kinds: ["Pod"]
excludedNamespaces: ["kube-system", "gatekeeper-system"]
parameters:
cpu: "1000m"
memory: "2Gi"
Advanced Gatekeeper Examples
1. Security Context Policy
security-context-template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8ssecuritycontext
spec:
crd:
spec:
names:
kind: K8sSecurityContext
validation:
properties:
runAsNonRoot:
type: boolean
allowPrivilegeEscalation:
type: boolean
requiredDropCapabilities:
type: array
items:
type: string
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8ssecuritycontext
import future.keywords.in
violation[{"msg": msg}] {
input.parameters.runAsNonRoot
container := input.review.object.spec.containers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Container '%v' must run as non-root", [container.name])
}
violation[{"msg": msg}] {
not input.parameters.allowPrivilegeEscalation
container := input.review.object.spec.containers[_]
container.securityContext.allowPrivilegeEscalation
msg := sprintf("Container '%v' must not allow privilege escalation", [container.name])
}
violation[{"msg": msg}] {
required_drops := input.parameters.requiredDropCapabilities
container := input.review.object.spec.containers[_]
dropped := container.securityContext.capabilities.drop
missing := required_drops[_]
not missing in dropped
msg := sprintf("Container '%v' must drop capability '%v'", [container.name, missing])
}
2. Network Policy Enforcement
network-policy-template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8srequirenetworkpolicy
spec:
crd:
spec:
names:
kind: K8sRequireNetworkPolicy
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8srequirenetworkpolicy
violation[{"msg": msg}] {
input.review.object.kind == "Namespace"
namespace := input.review.object.metadata.name
not has_network_policy(namespace)
msg := sprintf("Namespace '%v' must have a NetworkPolicy", [namespace])
}
has_network_policy(namespace) {
# This would require external data to check existing NetworkPolicies
# For demo purposes, we'll check if the namespace has a specific annotation
input.review.object.metadata.annotations["networking.policy/required"] == "false"
}
has_network_policy(namespace) {
input.review.object.metadata.labels["network-policy"] == "default"
}
3. Mutation Example: Adding Labels
add-labels-mutation.yaml
apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: Assign
metadata:
name: add-default-labels
spec:
applyTo:
- groups: ["apps"]
kinds: ["Deployment"]
versions: ["v1"]
match:
scope: Namespaced
kinds:
- apiGroups: ["apps"]
kinds: ["Deployment"]
excludedNamespaces: ["kube-system", "gatekeeper-system"]
location: "spec.template.metadata.labels.managed-by"
parameters:
assign:
value: "gatekeeper"
---
apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: AssignMetadata
metadata:
name: add-creation-timestamp
spec:
match:
scope: Namespaced
kinds:
- apiGroups: ["apps"]
kinds: ["Deployment"]
location: "metadata.annotations.created-by-gatekeeper"
parameters:
assign:
# NOTE: Assign/AssignMetadata set literal values; this template-style string is illustrative only
value: "{{ .CreationTimestamp }}"
Monitoring Gatekeeper
Check Constraint Status
# List all constraints
kubectl get constraints
# Check specific constraint status
kubectl describe k8srequiredlabels must-have-environment-label
# View violations
kubectl get k8srequiredlabels must-have-environment-label -o yaml
# Check gatekeeper audit logs
kubectl logs -n gatekeeper-system -l control-plane=audit-controller
# Check controller manager logs
kubectl logs -n gatekeeper-system -l control-plane=controller-manager
Kyverno Tutorial: YAML-Based Kubernetes Policy Management Guide
Kyverno is a policy engine designed for Kubernetes that uses YAML for policy definitions. It provides validation, mutation, and generation capabilities without requiring a new language to learn.
Installation and Setup
Install Kyverno
# Install using kubectl
kubectl create -f https://github.com/kyverno/kyverno/releases/latest/download/install.yaml
# Or install using Helm
helm repo add kyverno https://kyverno.github.io/kyverno/
helm install kyverno kyverno/kyverno \
--namespace kyverno \
--create-namespace
# Verify installation
kubectl get pods -n kyverno
# Check webhook configuration
kubectl get validatingwebhookconfigurations \
-l app.kubernetes.io/name=kyverno
Validation Policies
require-labels-policy.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: require-labels
spec:
validationFailureAction: Enforce
background: true
rules:
- name: check-environment-label
match:
any:
- resources:
kinds:
- Pod
- Service
- Deployment
validate:
message: "Label 'environment' is required"
pattern:
metadata:
labels:
environment: "?*"
- name: check-app-label
match:
any:
- resources:
kinds:
- Pod
- Service
- Deployment
validate:
message: "Label 'app' is required"
pattern:
metadata:
labels:
app: "?*"
container-security-policy.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: container-security
spec:
validationFailureAction: Enforce
background: true
rules:
- name: check-runAsNonRoot
match:
any:
- resources:
kinds:
- Pod
- Deployment
validate:
message: "Containers must run as non-root"
pattern:
spec:
=(securityContext):
=(runAsNonRoot): true
containers:
- name: "*"
=(securityContext):
=(runAsNonRoot): true
- name: require-resource-limits
match:
any:
- resources:
kinds:
- Pod
- Deployment
validate:
message: "Containers must have resource limits"
pattern:
spec:
containers:
- name: "*"
resources:
limits:
memory: "?*"
cpu: "?*"
- name: disallow-privileged
match:
any:
- resources:
kinds:
- Pod
- Deployment
validate:
message: "Privileged containers are not allowed"
pattern:
spec:
containers:
- name: "*"
=(securityContext):
=(privileged): false
Mutation Policies
add-default-labels.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: add-default-labels
spec:
rules:
- name: add-managed-by-label
match:
any:
- resources:
kinds:
- Pod
- Service
- Deployment
mutate:
patchStrategicMerge:
metadata:
labels:
+(managed-by): kyverno
annotations:
# a timestamp contains ":" characters, which are not valid in label values
+(created-date): "{{ time_now_utc() }}"
- name: add-security-context
match:
any:
- resources:
kinds:
- Pod
- Deployment
mutate:
patchStrategicMerge:
spec:
+(securityContext):
+(runAsNonRoot): true
+(fsGroup): 2000
containers:
- (name): "*"
+(securityContext):
+(allowPrivilegeEscalation): false
+(capabilities):
+(drop):
- ALL
+(readOnlyRootFilesystem): true
+(runAsNonRoot): true
Generation Policies
generate-network-policy.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: generate-default-network-policy
spec:
rules:
- name: default-deny-ingress
match:
any:
- resources:
kinds:
- Namespace
exclude:
any:
- resources:
namespaces:
- kube-system
- kyverno
- kube-public
generate:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
name: default-deny-ingress
namespace: "{{ request.object.metadata.name }}"
data:
spec:
podSelector: {}
policyTypes:
- Ingress
- name: generate-limit-range
match:
any:
- resources:
kinds:
- Namespace
exclude:
any:
- resources:
namespaces:
- kube-system
- kyverno
generate:
apiVersion: v1
kind: LimitRange
name: default-limitrange
namespace: "{{ request.object.metadata.name }}"
data:
spec:
limits:
- default:
cpu: "500m"
memory: "512Mi"
defaultRequest:
cpu: "100m"
memory: "128Mi"
type: Container
Advanced Kyverno Examples
1. Image Verification Policy
image-verification.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: verify-images
spec:
validationFailureAction: Enforce
background: false
rules:
- name: verify-signature
match:
any:
- resources:
kinds:
- Pod
verifyImages:
- imageReferences:
- "gcr.io/mycompany/*"
- "registry.company.com/*"
attestors:
- entries:
- keys:
publicKeys: |-
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE...
-----END PUBLIC KEY-----
mutateDigest: true
verifyDigest: true
- name: allowed-registries
match:
any:
- resources:
kinds:
- Pod
validate:
message: "Images must come from approved registries"
pattern:
spec:
containers:
- name: "*"
image: "gcr.io/mycompany/* | registry.company.com/* | docker.io/library/*"
2. Advanced Mutation with Context
context-based-mutation.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: add-environment-config
spec:
rules:
- name: add-environment-variables
match:
any:
- resources:
kinds:
- Deployment
context:
- name: environment
variable:
jmesPath: request.object.metadata.namespace
- name: configmapdata
configMap:
name: env-config
namespace: "{{ environment }}"
mutate:
patchStrategicMerge:
spec:
template:
spec:
containers:
- (name): "*"
env:
- name: ENVIRONMENT
value: "{{ environment }}"
- name: CONFIG_VERSION
value: "{{ configmapdata.data.version }}"
- name: LOG_LEVEL
value: "{{ configmapdata.data.loglevel }}"
Monitoring Kyverno
Check Kyverno Status
# List all policies
kubectl get clusterpolicies
# Check policy status
kubectl describe clusterpolicy require-labels
# View policy violations
kubectl get events --field-selector reason=PolicyViolation
# Check Kyverno logs
kubectl logs -n kyverno -l app.kubernetes.io/name=kyverno
# Get policy reports
kubectl get polr -A # Policy Reports
kubectl get cpolr # Cluster Policy Reports
# Check specific policy report
kubectl describe polr -n default
Polaris Kubernetes Scanner: Security Best Practice Validation Tutorial
Polaris validates Kubernetes deployments against security and reliability best practices. It can run as a dashboard, CLI tool, or webhook validator.
Installation and Setup
Install Polaris
# Install dashboard using kubectl
kubectl apply -f https://github.com/FairwindsOps/polaris/releases/latest/download/dashboard.yaml
# Or install using Helm
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm install polaris fairwinds-stable/polaris \
--namespace polaris \
--create-namespace
# Install CLI tool
curl -L https://github.com/FairwindsOps/polaris/releases/latest/download/polaris_linux_amd64.tar.gz \
| tar xz
sudo mv polaris /usr/local/bin/
# Install webhook (for admission control)
helm install polaris fairwinds-stable/polaris \
--namespace polaris \
--create-namespace \
--set webhook.enable=true \
--set dashboard.enable=false
Custom Polaris Configuration
polaris-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: polaris-config
namespace: polaris
data:
config.yaml: |
resources:
cpuRequestsMissing: warning
memoryRequestsMissing: warning
cpuLimitsMissing: warning
memoryLimitsMissing: warning
images:
tagNotSpecified: error
pullPolicyNotAlways: warning
healthChecks:
readinessProbeNotSet: warning
livenessProbeNotSet: warning
networking:
hostNetworkSet: error
hostPortSet: warning
security:
hostIPCSet: error
hostPIDSet: error
notReadOnlyRootFilesystem: warning
privilegeEscalationAllowed: error
runningAsRoot: warning
capabilities:
error:
ifAnyAdded:
- SYS_ADMIN
- NET_ADMIN
- ALL
warning:
ifAnyAddedBeyond:
- CHOWN
- DAC_OVERRIDE
- FSETID
- FOWNER
- MKNOD
- NET_RAW
- SETGID
- SETUID
- SETFCAP
- SETPCAP
- NET_BIND_SERVICE
- SYS_CHROOT
- KILL
- AUDIT_WRITE
exemptions:
- controllerNames:
- kube-apiserver
- kube-proxy
- kube-scheduler
- etcd-manager-events
- kube-controller-manager
- kube-dns
- etcd-manager-main
rules:
- hostNetworkSet
- hostPortSet
- hostIPCSet
- hostPIDSet
- runningAsRoot
- controllerNames:
- kube-flannel-ds
rules:
- notReadOnlyRootFilesystem
- runningAsRoot
- capabilities
Using Polaris CLI
Polaris CLI Usage
# Audit entire cluster
polaris audit --format=pretty
# Audit specific namespace
polaris audit --format=json --namespace=default
# Audit from YAML files
polaris audit --audit-path=./k8s-manifests/
# Generate detailed report
polaris audit --format=json > polaris-audit-report.json
# Set minimum score threshold
polaris audit --set-exit-code-on-danger --set-exit-code-below-score=80
# Audit with custom config
polaris audit --config=custom-polaris-config.yaml
Custom Polaris Checks
custom-checks-config.yaml
customChecks:
requireEnvironmentLabel:
successMessage: Environment label is set
failureMessage: Environment label should be set
category: Best Practices
target: Container
schema:
'$schema': http://json-schema.org/draft-07/schema
type: object
properties:
metadata:
type: object
properties:
labels:
type: object
properties:
environment:
type: string
required:
- environment
required:
- labels
required:
- metadata
disallowLatestTag:
successMessage: Image tag is not 'latest'
failureMessage: Image tag should not be 'latest'
category: Security
target: Container
schema:
'$schema': http://json-schema.org/draft-07/schema
type: object
properties:
image:
type: string
not:
pattern: ':latest$|:latest@'
required:
- image
requireSeccompProfile:
successMessage: Seccomp profile is set
failureMessage: Seccomp profile should be set
category: Security
target: Pod
schema:
'$schema': http://json-schema.org/draft-07/schema
type: object
properties:
securityContext:
type: object
properties:
seccompProfile:
type: object
properties:
type:
type: string
enum: ["RuntimeDefault", "Localhost"]
required:
- type
required:
- seccompProfile
required:
- securityContext
Kubernetes Multi-Policy Engine Strategy: Combining Gatekeeper, Kyverno, and Polaris
Different policy engines excel at different use cases. Here's how to effectively combine them for comprehensive Kubernetes governance.
Recommended Architecture
multi-engine-strategy.yaml
# Use Case Distribution:
# 1. OPA Gatekeeper - Complex governance and compliance
# - Regulatory compliance (SOX, HIPAA, PCI-DSS)
# - Complex business logic
# - External data integration
# - Advanced RBAC policies
# 2. Kyverno - Day-to-day operations and mutations
# - Image policies and verification
# - Resource mutations and generation
# - Simple validation rules
# - Developer-friendly policies
# 3. Polaris - Security scanning and best practices
# - CI/CD security validation
# - Best practice enforcement
# - Developer education
# - Periodic security audits
# Example namespace labeling strategy:
apiVersion: v1
kind: Namespace
metadata:
name: production
labels:
policy-engine: "gatekeeper" # Use Gatekeeper for strict compliance
security-scan: "polaris" # Always scan with Polaris
environment: "production"
annotations:
kyverno.io/exclude: "true" # illustrative convention; actual exclusion is configured in each policy's exclude block
---
apiVersion: v1
kind: Namespace
metadata:
name: development
labels:
policy-engine: "kyverno" # Use Kyverno for flexibility
security-scan: "polaris" # Always scan with Polaris
environment: "development"
Policy Coordination
policy-coordination.yaml
# Kyverno policy to add Gatekeeper exemptions
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: coordinate-with-gatekeeper
spec:
rules:
- name: add-gatekeeper-exemption
match:
any:
- resources:
kinds:
- Pod
namespaces:
- development
- testing
mutate:
patchStrategicMerge:
metadata:
annotations:
# illustrative annotation; real Gatekeeper exemptions use excludedNamespaces or the Config resource
+(gatekeeper.sh/exclude): "true"
---
# Gatekeeper constraint with Kyverno coordination
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
name: production-labels-only
spec:
match:
kinds:
- apiGroups: ["apps"]
kinds: ["Deployment"]
namespaces: ["production"]
excludedNamespaces: []
parameters:
labels: ["environment", "app", "version", "team"]
---
# Polaris webhook configuration to exclude certain namespaces
apiVersion: v1
kind: ConfigMap
metadata:
name: polaris-webhook-config
namespace: polaris
data:
config.yaml: |
webhook:
namespacesExcluded:
- kube-system
- polaris
- gatekeeper-system
- kyverno
rules:
# Only run security checks in webhook mode
security:
runningAsRoot: error
privilegeEscalationAllowed: error
capabilities:
error:
ifAnyAdded: ["SYS_ADMIN", "NET_ADMIN"]
Kubernetes Policy CI/CD Integration: Automated Security Scanning Pipeline
Integrate policy validation into your deployment pipelines to catch violations before they reach your clusters.
GitHub Actions Workflow
.github/workflows/k8s-policy-validation.yml
name: Kubernetes Policy Validation
on:
pull_request:
paths:
- 'k8s/**'
- 'manifests/**'
- 'charts/**'
jobs:
policy-validation:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup kubectl
uses: azure/setup-kubectl@v3
with:
version: 'latest'
- name: Install Polaris CLI
run: |
curl -L https://github.com/FairwindsOps/polaris/releases/latest/download/polaris_linux_amd64.tar.gz | tar xz
sudo mv polaris /usr/local/bin/
- name: Install Conftest (for OPA policies)
run: |
curl -L https://github.com/open-policy-agent/conftest/releases/latest/download/conftest_linux_x86_64.tar.gz | tar xz
sudo mv conftest /usr/local/bin/
- name: Install Kyverno CLI
run: |
curl -L https://github.com/kyverno/kyverno/releases/latest/download/kyverno-cli_linux_x86_64.tar.gz | tar xz
sudo mv kyverno /usr/local/bin/
- name: Validate with Polaris
run: |
echo "Running Polaris security validation..."
polaris audit --audit-path=k8s/ --format=json > polaris-results.json
# Count error-severity results anywhere in the report
high_issues=$(jq '[.. | objects | select(.Severity? == "error")] | length' polaris-results.json)
if [ "$high_issues" -gt 0 ]; then
echo "High-severity security issues found!"
jq '[.. | objects | select(.Severity? == "error")]' polaris-results.json
exit 1
else
echo "Polaris validation passed!"
fi
- name: Validate with Conftest (OPA policies)
run: |
echo "Running OPA policy validation..."
if [ -d "policies/opa" ]; then
find k8s/ -name "*.yaml" -o -name "*.yml" | xargs conftest verify --policy policies/opa/
else
echo "No OPA policies found, skipping..."
fi
- name: Validate with Kyverno CLI
run: |
echo "Running Kyverno policy validation..."
if [ -d "policies/kyverno" ]; then
kyverno apply policies/kyverno/ --resource k8s/ --output json > kyverno-results.json
# Count failed rule results in the report
violations=$(jq '[.results[]? | select(.result == "fail")] | length' kyverno-results.json)
if [ "$violations" -gt 0 ]; then
echo "Kyverno policy violations found!"
jq '.results[]? | select(.result == "fail")' kyverno-results.json
exit 1
else
echo "Kyverno validation passed!"
fi
else
echo "No Kyverno policies found, skipping..."
fi
- name: Generate Policy Report
if: always()
run: |
echo "## Kubernetes Policy Validation Report" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
# Polaris results
if [ -f polaris-results.json ]; then
score=$(cat polaris-results.json | jq '.ClusterSummary.Score')
echo "### Polaris Security Score: ${score}%" >> $GITHUB_STEP_SUMMARY
errors=$(cat polaris-results.json | jq '.ClusterSummary.Results.Error')
warnings=$(cat polaris-results.json | jq '.ClusterSummary.Results.Warning')
successes=$(cat polaris-results.json | jq '.ClusterSummary.Results.Success')
echo "- Errors: $errors" >> $GITHUB_STEP_SUMMARY
echo "- Warnings: $warnings" >> $GITHUB_STEP_SUMMARY
echo "- Successes: $successes" >> $GITHUB_STEP_SUMMARY
fi
echo "" >> $GITHUB_STEP_SUMMARY
echo "### Policy Engines Used" >> $GITHUB_STEP_SUMMARY
echo "- Polaris: Security best practices" >> $GITHUB_STEP_SUMMARY
[ -d "policies/opa" ] && echo "- OPA: Custom governance policies" >> $GITHUB_STEP_SUMMARY
[ -d "policies/kyverno" ] && echo "- Kyverno: Validation and mutation" >> $GITHUB_STEP_SUMMARY
- name: Upload Policy Results
if: always()
uses: actions/upload-artifact@v4
with:
name: policy-validation-results
path: |
polaris-results.json
kyverno-results.json
Pre-commit Hooks
.pre-commit-config.yaml
repos:
- repo: https://github.com/FairwindsOps/polaris
rev: 5.0.0
hooks:
- id: polaris-audit
args: ['--audit-path', 'k8s/', '--set-exit-code-on-danger']
- repo: https://github.com/open-policy-agent/conftest
rev: v0.46.0
hooks:
- id: conftest-verify
args: ['--policy', 'policies/opa/']
files: ^k8s/.*\.ya?ml$
- repo: local
hooks:
- id: kyverno-validate
name: Kyverno Policy Validation
entry: kyverno
language: system
args: ['apply', 'policies/kyverno/', '--resource']
files: ^k8s/.*\.ya?ml$
pass_filenames: true
Kubernetes Policy Monitoring: Prometheus Metrics and Grafana Dashboards Setup
Set up comprehensive monitoring to track policy compliance across your Kubernetes clusters.
Prometheus Metrics Integration
policy-metrics-servicemonitor.yaml
# ServiceMonitor for Gatekeeper metrics
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: gatekeeper-metrics
namespace: gatekeeper-system
spec:
selector:
matchLabels:
control-plane: controller-manager
endpoints:
- port: metrics
interval: 30s
path: /metrics
---
# ServiceMonitor for Kyverno metrics
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: kyverno-metrics
namespace: kyverno
spec:
selector:
matchLabels:
app.kubernetes.io/name: kyverno
endpoints:
- port: metrics
interval: 30s
path: /metrics
---
# Custom policy violation exporter
apiVersion: apps/v1
kind: Deployment
metadata:
name: policy-violation-exporter
namespace: monitoring
spec:
replicas: 1
selector:
matchLabels:
app: policy-violation-exporter
template:
metadata:
labels:
app: policy-violation-exporter
spec:
containers:
- name: exporter
image: policy-exporter:latest
ports:
- containerPort: 8080
name: metrics
env:
- name: PROMETHEUS_PORT
value: "8080"
- name: SCRAPE_INTERVAL
value: "60s"
Grafana Dashboard Configuration
policy-dashboard.json
{
"dashboard": {
"title": "Kubernetes Policy Compliance",
"panels": [
{
"title": "Policy Violations by Engine",
"type": "stat",
"targets": [
{
"expr": "sum by (engine) (policy_violations_total)",
"legendFormat": "{{engine}}"
}
]
},
{
"title": "Gatekeeper Constraint Violations",
"type": "table",
"targets": [
{
"expr": "gatekeeper_violations{enforcement_action=\"deny\"}",
"format": "table"
}
]
},
{
"title": "Kyverno Policy Results",
"type": "piechart",
"targets": [
{
"expr": "sum by (result) (kyverno_policy_results_total)",
"legendFormat": "{{result}}"
}
]
},
{
"title": "Security Score Trend",
"type": "timeseries",
"targets": [
{
"expr": "polaris_security_score",
"legendFormat": "Security Score"
}
]
},
{
"title": "Top Violating Namespaces",
"type": "bargauge",
"targets": [
{
"expr": "topk(10, sum by (namespace) (policy_violations_total))",
"legendFormat": "{{namespace}}"
}
]
}
]
}
}
Kubernetes Policy Best Practices
Engine Selection
Use Gatekeeper for complex compliance requirements, Kyverno for operational automation, Polaris for security scanning and education, and avoid policy conflicts between engines.
Policy Design
Start with warn/audit mode before enforcement, use clear descriptive policy names, include helpful violation messages, and document policy rationale and exemptions.
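The audit-first rollout can be expressed in either engine. A minimal sketch (policy and constraint names here are illustrative; the constraint assumes the K8sRequiredLabels template defined earlier):

```yaml
# Kyverno: report violations without blocking admission
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-environment-label-audit
spec:
  validationFailureAction: Audit   # flip to Enforce once the report is clean
  rules:
    - name: check-environment-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Label 'environment' is required"
        pattern:
          metadata:
            labels:
              environment: "?*"
---
# Gatekeeper: the same idea via enforcementAction on a constraint
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: must-have-environment-label-dryrun
spec:
  enforcementAction: dryrun   # violations land in the constraint's status; nothing is denied
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    labels: ["environment"]
```

Once the audit reports come back clean, switch to Enforce (Kyverno) or remove enforcementAction (Gatekeeper's default is deny).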
Testing Strategy
Test policies in development environments first, use automated policy validation in CI/CD, create comprehensive test cases, and monitor policy performance impact.
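For Kyverno, test cases can live next to the policies and run in CI via `kyverno test .`. A sketch of the test manifest format (file and resource names are assumptions):

```yaml
# kyverno-test.yaml: declares expected pass/fail results per policy rule
apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: require-labels-tests
policies:
  - require-labels-policy.yaml     # the ClusterPolicy under test
resources:
  - test-resources.yaml            # sample Pods, one labeled and one not
results:
  - policy: require-labels
    rule: check-environment-label
    resources:
      - labeled-pod                # has the 'environment' label
    result: pass
  - policy: require-labels
    rule: check-environment-label
    resources:
      - unlabeled-pod              # missing the label
    result: fail
```

The test fails the CI job if any declared expectation does not match, which catches policy regressions before they reach a cluster.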
Deployment Strategy
Roll out gradually across clusters, scope policies by namespace, plan for emergency policy disabling, and review and update policies regularly.
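"Emergency policy disabling" usually means flipping enforcement off without deleting anything, so violation reporting continues. A hedged sketch of the two merge patches, applied with `kubectl patch --type=merge` (resource names match the examples earlier in this guide):

```yaml
# Gatekeeper: demote a constraint from deny to dryrun
# kubectl patch k8srequiredlabels must-have-environment-label --type=merge \
#   -p '{"spec":{"enforcementAction":"dryrun"}}'
spec:
  enforcementAction: dryrun
---
# Kyverno: demote a ClusterPolicy from Enforce to Audit
# kubectl patch clusterpolicy require-labels --type=merge \
#   -p '{"spec":{"validationFailureAction":"Audit"}}'
spec:
  validationFailureAction: Audit
```

Because the policies stay installed, the audit trail keeps accumulating and re-enabling is a one-line revert.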
Governance
Maintain a policy catalog and documentation, define clear escalation procedures, report on compliance regularly, and train teams on policy tools.
Performance
Monitor admission controller latency, optimize complex Rego policies, use appropriate failure modes, and test performance regularly.
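Admission latency is visible from the API server's side, which covers all engines at once via the `apiserver_admission_webhook_admission_duration_seconds` metric. A sketch of an alert rule (the 1s threshold is an arbitrary starting point):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: admission-webhook-latency
  namespace: monitoring
spec:
  groups:
    - name: policy-webhooks
      rules:
        - alert: AdmissionWebhookSlow
          # p99 webhook call duration as measured by the API server
          expr: |
            histogram_quantile(0.99,
              sum by (name, le) (
                rate(apiserver_admission_webhook_admission_duration_seconds_bucket[5m])
              )
            ) > 1
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Admission webhook {{ $labels.name }} p99 latency above 1s"
```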
Troubleshooting Common Issues
Admission Controller Issues
Webhook timeouts causing failures
Problem: Policy webhook timeouts causing deployment failures.
Solutions:
- Check webhook endpoint health and logs
- Increase webhook timeout settings
- Optimize policy complexity and performance
- Configure failure policy appropriately
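The timeout and failure-mode knobs live on the webhook configuration itself. A fragment showing the two fields to check (the webhook name shown is Gatekeeper's default; Kyverno's differs):

```yaml
# fragment of a ValidatingWebhookConfiguration
webhooks:
  - name: validation.gatekeeper.sh
    timeoutSeconds: 5      # API server aborts the call after this many seconds (max 30)
    failurePolicy: Ignore  # fail open: a down webhook no longer blocks deployments
```

Note that Ignore trades enforcement guarantees for availability; Fail does the opposite, so choose per policy criticality.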
Policy conflicts between engines
Problem: Multiple policy engines creating conflicting rules.
Solutions:
- Use namespace selectors to scope policies
- Coordinate exemptions between engines
- Establish clear ownership boundaries
- Document inter-engine dependencies
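Namespace selectors give each engine an explicit territory. Building on the `policy-engine` namespace label from the multi-engine strategy section above, a Kyverno rule can restrict itself to its own namespaces (label values are illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: scoped-to-kyverno-namespaces
spec:
  validationFailureAction: Audit
  rules:
    - name: only-kyverno-owned
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaceSelector:
                matchLabels:
                  policy-engine: kyverno   # only namespaces assigned to Kyverno
      validate:
        message: "Label 'app' is required"
        pattern:
          metadata:
            labels:
              app: "?*"
```

Gatekeeper constraints can mirror this with `match.namespaceSelector`, so each namespace is validated by exactly one engine per concern.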
Policy Development Issues
Rego policy not working as expected
Problem: OPA Gatekeeper Rego policies not behaving correctly.
Solutions:
- Use OPA Playground to test Rego logic
- Add debug output to policies
- Check input data structure carefully
- Test with minimal examples first
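Most Gatekeeper Rego bugs come from mis-modeling the input. When replaying a policy in the OPA Playground, it sees roughly this shape (a hand-built stand-in for the required-labels template; real AdmissionReview objects carry many more fields):

```json
{
  "parameters": {
    "labels": ["environment", "app"]
  },
  "review": {
    "object": {
      "kind": "Pod",
      "metadata": {
        "name": "test-pod",
        "labels": {
          "app": "demo"
        }
      }
    }
  }
}
```

Pasted as the input document alongside the k8srequiredlabels rules, this should produce a single violation for the missing environment label, confirming the rule logic before it ever touches a cluster.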
Congratulations!
Kubernetes Policy Engine Mastery
You now have comprehensive knowledge of Kubernetes policy engines including:
OPA Gatekeeper
Master complex compliance policies using Open Policy Agent with Rego language for enterprise governance.
Kyverno
Implement YAML-based policy management with validation, mutation, and generation capabilities.
Polaris
Validate Kubernetes deployments against security and reliability best practices with comprehensive scanning.
Multi-Engine Integration
Combine different policy engines effectively for comprehensive Kubernetes governance strategies.
CI/CD Pipeline Integration
Integrate automated security scanning pipelines with policy validation in your deployment workflows.
Monitoring and Troubleshooting
Set up comprehensive monitoring with Prometheus metrics and Grafana dashboards for policy compliance tracking.