Complete Guide: How to Write Your First Kubernetes Policy
A step-by-step tutorial on writing, testing, and deploying your first Open Policy Agent (OPA) Gatekeeper policy to enforce custom rules on your Kubernetes cluster.
📋 Prerequisites
- A running Kubernetes cluster (Minikube, Kind, Docker Desktop, or a cloud provider).
- `kubectl` command-line tool installed and configured to access your cluster.
- OPA Gatekeeper installed on your cluster. If you don't have it, you can install it with:

```shell
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/release-3.14/deploy/gatekeeper.yaml
```

- A basic understanding of Kubernetes concepts like Deployments and Labels.
- Read: What is Policy-as-Code?
🎯 What You'll Learn
- The structure of a Gatekeeper ConstraintTemplate and Constraint.
- How to write simple Rego logic to validate Kubernetes resources.
- A repeatable process for testing your policies with sample data before deployment.
- How to deploy policies to your cluster and verify they are working.
- Basic commands for monitoring and debugging your new policy.
Introduction: Why Do We Need Policies?
As your Kubernetes environment grows, ensuring consistency and security becomes a major challenge. How do you prevent a developer from accidentally exposing a service to the internet? How do you make sure every application has a label identifying the team that owns it?
This is where policy-as-code comes in. Using OPA Gatekeeper, an admission controller for Kubernetes, we can define rules (policies) that automatically check every resource created or updated in the cluster. If a resource violates a policy, Gatekeeper will reject it.
In this guide, we will write our very first policy. The goal is simple but practical: ensure every new Deployment has a team label.
🎯 Our Policy Goal
We will create a policy that ensures every new Deployment has a team label. This helps with:
- Resource ownership tracking
- Cost allocation and billing
- Security and compliance auditing
- Operational responsibility assignment
Understanding the Policy Structure: ConstraintTemplate and Constraint
In Gatekeeper, a policy is made of two parts. Understanding this separation is the key to mastering Kubernetes policies.
๐น ConstraintTemplate
This is the policy blueprint. It contains the generic Rego logic to identify a violation. Think of it as a function definition; it defines what to check but doesn't run on its own.
๐น Constraint
This is the policy instance. It tells Gatekeeper to actually *use* a specific ConstraintTemplate, which resources to check (e.g., only Deployments), and any parameters needed. Think of it as a function call.
ConstraintTemplate
- Defines the Rego logic
- Specifies parameters schema
- Reusable across multiple constraints
- Template-level configuration
Constraint
- Applies the template to resources
- Provides specific parameters
- Defines scope (which resources)
- Instance-level enforcement
Writing the ConstraintTemplate: Our First Rego Logic
Let's create the blueprint for our "required labels" policy. This file defines the logic to find resources that are missing a label.
Create a file named `ct-required-labels.yaml`:

ct-required-labels.yaml

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # This defines the parameters that our Constraint can accept.
        # We want to pass in a list of strings for the required label keys.
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      # This is the Rego logic that defines a "violation".
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          # Get the labels on the resource being reviewed by Gatekeeper
          provided := {label | input.review.object.metadata.labels[label]}
          # Get the list of required labels from the Constraint's parameters
          required := {label | label := input.parameters.labels[_]}
          # Find the set of labels that are required but not provided
          missing := required - provided
          # If the 'missing' set is not empty, we have a violation
          count(missing) > 0
          # This is the error message that will be shown to the user
          msg := sprintf("You are missing required labels: %v", [missing])
        }
```

📝 Key Takeaways from this file

- The `metadata.name` (`k8srequiredlabels`) is how we'll reference this template later.
- The `rego` block contains the core logic. It compares the set of required labels (passed in as a parameter) with the set of labels provided on the Kubernetes resource.
- If any required labels are missing, it generates a `violation` with a helpful error message.
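The set arithmetic at the heart of the `rego` block can be mirrored in a few lines of plain Python to build intuition for how it behaves. This is purely an illustration of the same "required minus provided" logic, not anything Gatekeeper runs:

```python
def find_missing_labels(resource, required):
    """Mirror of the Rego rule: report required labels absent from a resource's metadata."""
    provided = set(resource.get("metadata", {}).get("labels", {}))
    missing = set(required) - provided
    if not missing:
        return None  # empty set -> no violation, matching an empty Rego result
    return {
        "msg": f"You are missing required labels: {sorted(missing)}",
        "details": {"missing_labels": sorted(missing)},
    }

# A Deployment carrying only an 'app' label violates a ["team"] requirement:
bad = {"metadata": {"name": "nginx-bad", "labels": {"app": "nginx"}}}
print(find_missing_labels(bad, ["team"]))
```

Adding a `team` label to the input makes the function return `None`, just as the Rego rule produces no violation.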
Testing the Policy Before You Deploy
Before deploying a policy to a live cluster, you should always test it. The OPA Playground is a perfect tool for this.
Set Up the OPA Playground
Copy the Rego code from the `rego:` block above and paste it into the main editor window of the OPA Playground.
Add Test Input
In the INPUT panel on the right, paste JSON that simulates an invalid Kubernetes Deployment being submitted to the API server.
Evaluate Policy
Click the "Evaluate" button to see if your policy correctly identifies the violation.
Verify Results
Check that the output shows the expected violation message, confirming your logic works correctly.
Test Input (Invalid Deployment)

```json
{
  "review": {
    "object": {
      "apiVersion": "apps/v1",
      "kind": "Deployment",
      "metadata": {
        "name": "nginx-bad",
        "labels": {
          "app": "nginx"
        }
      }
    }
  },
  "parameters": {
    "labels": ["team"]
  }
}
```

Click the "Evaluate" button. You will see the following JSON output, correctly identifying the violation:
Playground Output

```json
[
  {
    "msg": "You are missing required labels: [\"team\"]",
    "details": {
      "missing_labels": [
        "team"
      ]
    }
  }
]
```

This confirms our logic is working correctly! If you were to add a `team` label to the input and re-evaluate, the output would be empty, meaning no violation was found.
Deploying the Policy to Your Cluster
Now that we've tested our logic, let's deploy the policy.
A. Apply the ConstraintTemplate
First, apply the template to your cluster so Gatekeeper knows about our new policy logic.
Deploy ConstraintTemplate

```shell
kubectl apply -f ct-required-labels.yaml
```

B. Apply the Constraint
Next, create the Constraint to actually enforce the policy. This file tells Gatekeeper to use our template to check all Deployments.
Create a file named `c-require-team-label.yaml`:

c-require-team-label.yaml

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels # This matches the 'kind' in our ConstraintTemplate
metadata:
  name: deployment-must-have-team-label
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
  parameters:
    labels:
      - "team" # This is the specific label key we are requiring
```

Apply it to the cluster:
Deploy Constraint

```shell
kubectl apply -f c-require-team-label.yaml
```

C. Test the Enforcement
Our policy is now live! Let's try to create a Deployment that violates it. Create `deployment-invalid.yaml`:
deployment-invalid.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-invalid
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx # Missing the 'team' label!
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
```

Now, try to apply it:
Test Invalid Deployment

```shell
kubectl apply -f deployment-invalid.yaml
```

Gatekeeper will reject it with an error message, proving our policy works!

Expected Error Output

```
Error from server (Forbidden): error when creating "deployment-invalid.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [deployment-must-have-team-label] You are missing required labels: ["team"]
```

Monitoring and Auditing Your Policy
Once a policy is deployed, how do you see if existing resources are in violation? Gatekeeper's audit feature runs periodically and reports violations on the Constraint object itself.
You can check the status of your constraint by running:
Check Constraint Status

```shell
kubectl get k8srequiredlabels deployment-must-have-team-label -o yaml
```

In the output, you can look at the `status.totalViolations` field to see a count of non-compliant resources that already existed in the cluster before the policy was applied.
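If you'd rather script this check, fetching the constraint as JSON (`-o json` instead of `-o yaml`) makes the status easy to parse. A minimal sketch, using an illustrative, hand-written sample of the status block — the resource names here are hypothetical, not output from a real cluster:

```python
import json

# Illustrative sample shaped like the `status` section Gatekeeper's audit
# writes onto the constraint object (totalViolations plus a violations list).
raw = """
{
  "status": {
    "totalViolations": 2,
    "violations": [
      {"kind": "Deployment", "name": "legacy-app", "message": "You are missing required labels: [\\"team\\"]"},
      {"kind": "Deployment", "name": "old-worker", "message": "You are missing required labels: [\\"team\\"]"}
    ]
  }
}
"""

status = json.loads(raw).get("status", {})
print(f"Total violations: {status.get('totalViolations', 0)}")
for v in status.get("violations", []):
    print(f"- {v['kind']}/{v['name']}: {v['message']}")
```

In practice you would pipe `kubectl get k8srequiredlabels deployment-must-have-team-label -o json` into a script like this instead of the hard-coded sample.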
If you need to debug the policy engine itself, you can check the logs of the Gatekeeper pods in the `gatekeeper-system` namespace.
Debug Gatekeeper Logs
# Check Gatekeeper controller logs
kubectl logs -n gatekeeper-system deployment/gatekeeper-controller-manager
# Check audit logs
kubectl logs -n gatekeeper-system deployment/gatekeeper-audit๐ Monitoring Best Practices
- Regularly check constraint status for violation counts
- Monitor Gatekeeper controller logs for policy errors
- Set up alerts for policy violations in production
- Review audit logs to understand resource compliance trends
🎉 Congratulations!
Kubernetes Policy Mastery Achieved
You have successfully written, tested, and deployed your first Kubernetes policy with OPA Gatekeeper. You now have the fundamental skills to:
Structure Gatekeeper Policies
Understand and create ConstraintTemplates and Constraints for policy enforcement.
Write Basic Rego
Create Rego logic to validate resource attributes and generate meaningful violation messages.
Test Before Deploy
Use the OPA Playground and other tools to validate policies before deploying them to production.
Enforce and Monitor
Deploy policies to live clusters, prevent misconfigurations, and monitor policy violations effectively.
Next Steps
🚀 Advanced Policy Ideas to Explore
From here, you can explore more complex policies, such as:
- Restricting container images to trusted registries.
- Disallowing the use of the `latest` image tag.
- Enforcing resource requests and limits on Pods.
- Validating Ingress hostnames against an approved list.
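To make the second idea concrete: the decision a "no `latest` tag" rule has to encode is simple string logic on the image reference. Here is a Python sketch of that check, purely illustrative — in a real policy this logic would live in the Rego of a ConstraintTemplate inspecting each container's `image` field:

```python
def uses_latest_tag(image):
    """Return True if a container image reference resolves to a mutable 'latest' tag."""
    if "@sha256:" in image:
        return False  # pinned to an immutable digest
    # Look only at the part after the last '/', so a registry port
    # ('registry:5000/app') is not mistaken for a tag.
    name = image.rsplit("/", 1)[-1]
    if ":" not in name:
        return True  # no tag at all defaults to :latest
    return name.rsplit(":", 1)[-1] == "latest"

print(uses_latest_tag("nginx:1.14.2"))  # a pinned tag passes the check
print(uses_latest_tag("nginx"))         # an untagged image defaults to :latest
```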
The principles you learned in this guide are the foundation for building a robust and secure Kubernetes governance strategy.