Advanced · 30 min read · cloud-security · Updated: 2025-07-18

OPA Gatekeeper Tutorial

A step-by-step guide to installing, configuring, and writing your first policies with Open Policy Agent (OPA) Gatekeeper on Kubernetes.

🏷️ Topics Covered

opa gatekeeper tutorial, kubernetes policy enforcement, rego policy examples, gatekeeper constrainttemplate, kubernetes admission controller, opa gatekeeper installation helm, gatekeeper constraints, kubernetes security policies

What is OPA Gatekeeper?

Open Policy Agent (OPA) Gatekeeper is a Kubernetes-native policy engine that enforces policies written in Rego, OPA's policy language. It runs as a validating (and optionally mutating) admission webhook, intercepting requests to the Kubernetes API server *before* an object is created or updated. This "shift-left" approach prevents non-compliant resources from ever entering your cluster.

🎯 Core Concept

Gatekeeper extends Kubernetes with new custom resource definitions (CRDs) that let you define and enforce policies without modifying your application code. It is the bridge between the general-purpose OPA engine and the Kubernetes API.
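
Once Gatekeeper is installed (covered below), these CRDs are visible like any ordinary API resource. A minimal sanity check, assuming a standard installation (exact CRD names can vary by Gatekeeper version):

# List the CRDs Gatekeeper registers
kubectl get crd | grep gatekeeper.sh
# Expect entries such as constrainttemplates.templates.gatekeeper.sh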

Core Concepts

Understanding two key resources is essential to using Gatekeeper:

📜 ConstraintTemplate

The blueprint for a policy. It contains the Rego code that defines the policy logic and the schema for the parameters that will be passed to it.

⚙️ Constraint

An instance of a `ConstraintTemplate`. This is where you specify which resources the policy applies to (e.g., Deployments in the `production` namespace) and provide the parameters defined in the template.
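
To make the relationship concrete, here is a minimal sketch of the pairing. The kind `K8sExampleKind` and its match criteria are hypothetical placeholders; full working examples follow below.

# Sketch only: a Constraint instantiates the kind declared by its template
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sExampleKind            # must match spec.crd.spec.names.kind in the template
metadata:
  name: example-instance
spec:
  match:                        # which resources this policy applies to
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
    namespaces: ["production"]
  parameters: {}                # values matching the schema declared in the template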

Installation and Setup

The recommended way to install Gatekeeper is with its official Helm chart, which simplifies configuration and upgrades.

# 1. Add the Gatekeeper Helm repository
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts

# 2. Update your Helm repositories
helm repo update

# 3. Install Gatekeeper into its own namespace
helm install gatekeeper gatekeeper/gatekeeper \
  --namespace gatekeeper-system \
  --create-namespace

# 4. Verify the installation
kubectl get pods --namespace gatekeeper-system
# You should see the audit and controller-manager pods running.
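
It is also worth confirming that the admission webhook itself was registered, since without it no policies are evaluated. A quick check, assuming the chart's default resource names:

# The validating webhook should be registered with the API server
kubectl get validatingwebhookconfigurations
# Expect an entry like gatekeeper-validating-webhook-configuration

# The core CRDs should be established
kubectl get crd constrainttemplates.templates.gatekeeper.sh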

Writing Your First Policy: Require Labels

Let's create a common policy: ensuring all new namespaces have an `owner` label. This is a multi-step process that demonstrates the power of Gatekeeper's architecture.

Step 1: Define the ConstraintTemplate

First, we create the template. This defines the Rego logic to check for missing labels and specifies that a `labels` array will be a parameter.

# k8srequiredlabels-template.yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("You must provide labels: %v", [missing])
        }

Step 2: Apply the ConstraintTemplate

Apply the template to your cluster so Gatekeeper knows about this new type of policy.

kubectl apply -f k8srequiredlabels-template.yaml
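
Gatekeeper compiles the template and generates a new CRD for the `K8sRequiredLabels` kind. A quick sanity check (the generated CRD name follows Gatekeeper's `<lowercased kind>.constraints.gatekeeper.sh` convention):

# Confirm the template was accepted
kubectl get constrainttemplate k8srequiredlabels

# Gatekeeper should have generated a CRD for the new constraint kind
kubectl get crd k8srequiredlabels.constraints.gatekeeper.sh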

Step 3: Create the Constraint

Now, create an instance of the template. This `Constraint` targets all `Namespace` resources and passes the `owner` label as a parameter.

# ns-must-have-owner-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]

Step 4: Apply the Constraint

Apply the constraint to start enforcing the policy.

kubectl apply -f ns-must-have-owner-constraint.yaml
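
Beyond blocking new requests, Gatekeeper's audit controller periodically scans existing resources and records violations in the constraint's status. One way to inspect it:

# Audit results appear under status.violations (and status.totalViolations)
kubectl get k8srequiredlabels ns-must-have-owner -o yaml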

Step 5: Test the Policy

Try to create a namespace that violates the policy. The Kubernetes API server will reject it with a message from Gatekeeper.

# bad-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-ns
---
# Try to apply it
$ kubectl apply -f bad-ns.yaml

Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-owner] You must provide labels: {"owner"}
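
Adding the required label satisfies the policy. The same namespace with an `owner` label (the value here is an illustrative placeholder) should be admitted:

# good-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-ns
  labels:
    owner: platform-team   # any owner label satisfies this constraint

$ kubectl apply -f good-ns.yaml
namespace/test-ns created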

Advanced Policy Examples

1. Disallow the 'latest' Image Tag

Using the `:latest` tag in production is a bad practice. This policy blocks any pod that uses it.

# k8sdisallowlatesttag-template.yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdisallowlatesttag
spec:
  crd:
    spec:
      names:
        kind: K8sDisallowLatestTag
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdisallowlatesttag

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          endswith(container.image, ":latest")
          msg := sprintf("Container <%v> uses the 'latest' tag which is not allowed.", [container.name])
        }
---
# no-latest-tag-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDisallowLatestTag
metadata:
  name: no-latest-tag
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]

2. Enforce Container Resource Limits

This policy ensures every container has both CPU and memory limits defined.

# k8sresourcelimits-template.yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sresourcelimits
spec:
  crd:
    spec:
      names:
        kind: K8sResourceLimits
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sresourcelimits

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.resources.limits.cpu
          msg := sprintf("Container <%v> is missing cpu limits.", [container.name])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.resources.limits.memory
          msg := sprintf("Container <%v> is missing memory limits.", [container.name])
        }
---
# require-resource-limits-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sResourceLimits
metadata:
  name: require-resource-limits
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - "production"

Troubleshooting Common Issues

Problem: Policy is applied but doesn't block non-compliant resources.

  • Check `enforcementAction`: By default, constraints are set to `deny`. If it was changed to `dryrun` (or `warn`), Gatekeeper only reports violations instead of blocking them; see the snippet after this list.
  • Check `match` clauses: Ensure the `kinds` and `namespaces` in your `Constraint` correctly target the resources you want to validate. A common mistake is a typo in a kind or namespace.
  • Check Webhook Status: Run `kubectl get validatingwebhookconfigurations` to ensure the Gatekeeper webhook is active and configured correctly.
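
One way to check the effective enforcement action on a constraint (an empty result means the default, `deny`, is in effect):

kubectl get k8srequiredlabels ns-must-have-owner \
  -o jsonpath='{.spec.enforcementAction}'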

Problem: My Rego works in the OPA Playground but fails in the ConstraintTemplate.

  • Check the Input Path: In the playground, you might test with `input.metadata`. Inside a Gatekeeper `ConstraintTemplate`, the resource under review lives at `input.review.object`, so your Rego should reference `input.review.object.metadata` (see the sketch after this list).
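
In other words, a playground-style lookup needs to be rewritten for Gatekeeper's AdmissionReview wrapper. A sketch of the difference:

# OPA Playground style (the resource is the whole input document)
value := input.metadata.labels["owner"]

# Gatekeeper ConstraintTemplate style (the resource is wrapped in an AdmissionReview)
value := input.review.object.metadata.labels["owner"]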

🎉 Congratulations!

You now have a solid foundation for using OPA Gatekeeper to enforce custom policies in your Kubernetes cluster. You've learned how to:

  • ✅ Install Gatekeeper and understand its core concepts.
  • ✅ Write `ConstraintTemplates` with Rego and instantiate them with `Constraints`.
  • ✅ Create and test a real-world policy to require labels.
  • ✅ Implement advanced policies for security and operational best practices.