# Keystone Operator NetworkPolicy
Reference documentation for the chart-level NetworkPolicy that restricts the egress and ingress of the keystone-operator pod itself. This is distinct from the per-CR NetworkPolicy emitted by `reconcileNetworkPolicy`, which protects Keystone API pods rendered from a Keystone CR.
- Scope: the operator Deployment pod selected by `keystone-operator.selectorLabels`.
- Chart: `operators/keystone/helm/keystone-operator`.
- Template: `templates/networkpolicy.yaml`.
- Values schema: the authoritative contract for all tunables is `values.schema.json`.
## Overview
When `networkPolicy.enabled=true`, the chart renders one `networking.k8s.io/v1` NetworkPolicy that:
- Default-denies both directions for the operator pod by listing `Ingress` and `Egress` in `policyTypes` without a catch-all rule.
- Opens explicit egress to the kube-apiserver (required for all controller-runtime clients and leader election).
- Opens explicit egress to cluster DNS (required to resolve Service DNS names used by ESO, MariaDB, and any hostname-based client config).
- When `webhook.enabled=true`, opens explicit ingress on TCP 9443 from the API-server CIDRs so admission webhooks are reachable.
- When `networkPolicy.allowMetricsFrom` is non-empty, opens explicit ingress on the metrics port from the listed peers (opt-in).
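Putting those bullets together, the rendered object has roughly this shape. This is a sketch, not the chart's verbatim output: the selector labels shown are assumptions, and the real name comes from the chart's fullname helper.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: keystone-operator            # rendered from the chart's fullname helper
spec:
  podSelector:
    matchLabels:                     # keystone-operator.selectorLabels (assumed values)
      app.kubernetes.io/name: keystone-operator
  policyTypes:                       # both directions listed, no catch-all rule:
    - Ingress                        # anything not explicitly opened below is denied
    - Egress
  # explicit rules follow: DNS egress, kube-apiserver egress,
  # webhook ingress (TCP 9443), and opt-in metrics ingress
```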
Two failure modes are explicitly refused by the template at render time (fail-closed): an empty `kubeApiServer.cidrs` or an empty `kubeApiServer.ports` list while `enabled=true`. Either condition triggers a render-time error rather than rendering a NetworkPolicy that would block all controller traffic or open every port.
## Default-off posture
`networkPolicy.enabled` defaults to `false`. Rationale:
- Many production clusters run CNIs that do not enforce NetworkPolicy (kindnet without extension, Flannel without kube-router, etc.). Opting in by default would silently provide no protection on those clusters while adding surface area on clusters that do enforce it.
- Both `kubeApiServer.cidrs` and `kubeApiServer.ports` are cluster-specific. There is no safe default that works across kind, GKE, EKS, AKS, and on-prem kubeadm installations.
- Operators upgrading from earlier chart versions keep working with no change in values.
Operators must explicitly opt in by setting `networkPolicy.enabled=true` and populating `kubeApiServer.cidrs` / `kubeApiServer.ports`. The how-to guide walks through the enablement steps.
## Rules rendered
All rules below are emitted on a single NetworkPolicy object named `{{ include "keystone-operator.fullname" . }}` in the release namespace, with `spec.podSelector` matching the operator Deployment's selector labels.
### Egress
| Direction | Protocol / Port | Peer | Values key | Default | Gated by |
|---|---|---|---|---|---|
| Egress | UDP 53 + TCP 53 | namespaceSelector + podSelector | networkPolicy.dns.namespaceSelector, networkPolicy.dns.podSelector | kubernetes.io/metadata.name: kube-system + k8s-app: kube-dns | networkPolicy.dns.enabled (default true) |
| Egress | TCP <ports[*]> | ipBlock[*] | networkPolicy.kubeApiServer.cidrs, networkPolicy.kubeApiServer.ports | [] (must be set when enabled=true) | Always when networkPolicy.enabled=true |
**One rule, N×M tuples.** The kube-apiserver rule emits a single `egress` entry with all CIDRs under `to:` and all ports under `ports:`. By NetworkPolicy semantics this permits every (cidr, port) combination: one rule with three CIDRs and two ports covers six tuples. Do not expand the list into one rule per tuple.
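As a sketch with illustrative CIDRs and ports (not defaults from the chart), the single-rule form looks like:

```yaml
egress:
  - to:                          # all CIDRs in one rule...
      - ipBlock:
          cidr: 10.0.0.10/32
      - ipBlock:
          cidr: 10.0.0.11/32
      - ipBlock:
          cidr: 10.0.0.12/32
    ports:                       # ...all ports in the same rule
      - protocol: TCP
        port: 6443
      - protocol: TCP
        port: 443
# permits every (cidr, port) pair: 3 CIDRs x 2 ports = 6 tuples
```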
### Ingress
| Direction | Protocol / Port | Peer | Values key | Default | Gated by |
|---|---|---|---|---|---|
| Ingress | TCP 9443 | ipBlock[*] | networkPolicy.webhookClients.cidrs (falls back to networkPolicy.kubeApiServer.cidrs when empty) | fallback to kubeApiServer.cidrs | webhook.enabled (default true) |
| Ingress | TCP <metrics.port> | each entry from allowMetricsFrom rendered verbatim as a NetworkPolicyPeer | networkPolicy.allowMetricsFrom | [] | Non-empty allowMetricsFrom |
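For the metrics row, each `allowMetricsFrom` entry is rendered verbatim as a `NetworkPolicyPeer`. A sketch of the resulting rule, assuming a Prometheus scraper in a `monitoring` namespace and a metrics port of 8443 (both illustrative, not chart defaults):

```yaml
ingress:
  - from:                        # one peer per allowMetricsFrom entry, verbatim
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: monitoring
        podSelector:             # ANDed with namespaceSelector in the same peer
          matchLabels:
            app.kubernetes.io/name: prometheus
    ports:
      - protocol: TCP
        port: 8443               # metrics.port (assumed value)
```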
### Not covered: health probes (port 8081)
The operator exposes liveness and readiness probes on TCP 8081. These probes are called by the kubelet from the node's host network namespace, which is not subject to NetworkPolicy in the standard CNIs (Calico, Cilium, Antrea) — see the upstream Kubernetes NetworkPolicy "what you can't do" list. Therefore the template renders no ingress rule for 8081, and adding one is unnecessary. Probes continue to work with the default-deny ingress posture.
## Values snippet: kind cluster
The following snippet enables the policy on a local kind cluster, whose API server endpoint is reachable at 10.96.0.1:6443 via the built-in kubernetes Service:
```yaml
networkPolicy:
  enabled: true
  kubeApiServer:
    cidrs:
      - 10.96.0.1/32
    ports:
      - 6443
  # dns, allowMetricsFrom, webhookClients left at defaults
```

This same snippet is the fixture used by the chart-level E2E test at `tests/e2e/keystone-operator/network-policy-egress/00-install-operator.yaml` and is the minimum viable configuration on kind.
## Production example
Production clusters typically have the API server behind a VIP or NLB outside the cluster CIDR. Discover the endpoint with:
```shell
kubectl get endpoints kubernetes -o json \
  | jq -r '.subsets[] | .addresses[].ip as $ip | .ports[].port as $p | "\($ip) \($p)"'
```

and set `cidrs`/`ports` accordingly. Metrics scraping peers are added explicitly:
```yaml
networkPolicy:
  enabled: true
  kubeApiServer:
    cidrs: ["10.0.0.10/32", "10.0.0.11/32", "10.0.0.12/32"]
    ports: [6443]
  allowMetricsFrom:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app.kubernetes.io/name: prometheus
```

## Fail-closed guards
The template mirrors the defensive guard convention from the per-CR NetworkPolicy sub-reconciler (`operators/keystone/internal/controller/reconcile_networkpolicy.go`, lines ~61-63). If `networkPolicy.enabled=true` but either `kubeApiServer.cidrs` or `kubeApiServer.ports` is empty, Helm render aborts with one of:
```text
Error: execution error at (keystone-operator/templates/networkpolicy.yaml):
networkPolicy.kubeApiServer.cidrs must not be empty when networkPolicy.enabled=true:
refusing to render a NetworkPolicy that would block all kube-apiserver egress
```

```text
Error: execution error at (keystone-operator/templates/networkpolicy.yaml):
networkPolicy.kubeApiServer.ports must not be empty when networkPolicy.enabled=true:
refusing to render a NetworkPolicy that would open all ports to kube-apiserver
```

The JSON schema (`values.schema.json`) also enforces `minItems: 1` on both lists when `enabled=true`, but schema validation can be bypassed with `helm --skip-schema-validation`. The template guard is the defense-in-depth backstop that catches that case.
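A guard of this kind is typically a small `fail` block at the top of the template. The following is a minimal sketch of the convention, not the chart's verbatim source:

```yaml
{{- if .Values.networkPolicy.enabled }}
{{- if empty .Values.networkPolicy.kubeApiServer.cidrs }}
{{- fail "networkPolicy.kubeApiServer.cidrs must not be empty when networkPolicy.enabled=true: refusing to render a NetworkPolicy that would block all kube-apiserver egress" }}
{{- end }}
{{- if empty .Values.networkPolicy.kubeApiServer.ports }}
{{- fail "networkPolicy.kubeApiServer.ports must not be empty when networkPolicy.enabled=true: refusing to render a NetworkPolicy that would open all ports to kube-apiserver" }}
{{- end }}
# ...NetworkPolicy manifest follows...
{{- end }}
```

Because `fail` aborts the whole render, `helm template`, `helm install`, and `helm upgrade` all stop before any manifest is produced, which is what makes the guard fail-closed.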
## Testing
| File | Scope |
|---|---|
| `operators/keystone/helm/keystone-operator/tests/networkpolicy_test.yaml` | helm-unittest suite: default-off, enabled-on, DNS rule, kube-apiserver rule, webhook ingress, metrics ingress, no 8081 rule, fail-closed guards |
| `operators/keystone/helm/keystone-operator/tests/schema_validation_test.yaml` | Schema tests for `networkPolicy` (invalid CIDR, out-of-range port, non-boolean `enabled`) |
| `tests/e2e/keystone-operator/network-policy-egress/chainsaw-test.yaml` | Chainsaw E2E: operator installed with `networkPolicy.enabled=true` on kind still reconciles a minimal Keystone CR to `Ready=True` |
Run the helm-unittest suite locally with:
```shell
helm unittest operators/keystone/helm/keystone-operator \
  -f 'tests/networkpolicy_test.yaml'
```

## Related
- How to enable the keystone-operator NetworkPolicy — step-by-step enablement, verification, and troubleshooting.
- Keystone Reconciler Architecture — including the per-CR `reconcileNetworkPolicy` sub-reconciler, which protects Keystone API pods (not the operator itself).
- Upstream: Kubernetes NetworkPolicy concepts.