Chainsaw E2E Tests
Reference documentation for the Kyverno Chainsaw end-to-end test suite that validates the Memcached operator against a real kind cluster, covering deployment, scaling, configuration changes, monitoring, PDB management, graceful rolling updates, webhook validation, garbage collection, SASL authentication, TLS encryption, mutual TLS (mTLS), NetworkPolicy lifecycle, Service annotation propagation, status degraded detection, scale-to-zero behavior, owner reference GC chain validation, and HPA autoscaling lifecycle.
Source: test/e2e/
Overview
The E2E test suite exercises the operator end-to-end by deploying it to a kind cluster and applying Memcached custom resources via kubectl. Unlike envtest integration tests that run against an in-process API server, these tests validate the full operator lifecycle including controller watches, leader election, webhook TLS, and Kubernetes garbage collection.
The suite uses Kyverno Chainsaw v0.2.12, a declarative Kubernetes E2E testing framework. Each test scenario is defined in YAML with steps that apply resources, patch them, and assert on the resulting cluster state using partial object matching.
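Each scenario directory pairs a test definition with numbered step files. A minimal sketch of a `chainsaw-test.yaml` modeled on the basic-deployment scenario — the `apiVersion`/`kind` are standard Chainsaw, but the exact metadata and step layout in this repo may differ:

```yaml
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
  name: basic-deployment
spec:
  steps:
  - name: create-memcached-cr
    try:
    - apply:
        file: 00-memcached.yaml
  - name: assert-deployment-created
    try:
    - assert:
        file: 01-assert-deployment.yaml
  - name: assert-status-available
    try:
    - assert:
        file: 02-assert-status.yaml
```

Assertion files contain partial objects: Chainsaw fetches the live resource and checks only the fields present in the file.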
Test Infrastructure
Chainsaw Configuration (.chainsaw.yaml)
The global configuration at the project root controls timeouts and execution:
```yaml
apiVersion: chainsaw.kyverno.io/v1alpha2
kind: Configuration
metadata:
  name: memcached-operator-e2e
spec:
  timeouts:
    apply: 30s    # Resource creation timeout
    assert: 120s  # Assertion timeout (allows for pod scheduling)
    cleanup: 60s  # Namespace cleanup timeout
    delete: 30s   # Deletion timeout
    error: 30s    # Error assertion timeout
  cleanup:
    skipDelete: false
  execution:
    failFast: true  # Stop on first failure
    parallel: 1     # Sequential execution across test cases
  discovery:
    testDirs:
    - test/e2e
```

Key timeout rationale:
- assert: 120s — Pod scheduling and readiness can vary significantly in CI; 120s accommodates slow schedulers without producing spurious failures.
- cleanup: 60s — Allows Kubernetes garbage collection to cascade through owner references before the namespace is force-deleted.
- parallel: 1 — Tests run sequentially to avoid resource contention on small kind clusters.
Makefile Target
`make test-e2e` downloads Chainsaw v0.2.12 via `go install` (using the same `go-install-tool` pattern as controller-gen, kustomize, and other project tools) and runs the test suite:

```make
CHAINSAW ?= $(LOCALBIN)/chainsaw
CHAINSAW_VERSION ?= v0.2.12

.PHONY: test-e2e
test-e2e: chainsaw ## Run end-to-end tests against a kind cluster using Chainsaw.
	$(CHAINSAW) test
```

Prerequisites
Before running make test-e2e, the following must be in place:
| Prerequisite | Purpose | Setup Command |
|---|---|---|
| kind cluster running | Target cluster for tests | kind create cluster |
| Operator deployed | Controller manager running in cluster | make deploy IMG=<image> |
| cert-manager installed | Webhook TLS certificates | kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.1/cert-manager.yaml |
| ServiceMonitor CRD | Required for monitoring-toggle test | Install via Prometheus Operator CRDs |
Shared Test Fixtures (test/e2e/resources/)
Reusable YAML templates referenced by multiple test scenarios:
| File | Purpose |
|---|---|
| `memcached-minimal.yaml` | Minimal valid Memcached CR (1 replica, `memcached:1.6`, `maxMemoryMB=64`) |
| `assert-deployment.yaml` | Partial Deployment assertion (labels, replicas, container args, port) |
| `assert-service.yaml` | Partial headless Service assertion (`clusterIP: None`, port 11211, selectors) |
| `assert-status-available.yaml` | Status assertion (`readyReplicas: 1`, Available=True) |
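A sketch of what `memcached-minimal.yaml` plausibly contains — the API group/version (`cache.example.com/v1alpha1`) and the nesting of `maxMemoryMB` under `spec.memcached` are assumptions, not confirmed by this document:

```yaml
apiVersion: cache.example.com/v1alpha1  # placeholder group/version
kind: Memcached
metadata:
  name: memcached-minimal
spec:
  replicas: 1
  image: memcached:1.6
  memcached:
    maxMemoryMB: 64
```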
File Structure
test/e2e/
├── autoscaling-disable/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with autoscaling enabled (test-autoscaling-disable)
│ ├── 01-assert-hpa.yaml # HPA exists before disabling
│ ├── 02-patch-disable.yaml # Patch autoscaling.enabled=false, replicas=3
│ ├── 03-error-hpa-gone.yaml # HPA deleted assertion
│ └── 03-assert-deployment.yaml # Deployment with replicas=3
├── autoscaling-enable/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with autoscaling enabled (test-autoscaling-enable)
│ ├── 01-assert-hpa.yaml # HPA with scaleTargetRef, metrics, behavior
│ ├── 01-assert-deployment.yaml # Deployment without hardcoded replicas
│ └── 02-assert-status.yaml # Status condition assertions
├── autoscaling-update/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with autoscaling min=2, max=10 (test-autoscaling-update)
│ ├── 01-assert-hpa.yaml # HPA with initial minReplicas=2, maxReplicas=10
│ ├── 02-patch-update.yaml # Patch minReplicas=3, maxReplicas=15
│ └── 03-assert-hpa-updated.yaml # HPA with updated bounds
├── basic-deployment/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # Minimal CR (test-basic)
│ ├── 01-assert-deployment.yaml # Deployment assertions
│ ├── 01-assert-service.yaml # Service assertions
│ └── 02-assert-status.yaml # Status condition assertions
├── scaling/
│ ├── chainsaw-test.yaml
│ ├── 00-memcached.yaml # CR with replicas=1
│ ├── 01-assert-one-replica.yaml
│ ├── 02-patch-scale-up.yaml # Patch replicas to 3
│ ├── 03-assert-three-replicas.yaml
│ ├── 03-assert-status-scaled.yaml
│ ├── 04-patch-scale-down.yaml # Patch replicas to 1
│ └── 05-assert-one-replica.yaml
├── configuration-changes/
│ ├── chainsaw-test.yaml
│ ├── 00-memcached.yaml # CR with default config
│ ├── 01-assert-initial-args.yaml
│ ├── 02-patch-config.yaml # Patch maxMemoryMB, threads, maxItemSize
│ └── 03-assert-updated-args.yaml
├── monitoring-toggle/
│ ├── chainsaw-test.yaml
│ ├── 00-memcached.yaml # CR without monitoring
│ ├── 01-assert-no-exporter.yaml
│ ├── 02-patch-enable-monitoring.yaml
│ ├── 03-assert-exporter.yaml # Exporter sidecar on port 9150
│ ├── 03-assert-service-metrics.yaml
│ ├── 03-assert-servicemonitor.yaml # ServiceMonitor with labels and endpoints
│ ├── 04-patch-disable-monitoring.yaml
│ ├── 05-assert-no-exporter.yaml # Exporter sidecar removed
│ └── 05-error-servicemonitor-gone.yaml
├── pdb-creation/
│ ├── chainsaw-test.yaml
│ ├── 00-memcached.yaml # CR with PDB enabled (replicas=3)
│ ├── 01-assert-deployment.yaml
│ ├── 01-assert-pdb.yaml # PDB with minAvailable=1
│ ├── 02-patch-disable-pdb.yaml
│ └── 03-error-pdb-gone.yaml
├── graceful-rolling-update/
│ ├── chainsaw-test.yaml
│ ├── 00-memcached.yaml # CR with gracefulShutdown enabled
│ ├── 01-assert-deployment.yaml # Strategy + preStop + terminationGracePeriod
│ ├── 02-patch-update-image.yaml # Image change to trigger rollout
│ └── 03-assert-rolling-update.yaml
├── webhook-rejection/
│ ├── chainsaw-test.yaml
│ ├── 00-invalid-memory-limit.yaml
│ ├── 01-invalid-pdb-both.yaml
│ ├── 02-invalid-graceful-shutdown.yaml
│ ├── 03-invalid-sasl-no-secret.yaml
│ ├── 04-invalid-tls-no-secret.yaml
│ ├── 05-invalid-pdb-neither.yaml
│ ├── 06-invalid-pdb-min-ge-replicas.yaml
│ ├── 07-invalid-autoscaling-replicas-conflict.yaml
│ ├── 08-invalid-autoscaling-min-gt-max.yaml
│ └── 09-invalid-autoscaling-cpu-no-request.yaml
├── cr-deletion/
│ ├── chainsaw-test.yaml
│ ├── 00-memcached.yaml # CR with monitoring and PDB enabled
│ ├── 01-assert-deployment.yaml
│ ├── 01-assert-service.yaml
│ ├── 01-assert-pdb.yaml
│ ├── 01-assert-servicemonitor.yaml
│ ├── 02-error-deployment-gone.yaml
│ ├── 02-error-service-gone.yaml
│ ├── 02-error-pdb-gone.yaml
│ ├── 02-error-servicemonitor-gone.yaml
│ └── 02-error-cr-gone.yaml
├── sasl-authentication/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-sasl-secret.yaml # Opaque Secret with password-file key
│ ├── 00-memcached.yaml # CR with security.sasl.enabled: true
│ ├── 01-assert-deployment.yaml # SASL volume, mount, and args assertions
│ └── 02-assert-status.yaml # Status condition assertions
├── tls-encryption/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-cert-manager.yaml # Self-signed Issuer + Certificate
│ ├── 00-assert-certificate-ready.yaml # Certificate Ready=True assertion
│ ├── 01-memcached.yaml # CR with security.tls.enabled: true
│ ├── 02-assert-deployment.yaml # TLS volume, mount, args, port assertions
│ ├── 02-assert-service.yaml # Service TLS port assertion
│ └── 03-assert-status.yaml # Status condition assertions
├── tls-mtls/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-cert-manager.yaml # Self-signed Issuer + Certificate (with CA)
│ ├── 00-assert-certificate-ready.yaml # Certificate Ready=True assertion
│ ├── 01-memcached.yaml # CR with tls.enabled + enableClientCert
│ ├── 02-assert-deployment.yaml # mTLS volume (ca.crt), args (ssl_ca_cert)
│ ├── 02-assert-service.yaml # Service TLS port assertion
│ └── 03-assert-status.yaml # Status condition assertions
├── network-policy/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with networkPolicy.enabled: true
│ ├── 01-assert-deployment.yaml # Deployment ready assertion
│ ├── 01-assert-networkpolicy.yaml # NetworkPolicy with podSelector, port 11211
│ ├── 02-patch-allowed-sources.yaml # Patch allowedSources with podSelector
│ ├── 03-assert-networkpolicy-allowed-sources.yaml # NetworkPolicy with from peer
│ ├── 04-cert-manager.yaml # Self-signed Issuer + Certificate
│ ├── 04-assert-certificate-ready.yaml # Certificate Ready=True assertion
│ ├── 05-patch-enable-tls-monitoring.yaml # Enable TLS and monitoring
│ ├── 06-assert-networkpolicy-all-ports.yaml # Ports 11211, 11212, 9150
│ ├── 07-patch-disable-networkpolicy.yaml # Disable networkPolicy
│ └── 08-error-networkpolicy-gone.yaml # NetworkPolicy deleted assertion
├── service-annotations/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with service.annotations
│ ├── 01-assert-service.yaml # Service with custom annotations
│ ├── 02-patch-update-annotations.yaml # Patch with new annotations
│ ├── 03-assert-service-updated.yaml # Service with updated annotations
│ ├── 04-patch-remove-annotations.yaml # Remove annotations (service: null)
│ └── 05-assert-service-no-annotations.yaml # Service without annotations
├── pdb-max-unavailable/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with PDB maxUnavailable=1 (replicas=3)
│ ├── 01-assert-deployment.yaml # Deployment ready assertion
│ ├── 01-assert-pdb.yaml # PDB with maxUnavailable=1
│ ├── 02-patch-max-unavailable.yaml # Patch maxUnavailable to 2
│ └── 03-assert-pdb-updated.yaml # PDB with maxUnavailable=2
├── verbosity-extra-args/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with verbosity=1 and extraArgs
│ ├── 01-assert-deployment.yaml # Args with -v and -o modern
│ ├── 02-patch-config.yaml # Patch verbosity=2, new extraArgs
│ └── 03-assert-deployment.yaml # Args with -vv and new extraArgs
├── custom-exporter-image/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with custom exporterImage
│ ├── 01-assert-deployment.yaml # Exporter with custom image
│ ├── 02-patch-exporter-image.yaml # Patch to default exporter image
│ └── 03-assert-deployment.yaml # Exporter with updated image
├── security-contexts/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with pod and container security contexts
│ ├── 01-assert-deployment.yaml # Security contexts on pod and container
│ ├── 02-patch-security-contexts.yaml # Patch with runAsUser=1000
│ └── 03-assert-deployment.yaml # Updated security contexts
├── hard-anti-affinity/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with antiAffinityPreset=hard
│ └── 01-assert-deployment.yaml # requiredDuringScheduling anti-affinity
├── status-degraded/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with non-existent image (test-degraded)
│ ├── 01-assert-deployment.yaml # Deployment created with invalid image
│ └── 01-assert-status.yaml # Degraded=True, Available=False, Progressing=False (ProgressingComplete)
├── scale-to-zero/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with replicas=1 (test-scale-zero)
│ ├── 01-assert-status-available.yaml # Initial Available=True, readyReplicas=1
│ ├── 02-patch-scale-zero.yaml # Patch replicas to 0
│ ├── 03-assert-deployment.yaml # Deployment.spec.replicas=0
│ └── 03-assert-status.yaml # Available=False, Progressing=False, Degraded=False
├── owner-references/
│ ├── chainsaw-test.yaml # Test definition
│ ├── 00-memcached.yaml # CR with all features enabled (test-owner-refs)
│ ├── 01-assert-deployment.yaml # Deployment ownerReferences assertion
│ ├── 01-assert-service.yaml # Service ownerReferences assertion
│ ├── 01-assert-pdb.yaml # PDB ownerReferences assertion
│ ├── 01-assert-networkpolicy.yaml # NetworkPolicy ownerReferences assertion
│ └── 01-assert-servicemonitor.yaml # ServiceMonitor ownerReferences assertion
└── resources/
├── memcached-minimal.yaml
├── assert-deployment.yaml
├── assert-service.yaml
└── assert-status-available.yaml

Test Scenarios
1. Basic Deployment (REQ-002)
Directory: test/e2e/basic-deployment/
Verifies that creating a minimal Memcached CR produces the expected Deployment, headless Service, and status conditions.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-cr | apply 00-memcached.yaml | CR created |
| assert-deployment-created | assert 01-assert-deployment.yaml | Deployment with correct labels, args (-m 64 -c 1024 -t 4 -I 1m), port 11211 |
| assert-service-created | assert 01-assert-service.yaml | Headless Service (clusterIP: None), port 11211, correct selectors |
| assert-status-available | assert 02-assert-status.yaml | readyReplicas: 1, Available=True |
Owner references on Deployment and Service are verified as part of the Deployment and Service assertion files (Chainsaw partial matching includes metadata.ownerReferences).
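Because assertion files are matched partially, the owner-reference check can ride along inside the Deployment assertion. A hedged sketch (the Memcached API group/version is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-basic
  ownerReferences:
  - apiVersion: cache.example.com/v1alpha1  # placeholder group/version
    kind: Memcached
    name: test-basic
    controller: true
    blockOwnerDeletion: true
```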
2. Scaling (REQ-003)
Directory: test/e2e/scaling/
Verifies that updating spec.replicas scales the Deployment and updates status.readyReplicas.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-cr | apply | CR with replicas=1 |
| assert-initial-deployment | assert | Deployment.spec.replicas=1 |
| scale-up-to-3 | patch replicas=3 | — |
| assert-scaled-deployment | assert | Deployment.spec.replicas=3 |
| assert-scaled-status | assert | status.readyReplicas=3 |
| scale-down-to-1 | patch replicas=1 | — |
| assert-scaled-down | assert | Deployment.spec.replicas=1 |
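A patch file only needs the identifying metadata plus the fields being changed; Chainsaw merges it into the live object. A sketch of `02-patch-scale-up.yaml` (the CR name for this scenario and the API group are assumptions):

```yaml
apiVersion: cache.example.com/v1alpha1  # placeholder group/version
kind: Memcached
metadata:
  name: test-scaling  # assumed CR name
spec:
  replicas: 3
```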
3. Configuration Changes (REQ-004)
Directory: test/e2e/configuration-changes/
Verifies that changing memcached config fields triggers a rolling update with correct container args.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-cr | apply | CR with maxMemoryMB=64, threads=4 |
| assert-initial-args | assert | Container args: -m 64 -c 1024 -t 4 -I 1m |
| update-configuration | patch maxMemoryMB=256, threads=8, maxItemSize=2m | — |
| assert-updated-args | assert | Container args: -m 256 ... -t 8 -I 2m |
4. Monitoring Toggle (REQ-005)
Directory: test/e2e/monitoring-toggle/
Verifies that enabling monitoring injects the exporter sidecar and adds a metrics port to the Service.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-without-monitoring | apply | CR without monitoring |
| assert-no-exporter-sidecar | assert | Deployment has 1 container (memcached only) |
| enable-monitoring | patch monitoring.enabled=true | — |
| assert-exporter-sidecar-injected | assert | 2 containers: memcached (port 11211) + exporter (port 9150) |
| assert-service-metrics-port | assert | Service has metrics port |
| assert-servicemonitor-created | assert | ServiceMonitor with correct labels, endpoints, and selector |
| disable-monitoring | patch monitoring.enabled=false | — |
| assert-exporter-sidecar-removed | assert | Deployment has 1 container (memcached only) |
| assert-servicemonitor-deleted | error | ServiceMonitor is removed |
Prerequisite: ServiceMonitor CRD must be installed in the cluster.
5. PDB Creation (REQ-006)
Directory: test/e2e/pdb-creation/
Verifies that enabling PDB creates a PodDisruptionBudget with correct settings.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-pdb | apply | CR with replicas=3, PDB enabled, minAvailable=1 |
| assert-deployment-ready | assert | Deployment with 3 replicas |
| assert-pdb-created | assert | PDB with minAvailable=1, correct selector, owner reference |
| disable-pdb | patch PDB enabled=false | — |
| assert-pdb-deleted | error | PDB is removed |
6. Graceful Rolling Update (REQ-007)
Directory: test/e2e/graceful-rolling-update/
Verifies that graceful shutdown configures preStop hooks and the RollingUpdate strategy, and that image changes trigger a correct rolling update.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-graceful-shutdown | apply | CR with gracefulShutdown enabled |
| assert-graceful-shutdown-config | assert | RollingUpdate (maxSurge=1, maxUnavailable=0), preStop hook, terminationGracePeriodSeconds |
| trigger-rolling-update | patch image | — |
| assert-rolling-update-strategy | assert | All pods running new image, strategy preserved |
7. Webhook Rejection (REQ-008)
Directory: test/e2e/webhook-rejection/
Verifies that the validating webhook rejects invalid CRs. Each step uses Chainsaw's expect with ($error != null): true to assert that the apply operation fails.
| Step | Invalid CR | Expected Rejection Reason |
|---|---|---|
| reject-insufficient-memory-limit | maxMemoryMB=64, memory limit=32Mi | Memory limit < maxMemoryMB + 32Mi overhead |
| reject-pdb-mutual-exclusivity | Both minAvailable and maxUnavailable set | Mutually exclusive fields |
| reject-graceful-shutdown-invalid-period | terminationGracePeriodSeconds <= preStopDelaySeconds | Termination period must exceed pre-stop delay |
| reject-sasl-without-secret-ref | sasl.enabled=true, no credentialsSecretRef.name | Missing required secret reference |
| reject-tls-without-secret-ref | tls.enabled=true, no certificateSecretRef.name | Missing required secret reference |
| reject-pdb-neither-set | PDB enabled, neither minAvailable nor maxUnavailable | Exactly one of minAvailable or maxUnavailable required |
| reject-pdb-min-available-ge-replicas | PDB minAvailable >= replicas | minAvailable must be less than replicas |
| reject-autoscaling-replicas-conflict | spec.replicas=3 and autoscaling.enabled=true | spec.replicas and autoscaling.enabled are mutually exclusive |
| reject-autoscaling-min-gt-max | autoscaling.minReplicas=10, maxReplicas=5 | minReplicas must not exceed maxReplicas |
| reject-autoscaling-cpu-no-request | CPU utilization metric without resources.requests.cpu | CPU utilization metric requires resources.requests.cpu |
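Each rejection step wraps the apply in Chainsaw's `expect` block so the step succeeds only when the API server returns an error. A sketch of the first step (step structure per standard Chainsaw syntax; surrounding test metadata omitted):

```yaml
- name: reject-insufficient-memory-limit
  try:
  - apply:
      file: 00-invalid-memory-limit.yaml
      expect:
      - check:
          ($error != null): true
```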
8. CR Deletion & Garbage Collection (REQ-009)
Directory: test/e2e/cr-deletion/
Verifies that deleting a Memcached CR garbage-collects all owned resources.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-all-resources | apply | CR with monitoring and PDB enabled |
| assert-all-resources-exist | assert | Deployment, Service, PDB, ServiceMonitor all present |
| delete-memcached-cr | delete Memcached/test-deletion | — |
| assert-all-resources-garbage-collected | error | Deployment, Service, PDB, ServiceMonitor, and CR are all gone |
The error operation asserts that the specified resource does not exist — the assertion succeeds when the GET returns NotFound.
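An error file is just a partial object; the step passes once no live object matches it. A sketch of `02-error-deployment-gone.yaml`, assuming the operator names the Deployment after the CR:

```yaml
# Step succeeds when this Deployment no longer exists.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deletion  # assumed: Deployment named after the CR
```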
9. SASL Authentication (MO-0032 REQ-001, REQ-006, REQ-008)
Directory: test/e2e/sasl-authentication/
Verifies that enabling SASL authentication creates the correct Secret volume, volumeMount, and container args (-Y <authfile>) in the Deployment.
| Step | Operation | Assertion |
|---|---|---|
| create-sasl-secret | apply 00-sasl-secret.yaml | Opaque Secret with password-file key created |
| create-memcached-cr | apply 00-memcached.yaml | CR with security.sasl.enabled: true, credentialsSecretRef.name: test-sasl-credentials |
| assert-deployment-sasl | assert 01-assert-deployment | Volume sasl-credentials with item {key: password-file, path: password-file}, mount at /etc/memcached/sasl (readOnly), args include -Y /etc/memcached/sasl/password-file |
| assert-status-available | assert 02-assert-status | readyReplicas: 1, Available=True |
The SASL Secret must be created before the Memcached CR because the validating webhook requires credentialsSecretRef.name to reference an existing Secret.
CRD fields tested:
- `spec.security.sasl.enabled` — Enables SASL authentication
- `spec.security.sasl.credentialsSecretRef.name` — References the Secret containing the password file
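The Secret the test creates might look like the following sketch — only the Secret name and the `password-file` key are confirmed by this document; the credential content is illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-sasl-credentials
type: Opaque
stringData:
  password-file: |
    testuser:testpassword   # illustrative credentials
```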
10. TLS Encryption (MO-0032 REQ-002, REQ-003, REQ-004, REQ-007, REQ-008, REQ-009)
Directory: test/e2e/tls-encryption/
Verifies that enabling TLS encryption creates a cert-manager Certificate, adds the TLS volume, volumeMount, -Z and ssl_chain_cert/ssl_key container args, and configures port 11212 on the Deployment and Service.
| Step | Operation | Assertion |
|---|---|---|
| create-cert-manager-resources | apply 00-cert-manager.yaml | Self-signed Issuer + Certificate (secretName: test-tls-certs) |
| assert-certificate-ready | assert 00-assert-certificate-ready | Certificate status Ready=True |
| create-memcached-cr | apply 01-memcached.yaml | CR with security.tls.enabled: true, certificateSecretRef.name: test-tls-certs |
| assert-deployment-tls | assert 02-assert-deployment | Volume tls-certificates with items tls.crt, tls.key; mount at /etc/memcached/tls (readOnly); args include -Z -o ssl_chain_cert=... -o ssl_key=...; port memcached-tls on 11212 |
| assert-service-tls-port | assert 02-assert-service | Service ports include memcached-tls on port 11212 targeting memcached-tls |
| assert-status-available | assert 03-assert-status | readyReplicas: 1, Available=True |
The Certificate must reach Ready=True before applying the Memcached CR to ensure the TLS Secret exists (avoiding a race condition where the operator cannot mount the Secret volume).
CRD fields tested:
- `spec.security.tls.enabled` — Enables TLS encryption
- `spec.security.tls.certificateSecretRef.name` — References the cert-manager Secret
cert-manager resources:
- Self-signed `Issuer` (`test-tls-selfsigned`)
- `Certificate` (`test-tls-cert`) generating Secret `test-tls-certs` with `tls.crt` and `tls.key`
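The self-signed chain in `00-cert-manager.yaml` plausibly looks like this sketch — resource names and `secretName` come from the table above, while the `dnsNames` value is an assumption:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-tls-selfsigned
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-tls-cert
spec:
  secretName: test-tls-certs
  dnsNames:
  - memcached-tls.example.local  # assumed SAN
  issuerRef:
    name: test-tls-selfsigned
    kind: Issuer
```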
11. Mutual TLS / mTLS (MO-0032 REQ-004, REQ-005, REQ-008, REQ-009)
Directory: test/e2e/tls-mtls/
Verifies that enabling TLS with enableClientCert: true adds the ca.crt key projection to the TLS volume and the ssl_ca_cert arg to the container, in addition to the standard TLS configuration.
| Step | Operation | Assertion |
|---|---|---|
| create-cert-manager-resources | apply 00-cert-manager.yaml | Self-signed Issuer + Certificate (secretName: test-mtls-certs) |
| assert-certificate-ready | assert 00-assert-certificate-ready | Certificate status Ready=True |
| create-memcached-cr | apply 01-memcached.yaml | CR with tls.enabled: true, enableClientCert: true, certificateSecretRef.name: test-mtls-certs |
| assert-deployment-mtls | assert 02-assert-deployment | Volume items include ca.crt alongside tls.crt/tls.key; args include -o ssl_ca_cert=/etc/memcached/tls/ca.crt; port memcached-tls on 11212 |
| assert-service-tls-port | assert 02-assert-service | Service ports include memcached-tls on port 11212 |
| assert-status-available | assert 03-assert-status | readyReplicas: 1, Available=True |
The mTLS test extends TLS by verifying that enableClientCert: true causes the operator to project the ca.crt key from the Secret and add the ssl_ca_cert=/etc/memcached/tls/ca.crt arg to enable client certificate verification.
CRD fields tested:
- `spec.security.tls.enabled` — Enables TLS encryption
- `spec.security.tls.enableClientCert` — Enables mutual TLS (client cert verification)
- `spec.security.tls.certificateSecretRef.name` — References the cert-manager Secret
Difference from TLS test: The TLS volume includes three items (tls.crt, tls.key, ca.crt) instead of two, and the container args include an additional -o ssl_ca_cert=/etc/memcached/tls/ca.crt.
12. NetworkPolicy Lifecycle (MO-0033 REQ-E2E-NP-001 through NP-005)
Directory: test/e2e/network-policy/
Verifies the full NetworkPolicy lifecycle: creation with correct podSelector and ingress port 11211, allowedSources propagation, port adaptation when TLS and monitoring are enabled (11211, 11212, 9150), and deletion when networkPolicy is disabled.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-networkpolicy | apply 00-memcached.yaml | CR with security.networkPolicy.enabled: true (test-netpol) |
| assert-deployment-ready | assert 01-assert-deployment.yaml | Deployment with correct labels |
| assert-networkpolicy-created | assert 01-assert-networkpolicy.yaml | NetworkPolicy with podSelector matching operator labels, policyTypes: [Ingress], port 11211/TCP |
| patch-allowed-sources | patch 02-patch-allowed-sources.yaml | Add allowedSources with podSelector app: allowed-client |
| assert-networkpolicy-allowed-sources | assert 03-assert-networkpolicy-allowed-sources.yaml | NetworkPolicy ingress from field contains podSelector with app: allowed-client |
| create-cert-manager-resources | apply 04-cert-manager.yaml | Self-signed Issuer + Certificate (secretName: test-netpol-certs) |
| assert-certificate-ready | assert 04-assert-certificate-ready.yaml | Certificate status Ready=True |
| patch-enable-tls-monitoring | patch 05-patch-enable-tls-monitoring.yaml | Enable TLS (certificateSecretRef.name: test-netpol-certs) and monitoring |
| assert-networkpolicy-all-ports | assert 06-assert-networkpolicy-all-ports.yaml | NetworkPolicy ingress ports: 11211/TCP, 11212/TCP, 9150/TCP; from peer preserved |
| disable-networkpolicy | patch 07-patch-disable-networkpolicy.yaml | Patch security.networkPolicy.enabled: false |
| assert-networkpolicy-deleted | error 08-error-networkpolicy-gone.yaml | NetworkPolicy resource no longer exists |
Prerequisite: cert-manager must be installed in the cluster (required for the TLS port adaptation step).
CRD fields tested:
- `spec.security.networkPolicy.enabled` — Enables/disables the NetworkPolicy
- `spec.security.networkPolicy.allowedSources` — Configures ingress `from` peers
- `spec.security.tls.enabled` — Adds port 11212 to the NetworkPolicy
- `spec.monitoring.enabled` — Adds port 9150 to the NetworkPolicy
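The all-ports assertion in `06-assert-networkpolicy-all-ports.yaml` could be sketched as follows — the NetworkPolicy name (assumed to match the CR) and the selector details are assumptions; ports and the preserved `from` peer come from the table above:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-netpol  # assumed: named after the CR
spec:
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: allowed-client
    ports:
    - protocol: TCP
      port: 11211
    - protocol: TCP
      port: 11212
    - protocol: TCP
      port: 9150
```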
13. Service Annotations (MO-0033 REQ-E2E-SA-001, REQ-E2E-SA-002)
Directory: test/e2e/service-annotations/
Verifies that custom annotations defined in spec.service.annotations are propagated to the managed headless Service, that updating annotations propagates the changes, and that removing annotations clears them from the Service.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-annotations | apply 00-memcached.yaml | CR with two annotations: external-dns.alpha.kubernetes.io/hostname and service.beta.kubernetes.io/aws-load-balancer-internal (test-svc-ann) |
| assert-service-has-annotations | assert 01-assert-service.yaml | Service has both custom annotations, correct labels, headless (clusterIP: None), port 11211 |
| update-annotations | patch 02-patch-update-annotations.yaml | Replace annotations with external-dns.alpha.kubernetes.io/hostname: memcached-updated.example.com and prometheus.io/scrape: "true" |
| assert-service-annotations-updated | assert 03-assert-service-updated.yaml | Service annotations contain the updated key-value pairs |
| remove-annotations | patch 04-patch-remove-annotations.yaml | Patch spec.service: null to remove all annotations |
| assert-service-no-annotations | assert 05-assert-service-no-annotations.yaml | Service has correct labels and spec; JMESPath expression asserts annotations are absent or empty |
CRD fields tested:
- `spec.service.annotations` — Custom annotations propagated to the managed Service
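The absent-or-empty check in `05-assert-service-no-annotations.yaml` can be written as a parenthesized JMESPath key in the assertion tree. One possible shape — the Service name follows the scenario, but the exact expression used in the repo is an assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-svc-ann
  (annotations == null || length(annotations) == `0`): true
```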
14. PDB maxUnavailable (MO-0034 REQ-001)
Directory: test/e2e/pdb-max-unavailable/
Verifies that configuring PDB with maxUnavailable (instead of minAvailable) creates a PodDisruptionBudget with the correct maxUnavailable setting, and that updating it propagates to the PDB.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-pdb-max-unavailable | apply 00-memcached.yaml | CR with replicas=3, PDB enabled, maxUnavailable=1 |
| assert-deployment-ready | assert 01-assert-deployment | Deployment with 3 replicas |
| assert-pdb-max-unavailable | assert 01-assert-pdb | PDB with maxUnavailable=1, correct selector, labels |
| update-max-unavailable | patch maxUnavailable=2 | — |
| assert-pdb-updated | assert 03-assert-pdb-updated | PDB with maxUnavailable=2 |
CRD fields tested:
- `spec.highAvailability.podDisruptionBudget.enabled` — Enables the PDB
- `spec.highAvailability.podDisruptionBudget.maxUnavailable` — Sets maxUnavailable on the PDB
15. Verbosity and Extra Args (MO-0034 REQ-002, REQ-003)
Directory: test/e2e/verbosity-extra-args/
Verifies that setting memcached.verbosity and memcached.extraArgs propagates to the Deployment container args, and that updating them triggers a rolling update with the correct args.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-verbosity-and-extra-args | apply 00-memcached.yaml | CR with verbosity=1, extraArgs=["-o", "modern"] |
| assert-initial-args | assert 01-assert-deployment | Args include -v -o modern after standard flags |
| update-verbosity-and-extra-args | patch verbosity=2, extraArgs=["--max-reqs-per-event", "20"] | — |
| assert-updated-args | assert 03-assert-deployment | Args include -vv --max-reqs-per-event 20 |
CRD fields tested:
- `spec.memcached.verbosity` — Controls the verbosity flag (0=none, 1=`-v`, 2=`-vv`)
- `spec.memcached.extraArgs` — Additional command-line arguments appended after standard flags
16. Custom Exporter Image (MO-0034 REQ-004)
Directory: test/e2e/custom-exporter-image/
Verifies that specifying a custom exporter image in the monitoring config uses that image for the exporter sidecar instead of the default.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-custom-exporter | apply 00-memcached.yaml | CR with monitoring enabled, exporterImage=v0.14.0 |
| assert-custom-exporter-image | assert 01-assert-deployment | Exporter sidecar uses custom image v0.14.0 |
| update-exporter-image | patch exporterImage=v0.15.4 | — |
| assert-updated-exporter-image | assert 03-assert-deployment | Exporter sidecar uses updated image v0.15.4 |
CRD fields tested:
- `spec.monitoring.enabled` — Enables the exporter sidecar
- `spec.monitoring.exporterImage` — Custom image for the exporter sidecar
17. Security Contexts (MO-0034 REQ-005, REQ-006)
Directory: test/e2e/security-contexts/
Verifies that custom pod and container security contexts defined in spec.security are propagated to the Deployment pod template, and that updating them triggers a rolling update with the new settings.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-security-contexts | apply 00-memcached.yaml | CR with runAsNonRoot, readOnlyRootFilesystem, drop ALL |
| assert-security-contexts | assert 01-assert-deployment | Pod and container security contexts match CR spec |
| update-security-contexts | patch runAsUser=1000, fsGroup=1000 | — |
| assert-updated-security-contexts | assert 03-assert-deployment | Updated security contexts with runAsUser=1000 |
CRD fields tested:
- `spec.security.podSecurityContext` — Pod-level security context (runAsNonRoot, fsGroup)
- `spec.security.containerSecurityContext` — Container-level security context (readOnlyRootFilesystem, capabilities)
18. Hard Anti-Affinity (MO-0034 REQ-007)
Directory: test/e2e/hard-anti-affinity/
Verifies that setting antiAffinityPreset to "hard" configures requiredDuringSchedulingIgnoredDuringExecution pod anti-affinity on the Deployment, with the correct topology key and label selector.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-hard-anti-affinity | apply 00-memcached.yaml | CR with antiAffinityPreset="hard" |
| assert-hard-anti-affinity | assert 01-assert-deployment | requiredDuringScheduling anti-affinity with topologyKey and instance label selector |
CRD fields tested:
- `spec.highAvailability.antiAffinityPreset` — Controls pod anti-affinity ("soft" or "hard")
19. Status Degraded (MO-0035 REQ-E2E-SD-001, REQ-E2E-SD-002)
Directory: test/e2e/status-degraded/
Verifies that a Memcached CR with a non-existent container image reports Degraded=True and Available=False status conditions. The operator creates the Deployment, but pods fail to pull the image (ImagePullBackOff), causing zero ready replicas and triggering the degraded status path.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-cr | apply 00-memcached.yaml | CR with image memcached:nonexistent-tag-does-not-exist (test-degraded) |
| assert-deployment-created | assert 01-assert-deployment | Deployment exists with invalid image, correct labels, owner reference |
| assert-status-degraded | assert 01-assert-status | Degraded=True (reason: Degraded), Available=False (reason: Unavailable), Progressing=False (reason: ProgressingComplete) |
CRD fields tested:
- `spec.replicas` — Desired replica count (1)
- `spec.image` — Non-existent image triggers degraded status
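The status assertion checks specific conditions on the CR. Since condition ordering in `status.conditions` is not guaranteed, a real assertion file may use Chainsaw's JMESPath filters rather than positional array matching; a sketch under that assumption:

```yaml
# Sketch of 01-assert-status using JMESPath filters (illustrative)
apiVersion: memcached.c5c3.io/v1alpha1
kind: Memcached
metadata:
  name: test-degraded
status:
  (conditions[?type == 'Degraded'])[0]:
    status: "True"
    reason: Degraded
  (conditions[?type == 'Available'])[0]:
    status: "False"
    reason: Unavailable
```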
20. Scale to Zero (MO-0035 REQ-E2E-SZ-001, REQ-E2E-SZ-002)
Directory: test/e2e/scale-to-zero/
Verifies that patching a healthy Memcached CR from replicas=1 to replicas=0 results in Available=False, Progressing=False, and Degraded=False. This is a two-phase apply-assert-patch-assert test that first confirms a healthy starting state before scaling down.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-cr | apply 00-memcached.yaml | CR with replicas=1 (test-scale-zero) |
| assert-initial-status | assert 01-assert-status-available | Available=True, readyReplicas=1 |
| scale-to-zero | patch 02-patch-scale-zero.yaml | Patch spec.replicas to 0 |
| assert-deployment-scaled | assert 03-assert-deployment | Deployment.spec.replicas=0 |
| assert-status-unavailable | assert 03-assert-status | Available=False (Unavailable), Progressing=False (ProgressingComplete), Degraded=False (NotDegraded) |
CRD fields tested:
- `spec.replicas` — Scale-to-zero behavior (patched from 1 to 0)
Controller change: The computeConditions function in internal/controller/status.go was updated to return Available=False when desiredReplicas=0 (previously scale-to-zero incorrectly reported Available=True).
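The patch step applies a minimal partial manifest. A sketch of what `02-patch-scale-zero.yaml` plausibly contains, based on the step table above (exact file contents are assumed):

```yaml
# Sketch of 02-patch-scale-zero.yaml: only the replica count changes
apiVersion: memcached.c5c3.io/v1alpha1
kind: Memcached
metadata:
  name: test-scale-zero
spec:
  replicas: 0
```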
21. Owner References GC Chain (MO-0036 REQ-OR-001 through REQ-OR-006)
Directory: test/e2e/owner-references/
Verifies that all child resources created by the operator have correct ownerReferences pointing to the parent Memcached CR with controller=true and blockOwnerDeletion=true. This validates the mechanism (ownerReferences set on creation) separately from the cr-deletion test that validates the outcome (resources cleaned up on deletion).
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-all-features | apply 00-memcached.yaml | CR with monitoring, PDB, and NetworkPolicy enabled (test-owner-refs) |
| assert-deployment-owner-reference | assert 01-assert-deployment.yaml | Deployment ownerReferences: kind=Memcached, apiVersion=memcached.c5c3.io/v1alpha1, controller=true, blockOwnerDeletion=true |
| assert-service-owner-reference | assert 01-assert-service.yaml | Service ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true |
| assert-pdb-owner-reference | assert 01-assert-pdb.yaml | PDB ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true |
| assert-networkpolicy-owner-reference | assert 01-assert-networkpolicy.yaml | NetworkPolicy ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true |
| assert-servicemonitor-owner-reference | assert 01-assert-servicemonitor.yaml | ServiceMonitor ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true |
The test does not delete the CR or assert resource cleanup — that is covered by the cr-deletion test (scenario 8). This separation allows pinpointing whether a GC failure is due to missing ownerReferences or a different Kubernetes issue.
CRD fields tested (indirectly via ownerReferences on child resources):
- `spec.monitoring.enabled` — Creates ServiceMonitor with ownerReference
- `spec.highAvailability.podDisruptionBudget.enabled` — Creates PDB with ownerReference
- `spec.security.networkPolicy.enabled` — Creates NetworkPolicy with ownerReference
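Each of the five assertion files reduces to the same shape: a partial match on `metadata.ownerReferences`. A sketch of the Deployment variant (the other kinds differ only in `apiVersion`/`kind` of the owned object):

```yaml
# Sketch of 01-assert-deployment.yaml: ownerReference back to the Memcached CR
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-owner-refs
  ownerReferences:
    - apiVersion: memcached.c5c3.io/v1alpha1
      kind: Memcached
      name: test-owner-refs
      controller: true
      blockOwnerDeletion: true
```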
22. Autoscaling Enable (MO-0042 REQ-001, REQ-006, REQ-007)
Directory: test/e2e/autoscaling-enable/
Verifies that creating a Memcached CR with autoscaling.enabled=true produces an HPA with the correct scaleTargetRef, minReplicas, maxReplicas, defaulted CPU utilization metric at 80%, and defaulted scaleDown stabilization window of 300 seconds. The Deployment must exist but must NOT have a hardcoded replica count (HPA controls replicas).
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-cr | apply 00-memcached.yaml | CR with autoscaling.enabled: true, minReplicas=2, maxReplicas=10, resources.requests.cpu: 50m (test-autoscaling-enable) |
| assert-hpa-created | assert 01-assert-hpa | HPA with scaleTargetRef=Deployment/test-autoscaling-enable, minReplicas=2, maxReplicas=10, CPU metric at 80%, scaleDown 300s |
| assert-deployment-created | assert 01-assert-deployment | Deployment exists with standard labels; no spec.replicas field (HPA controls scaling) |
| assert-status-available | assert 02-assert-status | Status conditions indicate availability |
HPA assertion details:
- `scaleTargetRef`: apiVersion=apps/v1, kind=Deployment, name=test-autoscaling-enable
- `metrics[0]`: type=Resource, resource.name=cpu, target.type=Utilization, averageUtilization=80
- `behavior.scaleDown.stabilizationWindowSeconds`: 300
- Labels: `app.kubernetes.io/name=memcached`, `app.kubernetes.io/instance=test-autoscaling-enable`, `app.kubernetes.io/managed-by=memcached-operator`
CRD fields tested:
- `spec.autoscaling.enabled` — Enables HPA creation
- `spec.autoscaling.minReplicas` — HPA minimum replicas
- `spec.autoscaling.maxReplicas` — HPA maximum replicas
- `spec.resources.requests.cpu` — Required for CPU utilization metric (validated by webhook)
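Putting the assertion details together, `01-assert-hpa` plausibly looks like the following sketch (the HPA object name is assumed to match the CR name; all values come from the table above):

```yaml
# Sketch of 01-assert-hpa: defaulted metric and scaleDown behavior
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: test-autoscaling-enable
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-autoscaling-enable
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
```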
23. Autoscaling Disable (MO-0042 REQ-002, REQ-006, REQ-007)
Directory: test/e2e/autoscaling-disable/
Verifies that disabling autoscaling on a running Memcached CR deletes the HPA and that setting spec.replicas takes effect on the Deployment. This is a two-phase test: first create with autoscaling enabled (assert HPA exists), then patch to disable autoscaling with explicit replicas.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-autoscaling | apply 00-memcached.yaml | CR with autoscaling.enabled: true, minReplicas=2, maxReplicas=10 (test-autoscaling-disable) |
| assert-hpa-created | assert 01-assert-hpa | HPA exists with scaleTargetRef=Deployment/test-autoscaling-disable |
| disable-autoscaling | patch 02-patch-disable.yaml | Set autoscaling.enabled: false and spec.replicas: 3 |
| assert-hpa-deleted | error 03-error-hpa-gone | HPA no longer exists (autoscaling/v2 HPA for test-autoscaling-disable) |
| assert-deployment-replicas | assert 03-assert-deployment | Deployment.spec.replicas=3, status.readyReplicas=3 |
CRD fields tested:
- `spec.autoscaling.enabled` — Set to `false` to trigger HPA deletion
- `spec.replicas` — Set explicitly when disabling autoscaling (must be provided in the same patch)
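The negative assertion follows the suite's `error` pattern: the file names a resource that must no longer exist. A sketch of `03-error-hpa-gone` (exact contents assumed):

```yaml
# Sketch of 03-error-hpa-gone: the error step succeeds only if this HPA is absent
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: test-autoscaling-disable
```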
24. Autoscaling Update (MO-0042 REQ-004, REQ-006, REQ-007)
Directory: test/e2e/autoscaling-update/
Verifies that updating minReplicas and maxReplicas on a running autoscaled Memcached CR propagates the changes to the HPA without deleting and recreating it.
| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-autoscaling | apply 00-memcached.yaml | CR with autoscaling.enabled: true, minReplicas=2, maxReplicas=10 (test-autoscaling-update) |
| assert-initial-hpa | assert 01-assert-hpa | HPA with minReplicas=2, maxReplicas=10, scaleTargetRef=Deployment/test-autoscaling-update |
| update-autoscaling-bounds | patch 02-patch-update.yaml | Patch autoscaling.minReplicas: 3, autoscaling.maxReplicas: 15 |
| assert-hpa-updated | assert 03-assert-hpa-updated | HPA with minReplicas=3, maxReplicas=15; scaleTargetRef unchanged; labels preserved |
CRD fields tested:
- `spec.autoscaling.minReplicas` — Updated from 2 to 3
- `spec.autoscaling.maxReplicas` — Updated from 10 to 15
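A sketch of what `02-patch-update.yaml` plausibly contains, based on the step table (only the bounds change, so the HPA is updated in place rather than recreated):

```yaml
# Sketch of 02-patch-update.yaml: new autoscaling bounds only
apiVersion: memcached.c5c3.io/v1alpha1
kind: Memcached
metadata:
  name: test-autoscaling-update
spec:
  autoscaling:
    minReplicas: 3
    maxReplicas: 15
```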
Test Patterns
Partial Object Matching
Chainsaw asserts on partial objects — only the fields specified in the assertion YAML must match. This avoids brittleness from defaulted or controller-managed fields.
```yaml
# Only checks these specific fields, ignores everything else
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-basic
  labels:
    app.kubernetes.io/name: memcached
    app.kubernetes.io/instance: test-basic
    app.kubernetes.io/managed-by: memcached-operator
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: memcached
          args: ["-m", "64", "-c", "1024", "-t", "4", "-I", "1m"]
```
Apply-Assert-Patch-Assert Flow
Most tests follow a four-phase pattern:
- Apply — Create the initial Memcached CR
- Assert — Verify the initial resource state
- Patch — Modify the CR spec (scaling, config change, feature toggle)
- Assert — Verify the updated resource state
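The four phases map directly onto sequential Chainsaw steps. A minimal sketch (step and file names are illustrative, not taken from any specific test):

```yaml
# Sketch of the four-phase step layout inside a chainsaw-test.yaml
steps:
  - name: create-memcached-cr
    try:
      - apply:
          file: 00-memcached.yaml
  - name: assert-initial-state
    try:
      - assert:
          file: 01-assert-deployment.yaml
  - name: patch-memcached-cr
    try:
      - patch:
          file: 02-patch-replicas.yaml
  - name: assert-updated-state
    try:
      - assert:
          file: 03-assert-deployment.yaml
```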
Error Expectations for Webhook Tests
Webhook rejection tests use Chainsaw's `expect` mechanism on `apply` operations to assert that resource creation fails:

```yaml
steps:
  - name: reject-insufficient-memory-limit
    try:
      - apply:
          file: 00-invalid-memory-limit.yaml
          expect:
            - check:
                ($error != null): true
```
Negative Assertions for Deletion Tests
Deletion tests use the `error` operation type, which succeeds when the resource does not exist:

```yaml
steps:
  - name: assert-all-resources-garbage-collected
    try:
      - error:
          file: 02-error-deployment-gone.yaml
```
Where the error file contains a resource reference that should no longer exist:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deletion
```
Namespace Isolation
Chainsaw automatically creates a unique namespace for each test and cleans it up afterward. Test resources do not specify a namespace — Chainsaw injects it at runtime. This provides complete isolation between test cases.
Prerequisite Resource Ordering (Security Tests)
Security tests require resources to exist before the Memcached CR is applied:
- SASL — The SASL Secret must be created first because the validating webhook checks that `credentialsSecretRef.name` references an existing Secret. Applying the CR before the Secret causes a webhook rejection.
- TLS/mTLS — cert-manager Issuer and Certificate must be created first, and the Certificate must reach `Ready=True` before the CR is applied. This ensures the TLS Secret exists so the operator can mount it as a volume.
This is implemented as separate Chainsaw steps with apply followed by assert (for the Certificate readiness check) before the CR apply step.
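This ordering can be sketched as ordinary sequential steps; file names below are assumptions, while the `assert-certificate-ready` step name matches the one referenced elsewhere in this document:

```yaml
# Sketch: prerequisites applied and verified before the Memcached CR
steps:
  - name: create-issuer-and-certificate
    try:
      - apply:
          file: 00-issuer.yaml
      - apply:
          file: 00-certificate.yaml
  - name: assert-certificate-ready
    try:
      - assert:
          file: 01-assert-certificate-ready.yaml  # waits for Ready=True
  - name: create-memcached-cr
    try:
      - apply:
          file: 02-memcached.yaml
```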
Spec-Level Assertions (Security Tests)
Security tests assert exclusively on Kubernetes resource specs — they do not verify runtime protocol behavior. This means:
- No test step connects to memcached via TLS or SASL
- All assertions target Deployment spec (volumes, mounts, args, ports), Service spec (ports), or CR status (conditions)
- Tests pass in a kind cluster without any memcached client tools
- Tests complete deterministically within the 120s assert timeout
Requirement Coverage Matrix
| REQ-ID | Requirement | Test Scenario | Key Assertions |
|---|---|---|---|
| REQ-001 | Chainsaw configuration and Makefile target | All (infrastructure) | .chainsaw.yaml config, make test-e2e target |
| REQ-002 | Basic deployment: Deployment, Service, status | basic-deployment | Labels, container args, headless Service, Available=True |
| REQ-003 | Scaling: replicas up and down | scaling | Deployment.spec.replicas, status.readyReplicas |
| REQ-004 | Configuration changes: container args updated | configuration-changes | Args reflect maxMemoryMB, threads, maxItemSize |
| REQ-005 | Monitoring toggle: exporter sidecar, ServiceMonitor | monitoring-toggle | Container count, port 9150, Service metrics port, ServiceMonitor labels/endpoints, disable removes sidecar and ServiceMonitor |
| REQ-006 | PDB creation and deletion: minAvailable, selector | pdb-creation | PDB spec, selector labels, owner reference, disable removes PDB |
| REQ-007 | Graceful rolling update: strategy, preStop, image update | graceful-rolling-update | maxSurge=1, maxUnavailable=0, preStop hook, new image |
| REQ-008 | Webhook rejection: invalid CRs rejected | webhook-rejection | Ten invalid CR variants all rejected (memory, PDB, graceful shutdown, SASL, TLS, autoscaling) |
| REQ-009 | CR deletion: garbage collection | cr-deletion | Deployment, Service, PDB, ServiceMonitor, CR all removed |
| REQ-010 | Makefile integration | All (infrastructure) | make test-e2e runs chainsaw test |
Security E2E Tests (MO-0032)
| REQ-ID | Requirement | Test Scenario | Key Assertions |
|---|---|---|---|
| MO-0032-001 | SASL Secret and CR configuration propagation | sasl-authentication | Secret with password-file key, CR with sasl.enabled: true and credentialsSecretRef |
| MO-0032-002 | SASL Deployment volume, mount, and args | sasl-authentication | Volume sasl-credentials, mount at /etc/memcached/sasl, args -Y /etc/memcached/sasl/password-file |
| MO-0032-003 | TLS cert-manager Certificate creation | tls-encryption | Self-signed Issuer, Certificate with Ready=True, Secret with tls.crt/tls.key |
| MO-0032-004 | TLS Deployment volume, mount, args, and port | tls-encryption | Volume tls-certificates, mount at /etc/memcached/tls, args -Z -o ssl_chain_cert -o ssl_key, port 11212 |
| MO-0032-005 | TLS Service port configuration | tls-encryption | Service port memcached-tls on 11212 targeting memcached-tls |
| MO-0032-006 | mTLS ca.crt volume projection and ssl_ca_cert arg | tls-mtls | Volume items include ca.crt, args include -o ssl_ca_cert=/etc/memcached/tls/ca.crt |
| MO-0032-007 | mTLS preserves standard TLS configuration | tls-mtls | All TLS assertions (volume, mount, args, ports) plus ca.crt additions |
| MO-0032-008 | Security tests follow Chainsaw conventions | All security tests | Numbered YAML files, apply/assert flow, partial object matching, standard timeouts, test-{name} CR naming |
| MO-0032-009 | Tests are spec-level assertions only (no runtime verification) | All security tests | Assertions on Deployment spec, Service spec, CR status — no pod logs or protocol connections |
Network & Service E2E Tests (MO-0033)
| REQ-ID | Requirement | Test Scenario | Key Assertions |
|---|---|---|---|
| REQ-E2E-NP-001 | NetworkPolicy creation with podSelector and port 11211 | network-policy | NetworkPolicy with operator labels, policyTypes: [Ingress], ingress port 11211/TCP |
| REQ-E2E-NP-002 | allowedSources propagation to NetworkPolicy ingress from field | network-policy | Ingress from contains podSelector with app: allowed-client |
| REQ-E2E-NP-003 | TLS port 11212 added to NetworkPolicy when TLS enabled | network-policy | Ingress ports include 11211/TCP, 11212/TCP, 9150/TCP after enabling TLS and monitoring |
| REQ-E2E-NP-004 | NetworkPolicy deleted when networkPolicy disabled | network-policy | Error assertion confirms NetworkPolicy no longer exists after disabling |
| REQ-E2E-NP-005 | Monitoring port 9150 added to NetworkPolicy when monitoring enabled | network-policy | Ingress ports include 9150/TCP alongside 11211/TCP and 11212/TCP |
| REQ-E2E-SA-001 | Service annotations propagated from CR spec | service-annotations | Service metadata.annotations contains custom annotations, labels and headless spec preserved |
| REQ-E2E-SA-002 | Service annotations cleared when removed from CR spec | service-annotations | Service metadata.annotations empty after patching spec.service: null, Service spec unchanged |
| REQ-E2E-DOC-001 | Documentation updated with new test entries | (this document) | network-policy and service-annotations sections, file structure, requirement coverage matrix |
Deployment Config E2E Tests (MO-0034)
| REQ-ID | Requirement | Test Scenario | Key Assertions |
|---|---|---|---|
| MO-0034-001 | PDB with maxUnavailable creates correct PDB and supports updates | pdb-max-unavailable | PDB with maxUnavailable=1, correct selector/labels; update to maxUnavailable=2 propagates |
| MO-0034-002 | Verbosity level propagates to container args (-v, -vv) | verbosity-extra-args | Args include -v for verbosity=1, -vv for verbosity=2, placed after standard flags |
| MO-0034-003 | extraArgs appended to container args after standard flags | verbosity-extra-args | Args include -o modern after standard flags; update to new extraArgs propagates |
| MO-0034-004 | Custom exporter image used for monitoring sidecar | custom-exporter-image | Exporter sidecar uses custom image v0.14.0; update to v0.15.4 propagates |
| MO-0034-005 | Pod security context propagated to Deployment | security-contexts | Pod securityContext with runAsNonRoot, fsGroup; update to runAsUser=1000 propagates |
| MO-0034-006 | Container security context propagated to Deployment | security-contexts | Container securityContext with readOnlyRootFilesystem, drop ALL; update propagates |
| MO-0034-007 | Hard anti-affinity creates requiredDuringScheduling affinity | hard-anti-affinity | requiredDuringSchedulingIgnoredDuringExecution with topologyKey and instance label selector |
Status & Scale E2E Tests (MO-0035)
| REQ-ID | Requirement | Test Scenario | Key Assertions |
|---|---|---|---|
| REQ-E2E-SD-001 | Degraded status when non-existent image specified | status-degraded | Degraded=True (Degraded), Available=False (Unavailable), Progressing=False (ProgressingComplete) |
| REQ-E2E-SD-002 | Deployment created despite invalid image | status-degraded | Deployment exists with correct labels and owner reference, pods in ImagePullBackOff |
| REQ-E2E-SZ-001 | Scale-to-zero transitions Available to False | scale-to-zero | After patching replicas=0: Available=False (Unavailable), Progressing=False (ProgressingComplete), Degraded=False (NotDegraded) |
| REQ-E2E-SZ-002 | Scale-to-zero sets Deployment replicas to 0 | scale-to-zero | Deployment.spec.replicas=0 after patching CR |
| REQ-CTL-SZ-001 | computeConditions returns Available=False when desiredReplicas=0 | (unit test) | Unit test in status_test.go verifies Available=False for 0 desired, 0 ready replicas |
| REQ-DOC-001 | Documentation updated with new test entries | (this document) | status-degraded and scale-to-zero sections, file structure, requirement coverage matrix |
Owner References GC Chain E2E Tests (MO-0036)
| REQ-ID | Requirement | Test Scenario | Key Assertions |
|---|---|---|---|
| REQ-OR-001 | Memcached CR with all features enabled | owner-references | CR with monitoring, PDB, and NetworkPolicy enabled; Deployment reaches readyReplicas=2 |
| REQ-OR-002 | Deployment ownerReferences set correctly | owner-references | ownerReferences: kind=Memcached, apiVersion=memcached.c5c3.io/v1alpha1, name=test-owner-refs, controller=true, blockOwnerDeletion=true |
| REQ-OR-003 | Service ownerReferences set correctly | owner-references | ownerReferences: kind=Memcached, apiVersion=memcached.c5c3.io/v1alpha1, controller=true, blockOwnerDeletion=true |
| REQ-OR-004 | PDB ownerReferences set correctly | owner-references | ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true |
| REQ-OR-005 | NetworkPolicy ownerReferences set correctly | owner-references | ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true |
| REQ-OR-006 | ServiceMonitor ownerReferences set correctly | owner-references | ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true |
| REQ-OR-007 | Single test with CR creation and 5 assertion steps | owner-references | One chainsaw-test.yaml with create + 5 individual assertion steps; does NOT delete CR |
| REQ-OR-008 | Documentation updated with owner-references test | (this document) | File structure, test scenario section, requirement coverage matrix all include owner-references |
Autoscaling E2E Tests (MO-0042)
| REQ-ID | Requirement | Test Scenario | Key Assertions |
|---|---|---|---|
| MO-0042-001 | HPA created with correct scaleTargetRef, metrics, and behavior | autoscaling-enable | HPA scaleTargetRef=Deployment, CPU metric at 80%, scaleDown stabilization 300s, minReplicas=2, maxReplicas=10, standard labels |
| MO-0042-002 | Deployment has no hardcoded replicas when autoscaling enabled | autoscaling-enable | Deployment exists without spec.replicas field; HPA controls scaling |
| MO-0042-003 | HPA deleted and Deployment replicas set when autoscaling disabled | autoscaling-disable | HPA no longer exists (error assertion); Deployment.spec.replicas=3, readyReplicas=3 |
| MO-0042-004 | HPA updated when minReplicas and maxReplicas patched | autoscaling-update | HPA minReplicas=3 and maxReplicas=15 after patching; scaleTargetRef unchanged |
| MO-0042-005 | Webhook rejects CR with spec.replicas and autoscaling.enabled=true | webhook-rejection | Apply returns $error != null for CR with replicas=3 and autoscaling.enabled=true |
| MO-0042-006 | Webhook rejects CR with autoscaling.minReplicas > maxReplicas | webhook-rejection | Apply returns $error != null for CR with minReplicas=10 and maxReplicas=5 |
| MO-0042-007 | Webhook rejects CR with CPU metric but no resources.requests.cpu | webhook-rejection | Apply returns $error != null for CR with CPU utilization metric and no cpu request |
| MO-0042-008 | Documentation updated with autoscaling test scenarios and coverage matrix | (this document) | File structure, three test scenario sections, webhook rejection table, requirement coverage matrix all include autoscaling |
Known Limitations
| Limitation | Impact | Mitigation |
|---|---|---|
| Pod scheduling time varies | Assert timeouts may need adjustment in slow CI | Global assert timeout set to 120s |
| cert-manager required | Webhook and TLS/mTLS tests fail without cert-manager | Documented as prerequisite; tests fail clearly with connection refused |
| ServiceMonitor CRD required | monitoring-toggle and cr-deletion tests fail without CRD | Documented as prerequisite; Chainsaw reports clear assertion error |
| Sequential execution | Full suite takes longer than parallel execution | parallel: 1 avoids resource contention on small clusters |
| No runtime protocol testing | SASL/TLS/mTLS tests verify Deployment spec, not actual memcached protocol | By design: tests are fast, deterministic, and need no memcached client |
| Certificate issuance delay | cert-manager may take time to issue certificates in CI | Explicit assert-certificate-ready step waits for Ready=True within 120s |
| No absence assertion for ssl_ca_cert in TLS test | Chainsaw asserts presence but not absence; TLS test cannot verify ssl_ca_cert absent when enableClientCert is false | mTLS test asserts ssl_ca_cert present only when enableClientCert: true; combined, both tests confirm correct behavior |
| Annotation removal uses JMESPath absence check | service-annotations test uses JMESPath to assert annotations are absent or empty after removal | Assertion actively fails if annotations remain on the Service; upgrades confidence over simple field omission |
| Hard anti-affinity with single-node kind | hard-anti-affinity test uses replicas=1 to avoid scheduling failures on single-node kind; verifies Deployment spec, not scheduling | Spec assertion confirms operator translates antiAffinityPreset: hard to requiredDuringSchedulingIgnoredDuringExecution |
| Degraded test depends on image pull timing | status-degraded test relies on kubelet reporting ImagePullBackOff within 120s for operator to set Degraded=True | The 120s timeout is generous; image pull failures are typically reported within seconds by the kubelet |
| Scale-to-zero Available=False behavior change | computeConditions changed to return Available=False when desiredReplicas=0; previously returned Available=True | Intentional: zero replicas cannot serve traffic, so Available=False is correct; existing tests updated accordingly |
Troubleshooting
cert-manager not ready
If webhook tests fail with connection refused or TLS handshake errors, cert-manager may not be fully ready:
```shell
# Check cert-manager pods are Running
kubectl get pods -n cert-manager

# Wait for webhook to be ready
kubectl wait --for=condition=Available deployment/cert-manager-webhook \
  -n cert-manager --timeout=120s

# Verify certificates are issued
kubectl get certificates -A
```
TLS/mTLS Certificate not ready
If TLS or mTLS tests fail at the assert-certificate-ready step, the cert-manager Certificate may not have been issued:
```shell
# Check Certificate status in the test namespace
kubectl get certificates -A
kubectl describe certificate test-tls-cert -n <chainsaw-namespace>

# Check cert-manager logs for issuance errors
kubectl logs -n cert-manager deployment/cert-manager -c cert-manager --tail=20

# Verify the Issuer is ready
kubectl get issuers -A
```
Common causes:
- cert-manager pods not yet running (check `kubectl get pods -n cert-manager`)
- cert-manager webhook not ready (self-signed Issuer needs the webhook to validate)
- Namespace mismatch (Chainsaw auto-injects namespaces; the Issuer and Certificate must be in the same namespace)
ServiceMonitor CRD missing
The monitoring-toggle and cr-deletion tests require the ServiceMonitor CRD. If assertions fail with no matches for kind "ServiceMonitor":
```shell
# Install Prometheus Operator CRDs
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml

# Verify the CRD is installed
kubectl get crd servicemonitors.monitoring.coreos.com
```
Pod scheduling timeout
If assertions timeout waiting for pods to become ready:
```shell
# Check pending pods and events
kubectl get pods -A --field-selector=status.phase!=Running
kubectl get events --sort-by='.lastTimestamp' -A | tail -20

# Check node resources
kubectl describe nodes | grep -A 5 "Allocated resources"

# Increase assert timeout if needed (in .chainsaw.yaml)
# spec.timeouts.assert: 180s
```
Debugging test failures with kubectl logs
```shell
# Check operator logs for reconciliation errors
kubectl logs -n memcached-operator-system deployment/memcached-operator-controller-manager \
  -c manager --tail=50

# Check specific test namespace (Chainsaw creates unique namespaces)
kubectl get ns | grep chainsaw
kubectl get all -n <chainsaw-namespace>

# Run a single test with verbose output
$(LOCALBIN)/chainsaw test --test-dir test/e2e/monitoring-toggle/ -v 3
```
Adding a New E2E Test
1. Create the test directory
```shell
mkdir test/e2e/my-new-test/
```
2. Create the test definition
```yaml
# test/e2e/my-new-test/chainsaw-test.yaml
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
  name: my-new-test
spec:
  description: >
    Verify that <feature> works end-to-end (REQ-XXX).
  steps:
    - name: create-memcached-cr
      try:
        - apply:
            file: 00-memcached.yaml
    - name: assert-expected-state
      try:
        - assert:
            file: 01-assert-result.yaml
```
3. Create resource and assertion files
Use the naming convention:
- `00-*.yaml` — Initial resource to apply
- `01-assert-*.yaml` — Assertions on initial state
- `02-patch-*.yaml` — Patches to modify state
- `03-assert-*.yaml` — Assertions on modified state
- `0N-error-*-gone.yaml` — Negative assertions (resource should not exist)
4. Follow conventions
- Use partial objects in assertions — only specify fields you care about
- Use the standard label set: `app.kubernetes.io/name`, `app.kubernetes.io/instance`, `app.kubernetes.io/managed-by`
- Reference shared fixtures from `test/e2e/resources/` when the minimal CR template applies
- For webhook rejection tests, use `expect` with `($error != null): true` on `apply`
- For deletion tests, use `error` operations with resource references
5. Run the test
```shell
# Run all E2E tests
make test-e2e

# Run a specific test directory
$(LOCALBIN)/chainsaw test --test-dir test/e2e/my-new-test/
```