
Chainsaw E2E Tests

Reference documentation for the Kyverno Chainsaw end-to-end test suite that validates the Memcached operator against a real kind cluster. Scenarios cover deployment, scaling, configuration changes, monitoring, PDB management, graceful rolling updates, webhook validation, garbage collection, SASL authentication, TLS encryption, mutual TLS (mTLS), the NetworkPolicy lifecycle, Service annotation propagation, degraded-status detection, scale-to-zero behavior, owner-reference GC chain validation, and the HPA autoscaling lifecycle.

Source: test/e2e/

Overview

The E2E test suite exercises the operator end-to-end by deploying it to a kind cluster and applying Memcached custom resources via kubectl. Unlike envtest integration tests that run against an in-process API server, these tests validate the full operator lifecycle including controller watches, leader election, webhook TLS, and Kubernetes garbage collection.

The suite uses Kyverno Chainsaw v0.2.12, a declarative Kubernetes E2E testing framework. Each test scenario is defined in YAML with steps that apply resources, patch them, and assert on the resulting cluster state using partial object matching.
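Each scenario directory contains a Chainsaw Test resource whose steps chain apply, assert, and error operations. As an illustrative sketch of that shape (the step and file names here are placeholders, not copied from the repository):

```yaml
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
  name: example-scenario
spec:
  steps:
    - name: create-memcached-cr
      try:
        - apply:
            file: 00-memcached.yaml        # create the CR under test
    - name: assert-deployment-created
      try:
        - assert:
            file: 01-assert-deployment.yaml  # partial object match against cluster state
```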


Test Infrastructure

Chainsaw Configuration (.chainsaw.yaml)

The global configuration at the project root controls timeouts and execution:

yaml
apiVersion: chainsaw.kyverno.io/v1alpha2
kind: Configuration
metadata:
  name: memcached-operator-e2e
spec:
  timeouts:
    apply: 30s      # Resource creation timeout
    assert: 120s    # Assertion timeout (allows for pod scheduling)
    cleanup: 60s    # Namespace cleanup timeout
    delete: 30s     # Deletion timeout
    error: 30s      # Error assertion timeout
  cleanup:
    skipDelete: false
  execution:
    failFast: true   # Stop on first failure
    parallel: 1      # Sequential execution across test cases
  discovery:
    testDirs:
      - test/e2e

Key timeout rationale:

  • assert: 120s — Pod scheduling and readiness can vary significantly in CI; 120s accommodates slow schedulers without producing spurious failures.
  • cleanup: 60s — Allows Kubernetes garbage collection to cascade through owner references before the namespace is force-deleted.
  • parallel: 1 — Tests run sequentially to avoid resource contention on small kind clusters.

Makefile Target

bash
make test-e2e

Downloads Chainsaw v0.2.12 via go install (using the same go-install-tool pattern as controller-gen, kustomize, and other project tools) and runs the test suite:

makefile
CHAINSAW ?= $(LOCALBIN)/chainsaw
CHAINSAW_VERSION ?= v0.2.12

.PHONY: test-e2e
test-e2e: chainsaw ## Run end-to-end tests against a kind cluster using Chainsaw.
    $(CHAINSAW) test

Prerequisites

Before running make test-e2e, the following must be in place:

| Prerequisite | Purpose | Setup Command |
|---|---|---|
| kind cluster running | Target cluster for tests | kind create cluster |
| Operator deployed | Controller manager running in cluster | make deploy IMG=<image> |
| cert-manager installed | Webhook TLS certificates | kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.1/cert-manager.yaml |
| ServiceMonitor CRD | Required for monitoring-toggle test | Install via Prometheus Operator CRDs |

Shared Test Fixtures (test/e2e/resources/)

Reusable YAML templates referenced by multiple test scenarios:

| File | Purpose |
|---|---|
| memcached-minimal.yaml | Minimal valid Memcached CR (1 replica, memcached:1.6, 64Mi maxMemoryMB) |
| assert-deployment.yaml | Partial Deployment assertion (labels, replicas, container args, port) |
| assert-service.yaml | Partial headless Service assertion (clusterIP: None, port 11211, selectors) |
| assert-status-available.yaml | Status assertion (readyReplicas: 1, Available=True) |

File Structure

text
test/e2e/
├── autoscaling-disable/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with autoscaling enabled (test-autoscaling-disable)
│   ├── 01-assert-hpa.yaml          # HPA exists before disabling
│   ├── 02-patch-disable.yaml       # Patch autoscaling.enabled=false, replicas=3
│   ├── 03-error-hpa-gone.yaml      # HPA deleted assertion
│   └── 03-assert-deployment.yaml   # Deployment with replicas=3
├── autoscaling-enable/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with autoscaling enabled (test-autoscaling-enable)
│   ├── 01-assert-hpa.yaml          # HPA with scaleTargetRef, metrics, behavior
│   ├── 01-assert-deployment.yaml   # Deployment without hardcoded replicas
│   └── 02-assert-status.yaml       # Status condition assertions
├── autoscaling-update/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with autoscaling min=2, max=10 (test-autoscaling-update)
│   ├── 01-assert-hpa.yaml          # HPA with initial minReplicas=2, maxReplicas=10
│   ├── 02-patch-update.yaml        # Patch minReplicas=3, maxReplicas=15
│   └── 03-assert-hpa-updated.yaml  # HPA with updated bounds
├── basic-deployment/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # Minimal CR (test-basic)
│   ├── 01-assert-deployment.yaml   # Deployment assertions
│   ├── 01-assert-service.yaml      # Service assertions
│   └── 02-assert-status.yaml       # Status condition assertions
├── scaling/
│   ├── chainsaw-test.yaml
│   ├── 00-memcached.yaml           # CR with replicas=1
│   ├── 01-assert-one-replica.yaml
│   ├── 02-patch-scale-up.yaml      # Patch replicas to 3
│   ├── 03-assert-three-replicas.yaml
│   ├── 03-assert-status-scaled.yaml
│   ├── 04-patch-scale-down.yaml    # Patch replicas to 1
│   └── 05-assert-one-replica.yaml
├── configuration-changes/
│   ├── chainsaw-test.yaml
│   ├── 00-memcached.yaml           # CR with default config
│   ├── 01-assert-initial-args.yaml
│   ├── 02-patch-config.yaml        # Patch maxMemoryMB, threads, maxItemSize
│   └── 03-assert-updated-args.yaml
├── monitoring-toggle/
│   ├── chainsaw-test.yaml
│   ├── 00-memcached.yaml           # CR without monitoring
│   ├── 01-assert-no-exporter.yaml
│   ├── 02-patch-enable-monitoring.yaml
│   ├── 03-assert-exporter.yaml     # Exporter sidecar on port 9150
│   ├── 03-assert-service-metrics.yaml
│   ├── 03-assert-servicemonitor.yaml  # ServiceMonitor with labels and endpoints
│   ├── 04-patch-disable-monitoring.yaml
│   ├── 05-assert-no-exporter.yaml  # Exporter sidecar removed
│   └── 05-error-servicemonitor-gone.yaml
├── pdb-creation/
│   ├── chainsaw-test.yaml
│   ├── 00-memcached.yaml           # CR with PDB enabled (replicas=3)
│   ├── 01-assert-deployment.yaml
│   ├── 01-assert-pdb.yaml          # PDB with minAvailable=1
│   ├── 02-patch-disable-pdb.yaml
│   └── 03-error-pdb-gone.yaml
├── graceful-rolling-update/
│   ├── chainsaw-test.yaml
│   ├── 00-memcached.yaml           # CR with gracefulShutdown enabled
│   ├── 01-assert-deployment.yaml   # Strategy + preStop + terminationGracePeriod
│   ├── 02-patch-update-image.yaml  # Image change to trigger rollout
│   └── 03-assert-rolling-update.yaml
├── webhook-rejection/
│   ├── chainsaw-test.yaml
│   ├── 00-invalid-memory-limit.yaml
│   ├── 01-invalid-pdb-both.yaml
│   ├── 02-invalid-graceful-shutdown.yaml
│   ├── 03-invalid-sasl-no-secret.yaml
│   ├── 04-invalid-tls-no-secret.yaml
│   ├── 05-invalid-pdb-neither.yaml
│   ├── 06-invalid-pdb-min-ge-replicas.yaml
│   ├── 07-invalid-autoscaling-replicas-conflict.yaml
│   ├── 08-invalid-autoscaling-min-gt-max.yaml
│   └── 09-invalid-autoscaling-cpu-no-request.yaml
├── cr-deletion/
│   ├── chainsaw-test.yaml
│   ├── 00-memcached.yaml           # CR with monitoring and PDB enabled
│   ├── 01-assert-deployment.yaml
│   ├── 01-assert-service.yaml
│   ├── 01-assert-pdb.yaml
│   ├── 01-assert-servicemonitor.yaml
│   ├── 02-error-deployment-gone.yaml
│   ├── 02-error-service-gone.yaml
│   ├── 02-error-pdb-gone.yaml
│   ├── 02-error-servicemonitor-gone.yaml
│   └── 02-error-cr-gone.yaml
├── sasl-authentication/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-sasl-secret.yaml         # Opaque Secret with password-file key
│   ├── 00-memcached.yaml           # CR with security.sasl.enabled: true
│   ├── 01-assert-deployment.yaml   # SASL volume, mount, and args assertions
│   └── 02-assert-status.yaml       # Status condition assertions
├── tls-encryption/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-cert-manager.yaml        # Self-signed Issuer + Certificate
│   ├── 00-assert-certificate-ready.yaml  # Certificate Ready=True assertion
│   ├── 01-memcached.yaml           # CR with security.tls.enabled: true
│   ├── 02-assert-deployment.yaml   # TLS volume, mount, args, port assertions
│   ├── 02-assert-service.yaml      # Service TLS port assertion
│   └── 03-assert-status.yaml       # Status condition assertions
├── tls-mtls/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-cert-manager.yaml        # Self-signed Issuer + Certificate (with CA)
│   ├── 00-assert-certificate-ready.yaml  # Certificate Ready=True assertion
│   ├── 01-memcached.yaml           # CR with tls.enabled + enableClientCert
│   ├── 02-assert-deployment.yaml   # mTLS volume (ca.crt), args (ssl_ca_cert)
│   ├── 02-assert-service.yaml      # Service TLS port assertion
│   └── 03-assert-status.yaml       # Status condition assertions
├── network-policy/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with networkPolicy.enabled: true
│   ├── 01-assert-deployment.yaml   # Deployment ready assertion
│   ├── 01-assert-networkpolicy.yaml # NetworkPolicy with podSelector, port 11211
│   ├── 02-patch-allowed-sources.yaml # Patch allowedSources with podSelector
│   ├── 03-assert-networkpolicy-allowed-sources.yaml # NetworkPolicy with from peer
│   ├── 04-cert-manager.yaml        # Self-signed Issuer + Certificate
│   ├── 04-assert-certificate-ready.yaml # Certificate Ready=True assertion
│   ├── 05-patch-enable-tls-monitoring.yaml # Enable TLS and monitoring
│   ├── 06-assert-networkpolicy-all-ports.yaml # Ports 11211, 11212, 9150
│   ├── 07-patch-disable-networkpolicy.yaml # Disable networkPolicy
│   └── 08-error-networkpolicy-gone.yaml # NetworkPolicy deleted assertion
├── service-annotations/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with service.annotations
│   ├── 01-assert-service.yaml      # Service with custom annotations
│   ├── 02-patch-update-annotations.yaml # Patch with new annotations
│   ├── 03-assert-service-updated.yaml # Service with updated annotations
│   ├── 04-patch-remove-annotations.yaml # Remove annotations (service: null)
│   └── 05-assert-service-no-annotations.yaml # Service without annotations
├── pdb-max-unavailable/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with PDB maxUnavailable=1 (replicas=3)
│   ├── 01-assert-deployment.yaml   # Deployment ready assertion
│   ├── 01-assert-pdb.yaml          # PDB with maxUnavailable=1
│   ├── 02-patch-max-unavailable.yaml # Patch maxUnavailable to 2
│   └── 03-assert-pdb-updated.yaml  # PDB with maxUnavailable=2
├── verbosity-extra-args/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with verbosity=1 and extraArgs
│   ├── 01-assert-deployment.yaml   # Args with -v and -o modern
│   ├── 02-patch-config.yaml        # Patch verbosity=2, new extraArgs
│   └── 03-assert-deployment.yaml   # Args with -vv and new extraArgs
├── custom-exporter-image/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with custom exporterImage
│   ├── 01-assert-deployment.yaml   # Exporter with custom image
│   ├── 02-patch-exporter-image.yaml # Patch to default exporter image
│   └── 03-assert-deployment.yaml   # Exporter with updated image
├── security-contexts/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with pod and container security contexts
│   ├── 01-assert-deployment.yaml   # Security contexts on pod and container
│   ├── 02-patch-security-contexts.yaml # Patch with runAsUser=1000
│   └── 03-assert-deployment.yaml   # Updated security contexts
├── hard-anti-affinity/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with antiAffinityPreset=hard
│   └── 01-assert-deployment.yaml   # requiredDuringScheduling anti-affinity
├── status-degraded/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with non-existent image (test-degraded)
│   ├── 01-assert-deployment.yaml   # Deployment created with invalid image
│   └── 01-assert-status.yaml       # Degraded=True, Available=False, Progressing=False (ProgressingComplete)
├── scale-to-zero/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with replicas=1 (test-scale-zero)
│   ├── 01-assert-status-available.yaml # Initial Available=True, readyReplicas=1
│   ├── 02-patch-scale-zero.yaml    # Patch replicas to 0
│   ├── 03-assert-deployment.yaml   # Deployment.spec.replicas=0
│   └── 03-assert-status.yaml       # Available=False, Progressing=False, Degraded=False
├── owner-references/
│   ├── chainsaw-test.yaml          # Test definition
│   ├── 00-memcached.yaml           # CR with all features enabled (test-owner-refs)
│   ├── 01-assert-deployment.yaml   # Deployment ownerReferences assertion
│   ├── 01-assert-service.yaml      # Service ownerReferences assertion
│   ├── 01-assert-pdb.yaml          # PDB ownerReferences assertion
│   ├── 01-assert-networkpolicy.yaml # NetworkPolicy ownerReferences assertion
│   └── 01-assert-servicemonitor.yaml # ServiceMonitor ownerReferences assertion
└── resources/
    ├── memcached-minimal.yaml
    ├── assert-deployment.yaml
    ├── assert-service.yaml
    └── assert-status-available.yaml

Test Scenarios

1. Basic Deployment (REQ-002)

Directory: test/e2e/basic-deployment/

Verifies that creating a minimal Memcached CR produces the expected Deployment, headless Service, and status conditions.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-cr | apply 00-memcached.yaml | CR created |
| assert-deployment-created | assert 01-assert-deployment.yaml | Deployment with correct labels, args (-m 64 -c 1024 -t 4 -I 1m), port 11211 |
| assert-service-created | assert 01-assert-service.yaml | Headless Service (clusterIP: None), port 11211, correct selectors |
| assert-status-available | assert 02-assert-status.yaml | readyReplicas: 1, Available=True |

Owner references on Deployment and Service are verified as part of the Deployment and Service assertion files (Chainsaw partial matching includes metadata.ownerReferences).
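For illustration, a partial Deployment assertion of the kind used here might look as follows. Chainsaw treats the file as a subset that must match the live object, so any unlisted fields are ignored; the API group and owner-reference details below are assumptions, not copied from the repository:

```yaml
# 01-assert-deployment.yaml (sketch): only the listed fields must match
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-basic
  ownerReferences:
    - apiVersion: cache.example.com/v1alpha1   # hypothetical API group
      kind: Memcached
      name: test-basic
      controller: true
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: memcached
          args: ["-m", "64", "-c", "1024", "-t", "4", "-I", "1m"]
          ports:
            - containerPort: 11211
```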

2. Scaling (REQ-003)

Directory: test/e2e/scaling/

Verifies that updating spec.replicas scales the Deployment and updates status.readyReplicas.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-cr | apply | CR with replicas=1 |
| assert-initial-deployment | assert | Deployment.spec.replicas=1 |
| scale-up-to-3 | patch replicas=3 | |
| assert-scaled-deployment | assert | Deployment.spec.replicas=3 |
| assert-scaled-status | assert | status.readyReplicas=3 |
| scale-down-to-1 | patch replicas=1 | |
| assert-scaled-down | assert | Deployment.spec.replicas=1 |

3. Configuration Changes (REQ-004)

Directory: test/e2e/configuration-changes/

Verifies that changing memcached config fields triggers a rolling update with correct container args.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-cr | apply | CR with maxMemoryMB=64, threads=4 |
| assert-initial-args | assert | Container args: -m 64 -c 1024 -t 4 -I 1m |
| update-configuration | patch maxMemoryMB=256, threads=8, maxItemSize=2m | |
| assert-updated-args | assert | Container args: -m 256 ... -t 8 -I 2m |

4. Monitoring Toggle (REQ-005)

Directory: test/e2e/monitoring-toggle/

Verifies that enabling monitoring injects the exporter sidecar and adds a metrics port to the Service.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-without-monitoring | apply | CR without monitoring |
| assert-no-exporter-sidecar | assert | Deployment has 1 container (memcached only) |
| enable-monitoring | patch monitoring.enabled=true | |
| assert-exporter-sidecar-injected | assert | 2 containers: memcached (port 11211) + exporter (port 9150) |
| assert-service-metrics-port | assert | Service has metrics port |
| assert-servicemonitor-created | assert | ServiceMonitor with correct labels, endpoints, and selector |
| disable-monitoring | patch monitoring.enabled=false | |
| assert-exporter-sidecar-removed | assert | Deployment has 1 container (memcached only) |
| assert-servicemonitor-deleted | error | ServiceMonitor is removed |

Prerequisite: ServiceMonitor CRD must be installed in the cluster.

5. PDB Creation (REQ-006)

Directory: test/e2e/pdb-creation/

Verifies that enabling PDB creates a PodDisruptionBudget with correct settings.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-pdb | apply | CR with replicas=3, PDB enabled, minAvailable=1 |
| assert-deployment-ready | assert | Deployment with 3 replicas |
| assert-pdb-created | assert | PDB with minAvailable=1, correct selector, owner reference |
| disable-pdb | patch PDB enabled=false | |
| assert-pdb-deleted | error | PDB is removed |

6. Graceful Rolling Update (REQ-007)

Directory: test/e2e/graceful-rolling-update/

Verifies that graceful shutdown configures preStop hooks and the RollingUpdate strategy, and that image changes trigger a correct rolling update.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-graceful-shutdown | apply | CR with gracefulShutdown enabled |
| assert-graceful-shutdown-config | assert | RollingUpdate (maxSurge=1, maxUnavailable=0), preStop hook, terminationGracePeriodSeconds |
| trigger-rolling-update | patch image | |
| assert-rolling-update-strategy | assert | All pods running new image, strategy preserved |

7. Webhook Rejection (REQ-008)

Directory: test/e2e/webhook-rejection/

Verifies that the validating webhook rejects invalid CRs. Each step uses Chainsaw's expect with ($error != null): true to assert that the apply operation fails.

| Step | Invalid CR | Expected Rejection Reason |
|---|---|---|
| reject-insufficient-memory-limit | maxMemoryMB=64, memory limit=32Mi | Memory limit < maxMemoryMB + 32Mi overhead |
| reject-pdb-mutual-exclusivity | Both minAvailable and maxUnavailable set | Mutually exclusive fields |
| reject-graceful-shutdown-invalid-period | terminationGracePeriodSeconds <= preStopDelaySeconds | Termination period must exceed pre-stop delay |
| reject-sasl-without-secret-ref | sasl.enabled=true, no credentialsSecretRef.name | Missing required secret reference |
| reject-tls-without-secret-ref | tls.enabled=true, no certificateSecretRef.name | Missing required secret reference |
| reject-pdb-neither-set | PDB enabled, neither minAvailable nor maxUnavailable | Exactly one of minAvailable or maxUnavailable required |
| reject-pdb-min-available-ge-replicas | PDB minAvailable >= replicas | minAvailable must be less than replicas |
| reject-autoscaling-replicas-conflict | spec.replicas=3 and autoscaling.enabled=true | spec.replicas and autoscaling.enabled are mutually exclusive |
| reject-autoscaling-min-gt-max | autoscaling.minReplicas=10, maxReplicas=5 | minReplicas must not exceed maxReplicas |
| reject-autoscaling-cpu-no-request | CPU utilization metric without resources.requests.cpu | CPU utilization metric requires resources.requests.cpu |
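Each rejection step follows the same shape: an apply that is expected to fail. A sketch of one such step, assuming the file layout above:

```yaml
- name: reject-insufficient-memory-limit
  try:
    - apply:
        file: 00-invalid-memory-limit.yaml
        expect:
          - check:
              # the operation must return an error (webhook denial)
              ($error != null): true
```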

8. CR Deletion & Garbage Collection (REQ-009)

Directory: test/e2e/cr-deletion/

Verifies that deleting a Memcached CR garbage-collects all owned resources.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-all-resources | apply | CR with monitoring and PDB enabled |
| assert-all-resources-exist | assert | Deployment, Service, PDB, ServiceMonitor all present |
| delete-memcached-cr | delete Memcached/test-deletion | |
| assert-all-resources-garbage-collected | error | Deployment, Service, PDB, ServiceMonitor, and CR are all gone |

The error operation asserts that the specified resource does not exist — the assertion succeeds when the GET returns NotFound.
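An error file is itself just a partial manifest; the step passes when no live object matches it. A sketch, using the CR name from this scenario:

```yaml
# 02-error-deployment-gone.yaml (sketch): the step succeeds only if no
# Deployment matching this partial object exists in the test namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deletion
```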

9. SASL Authentication (MO-0032 REQ-001, REQ-006, REQ-008)

Directory: test/e2e/sasl-authentication/

Verifies that enabling SASL authentication creates the correct Secret volume, volumeMount, and container args (-Y <authfile>) in the Deployment.

| Step | Operation | Assertion |
|---|---|---|
| create-sasl-secret | apply 00-sasl-secret.yaml | Opaque Secret with password-file key created |
| create-memcached-cr | apply 00-memcached.yaml | CR with security.sasl.enabled: true, credentialsSecretRef.name: test-sasl-credentials |
| assert-deployment-sasl | assert 01-assert-deployment | Volume sasl-credentials with item {key: password-file, path: password-file}, mount at /etc/memcached/sasl (readOnly), args include -Y /etc/memcached/sasl/password-file |
| assert-status-available | assert 02-assert-status | readyReplicas: 1, Available=True |

The SASL Secret must be created before the Memcached CR because the validating webhook requires credentialsSecretRef.name to reference an existing Secret.
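The Secret fixture might look roughly like the following sketch; the literal credential content is an assumption (memcached's -Y authfile expects user:password lines), not taken from the repository:

```yaml
# 00-sasl-secret.yaml (sketch): must exist before the CR is applied,
# since the webhook validates credentialsSecretRef.name
apiVersion: v1
kind: Secret
metadata:
  name: test-sasl-credentials
type: Opaque
stringData:
  password-file: "memcacheduser:examplepassword"  # assumed authfile content
```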

CRD fields tested:

  • spec.security.sasl.enabled — Enables SASL authentication
  • spec.security.sasl.credentialsSecretRef.name — References the Secret containing the password file

10. TLS Encryption (MO-0032 REQ-002, REQ-003, REQ-004, REQ-007, REQ-008, REQ-009)

Directory: test/e2e/tls-encryption/

Verifies that enabling TLS encryption creates a cert-manager Certificate, adds the TLS volume, volumeMount, -Z and ssl_chain_cert/ssl_key container args, and configures port 11212 on the Deployment and Service.

| Step | Operation | Assertion |
|---|---|---|
| create-cert-manager-resources | apply 00-cert-manager.yaml | Self-signed Issuer + Certificate (secretName: test-tls-certs) |
| assert-certificate-ready | assert 00-assert-certificate-ready | Certificate status Ready=True |
| create-memcached-cr | apply 01-memcached.yaml | CR with security.tls.enabled: true, certificateSecretRef.name: test-tls-certs |
| assert-deployment-tls | assert 02-assert-deployment | Volume tls-certificates with items tls.crt, tls.key; mount at /etc/memcached/tls (readOnly); args include -Z -o ssl_chain_cert=... -o ssl_key=...; port memcached-tls on 11212 |
| assert-service-tls-port | assert 02-assert-service | Service ports include memcached-tls on port 11212 targeting memcached-tls |
| assert-status-available | assert 03-assert-status | readyReplicas: 1, Available=True |

The Certificate must reach Ready=True before applying the Memcached CR to ensure the TLS Secret exists (avoiding a race condition where the operator cannot mount the Secret volume).

CRD fields tested:

  • spec.security.tls.enabled — Enables TLS encryption
  • spec.security.tls.certificateSecretRef.name — References the cert-manager Secret

cert-manager resources:

  • Self-signed Issuer (test-tls-selfsigned)
  • Certificate (test-tls-cert) generating Secret test-tls-certs with tls.crt and tls.key
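The fixture pairs a self-signed Issuer with a Certificate that emits the Secret the CR references. A sketch consistent with the names above (the dnsNames entry is an assumption):

```yaml
# 00-cert-manager.yaml (sketch)
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-tls-selfsigned
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-tls-cert
spec:
  secretName: test-tls-certs      # consumed via certificateSecretRef.name
  issuerRef:
    name: test-tls-selfsigned
    kind: Issuer
  dnsNames:
    - memcached.example.svc       # assumed SAN
```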

11. Mutual TLS / mTLS (MO-0032 REQ-004, REQ-005, REQ-008, REQ-009)

Directory: test/e2e/tls-mtls/

Verifies that enabling TLS with enableClientCert: true adds the ca.crt key projection to the TLS volume and the ssl_ca_cert arg to the container, in addition to the standard TLS configuration.

| Step | Operation | Assertion |
|---|---|---|
| create-cert-manager-resources | apply 00-cert-manager.yaml | Self-signed Issuer + Certificate (secretName: test-mtls-certs) |
| assert-certificate-ready | assert 00-assert-certificate-ready | Certificate status Ready=True |
| create-memcached-cr | apply 01-memcached.yaml | CR with tls.enabled: true, enableClientCert: true, certificateSecretRef.name: test-mtls-certs |
| assert-deployment-mtls | assert 02-assert-deployment | Volume items include ca.crt alongside tls.crt/tls.key; args include -o ssl_ca_cert=/etc/memcached/tls/ca.crt; port memcached-tls on 11212 |
| assert-service-tls-port | assert 02-assert-service | Service ports include memcached-tls on port 11212 |
| assert-status-available | assert 03-assert-status | readyReplicas: 1, Available=True |

The mTLS test extends TLS by verifying that enableClientCert: true causes the operator to project the ca.crt key from the Secret and add the ssl_ca_cert=/etc/memcached/tls/ca.crt arg to enable client certificate verification.

CRD fields tested:

  • spec.security.tls.enabled — Enables TLS encryption
  • spec.security.tls.enableClientCert — Enables mutual TLS (client cert verification)
  • spec.security.tls.certificateSecretRef.name — References the cert-manager Secret

Difference from TLS test: The TLS volume includes three items (tls.crt, tls.key, ca.crt) instead of two, and the container args include an additional -o ssl_ca_cert=/etc/memcached/tls/ca.crt.
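That difference shows up directly in the Deployment assertion. A sketch of the relevant fragment (volume name and exact arg ordering are assumptions):

```yaml
# 02-assert-deployment.yaml (sketch): mTLS-specific fields only
spec:
  template:
    spec:
      containers:
        - name: memcached
          args: ["-Z",
                 "-o", "ssl_chain_cert=/etc/memcached/tls/tls.crt",
                 "-o", "ssl_key=/etc/memcached/tls/tls.key",
                 "-o", "ssl_ca_cert=/etc/memcached/tls/ca.crt"]
      volumes:
        - name: tls-certificates
          secret:
            secretName: test-mtls-certs
            items:
              - {key: tls.crt, path: tls.crt}
              - {key: tls.key, path: tls.key}
              - {key: ca.crt, path: ca.crt}   # projected only when enableClientCert is true
```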

12. NetworkPolicy Lifecycle (MO-0033 REQ-E2E-NP-001 through NP-005)

Directory: test/e2e/network-policy/

Verifies the full NetworkPolicy lifecycle: creation with correct podSelector and ingress port 11211, allowedSources propagation, port adaptation when TLS and monitoring are enabled (11211, 11212, 9150), and deletion when networkPolicy is disabled.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-networkpolicy | apply 00-memcached.yaml | CR with security.networkPolicy.enabled: true (test-netpol) |
| assert-deployment-ready | assert 01-assert-deployment.yaml | Deployment with correct labels |
| assert-networkpolicy-created | assert 01-assert-networkpolicy.yaml | NetworkPolicy with podSelector matching operator labels, policyTypes: [Ingress], port 11211/TCP |
| patch-allowed-sources | patch 02-patch-allowed-sources.yaml | Add allowedSources with podSelector app: allowed-client |
| assert-networkpolicy-allowed-sources | assert 03-assert-networkpolicy-allowed-sources.yaml | NetworkPolicy ingress from field contains podSelector with app: allowed-client |
| create-cert-manager-resources | apply 04-cert-manager.yaml | Self-signed Issuer + Certificate (secretName: test-netpol-certs) |
| assert-certificate-ready | assert 04-assert-certificate-ready.yaml | Certificate status Ready=True |
| patch-enable-tls-monitoring | patch 05-patch-enable-tls-monitoring.yaml | Enable TLS (certificateSecretRef.name: test-netpol-certs) and monitoring |
| assert-networkpolicy-all-ports | assert 06-assert-networkpolicy-all-ports.yaml | NetworkPolicy ingress ports: 11211/TCP, 11212/TCP, 9150/TCP; from peer preserved |
| disable-networkpolicy | patch 07-patch-disable-networkpolicy.yaml | Patch security.networkPolicy.enabled: false |
| assert-networkpolicy-deleted | error 08-error-networkpolicy-gone.yaml | NetworkPolicy resource no longer exists |

Prerequisite: cert-manager must be installed in the cluster (required for the TLS port adaptation step).

CRD fields tested:

  • spec.security.networkPolicy.enabled — Enables/disables the NetworkPolicy
  • spec.security.networkPolicy.allowedSources — Configures ingress from peers
  • spec.security.tls.enabled — Adds port 11212 to the NetworkPolicy
  • spec.monitoring.enabled — Adds port 9150 to the NetworkPolicy
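The all-ports assertion after TLS and monitoring are enabled can be sketched as follows (the NetworkPolicy name is assumed to match the CR name):

```yaml
# 06-assert-networkpolicy-all-ports.yaml (sketch)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-netpol
spec:
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: allowed-client        # preserved from the earlier patch
      ports:
        - {protocol: TCP, port: 11211}   # memcached
        - {protocol: TCP, port: 11212}   # TLS
        - {protocol: TCP, port: 9150}    # exporter metrics
```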

13. Service Annotations (MO-0033 REQ-E2E-SA-001, REQ-E2E-SA-002)

Directory: test/e2e/service-annotations/

Verifies that custom annotations defined in spec.service.annotations are propagated to the managed headless Service, that updating annotations propagates the changes, and that removing annotations clears them from the Service.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-annotations | apply 00-memcached.yaml | CR with two annotations: external-dns.alpha.kubernetes.io/hostname and service.beta.kubernetes.io/aws-load-balancer-internal (test-svc-ann) |
| assert-service-has-annotations | assert 01-assert-service.yaml | Service has both custom annotations, correct labels, headless (clusterIP: None), port 11211 |
| update-annotations | patch 02-patch-update-annotations.yaml | Replace annotations with external-dns.alpha.kubernetes.io/hostname: memcached-updated.example.com and prometheus.io/scrape: "true" |
| assert-service-annotations-updated | assert 03-assert-service-updated.yaml | Service annotations contain the updated key-value pairs |
| remove-annotations | patch 04-patch-remove-annotations.yaml | Patch spec.service: null to remove all annotations |
| assert-service-no-annotations | assert 05-assert-service-no-annotations.yaml | Service has correct labels and spec; JMESPath expression asserts annotations are absent or empty |

CRD fields tested:

  • spec.service.annotations — Custom annotations propagated to the managed Service
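Asserting that annotations are gone needs a JMESPath check rather than a literal match, since Kubernetes may leave the map null or empty. One way to phrase this in a Chainsaw assertion tree is sketched below; the exact expression in the repository may differ:

```yaml
# 05-assert-service-no-annotations.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: test-svc-ann
  # JMESPath check: true when the annotations map is missing or empty
  (length(keys(annotations || `{}`)) == `0`): true
```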

14. PDB maxUnavailable (MO-0034 REQ-001)

Directory: test/e2e/pdb-max-unavailable/

Verifies that configuring PDB with maxUnavailable (instead of minAvailable) creates a PodDisruptionBudget with the correct maxUnavailable setting, and that updating it propagates to the PDB.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-pdb-max-unavailable | apply 00-memcached.yaml | CR with replicas=3, PDB enabled, maxUnavailable=1 |
| assert-deployment-ready | assert 01-assert-deployment | Deployment with 3 replicas |
| assert-pdb-max-unavailable | assert 01-assert-pdb | PDB with maxUnavailable=1, correct selector, labels |
| update-max-unavailable | patch maxUnavailable=2 | |
| assert-pdb-updated | assert 03-assert-pdb-updated | PDB with maxUnavailable=2 |

CRD fields tested:

  • spec.highAvailability.podDisruptionBudget.enabled — Enables the PDB
  • spec.highAvailability.podDisruptionBudget.maxUnavailable — Sets maxUnavailable on the PDB

15. Verbosity and Extra Args (MO-0034 REQ-002, REQ-003)

Directory: test/e2e/verbosity-extra-args/

Verifies that setting memcached.verbosity and memcached.extraArgs propagates to the Deployment container args, and that updating them triggers a rolling update with the correct args.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-verbosity-and-extra-args | apply 00-memcached.yaml | CR with verbosity=1, extraArgs=["-o", "modern"] |
| assert-initial-args | assert 01-assert-deployment | Args include -v -o modern after standard flags |
| update-verbosity-and-extra-args | patch verbosity=2, extraArgs=["--max-reqs-per-event", "20"] | |
| assert-updated-args | assert 03-assert-deployment | Args include -vv --max-reqs-per-event 20 |

CRD fields tested:

  • spec.memcached.verbosity — Controls verbosity flag (0=none, 1=-v, 2=-vv)
  • spec.memcached.extraArgs — Additional command-line arguments appended after standard flags
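The updated-args assertion can be sketched as a partial container match; the standard flag values shown before the verbosity flag are assumed to match the minimal CR defaults:

```yaml
# 03-assert-deployment.yaml (sketch): verbosity=2 plus new extraArgs
spec:
  template:
    spec:
      containers:
        - name: memcached
          args: ["-m", "64", "-c", "1024", "-t", "4", "-I", "1m",
                 "-vv", "--max-reqs-per-event", "20"]
```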

16. Custom Exporter Image (MO-0034 REQ-004)

Directory: test/e2e/custom-exporter-image/

Verifies that specifying a custom exporter image in the monitoring config uses that image for the exporter sidecar instead of the default.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-custom-exporter | apply 00-memcached.yaml | CR with monitoring enabled, exporterImage=v0.14.0 |
| assert-custom-exporter-image | assert 01-assert-deployment | Exporter sidecar uses custom image v0.14.0 |
| update-exporter-image | patch exporterImage=v0.15.4 | |
| assert-updated-exporter-image | assert 03-assert-deployment | Exporter sidecar uses updated image v0.15.4 |

CRD fields tested:

  • spec.monitoring.enabled — Enables the exporter sidecar
  • spec.monitoring.exporterImage — Custom image for the exporter sidecar

17. Security Contexts (MO-0034 REQ-005, REQ-006)

Directory: test/e2e/security-contexts/

Verifies that custom pod and container security contexts defined in spec.security are propagated to the Deployment pod template, and that updating them triggers a rolling update with the new settings.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-security-contexts | apply 00-memcached.yaml | CR with runAsNonRoot, readOnlyRootFilesystem, drop ALL |
| assert-security-contexts | assert 01-assert-deployment | Pod and container security contexts match CR spec |
| update-security-contexts | patch runAsUser=1000, fsGroup=1000 | |
| assert-updated-security-contexts | assert 03-assert-deployment | Updated security contexts with runAsUser=1000 |

CRD fields tested:

  • spec.security.podSecurityContext — Pod-level security context (runAsNonRoot, fsGroup)
  • spec.security.containerSecurityContext — Container-level security context (readOnlyRootFilesystem, capabilities)

18. Hard Anti-Affinity (MO-0034 REQ-007)

Directory: test/e2e/hard-anti-affinity/

Verifies that setting antiAffinityPreset to "hard" configures requiredDuringSchedulingIgnoredDuringExecution pod anti-affinity on the Deployment, with the correct topology key and label selector.

| Step | Operation | Assertion |
|---|---|---|
| create-memcached-with-hard-anti-affinity | apply 00-memcached.yaml | CR with antiAffinityPreset="hard" |
| assert-hard-anti-affinity | assert 01-assert-deployment | requiredDuringScheduling anti-affinity with topologyKey and instance label selector |

CRD fields tested:

  • spec.highAvailability.antiAffinityPreset — Controls pod anti-affinity ("soft" or "hard")
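A sketch of the hard anti-affinity assertion; the topology key and label key below are assumptions based on common operator conventions, not copied from the repository:

```yaml
# 01-assert-deployment.yaml (sketch)
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname               # assumed topology key
              labelSelector:
                matchLabels:
                  app.kubernetes.io/instance: test-hard-affinity  # assumed label
```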

19. Status Degraded (MO-0035 REQ-E2E-SD-001, REQ-E2E-SD-002)

Directory: test/e2e/status-degraded/

Verifies that a Memcached CR with a non-existent container image reports Degraded=True and Available=False status conditions. The operator creates the Deployment, but pods fail to pull the image (ImagePullBackOff), causing zero ready replicas and triggering the degraded status path.

Step | Operation | Assertion
create-memcached-cr | apply 00-memcached.yaml | CR with image memcached:nonexistent-tag-does-not-exist (test-degraded)
assert-deployment-created | assert 01-assert-deployment | Deployment exists with invalid image, correct labels, owner reference
assert-status-degraded | assert 01-assert-status | Degraded=True (reason: Degraded), Available=False (reason: Unavailable), Progressing=False (reason: ProgressingComplete)

CRD fields tested:

  • spec.replicas — Desired replica count (1)
  • spec.image — Non-existent image triggers degraded status
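
Status conditions live in an array, so the assertion file plausibly uses Chainsaw's JMESPath projections rather than positional array matching. A sketch (the actual file may use a different form; the condition values come from the step table above):

yaml
# Sketch of 01-assert-status using Chainsaw JMESPath keys
apiVersion: memcached.c5c3.io/v1alpha1
kind: Memcached
metadata:
  name: test-degraded
status:
  (conditions[?type == 'Degraded']):
    - status: "True"
      reason: Degraded
  (conditions[?type == 'Available']):
    - status: "False"
      reason: Unavailable
  (conditions[?type == 'Progressing']):
    - status: "False"
      reason: ProgressingComplete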

20. Scale to Zero (MO-0035 REQ-E2E-SZ-001, REQ-E2E-SZ-002)

Directory: test/e2e/scale-to-zero/

Verifies that patching a healthy Memcached CR from replicas=1 to replicas=0 results in Available=False, Progressing=False, and Degraded=False. The test follows the apply-assert-patch-assert flow, first confirming a healthy starting state before scaling down.

Step | Operation | Assertion
create-memcached-cr | apply 00-memcached.yaml | CR with replicas=1 (test-scale-zero)
assert-initial-status | assert 01-assert-status-available | Available=True, readyReplicas=1
scale-to-zero | patch 02-patch-scale-zero.yaml | Patch spec.replicas to 0
assert-deployment-scaled | assert 03-assert-deployment | Deployment.spec.replicas=0
assert-status-unavailable | assert 03-assert-status | Available=False (Unavailable), Progressing=False (ProgressingComplete), Degraded=False (NotDegraded)

CRD fields tested:

  • spec.replicas — Scale-to-zero behavior (patched from 1 to 0)
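
The patch file for the scale-down step is likely nothing more than the CR identity plus the changed field (a sketch, not the verbatim fixture):

yaml
# Sketch of 02-patch-scale-zero.yaml
apiVersion: memcached.c5c3.io/v1alpha1
kind: Memcached
metadata:
  name: test-scale-zero
spec:
  replicas: 0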

Controller change: The computeConditions function in internal/controller/status.go was updated to return Available=False when desiredReplicas=0 (previously scale-to-zero incorrectly reported Available=True).

21. Owner References GC Chain (MO-0036 REQ-OR-001 through REQ-OR-006)

Directory: test/e2e/owner-references/

Verifies that all child resources created by the operator have correct ownerReferences pointing to the parent Memcached CR with controller=true and blockOwnerDeletion=true. This validates the mechanism (ownerReferences set on creation) separately from the cr-deletion test that validates the outcome (resources cleaned up on deletion).

Step | Operation | Assertion
create-memcached-with-all-features | apply 00-memcached.yaml | CR with monitoring, PDB, and NetworkPolicy enabled (test-owner-refs)
assert-deployment-owner-reference | assert 01-assert-deployment.yaml | Deployment ownerReferences: kind=Memcached, apiVersion=memcached.c5c3.io/v1alpha1, controller=true, blockOwnerDeletion=true
assert-service-owner-reference | assert 01-assert-service.yaml | Service ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true
assert-pdb-owner-reference | assert 01-assert-pdb.yaml | PDB ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true
assert-networkpolicy-owner-reference | assert 01-assert-networkpolicy.yaml | NetworkPolicy ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true
assert-servicemonitor-owner-reference | assert 01-assert-servicemonitor.yaml | ServiceMonitor ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true

The test does not delete the CR or assert resource cleanup — that is covered by the cr-deletion test (scenario 8). This separation allows pinpointing whether a GC failure is due to missing ownerReferences or a different Kubernetes issue.

CRD fields tested (indirectly via ownerReferences on child resources):

  • spec.monitoring.enabled — Creates ServiceMonitor with ownerReference
  • spec.highAvailability.podDisruptionBudget.enabled — Creates PDB with ownerReference
  • spec.security.networkPolicy.enabled — Creates NetworkPolicy with ownerReference
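
As a sketch, the Deployment assertion likely resembles the following partial object (assuming the child resource shares the CR's name, which is the common operator pattern):

yaml
# Sketch of 01-assert-deployment.yaml for the owner-references test
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-owner-refs
  ownerReferences:
    - apiVersion: memcached.c5c3.io/v1alpha1
      kind: Memcached
      name: test-owner-refs
      controller: true
      blockOwnerDeletion: true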

22. Autoscaling Enable (MO-0042 REQ-001, REQ-006, REQ-007)

Directory: test/e2e/autoscaling-enable/

Verifies that creating a Memcached CR with autoscaling.enabled=true produces an HPA with the correct scaleTargetRef, minReplicas, maxReplicas, defaulted CPU utilization metric at 80%, and defaulted scaleDown stabilization window of 300 seconds. The Deployment must exist but must NOT have a hardcoded replica count (HPA controls replicas).

Step | Operation | Assertion
create-memcached-cr | apply 00-memcached.yaml | CR with autoscaling.enabled: true, minReplicas=2, maxReplicas=10, resources.requests.cpu: 50m (test-autoscaling-enable)
assert-hpa-created | assert 01-assert-hpa | HPA with scaleTargetRef=Deployment/test-autoscaling-enable, minReplicas=2, maxReplicas=10, CPU metric at 80%, scaleDown 300s
assert-deployment-created | assert 01-assert-deployment | Deployment exists with standard labels; no spec.replicas field (HPA controls scaling)
assert-status-available | assert 02-assert-status | Status conditions indicate availability

HPA assertion details:

  • scaleTargetRef: apiVersion=apps/v1, kind=Deployment, name=test-autoscaling-enable
  • metrics[0]: type=Resource, resource.name=cpu, target.type=Utilization, averageUtilization=80
  • behavior.scaleDown.stabilizationWindowSeconds: 300
  • Labels: app.kubernetes.io/name=memcached, app.kubernetes.io/instance=test-autoscaling-enable, app.kubernetes.io/managed-by=memcached-operator
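
Taken together, the HPA assertion file plausibly looks like this partial object (a sketch assembled from the details above; the HPA name is assumed to match the CR):

yaml
# Sketch of 01-assert-hpa (autoscaling/v2, partial object)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: test-autoscaling-enable    # assumed name
  labels:
    app.kubernetes.io/name: memcached
    app.kubernetes.io/instance: test-autoscaling-enable
    app.kubernetes.io/managed-by: memcached-operator
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-autoscaling-enable
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300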

CRD fields tested:

  • spec.autoscaling.enabled — Enables HPA creation
  • spec.autoscaling.minReplicas — HPA minimum replicas
  • spec.autoscaling.maxReplicas — HPA maximum replicas
  • spec.resources.requests.cpu — Required for CPU utilization metric (validated by webhook)

23. Autoscaling Disable (MO-0042 REQ-002, REQ-006, REQ-007)

Directory: test/e2e/autoscaling-disable/

Verifies that disabling autoscaling on a running Memcached CR deletes the HPA and that setting spec.replicas takes effect on the Deployment. This is a two-phase test: first create with autoscaling enabled (assert HPA exists), then patch to disable autoscaling with explicit replicas.

Step | Operation | Assertion
create-memcached-with-autoscaling | apply 00-memcached.yaml | CR with autoscaling.enabled: true, minReplicas=2, maxReplicas=10 (test-autoscaling-disable)
assert-hpa-created | assert 01-assert-hpa | HPA exists with scaleTargetRef=Deployment/test-autoscaling-disable
disable-autoscaling | patch 02-patch-disable.yaml | Set autoscaling.enabled: false and spec.replicas: 3
assert-hpa-deleted | error 03-error-hpa-gone | HPA no longer exists (autoscaling/v2 HPA for test-autoscaling-disable)
assert-deployment-replicas | assert 03-assert-deployment | Deployment.spec.replicas=3, status.readyReplicas=3

CRD fields tested:

  • spec.autoscaling.enabled — Set to false to trigger HPA deletion
  • spec.replicas — Set explicitly when disabling autoscaling (must be provided in the same patch)
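
A sketch of the disable patch, with both fields changed in one manifest as the webhook requires (illustrative, not the verbatim fixture):

yaml
# Sketch of 02-patch-disable.yaml: both fields must change together
apiVersion: memcached.c5c3.io/v1alpha1
kind: Memcached
metadata:
  name: test-autoscaling-disable
spec:
  autoscaling:
    enabled: false
  replicas: 3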

24. Autoscaling Update (MO-0042 REQ-004, REQ-006, REQ-007)

Directory: test/e2e/autoscaling-update/

Verifies that updating minReplicas and maxReplicas on a running autoscaled Memcached CR propagates the changes to the HPA without deleting and recreating it.

Step | Operation | Assertion
create-memcached-with-autoscaling | apply 00-memcached.yaml | CR with autoscaling.enabled: true, minReplicas=2, maxReplicas=10 (test-autoscaling-update)
assert-initial-hpa | assert 01-assert-hpa | HPA with minReplicas=2, maxReplicas=10, scaleTargetRef=Deployment/test-autoscaling-update
update-autoscaling-bounds | patch 02-patch-update.yaml | Patch autoscaling.minReplicas: 3, autoscaling.maxReplicas: 15
assert-hpa-updated | assert 03-assert-hpa-updated | HPA with minReplicas=3, maxReplicas=15; scaleTargetRef unchanged; labels preserved

CRD fields tested:

  • spec.autoscaling.minReplicas — Updated from 2 to 3
  • spec.autoscaling.maxReplicas — Updated from 10 to 15
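
The bounds patch is likely just the CR identity plus the two changed fields (a sketch):

yaml
# Sketch of 02-patch-update.yaml
apiVersion: memcached.c5c3.io/v1alpha1
kind: Memcached
metadata:
  name: test-autoscaling-update
spec:
  autoscaling:
    minReplicas: 3
    maxReplicas: 15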

Test Patterns

Partial Object Matching

Chainsaw asserts on partial objects — only the fields specified in the assertion YAML must match. This avoids brittleness from defaulted or controller-managed fields.

yaml
# Only checks these specific fields, ignores everything else
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-basic
  labels:
    app.kubernetes.io/name: memcached
    app.kubernetes.io/instance: test-basic
    app.kubernetes.io/managed-by: memcached-operator
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: memcached
          args: ["-m", "64", "-c", "1024", "-t", "4", "-I", "1m"]

Apply-Assert-Patch-Assert Flow

Most tests follow a four-phase pattern:

  1. Apply — Create the initial Memcached CR
  2. Assert — Verify the initial resource state
  3. Patch — Modify the CR spec (scaling, config change, feature toggle)
  4. Assert — Verify the updated resource state
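
As a sketch, the four phases map onto Chainsaw steps like this (file names are illustrative, following the suite's numbering convention):

yaml
# Illustrative step sequence; file names are hypothetical
steps:
  - name: create-memcached-cr
    try:
      - apply:
          file: 00-memcached.yaml
  - name: assert-initial-state
    try:
      - assert:
          file: 01-assert-deployment.yaml
  - name: patch-memcached-cr
    try:
      - patch:
          file: 02-patch-replicas.yaml
  - name: assert-updated-state
    try:
      - assert:
          file: 03-assert-deployment.yaml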

Error Expectations for Webhook Tests

Webhook rejection tests use Chainsaw's expect mechanism on apply operations to assert that resource creation fails:

yaml
steps:
  - name: reject-insufficient-memory-limit
    try:
      - apply:
          file: 00-invalid-memory-limit.yaml
          expect:
            - check:
                ($error != null): true

Negative Assertions for Deletion Tests

Deletion tests use the error operation type, which succeeds when the resource does not exist:

yaml
steps:
  - name: assert-all-resources-garbage-collected
    try:
      - error:
          file: 02-error-deployment-gone.yaml

Where the error file contains a resource reference that should no longer exist:

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deletion

Namespace Isolation

Chainsaw automatically creates a unique namespace for each test and cleans it up afterward. Test resources do not specify a namespace — Chainsaw injects it at runtime. This provides complete isolation between test cases.

Prerequisite Resource Ordering (Security Tests)

Security tests require resources to exist before the Memcached CR is applied:

  1. SASL — The SASL Secret must be created first because the validating webhook checks that credentialsSecretRef.name references an existing Secret. Applying the CR before the Secret causes a webhook rejection.

  2. TLS/mTLS — cert-manager Issuer and Certificate must be created first, and the Certificate must reach Ready=True before the CR is applied. This ensures the TLS Secret exists so the operator can mount it as a volume.

This is implemented as separate Chainsaw steps with apply followed by assert (for the Certificate readiness check) before the CR apply step.
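
A sketch of that ordering for the TLS tests (file names are assumptions; the readiness file asserts Ready=True on the cert-manager Certificate):

yaml
# Illustrative prerequisite ordering; file names are hypothetical
steps:
  - name: create-issuer-and-certificate
    try:
      - apply:
          file: 00-issuer.yaml
      - apply:
          file: 00-certificate.yaml
  - name: assert-certificate-ready   # blocks until cert-manager issues the cert
    try:
      - assert:
          file: 01-assert-certificate-ready.yaml
  - name: create-memcached-cr        # TLS Secret now exists for the volume mount
    try:
      - apply:
          file: 02-memcached.yaml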

Spec-Level Assertions (Security Tests)

Security tests assert exclusively on Kubernetes resource specs — they do not verify runtime protocol behavior. This means:

  • No test step connects to memcached via TLS or SASL
  • All assertions target Deployment spec (volumes, mounts, args, ports), Service spec (ports), or CR status (conditions)
  • Tests pass in a kind cluster without any memcached client tools
  • Tests complete deterministically within the 120s assert timeout

Requirement Coverage Matrix

REQ-ID | Requirement | Test Scenario | Key Assertions
REQ-001 | Chainsaw configuration and Makefile target | All (infrastructure) | .chainsaw.yaml config, make test-e2e target
REQ-002 | Basic deployment: Deployment, Service, status | basic-deployment | Labels, container args, headless Service, Available=True
REQ-003 | Scaling: replicas up and down | scaling | Deployment.spec.replicas, status.readyReplicas
REQ-004 | Configuration changes: container args updated | configuration-changes | Args reflect maxMemoryMB, threads, maxItemSize
REQ-005 | Monitoring toggle: exporter sidecar, ServiceMonitor | monitoring-toggle | Container count, port 9150, Service metrics port, ServiceMonitor labels/endpoints, disable removes sidecar and ServiceMonitor
REQ-006 | PDB creation and deletion: minAvailable, selector | pdb-creation | PDB spec, selector labels, owner reference, disable removes PDB
REQ-007 | Graceful rolling update: strategy, preStop, image update | graceful-rolling-update | maxSurge=1, maxUnavailable=0, preStop hook, new image
REQ-008 | Webhook rejection: invalid CRs rejected | webhook-rejection | Ten invalid CR variants all rejected (memory, PDB, graceful shutdown, SASL, TLS, autoscaling)
REQ-009 | CR deletion: garbage collection | cr-deletion | Deployment, Service, PDB, ServiceMonitor, CR all removed
REQ-010 | Makefile integration | All (infrastructure) | make test-e2e runs chainsaw test

Security E2E Tests (MO-0032)

REQ-ID | Requirement | Test Scenario | Key Assertions
MO-0032-001 | SASL Secret and CR configuration propagation | sasl-authentication | Secret with password-file key, CR with sasl.enabled: true and credentialsSecretRef
MO-0032-002 | SASL Deployment volume, mount, and args | sasl-authentication | Volume sasl-credentials, mount at /etc/memcached/sasl, args -Y /etc/memcached/sasl/password-file
MO-0032-003 | TLS cert-manager Certificate creation | tls-encryption | Self-signed Issuer, Certificate with Ready=True, Secret with tls.crt/tls.key
MO-0032-004 | TLS Deployment volume, mount, args, and port | tls-encryption | Volume tls-certificates, mount at /etc/memcached/tls, args -Z -o ssl_chain_cert -o ssl_key, port 11212
MO-0032-005 | TLS Service port configuration | tls-encryption | Service port memcached-tls on 11212 targeting memcached-tls
MO-0032-006 | mTLS ca.crt volume projection and ssl_ca_cert arg | tls-mtls | Volume items include ca.crt, args include -o ssl_ca_cert=/etc/memcached/tls/ca.crt
MO-0032-007 | mTLS preserves standard TLS configuration | tls-mtls | All TLS assertions (volume, mount, args, ports) plus ca.crt additions
MO-0032-008 | Security tests follow Chainsaw conventions | All security tests | Numbered YAML files, apply/assert flow, partial object matching, standard timeouts, test-{name} CR naming
MO-0032-009 | Tests are spec-level assertions only (no runtime verification) | All security tests | Assertions on Deployment spec, Service spec, CR status; no pod logs or protocol connections

Network & Service E2E Tests (MO-0033)

REQ-ID | Requirement | Test Scenario | Key Assertions
REQ-E2E-NP-001 | NetworkPolicy creation with podSelector and port 11211 | network-policy | NetworkPolicy with operator labels, policyTypes: [Ingress], ingress port 11211/TCP
REQ-E2E-NP-002 | allowedSources propagation to NetworkPolicy ingress from field | network-policy | Ingress from contains podSelector with app: allowed-client
REQ-E2E-NP-003 | TLS port 11212 added to NetworkPolicy when TLS enabled | network-policy | Ingress ports include 11211/TCP, 11212/TCP, 9150/TCP after enabling TLS and monitoring
REQ-E2E-NP-004 | NetworkPolicy deleted when networkPolicy disabled | network-policy | Error assertion confirms NetworkPolicy no longer exists after disabling
REQ-E2E-NP-005 | Monitoring port 9150 added to NetworkPolicy when monitoring enabled | network-policy | Ingress ports include 9150/TCP alongside 11211/TCP and 11212/TCP
REQ-E2E-SA-001 | Service annotations propagated from CR spec | service-annotations | Service metadata.annotations contains custom annotations, labels and headless spec preserved
REQ-E2E-SA-002 | Service annotations cleared when removed from CR spec | service-annotations | Service metadata.annotations empty after patching spec.service: null, Service spec unchanged
REQ-E2E-DOC-001 | Documentation updated with new test entries | (this document) | network-policy and service-annotations sections, file structure, requirement coverage matrix

Deployment Config E2E Tests (MO-0034)

REQ-ID | Requirement | Test Scenario | Key Assertions
MO-0034-001 | PDB with maxUnavailable creates correct PDB and supports updates | pdb-max-unavailable | PDB with maxUnavailable=1, correct selector/labels; update to maxUnavailable=2 propagates
MO-0034-002 | Verbosity level propagates to container args (-v, -vv) | verbosity-extra-args | Args include -v for verbosity=1, -vv for verbosity=2, placed after standard flags
MO-0034-003 | extraArgs appended to container args after standard flags | verbosity-extra-args | Args include -o modern after standard flags; update to new extraArgs propagates
MO-0034-004 | Custom exporter image used for monitoring sidecar | custom-exporter-image | Exporter sidecar uses custom image v0.14.0; update to v0.15.4 propagates
MO-0034-005 | Pod security context propagated to Deployment | security-contexts | Pod securityContext with runAsNonRoot, fsGroup; update to runAsUser=1000 propagates
MO-0034-006 | Container security context propagated to Deployment | security-contexts | Container securityContext with readOnlyRootFilesystem, drop ALL; update propagates
MO-0034-007 | Hard anti-affinity creates requiredDuringScheduling affinity | hard-anti-affinity | requiredDuringSchedulingIgnoredDuringExecution with topologyKey and instance label selector

Status & Scale E2E Tests (MO-0035)

REQ-ID | Requirement | Test Scenario | Key Assertions
REQ-E2E-SD-001 | Degraded status when non-existent image specified | status-degraded | Degraded=True (Degraded), Available=False (Unavailable), Progressing=False (ProgressingComplete)
REQ-E2E-SD-002 | Deployment created despite invalid image | status-degraded | Deployment exists with correct labels and owner reference, pods in ImagePullBackOff
REQ-E2E-SZ-001 | Scale-to-zero transitions Available to False | scale-to-zero | After patching replicas=0: Available=False (Unavailable), Progressing=False (ProgressingComplete), Degraded=False (NotDegraded)
REQ-E2E-SZ-002 | Scale-to-zero sets Deployment replicas to 0 | scale-to-zero | Deployment.spec.replicas=0 after patching CR
REQ-CTL-SZ-001 | computeConditions returns Available=False when desiredReplicas=0 | (unit test) | Unit test in status_test.go verifies Available=False for 0 desired, 0 ready replicas
REQ-DOC-001 | Documentation updated with new test entries | (this document) | status-degraded and scale-to-zero sections, file structure, requirement coverage matrix

Owner References GC Chain E2E Tests (MO-0036)

REQ-ID | Requirement | Test Scenario | Key Assertions
REQ-OR-001 | Memcached CR with all features enabled | owner-references | CR with monitoring, PDB, and NetworkPolicy enabled; Deployment reaches readyReplicas=2
REQ-OR-002 | Deployment ownerReferences set correctly | owner-references | ownerReferences: kind=Memcached, apiVersion=memcached.c5c3.io/v1alpha1, name=test-owner-refs, controller=true, blockOwnerDeletion=true
REQ-OR-003 | Service ownerReferences set correctly | owner-references | ownerReferences: kind=Memcached, apiVersion=memcached.c5c3.io/v1alpha1, controller=true, blockOwnerDeletion=true
REQ-OR-004 | PDB ownerReferences set correctly | owner-references | ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true
REQ-OR-005 | NetworkPolicy ownerReferences set correctly | owner-references | ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true
REQ-OR-006 | ServiceMonitor ownerReferences set correctly | owner-references | ownerReferences: kind=Memcached, controller=true, blockOwnerDeletion=true
REQ-OR-007 | Single test with CR creation and 5 assertion steps | owner-references | One chainsaw-test.yaml with create + 5 individual assertion steps; does NOT delete CR
REQ-OR-008 | Documentation updated with owner-references test | (this document) | File structure, test scenario section, requirement coverage matrix all include owner-references

Autoscaling E2E Tests (MO-0042)

REQ-ID | Requirement | Test Scenario | Key Assertions
MO-0042-001 | HPA created with correct scaleTargetRef, metrics, and behavior | autoscaling-enable | HPA scaleTargetRef=Deployment, CPU metric at 80%, scaleDown stabilization 300s, minReplicas=2, maxReplicas=10, standard labels
MO-0042-002 | Deployment has no hardcoded replicas when autoscaling enabled | autoscaling-enable | Deployment exists without spec.replicas field; HPA controls scaling
MO-0042-003 | HPA deleted and Deployment replicas set when autoscaling disabled | autoscaling-disable | HPA no longer exists (error assertion); Deployment.spec.replicas=3, readyReplicas=3
MO-0042-004 | HPA updated when minReplicas and maxReplicas patched | autoscaling-update | HPA minReplicas=3 and maxReplicas=15 after patching; scaleTargetRef unchanged
MO-0042-005 | Webhook rejects CR with spec.replicas and autoscaling.enabled=true | webhook-rejection | Apply returns $error != null for CR with replicas=3 and autoscaling.enabled=true
MO-0042-006 | Webhook rejects CR with autoscaling.minReplicas > maxReplicas | webhook-rejection | Apply returns $error != null for CR with minReplicas=10 and maxReplicas=5
MO-0042-007 | Webhook rejects CR with CPU metric but no resources.requests.cpu | webhook-rejection | Apply returns $error != null for CR with CPU utilization metric and no cpu request
MO-0042-008 | Documentation updated with autoscaling test scenarios and coverage matrix | (this document) | File structure, three test scenario sections, webhook rejection table, requirement coverage matrix all include autoscaling

Known Limitations

Limitation | Impact | Mitigation
Pod scheduling time varies | Assert timeouts may need adjustment in slow CI | Global assert timeout set to 120s
cert-manager required | Webhook and TLS/mTLS tests fail without cert-manager | Documented as prerequisite; tests fail clearly with connection refused
ServiceMonitor CRD required | monitoring-toggle and cr-deletion tests fail without CRD | Documented as prerequisite; Chainsaw reports clear assertion error
Sequential execution | Full suite takes longer than parallel execution | parallel: 1 avoids resource contention on small clusters
No runtime protocol testing | SASL/TLS/mTLS tests verify Deployment spec, not actual memcached protocol | By design: tests are fast, deterministic, and need no memcached client
Certificate issuance delay | cert-manager may take time to issue certificates in CI | Explicit assert-certificate-ready step waits for Ready=True within 120s
No absence assertion for ssl_ca_cert in TLS test | Chainsaw asserts presence but not absence; TLS test cannot verify ssl_ca_cert absent when enableClientCert is false | mTLS test asserts ssl_ca_cert present only when enableClientCert: true; combined, both tests confirm correct behavior
Annotation removal uses JMESPath absence check | service-annotations test uses JMESPath to assert annotations are absent or empty after removal | Assertion actively fails if annotations remain on the Service; upgrades confidence over simple field omission
Hard anti-affinity with single-node kind | hard-anti-affinity test uses replicas=1 to avoid scheduling failures on single-node kind; verifies Deployment spec, not scheduling | Spec assertion confirms operator translates antiAffinityPreset: hard to requiredDuringSchedulingIgnoredDuringExecution
Degraded test depends on image pull timing | status-degraded test relies on kubelet reporting ImagePullBackOff within 120s for operator to set Degraded=True | The 120s timeout is generous; image pull failures are typically reported within seconds by the kubelet
Scale-to-zero Available=False behavior change | computeConditions changed to return Available=False when desiredReplicas=0; previously returned Available=True | Intentional: zero replicas cannot serve traffic, so Available=False is correct; existing tests updated accordingly

Troubleshooting

cert-manager not ready

If webhook tests fail with connection refused or TLS handshake errors, cert-manager may not be fully ready:

bash
# Check cert-manager pods are Running
kubectl get pods -n cert-manager

# Wait for webhook to be ready
kubectl wait --for=condition=Available deployment/cert-manager-webhook \
  -n cert-manager --timeout=120s

# Verify certificates are issued
kubectl get certificates -A

TLS/mTLS Certificate not ready

If TLS or mTLS tests fail at the assert-certificate-ready step, the cert-manager Certificate may not have been issued:

bash
# Check Certificate status in the test namespace
kubectl get certificates -A
kubectl describe certificate test-tls-cert -n <chainsaw-namespace>

# Check cert-manager logs for issuance errors
kubectl logs -n cert-manager deployment/cert-manager -c cert-manager --tail=20

# Verify the Issuer is ready
kubectl get issuers -A

Common causes:

  • cert-manager pods not yet running (check kubectl get pods -n cert-manager)
  • cert-manager webhook not ready (self-signed Issuer needs the webhook to validate)
  • Namespace mismatch (Chainsaw auto-injects namespaces; the Issuer and Certificate must be in the same namespace)
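
For reference, a minimal self-signed Issuer and Certificate pair of the kind these tests create might look like the following sketch (the Issuer name and dnsNames are assumptions; test-tls-cert matches the describe command earlier in this section):

yaml
# Sketch only; the actual fixtures live under test/e2e/tls-encryption/
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer        # hypothetical name
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-tls-cert
spec:
  secretName: test-tls-cert      # Secret the operator mounts as the TLS volume
  dnsNames:
    - memcached.example.svc      # hypothetical DNS name
  issuerRef:
    name: selfsigned-issuer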

ServiceMonitor CRD missing

The monitoring-toggle and cr-deletion tests require the ServiceMonitor CRD. If assertions fail with no matches for kind "ServiceMonitor":

bash
# Install Prometheus Operator CRDs
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml

# Verify the CRD is installed
kubectl get crd servicemonitors.monitoring.coreos.com

Pod scheduling timeout

If assertions timeout waiting for pods to become ready:

bash
# Check pending pods and events
kubectl get pods -A --field-selector=status.phase!=Running
kubectl get events --sort-by='.lastTimestamp' -A | tail -20

# Check node resources
kubectl describe nodes | grep -A 5 "Allocated resources"

# Increase assert timeout if needed (in .chainsaw.yaml)
# spec.timeouts.assert: 180s

Debugging test failures with kubectl logs

bash
# Check operator logs for reconciliation errors
kubectl logs -n memcached-operator-system deployment/memcached-operator-controller-manager \
  -c manager --tail=50

# Check specific test namespace (Chainsaw creates unique namespaces)
kubectl get ns | grep chainsaw
kubectl get all -n <chainsaw-namespace>

# Run a single test with verbose output
$(LOCALBIN)/chainsaw test --test-dir test/e2e/monitoring-toggle/ -v 3

Adding a New E2E Test

1. Create the test directory

bash
mkdir test/e2e/my-new-test/

2. Create the test definition

yaml
# test/e2e/my-new-test/chainsaw-test.yaml
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
  name: my-new-test
spec:
  description: >
    Verify that <feature> works end-to-end (REQ-XXX).
  steps:
    - name: create-memcached-cr
      try:
        - apply:
            file: 00-memcached.yaml
    - name: assert-expected-state
      try:
        - assert:
            file: 01-assert-result.yaml

3. Create resource and assertion files

Use the naming convention:

  • 00-*.yaml — Initial resource to apply
  • 01-assert-*.yaml — Assertions on initial state
  • 02-patch-*.yaml — Patches to modify state
  • 03-assert-*.yaml — Assertions on modified state
  • 0N-error-*-gone.yaml — Negative assertions (resource should not exist)

4. Follow conventions

  • Use partial objects in assertions — only specify fields you care about
  • Use the standard label set: app.kubernetes.io/name, app.kubernetes.io/instance, app.kubernetes.io/managed-by
  • Reference shared fixtures from test/e2e/resources/ when the minimal CR template applies
  • For webhook rejection tests, use expect with ($error != null): true on apply
  • For deletion tests, use error operations with resource references

5. Run the test

bash
# Run all E2E tests
make test-e2e

# Run a specific test directory
$(LOCALBIN)/chainsaw test --test-dir test/e2e/my-new-test/