Infrastructure Manifests

Reference documentation for the FluxCD infrastructure manifests. These manifests define HelmRepository sources, HelmRelease operators, and infrastructure custom resources that provision the shared platform services required by OpenStack operators. Deployment is split into two phases: base resources (namespaces, sources, releases) and CRD-dependent infrastructure resources (applied after operators install their CRDs).

Directory Layout

text
deploy/
└── flux-system/
    ├── kustomization.yaml                Base kustomize overlay (namespaces, FluxInstance, sources, releases)
    ├── namespaces.yaml                   Namespace resources for all components
    ├── fluxinstance.yaml                 FluxInstance CR driving the flux-operator
    ├── sources/                          FluxCD HelmRepository CRs
    │   ├── cert-manager.yaml             Jetstack Helm chart registry
    │   ├── mariadb-operator.yaml         MariaDB Operator Helm chart registry
    │   ├── external-secrets.yaml         External Secrets Operator Helm chart registry
    │   ├── openbao.yaml                  OpenBao Helm chart registry
    │   ├── c5c3-charts.yaml              C5C3 shared OCI chart registry
    │   ├── prometheus-community.yaml     Prometheus Community OCI chart registry
    │   └── chaos-mesh.yaml               Chaos Mesh Helm chart registry (kind-only addon — see "Kind Overlay Demo Addons")
    ├── releases/                         FluxCD HelmRelease CRs
    │   ├── cert-manager.yaml             cert-manager
    │   ├── prometheus-operator-crds.yaml Prometheus Operator CRDs
    │   ├── mariadb-operator-crds.yaml    MariaDB Operator CRDs
    │   ├── mariadb-operator.yaml         MariaDB Operator
    │   ├── external-secrets.yaml         External Secrets Operator
    │   ├── memcached-operator.yaml       Memcached Operator (from c5c3-charts)
    │   ├── openbao.yaml                  OpenBao HA Raft cluster
    │   ├── keystone-operator.yaml        Keystone Operator (from c5c3-charts)
    │   └── chaos-mesh.yaml               Chaos Mesh (kind-only addon — see "Kind Overlay Demo Addons")
    └── infrastructure/                   CRD-dependent infrastructure resources
        ├── kustomization.yaml            Infrastructure kustomize overlay
        ├── cluster-issuer.yaml           Self-signed ClusterIssuer (requires cert-manager CRDs)
        ├── mariadb.yaml                  MariaDB Galera cluster for OpenStack
        └── memcached.yaml                Memcached cluster for OpenStack

All YAML files carry the SPDX Apache-2.0 license header (3 lines: copyright, blank comment, license identifier).

Namespaces

Seven Namespace resources are defined in namespaces.yaml and included as the first entry in the base kustomization. Kustomize applies Namespace resources before other resource kinds, ensuring target namespaces exist before any namespaced resources are created.

| Namespace | Purpose |
|---|---|
| cert-manager | cert-manager operator and its resources |
| mariadb-system | MariaDB Operator |
| external-secrets | External Secrets Operator |
| monitoring-system | Prometheus Operator CRDs |
| memcached-system | Memcached Operator |
| openstack | Infrastructure instance CRs (MariaDB cluster, Memcached cluster) |
| openbao-system | OpenBao HA Raft cluster |

The chaos-mesh namespace is not part of the production base. It is created inline by the kind-only opt-in overlay at deploy/kind/chaos-mesh/ when WITH_CHAOS_MESH=true make deploy-infra is used. See Chaos Mesh (kind-only opt-in) below.

Note: The install.createNamespace: true setting on HelmReleases instructs FluxCD's helm-controller to create namespaces when installing charts. However, this does not help when applying HelmRelease CRs via kubectl apply -k — the target namespace must already exist for the API server to accept namespaced resources. The explicit Namespace resources solve this chicken-and-egg problem.

FluxInstance

File: deploy/flux-system/fluxinstance.yaml

A single FluxInstance CR drives the flux-operator, which replaces the imperative flux install / flux bootstrap path with a declarative, operator-managed Flux lifecycle. The flux-operator reconciles the Flux controller Deployments from this spec and publishes a FluxReport/flux summarizing the installation state.

| Property | Value |
|---|---|
| API version | fluxcd.controlplane.io/v1 |
| Kind | FluxInstance |
| Name | flux |
| Namespace | flux-system |

Spec fields:

| Field | Value | Purpose |
|---|---|---|
| distribution.version | "2.x" | Minor-version track pinned by the operator; picks the latest Flux 2.x release |
| distribution.registry | ghcr.io/fluxcd | Controller image registry |
| components | source-controller, kustomize-controller, helm-controller, notification-controller | Four Flux controllers installed — image-automation and image-reflector controllers are omitted (not used in this project) |
| cluster.type | kubernetes | Generic Kubernetes distribution (not OpenShift/EKS-specific) |
| cluster.size | small | Small resource profile suitable for single-node kind and low-traffic management clusters |
| cluster.multitenant | false | Cross-namespace references allowed — simplifies the single-tenant management cluster model |
| cluster.networkPolicy | false | No NetworkPolicies applied to flux-system (kind overlay assumes a permissive default; production overlays opt in) |

No spec.sync block. The kind Quick Start applies Helm sources and releases directly via kubectl apply -k deploy/kind/base/, so the FluxInstance here does not carry a GitRepository sync. Production overlays that want continuous reconciliation from Git add a spec.sync block on top of this base.
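Taken together, the fields above imply a FluxInstance of roughly this shape (a sketch reconstructed from the spec table, not the verbatim file):

```yaml
apiVersion: fluxcd.controlplane.io/v1
kind: FluxInstance
metadata:
  name: flux
  namespace: flux-system
spec:
  distribution:
    version: "2.x"             # minor-version track; operator picks latest 2.x
    registry: ghcr.io/fluxcd
  components:
    - source-controller
    - kustomize-controller
    - helm-controller
    - notification-controller
  cluster:
    type: kubernetes
    size: small
    multitenant: false
    networkPolicy: false
  # no spec.sync — the kind Quick Start applies manifests via kubectl apply -k
```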

Kustomize ordering. Kustomize applies Namespace resources first by default, so flux-system exists before the FluxInstance is created. The flux-operator itself is installed out-of-band by hack/deploy-infra.sh (pinned FLUX_OPERATOR_VERSION, applied via kubectl apply -f install.yaml) before this kustomization is applied.

HelmRepository Sources

Six HelmRepository CRs define the Helm chart registries that FluxCD pulls from. All use apiVersion: source.toolkit.fluxcd.io/v1, are deployed to the flux-system namespace, and poll at interval: 1h.

| File | metadata.name | Registry URL | Type |
|---|---|---|---|
| sources/cert-manager.yaml | cert-manager | https://charts.jetstack.io | HTTPS |
| sources/mariadb-operator.yaml | mariadb-operator | https://mariadb-operator.github.io/mariadb-operator | HTTPS |
| sources/external-secrets.yaml | external-secrets | https://charts.external-secrets.io | HTTPS |
| sources/openbao.yaml | openbao | https://openbao.github.io/openbao-helm | HTTPS |
| sources/c5c3-charts.yaml | c5c3-charts | oci://ghcr.io/c5c3/charts | OCI |
| sources/prometheus-community.yaml | prometheus-community | oci://ghcr.io/prometheus-community/charts | OCI |

The chaos-mesh HelmRepository ships in the kind-only opt-in overlay at deploy/kind/chaos-mesh/source.yaml — it is intentionally absent from deploy/flux-system/{sources,kustomization.yaml}. See Chaos Mesh (kind-only opt-in).

The c5c3-charts and prometheus-community repositories are OCI-type sources (spec.type: oci). c5c3-charts hosts internally-built operator charts (e.g., memcached-operator) in the GitHub Container Registry. prometheus-community hosts Prometheus community charts (e.g., prometheus-operator-crds). All other repositories use standard HTTPS Helm registries.
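The two repository types differ only in spec.type and the URL scheme. A sketch of one HTTPS and one OCI source, assembled from the table above (not the verbatim files):

```yaml
# HTTPS Helm registry (sources/cert-manager.yaml)
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: cert-manager
  namespace: flux-system
spec:
  interval: 1h
  url: https://charts.jetstack.io
---
# OCI registry (sources/c5c3-charts.yaml)
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: c5c3-charts
  namespace: flux-system
spec:
  type: oci        # distinguishes OCI sources from default HTTPS registries
  interval: 1h
  url: oci://ghcr.io/c5c3/charts
```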

HelmRelease Operators

Eight HelmRelease CRs deploy the infrastructure operators and CRD charts. All use apiVersion: helm.toolkit.fluxcd.io/v2 and share these common settings:

| Setting | Value | Purpose |
|---|---|---|
| spec.interval | 30m | Reconciliation interval |
| spec.install.crds | CreateReplace | Install CRDs if missing, replace if outdated |
| spec.install.createNamespace | true | Auto-create target namespace |
| spec.upgrade.crds | CreateReplace | Update CRDs on chart upgrade |
| spec.upgrade.remediation.retries | 3 | Retry failed upgrades up to 3 times |

Dependency Order

cert-manager is the base layer (no dependsOn). The CRD-only charts (prometheus-operator-crds, mariadb-operator-crds) also have no dependencies. All other operators depend on cert-manager because they require TLS certificates for webhook servers. Some operators have additional dependencies on CRD charts or other operators:

text
cert-manager              (base — no dependencies)
prometheus-operator-crds  (no dependencies)
mariadb-operator-crds     (no dependencies)
├── mariadb-operator      dependsOn: cert-manager, mariadb-operator-crds
├── external-secrets      dependsOn: cert-manager
├── memcached-operator    dependsOn: cert-manager, prometheus-operator-crds
├── openbao               dependsOn: cert-manager
└── keystone-operator     dependsOn: cert-manager, mariadb-operator, memcached-operator, external-secrets

FluxCD resolves this dependency graph and installs operators in the correct order. If cert-manager is not ready, dependent operators are held in a pending state.
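Combining the common settings with a dependsOn list, a typical HelmRelease looks roughly like this sketch (mariadb-operator shown; exact field placement, such as the cross-namespace sourceRef and dependsOn namespaces, is an assumption based on the tables in this document):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: mariadb-operator
  namespace: mariadb-system          # HelmRelease CRs specify their target namespace
spec:
  interval: 30m
  dependsOn:
    - name: cert-manager
      namespace: cert-manager        # cross-namespace dependency on the base layer
    - name: mariadb-operator-crds    # CRD chart in the same namespace
  chart:
    spec:
      chart: mariadb-operator
      version: ">=0.30.0 <1.0.0"
      sourceRef:
        kind: HelmRepository
        name: mariadb-operator
        namespace: flux-system       # sources live in flux-system
  install:
    crds: CreateReplace
    createNamespace: true
  upgrade:
    crds: CreateReplace
    remediation:
      retries: 3
```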

The kind-only chaos-mesh HelmRelease (deploy/kind/chaos-mesh/) also declares dependsOn: cert-manager but is only installed when WITH_CHAOS_MESH=true make deploy-infra is used. Production overlays do not install it. See Chaos Mesh (kind-only opt-in).

cert-manager

File: deploy/flux-system/releases/cert-manager.yaml

| Property | Value |
|---|---|
| Target namespace | cert-manager |
| Chart | cert-manager |
| Version constraint | >=1.16.0 <2.0.0 |
| Source | cert-manager HelmRepository |
| Dependencies | None (base layer) |

Helm values:

| Key | Value | Purpose |
|---|---|---|
| crds.enabled | true | Install CRDs via the Helm chart |
| prometheus.enabled | false | Prometheus metrics disabled |
| startupapicheck.enabled | false | Disable startup API check job |

Prometheus Operator CRDs

File: deploy/flux-system/releases/prometheus-operator-crds.yaml

| Property | Value |
|---|---|
| Target namespace | monitoring-system |
| Chart | prometheus-operator-crds |
| Version constraint | >=17.0.0 <20.0.0 |
| Source | prometheus-community HelmRepository |
| Dependencies | None |

The Prometheus Operator CRDs chart installs ServiceMonitor, PodMonitor, PrometheusRule, and related monitoring.coreos.com CRDs. These are required by the memcached-operator controller, which unconditionally watches ServiceMonitor resources via Owns().

MariaDB Operator CRDs

File: deploy/flux-system/releases/mariadb-operator-crds.yaml

| Property | Value |
|---|---|
| Target namespace | mariadb-system |
| Chart | mariadb-operator-crds |
| Version constraint | >=0.30.0 <1.0.0 |
| Source | mariadb-operator HelmRepository |
| Dependencies | None |

A separate CRD chart is required since mariadb-operator v0.35.0. Must be installed before mariadb-operator so CRDs are available for the operator and for infrastructure CRs (e.g., MariaDB Galera cluster).

MariaDB Operator

File: deploy/flux-system/releases/mariadb-operator.yaml

| Property | Value |
|---|---|
| Target namespace | mariadb-system |
| Chart | mariadb-operator |
| Version constraint | >=0.30.0 <1.0.0 |
| Source | mariadb-operator HelmRepository |
| Dependencies | cert-manager in cert-manager namespace, mariadb-operator-crds in mariadb-system namespace |

Helm values:

| Key | Value | Purpose |
|---|---|---|
| metrics.enabled | false | Prometheus metrics disabled |
| webhook.enabled | true | Enable admission webhooks for MariaDB CRDs |

External Secrets Operator

File: deploy/flux-system/releases/external-secrets.yaml

| Property | Value |
|---|---|
| Target namespace | external-secrets |
| Chart | external-secrets |
| Version constraint | >=0.10.0 <1.0.0 |
| Source | external-secrets HelmRepository |
| Dependencies | cert-manager in cert-manager namespace |

Helm values:

| Key | Value | Purpose |
|---|---|---|
| installCRDs | true | Install CRDs via the Helm chart |
| webhook.port | 9443 | Webhook server listen port |
| certController.enabled | true | Manage webhook TLS certificates |

Memcached Operator

File: deploy/flux-system/releases/memcached-operator.yaml

| Property | Value |
|---|---|
| Target namespace | memcached-system |
| Chart | memcached-operator |
| Version constraint | >=0.1.0 <1.0.0 |
| Source | c5c3-charts HelmRepository (shared OCI registry) |
| Dependencies | cert-manager in cert-manager namespace, prometheus-operator-crds in monitoring-system namespace |

Source reference: The Memcached Operator chart is published to the shared c5c3-charts OCI registry (oci://ghcr.io/c5c3/charts), not a dedicated HelmRepository. The sourceRef.name is c5c3-charts, matching the OCI HelmRepository in sources/.

Helm values:

| Key | Value | Purpose |
|---|---|---|
| metrics.enabled | true | Expose Prometheus metrics |
| webhook.enabled | true | Enable admission webhooks for Memcached CRDs |

OpenBao

File: deploy/flux-system/releases/openbao.yaml

| Property | Value |
|---|---|
| Target namespace | openbao-system |
| Chart | openbao |
| Version constraint | >=0.5.0 <1.0.0 |
| Source | openbao HelmRepository |
| Dependencies | cert-manager in cert-manager namespace |

OpenBao is deployed as a 3-replica HA Raft cluster with TLS enabled. The injector is disabled. TLS certificates are sourced from a cert-manager-provisioned Secret (openbao-tls). See architecture/docs/09-implementation/09-openbao-deployment.md for design rationale.

Helm values:

| Key | Value | Purpose |
|---|---|---|
| global.tlsDisable | false | Enable TLS globally |
| server.authDelegator.enabled | true | Enable ClusterRoleBinding for TokenReview API (ESO auth) |
| server.ha.enabled | true | Enable HA mode |
| server.ha.replicas | 3 | 3-node Raft cluster |
| server.ha.raft.enabled | true | Use Raft storage backend |
| server.dataStorage.size | 10Gi | Persistent volume size |
| injector.enabled | false | Disable the Vault/Bao agent injector |
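Expanded into a values block, the dotted keys above nest as follows (a sketch; the nesting mirrors the dotted paths, not the verbatim file):

```yaml
values:
  global:
    tlsDisable: false        # TLS on for all listeners
  server:
    authDelegator:
      enabled: true          # ClusterRoleBinding for TokenReview (ESO auth)
    ha:
      enabled: true
      replicas: 3            # 3-node Raft cluster
      raft:
        enabled: true        # Raft integrated storage backend
    dataStorage:
      size: 10Gi
  injector:
    enabled: false           # no Vault/Bao agent injector
```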

Keystone Operator

File: deploy/flux-system/releases/keystone-operator.yaml

| Property | Value |
|---|---|
| Target namespace | openstack |
| Chart | keystone-operator |
| Version constraint | >=0.1.0 <1.0.0 |
| Source | c5c3-charts HelmRepository (shared OCI registry) |
| Dependencies | cert-manager, mariadb-operator, memcached-operator, external-secrets |

The Keystone Operator manages OpenStack Keystone identity service instances. It depends on four upstream operators: cert-manager for TLS, mariadb-operator for database provisioning, memcached-operator for caching, and external-secrets for secret management.

Helm values:

| Key | Value | Purpose |
|---|---|---|
| replicas | 2 | Run 2 controller replicas for HA |
| leaderElection.enabled | true | Enable leader election for HA |
| image.tag | latest | Use latest image until a versioned release publishes a semver tag |

HelmRelease–HelmRepository Cross-Reference

Each HelmRelease sourceRef.name must match a HelmRepository metadata.name in sources/. This table shows the mapping:

| HelmRelease | sourceRef.name | HelmRepository file |
|---|---|---|
| cert-manager | cert-manager | sources/cert-manager.yaml |
| prometheus-operator-crds | prometheus-community | sources/prometheus-community.yaml |
| mariadb-operator-crds | mariadb-operator | sources/mariadb-operator.yaml |
| mariadb-operator | mariadb-operator | sources/mariadb-operator.yaml |
| external-secrets | external-secrets | sources/external-secrets.yaml |
| memcached-operator | c5c3-charts | sources/c5c3-charts.yaml |
| openbao | openbao | sources/openbao.yaml |
| keystone-operator | c5c3-charts | sources/c5c3-charts.yaml |

The kind-only chaos-mesh HelmRelease ships in the opt-in overlay at deploy/kind/chaos-mesh/release.yaml, with its own local source.yaml. It is intentionally absent from this always-on table because production overlays do not install it.

Infrastructure Custom Resources

Infrastructure CRs are instance-level resources managed by the operators installed via HelmReleases above. They are separated into their own kustomization (infrastructure/kustomization.yaml) because they depend on CRDs that are only available after the corresponding operator HelmReleases install their Helm charts.

Self-Signed ClusterIssuer

File: deploy/flux-system/infrastructure/cluster-issuer.yaml

| Property | Value |
|---|---|
| API version | cert-manager.io/v1 |
| Kind | ClusterIssuer |
| Name | selfsigned-cluster-issuer |
| Scope | Cluster-scoped (no namespace) |

The self-signed ClusterIssuer provides a default certificate issuer for development environments. It requires cert-manager CRDs (cert-manager.io/v1) which are installed by the cert-manager HelmRelease.
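A self-signed ClusterIssuer is the smallest cert-manager issuer configuration; assembled from the properties above, it likely looks like:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer   # cluster-scoped, so no namespace
spec:
  selfSigned: {}                    # cert-manager's self-signed issuer type
```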

MariaDB Galera Cluster

File: deploy/flux-system/infrastructure/mariadb.yaml

| Property | Value |
|---|---|
| API version | k8s.mariadb.com/v1alpha1 |
| Kind | MariaDB |
| Name | openstack-db |
| Namespace | openstack |
| Replicas | 3 |
| Galera | Enabled (spec.galera.enabled: true) |
| MaxScale | Enabled, 2 replicas (spec.maxScale.enabled: true, spec.maxScale.replicas: 2) |
| Storage | 100Gi, storage class ceph-rbd |

The MariaDB CR provisions a 3-node Galera cluster with synchronous replication managed by the mariadb-operator. MaxScale is enabled with 2 replicas to provide intelligent query routing and read/write splitting across the Galera nodes.

The root password is sourced from a Kubernetes Secret (mariadb-root-password, key password) — secret provisioning is handled by the External Secrets Operator integration.

Services:

| Service | Type | Purpose |
|---|---|---|
| Primary | ClusterIP | Read-write endpoint for application connections |
| Secondary | ClusterIP | Read-only endpoint for read replicas |

Monitoring: Prometheus metrics are enabled (spec.metrics.enabled: true).
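Pulling the documented properties together, the MariaDB CR is roughly this shape (a sketch; storage field names follow the k8s.mariadb.com/v1alpha1 conventions and are assumptions, not the verbatim file):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: openstack-db
  namespace: openstack
spec:
  replicas: 3
  rootPasswordSecretKeyRef:
    name: mariadb-root-password   # provisioned by the ESO integration
    key: password
  galera:
    enabled: true                 # synchronous multi-primary replication
  maxScale:
    enabled: true
    replicas: 2                   # query routing / read-write splitting
  storage:
    size: 100Gi
    storageClassName: ceph-rbd
  metrics:
    enabled: true
```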

Memcached Cluster

File: deploy/flux-system/infrastructure/memcached.yaml

| Property | Value |
|---|---|
| API version | memcached.c5c3.io/v1beta1 |
| Kind | Memcached |
| Name | openstack-memcached |
| Namespace | openstack |
| Replicas | 3 |
| Image | memcached:1.6 |

The Memcached CR provisions a 3-replica Memcached cluster for OpenStack session and token caching. The memcached-operator manages pod lifecycle and provides stable DNS-based service discovery for operator consumers.

API group: The API group is memcached.c5c3.io, matching the CRD definition shipped by the memcached-operator Helm chart.
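From the properties above, the Memcached CR is roughly (a sketch; the spec field names are assumptions for this internal CRD):

```yaml
apiVersion: memcached.c5c3.io/v1beta1
kind: Memcached
metadata:
  name: openstack-memcached
  namespace: openstack
spec:
  replicas: 3          # 3-replica cluster for session/token caching
  image: memcached:1.6
```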

Kustomization

Deployment is split into two kustomize overlays to separate base resources from CRD-dependent infrastructure resources:

Base Kustomization

File: deploy/flux-system/kustomization.yaml

The base kustomization uses apiVersion: kustomize.config.k8s.io/v1beta1 and includes namespaces, the FluxInstance CR, HelmRepository sources, and HelmRelease operators. These resources do not depend on any custom CRDs.

Resource count: 16 files producing 22 Kubernetes resources.

| Category | Count | Resources |
|---|---|---|
| Namespace | 7 | cert-manager, mariadb-system, external-secrets, monitoring-system, memcached-system, openstack, openbao-system |
| FluxInstance | 1 | flux (drives the flux-operator) |
| HelmRepository | 6 | cert-manager, mariadb-operator, external-secrets, openbao, c5c3-charts, prometheus-community |
| HelmRelease | 8 | cert-manager, prometheus-operator-crds, mariadb-operator-crds, mariadb-operator, external-secrets, memcached-operator, openbao, keystone-operator |
| Total | 22 | |

The chaos-mesh HelmRepository, HelmRelease, and Namespace ship in the kind-only opt-in overlay at deploy/kind/chaos-mesh/ and are not counted here.
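The 16 files enumerated in the directory layout give the base kustomization roughly this shape (a sketch reconstructed from the layout, not the verbatim file):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespaces.yaml            # first entry: all seven Namespace resources
  - fluxinstance.yaml
  - sources/cert-manager.yaml
  - sources/mariadb-operator.yaml
  - sources/external-secrets.yaml
  - sources/openbao.yaml
  - sources/c5c3-charts.yaml
  - sources/prometheus-community.yaml
  - releases/cert-manager.yaml
  - releases/prometheus-operator-crds.yaml
  - releases/mariadb-operator-crds.yaml
  - releases/mariadb-operator.yaml
  - releases/external-secrets.yaml
  - releases/memcached-operator.yaml
  - releases/openbao.yaml
  - releases/keystone-operator.yaml
```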

Infrastructure Kustomization

File: deploy/flux-system/infrastructure/kustomization.yaml

The infrastructure kustomization includes CRD-dependent resources that require their operator CRDs to be installed first. This kustomization must be applied after the base kustomization and after operators have finished installing their CRDs.

Resource count: 3 files producing 3 Kubernetes resources.

| Category | Count | Resources |
|---|---|---|
| ClusterIssuer | 1 | selfsigned-cluster-issuer (requires cert-manager CRDs) |
| MariaDB | 1 | openstack-db (requires mariadb-operator CRDs) |
| Memcached | 1 | openstack-memcached (requires memcached-operator CRDs) |
| Total | 3 | |

Deployment

Step 1: Apply base resources

bash
kubectl apply -k deploy/flux-system/

This applies 22 resources: 7 namespaces, 1 FluxInstance, 6 HelmRepository sources, and 8 HelmRelease operators. FluxCD resolves the dependency graph between HelmReleases and installs operators in the correct order. Wait for all operators to finish installing before proceeding to step 2.

Step 2: Apply infrastructure resources

bash
kubectl apply -k deploy/flux-system/infrastructure/

This applies 3 CRD-dependent resources: the ClusterIssuer, MariaDB cluster, and Memcached cluster. These resources require CRDs that are installed by the operator HelmReleases in step 1. If CRDs are not yet available, the apply will fail — wait for the operators to finish installing and retry.

Expected transient failure: The MariaDB cluster references a rootPasswordSecretKeyRef Secret (mariadb-root-password) that is provisioned by the External Secrets Operator integration. Until that Secret exists, the mariadb-operator will enter a failed reconciliation loop with Secret "mariadb-root-password" not found errors. This is expected and resolves automatically once OpenBao bootstrap is applied.

Validate manifests locally

bash
kustomize build deploy/flux-system/
kustomize build deploy/flux-system/infrastructure/

These commands render the manifest output without applying it. Use them to verify YAML syntax and resource inclusion before deployment.

Prerequisites

  • A Kubernetes cluster with FluxCD installed (source-controller and helm-controller)
  • kubectl configured with cluster access
  • For local validation only: kustomize CLI

Extensibility

The manifest structure is designed for straightforward extension. Adding a new operator (e.g., OpenBao) requires four steps:

  1. Add a source file in sources/ (e.g., sources/openbao.yaml) — or reuse an existing HelmRepository if the chart is in a shared registry
  2. Add a release file in releases/ (e.g., releases/openbao.yaml) with the HelmRelease CR, dependsOn for cert-manager, and the standard install/upgrade settings
  3. Add both paths to the resources list in kustomization.yaml
  4. Add the operator namespace to namespaces.yaml (e.g., openbao-system) so the namespace exists before kubectl apply -k creates the namespaced HelmRelease CR

Infrastructure instance CRs (e.g., a new database or cache cluster) follow the same pattern: add a file in infrastructure/ and list it in infrastructure/kustomization.yaml.

Design Decisions

Two-phase kustomization

Resources are split into a base kustomization (namespaces, sources, releases) and an infrastructure kustomization (CRD-dependent resources). This separation ensures that kubectl apply -k does not attempt to create CRD-dependent resources before the corresponding CRDs exist. The base kustomization can be applied independently, and the infrastructure kustomization is applied after operators have installed their CRDs.

In FluxCD-managed clusters, this pattern maps to two FluxCD Kustomization CRs where the infrastructure Kustomization depends on the base Kustomization (using spec.dependsOn), eliminating noisy first-apply failures.
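The FluxCD-managed variant of this split can be sketched as two Kustomization CRs (names, paths, and the GitRepository reference here are hypothetical; the point is the spec.dependsOn edge):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: base                     # hypothetical name
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy/flux-system
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system            # hypothetical source
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infrastructure           # hypothetical name
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy/flux-system/infrastructure
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    - name: base                 # held until base (and its CRDs) reconciles
```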

Explicit namespace resources

All target namespaces are defined as explicit Namespace resources in namespaces.yaml. While HelmReleases set install.createNamespace: true for FluxCD's helm-controller, the explicit namespace resources ensure namespaces exist before kubectl apply -k attempts to create namespaced resources (HelmRelease CRs specify a target namespace in their metadata).

Namespace auto-creation

All HelmReleases set install.createNamespace: true as a safety net for FluxCD deployments. This is complementary to the explicit Namespace resources — the explicit resources handle the kubectl apply -k path, while createNamespace handles edge cases in FluxCD reconciliation.

No secret configuration

The manifests intentionally contain no password, credential, or secret configuration. Secret management is handled by the External Secrets Operator integration, which provisions secrets from an external vault into the cluster.

Memcached Operator source

The Memcached Operator chart is sourced from the shared c5c3-charts OCI registry rather than a dedicated HelmRepository. This follows the project convention of publishing internally-built charts to oci://ghcr.io/c5c3/charts (see architecture/docs/09-implementation/07-ci-cd-and-packaging.md).

Kind Overlay Demo Addons

The kind overlay (deploy/kind/base/kustomization.yaml) layers a small set of kind-only demo manifests on top of the production base. These files live under deploy/kind/base/ and are not referenced from deploy/flux-system/kustomization.yaml, so they never reach production clusters. The section below catalogues these addons; earlier kind-only manifests (Headlamp, OpenBao UI patch) are documented in the Quick Start. Chaos Mesh ships as a separate opt-in kind overlay at deploy/kind/chaos-mesh/ — applied only when WITH_CHAOS_MESH=true is set on make deploy-infra; see Chaos Mesh (kind-only opt-in) below.

Flux Web UI ResourceSet

File: deploy/kind/base/flux-web.yaml

A single ResourceSet CR drives the flux-operator's bundled Flux Web UI as a demo surface for the kind Quick Start (Step 4a). The ResourceSet renders two sibling resources — an OCIRepository pointing at the official flux-operator Helm chart and a HelmRelease that installs that chart with only the Web UI sub-chart enabled.

| Property | Value |
|---|---|
| API version | fluxcd.controlplane.io/v1 |
| Kind | ResourceSet |
| Name | flux-web |
| Namespace | flux-system |
| Chart URL | oci://ghcr.io/controlplaneio-fluxcd/charts/flux-operator |
| Version pin (input) | 0.47.x — SemVer range locked to the minor track of FLUX_OPERATOR_VERSION in hack/deploy-infra.sh |

Helm values on the nested HelmRelease:

| Key | Value | Purpose |
|---|---|---|
| web.serverOnly | true | Render only the Web UI Deployment + Service; skip the operator Deployment, CRDs, and RBAC that the original install.yaml bootstrap already owns |
| installCRDs | false | The flux-operator CRDs (FluxInstance, ResourceSet, ResourceSetInputProvider, …) are already installed by the out-of-band install.yaml apply in hack/deploy-infra.sh — re-applying them here would fight the bootstrap on every reconcile |
| fullnameOverride | flux-web | Give the Web UI Deployment / Service / ServiceAccount a distinct identity so they do not collide with the operator's own flux-operator-* workload names |

Version tracking. The spec.inputs[0].version SemVer range is updated automatically by a Renovate customManager entry in renovate.json that targets deploy/kind/base/flux-web.yaml and pulls release metadata from controlplaneio-fluxcd/flux-operator GitHub releases. The customManager shares the same packageRules as hack/deploy-infra.sh — major upgrades are disabled, minor/patch upgrades auto-merge after a three-day minimumReleaseAge cooldown.

Production opt-out. deploy/flux-system/kustomization.yaml deliberately does not list deploy/kind/base/flux-web.yaml. The flux-operator Web UI ships without token authentication, without TLS termination, and without an Ingress story — it is safe as a localhost port-forward demo on a single-node kind cluster, not as a shared-cluster surface. Production overlays can opt back in explicitly once upstream adds those prerequisites.

Access (kind Quick Start, Step 4a):

bash
kubectl port-forward svc/flux-web -n flux-system 9080:9080

Browse http://localhost:9080 — no login required. The Web UI complements Headlamp by rendering the three flux-operator-specific CRDs (ResourceSet, ResourceSetInputProvider, FluxReport) that the generic Headlamp Flux plugin does not know about.

Chaos Mesh (kind-only opt-in)

File: deploy/kind/chaos-mesh/kustomization.yaml

Chaos Mesh ships as a separate opt-in kind overlay. The default make deploy-infra flow does not install it — first-run deployments skip the privileged chaos-daemon DaemonSet, the chaos-mesh namespace, and the upstream HelmRepository / HelmRelease pair so that developers who never run chaos E2E suites pay zero install cost. The production deploy/flux-system/ overlay also does not install Chaos Mesh.

The overlay is self-contained: the HelmRepository lives in deploy/kind/chaos-mesh/source.yaml and the HelmRelease in deploy/kind/chaos-mesh/release.yaml (both relocated from the former deploy/flux-system/{sources,releases}/chaos-mesh.yaml locations). The overlay bundles them with:

| Property | Value |
|---|---|
| Target namespace | chaos-mesh (created inline with the privileged PodSecurity label required by chaos-daemon's host PID/network access) |
| Chart | chaos-mesh |
| Version constraint | >=2.6.0 <3.0.0 |
| Source | chaos-mesh HelmRepository (deploy/kind/chaos-mesh/source.yaml) |
| Dependencies | cert-manager in cert-manager namespace |

Kind-tuning patch (relocated here from deploy/kind/base/kustomization.yaml because kustomize requires the patch target to live in the same overlay):

| Helm value | Override | Purpose |
|---|---|---|
| chaosDaemon.runtime | containerd | Match the kind node's container runtime |
| chaosDaemon.socketPath | /run/containerd/containerd.sock | Mount the kind containerd socket so chaos-daemon can attack pods |
| chaosDaemon.resources | 25m / 64Mi requests | Reduce footprint on single-node kind |
| dashboard.create | false | Dashboard is unnecessary in CI |
| controllerManager.resources | 25m / 64Mi requests | Reduce footprint on single-node kind |

These overrides diverge intentionally from the upstream chart defaults (dashboard enabled, larger resource requests, auto-detected runtime), which target multi-node production clusters. Because the patch and the HelmRelease both live in the kind-only overlay, production environments that opt into Chaos Mesh start from the upstream defaults instead of inheriting the kind-tuning values.

No load-restrictor flag required. The overlay has no parent-directory ../../ references — every resource (namespace.yaml, source.yaml, release.yaml) lives under deploy/kind/chaos-mesh/. Kustomize's default LoadRestrictionsRootOnly security check is therefore satisfied without --load-restrictor=LoadRestrictionsNone, which matters because kubectl's embedded kustomize does not expose that flag (kubernetes/kubectl#948) and hack/deploy-infra.sh invokes the apply via kubectl apply -k.

Opt-in usage:

bash
WITH_CHAOS_MESH=true make deploy-infra

This is the prerequisite for make e2e-chaos. See Chaos E2E Tests for the full workflow.

kube-prometheus-stack (kind-only opt-in, CC-0100)

File: deploy/kind/prometheus/kustomization.yaml

kube-prometheus-stack ships as a separate opt-in kind overlay (CC-0100). The default make deploy-infra flow does not install it — the monitoring namespace stays absent, and Prometheus / Grafana / the prometheus-operator pods do not consume any of the kind node's CPU or memory budget unless a contributor explicitly opts in. The production deploy/flux-system/ overlay also does not install the stack: production clusters are expected to run their own Prometheus and widen its serviceMonitorSelector to pick up the keystone-operator chart's ServiceMonitor (see Enable Keystone Operator Metrics for that wiring path).

The overlay is self-contained: the Namespace and HelmRelease live in deploy/kind/prometheus/namespace.yaml and deploy/kind/prometheus/release.yaml, and the upstream prometheus-community HelmRepository in deploy/flux-system/sources/prometheus-community.yaml is reused (it is already present for the prometheus-operator-crds HelmRelease in the production base, so no new source manifest is added to the production tree). The overlay bundles the resources with:

| Property | Value |
|---|---|
| Target namespace | monitoring (created inline; no PodSecurity label override required) |
| Chart | kube-prometheus-stack |
| Version constraint | >=65.0.0 <70.0.0 |
| Source | prometheus-community HelmRepository (reused from deploy/flux-system/sources/) |
| Dependencies | cert-manager in cert-manager namespace |

Kind-tuned values (deliberately too lean for a real workload — they exist so the stack fits in a single-node kind cluster alongside Flux, the operators, and the OpenStack control plane — CC-0100, REQ-002, REQ-003):

| Helm value | Override | Purpose |
|---|---|---|
| crds.enabled | false | The monitoring.coreos.com CRDs are already installed by the production-base prometheus-operator-crds HelmRelease — re-installing them from the chart would fight that release on every reconcile |
| alertmanager.enabled | false | No alert routing in a developer cluster |
| nodeExporter.enabled | false | Single-node kind has no meaningful node-level metrics worth scraping |
| kubeStateMetrics.enabled | false | Kube-state-metrics adds noise the kind dashboards do not consume |
| prometheus.prometheusSpec.retention | 6h | Short retention keeps the Prometheus PVC tiny on kind |
| prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues | false | Allow the operator chart's ServiceMonitor to be scraped without forcing a release: kube-prometheus-stack label on it |
| prometheus.prometheusSpec.serviceMonitorSelector | {} | Match every ServiceMonitor in the cluster (kind only — production overlays should use a tighter selector) |
| prometheus.prometheusSpec.serviceMonitorNamespaceSelector | {} | Match every namespace (kind only — see above) |
| prometheus.prometheusSpec.resources / grafana.resources | 100m CPU / 256Mi mem caps | Hard cap on kind resource use |

Dashboard provisioning (CC-0100, REQ-004). The overlay also adds a configMapGenerator that bundles the keystone-operator dashboard JSON (operators/keystone/dashboards/keystone-operator.json — the single source of truth, never forked into the overlay) with the grafana_dashboard: "1" and app.kubernetes.io/part-of: kube-prometheus-stack labels. Grafana's sidecar discovers the labelled ConfigMap on startup and imports it into the Dashboards → Keystone Operator entry without any manual API call. Because the dashboard JSON lives outside the overlay directory, hack/deploy-infra.sh performs an idempotent copy into deploy/kind/prometheus/keystone-operator.json immediately before kubectl apply -k runs — this satisfies kustomize's default LoadRestrictionsRootOnly constraint (the overlay has no ../ references) without requiring --load-restrictor=LoadRestrictionsNone.
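The described generator can be sketched as follows (generator name and the disableNameSuffixHash choice are assumptions; the labels and the staged filename come from the text above):

```yaml
# Fragment of deploy/kind/prometheus/kustomization.yaml (sketch)
configMapGenerator:
  - name: keystone-operator-dashboard   # hypothetical name
    files:
      - keystone-operator.json          # staged copy; canonical file lives in operators/keystone/dashboards/
    options:
      disableNameSuffixHash: true       # assumption: stable name for the sidecar to discover
      labels:
        grafana_dashboard: "1"                           # picked up by Grafana's sidecar
        app.kubernetes.io/part-of: kube-prometheus-stack
```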

Local validation (make stage-prometheus-dashboard). The staged deploy/kind/prometheus/keystone-operator.json is git-ignored — the canonical file lives only at operators/keystone/dashboards/keystone-operator.json. Developers who want to run kustomize build deploy/kind/prometheus/, kubectl apply -k deploy/kind/prometheus/, or chainsaw lint against the overlay without running WITH_PROMETHEUS=true make deploy-infra first must stage the dashboard manually:

bash
make stage-prometheus-dashboard

The target performs the same cp -f that hack/deploy-infra.sh runs at deploy time, so local renders match CI exactly. make deploy-infra re-runs the copy on every invocation, so explicit staging is not needed when going through the full deploy path.

ServiceMonitor enablement (CC-0100, REQ-005). The keystone-operator chart defaults to monitoring.serviceMonitor.enabled=false so production overlays inherit the safe default. When WITH_PROMETHEUS=true, hack/deploy-infra.sh waits for the kube-prometheus-stack HelmRelease to become Ready, then runs:

bash
kubectl patch helmrelease keystone-operator -n openstack --type=merge \
  -p '{"spec":{"values":{"monitoring":{"serviceMonitor":{"enabled":true}}}}}'

…and waits for the keystone-operator HelmRelease to reconcile back to Ready=True on the new values. The patch is only applied when WITH_PROMETHEUS=true — the chart values themselves are never modified, which keeps the production posture unchanged.

Opt-in usage:

bash
WITH_PROMETHEUS=true make deploy-infra

This is the prerequisite for make e2e-prometheus (see CI / e2e-prometheus job for the workflow). For the kind UI walkthrough — port-forward, default Grafana credentials, the bundled Keystone Operator dashboard, and a Prometheus targets sanity-check — see Extended Quick Start — Step 4c.

Posture summary. Reviewers checking new kind-only opt-ins should treat this entry as a parallel of the Chaos Mesh (kind-only opt-in) example above: the production omission is explicit, the opt-in flag has a single documented name (WITH_PROMETHEUS), and the kind overlay is self-contained under deploy/kind/prometheus/ so the production kustomization root is untouched. The document-intentional-environment-divergence-in-overlays review pattern's CC-0100 follow-up section catalogues the full surface area.