
# Observability & Diagnostics

How to read what the Keystone operator is doing — without tailing controller logs.

Prerequisites: A running Keystone CR from the Quick Start (Extended) (Steps 1–9).

The operator surfaces its state through three complementary channels:

| Channel | Purpose | Primary audience |
|---|---|---|
| Print columns | One-line health summary for every Keystone CR | Humans, `kubectl get` |
| Status conditions | Structured, programmatic state | Automation, CI, alerts |
| Events | Timestamped audit trail of transitions | Humans investigating incidents |

## Print columns

`kubectl get keystones` exposes a compact summary via four printer columns:

```bash
kubectl get keystones -A
NAMESPACE   NAME       READY   ENDPOINT                                           RELEASE   AGE
openstack   keystone   True    http://keystone.openstack.svc.cluster.local:5000   2025.2    12m
```
| Column | Source | Meaning |
|---|---|---|
| READY | `.status.conditions[?(@.type=='Ready')].status` | Aggregate health |
| ENDPOINT | `.status.endpoint` | In-cluster Keystone API URL |
| RELEASE | `.status.installedRelease` | OpenStack release currently deployed |
| AGE | `.metadata.creationTimestamp` | CR age |

## Status conditions

`.status.conditions[]` follows the standard Kubernetes pattern (`type`, `status`, `reason`, `message`, `lastTransitionTime`, `observedGeneration`). Eleven sub-conditions feed into the aggregate `Ready`; some are reported only when the matching optional `spec` field is set:

| Condition | Means |
|---|---|
| `SecretsReady` | Referenced database and admin Secrets are available |
| `FernetKeysReady` | Fernet key Secret and rotation CronJob exist |
| `CredentialKeysReady` | Credential key Secret and rotation CronJob exist (if `spec.credentialKeys` is set) |
| `DatabaseReady` | `db_sync` Job completed successfully (and schema check passed) |
| `PolicyValidReady` | `spec.policyOverrides` validated against oslo.policy |
| `DeploymentReady` | API Deployment has available replicas |
| `KeystoneAPIReady` | Keystone API is responding to `/v3` health probes |
| `HPAReady` | HorizontalPodAutoscaler created (if `spec.autoscaling` is set) |
| `NetworkPolicyReady` | NetworkPolicy created (if `spec.networkPolicy` is set) |
| `BootstrapReady` | Bootstrap Job completed (admin user, region, endpoints) |
| `TrustFlushReady` | Trust-flush CronJob created (defaults to hourly) |
| `Ready` | All of the above are `True` |

Read them as a tree:

```bash
kubectl get keystone keystone -n openstack \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\t"}{.message}{"\n"}{end}' \
  | column -t -s $'\t'
```

Or wait for a specific one:

```bash
kubectl wait keystone/keystone -n openstack \
  --for=condition=DatabaseReady --timeout=5m
```

## Diagnosing a stuck CR

The first `status=False` condition from the top is usually the bottleneck:

- `SecretsReady=False` → check that the `keystone-db` and `keystone-admin` Secrets exist in the same namespace
- `DatabaseReady=False` → look at Events for `DBSyncFailed` or `SchemaDriftDetected`
- `DeploymentReady=False` → `kubectl describe deploy keystone`; usually image pull or probe failures
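Finding that bottleneck can be scripted: select the first `status=False` condition with `jq`. The status payload below is a hypothetical sample for illustration; against a live cluster, replace the `echo` with `kubectl get keystone keystone -n openstack -o json` and keep the same `jq` program:

```bash
# Hypothetical .status payload; on a live cluster, pipe
# `kubectl get keystone keystone -n openstack -o json` instead.
status='{"status":{"conditions":[
  {"type":"SecretsReady","status":"True","reason":"SecretsFound","message":"all Secrets present"},
  {"type":"DatabaseReady","status":"False","reason":"DBSyncFailed","message":"Job keystone-db-sync failed"},
  {"type":"DeploymentReady","status":"False","reason":"MinimumReplicasUnavailable","message":"0/2 replicas available"}
]}}'

# Print the first condition whose status is False -- usually the bottleneck.
echo "$status" | jq -r '[.status.conditions[] | select(.status == "False")][0]
  | "\(.type): \(.reason) (\(.message))"'
# -> DatabaseReady: DBSyncFailed (Job keystone-db-sync failed)
```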

## Upgrade status fields

During an image-tag change, three additional status fields track progress (see Day 2 Operations — Image upgrade):

| Field | Outside upgrade | During upgrade |
|---|---|---|
| `.status.installedRelease` | Currently deployed release | Previous release (not yet changed) |
| `.status.targetRelease` | `""` | Target release |
| `.status.upgradePhase` | `""` | `Expanding` → `Migrating` → `RollingUpdate` → `Contracting` |

Watch the upgrade live:

```bash
kubectl get keystone keystone -n openstack -w \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.upgradePhase,FROM:.status.installedRelease,TO:.status.targetRelease
```

## Events

Every lifecycle transition emits a Kubernetes Event with a stable, PascalCase reason. Events are deduplicated by `(object, reason, message)`; repeated reconciles do not spam the event stream.

### Show everything for a CR

```bash
kubectl describe keystone keystone -n openstack
```

The bottom of the output lists the Events in reverse-chronological order. Alternatively, a timeline view:

```bash
kubectl get events -n openstack \
  --field-selector involvedObject.kind=Keystone \
  --sort-by='.lastTimestamp'
```

### Common reasons

| Reason | Type | When you see it |
|---|---|---|
| `BootstrapComplete` | Normal | First reconciliation finished |
| `DatabaseSynced` | Normal | `db_sync` finished, schema matches Alembic head |
| `FernetKeysGenerated` | Normal | Fernet Secret was created or rotated |
| `UpgradeInitiated` | Normal | `spec.image.tag` change triggered an upgrade |
| `ExpandComplete` / `MigrateComplete` | Normal | Upgrade phase boundary reached |
| `UpgradeComplete` | Normal | Full expand-migrate-contract pipeline finished; `installedRelease` advanced |
| `ContractFailed` | Warning | Contract-phase `db_sync --contract` returned non-zero |
| `DBSyncFailed` | Warning | `db_sync` Job returned non-zero |
| `SchemaDriftDetected` | Warning | Schema check found unexpected drift |
| `DowngradeNotSupported` | Warning | Target tag is older than `installedRelease` |
| `UpgradePathInvalid` | Warning | Target tag skips a release (non-sequential) |

The full catalogue is in Keystone Controller Events.
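For alerting, the `Warning` reasons are the ones worth tallying. A sketch using `jq` over a hypothetical events payload; on a live cluster you would feed it `kubectl get events -n openstack --field-selector involvedObject.kind=Keystone -o json` instead of the `echo`:

```bash
# Hypothetical events list for illustration; the jq program is what matters.
events='{"items":[
  {"reason":"DatabaseSynced","type":"Normal"},
  {"reason":"DBSyncFailed","type":"Warning"},
  {"reason":"DBSyncFailed","type":"Warning"},
  {"reason":"SchemaDriftDetected","type":"Warning"}
]}'

# Tally Warning events by reason -- any non-empty output is worth a look.
echo "$events" | jq -r '[.items[] | select(.type == "Warning")]
  | group_by(.reason) | map("\(.[0].reason)\t\(length)") | .[]'
```

`group_by` sorts by reason, so the output is stable across runs: here, `DBSyncFailed` with count 2, then `SchemaDriftDetected` with count 1.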


## Keystone application logs

For the Keystone API itself (as opposed to the operator), tail the workload pods directly:

```bash
kubectl logs -n openstack -l app.kubernetes.io/name=keystone --tail=200 -f
```

Two distinct streams are interleaved on the same stdout/stderr:

- uWSGI access lines, emitted per HTTP request via `--log-master --log-format`, e.g. `GET /v3/auth/tokens => generated 1234 bytes in 12 msecs (HTTP/1.1 201)`. Useful for traffic-shape questions (latency, status-code distribution).
- oslo.log application records, emitted by Keystone code paths (auth, federation, middleware) and formatted by oslo.log per `spec.logging`. Useful for "why did this request fail" questions.
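As a sketch of the traffic-shape use, the uWSGI access lines reduce to a status-code histogram with `awk`. The sample lines below are hypothetical; on a live cluster, replace the `echo` with the `kubectl logs` command above, filtered to lines matching `generated.*bytes in`:

```bash
# Hypothetical uWSGI access lines in the format shown above.
logs='GET /v3/auth/tokens => generated 1234 bytes in 12 msecs (HTTP/1.1 201)
GET /v3 => generated 210 bytes in 2 msecs (HTTP/1.1 200)
POST /v3/auth/tokens => generated 114 bytes in 9 msecs (HTTP/1.1 401)
GET /v3 => generated 210 bytes in 3 msecs (HTTP/1.1 200)'

# The status code is the last field with a trailing ')'; strip it and tally.
echo "$logs" | awk '{ code = $NF; gsub(/\)/, "", code); count[code]++ }
  END { for (c in count) print c, count[c] }' | sort
# -> 200 2
#    201 1
#    401 1
```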

Both streams honour `spec.logging.format`. The default is `text` (oslo.log line format, human-readable). Switch to `json` for direct ingest by Loki/OpenSearch:

```bash
kubectl patch keystone -n openstack keystone --type=merge \
  -p '{"spec":{"logging":{"format":"json"}}}'

# Wait for the rollout, then verify each oslo.log record is jq-parseable:
kubectl logs -n openstack -l app.kubernetes.io/name=keystone --tail=20 \
  | grep -v 'generated.*bytes in' \
  | jq -e .
```

`jq -e .` exits non-zero if any input line is not valid JSON, giving a binary pass/fail signal for the format toggle. See `spec.logging` in the CRD reference for the full field semantics, including `level`, `debug`, and `perLoggerLevels`.


## Controller logs (last resort)

If status and events don't explain a failure, read the operator logs directly:

```bash
kubectl logs -n openstack -l app.kubernetes.io/name=keystone-operator \
  --tail=200 -f
```

The operator uses structured logr output: every line includes the reconciled object's namespace/name and the sub-reconciler that produced the log. Filter a specific CR:

```bash
kubectl logs -n openstack -l app.kubernetes.io/name=keystone-operator --tail=500 \
  | grep '"Keystone":"openstack/keystone"'
```

## Further reading