Building Kubernetes Manifests: a Practical Workflow for Production Teams
We standardize on Helm charts with organizational policies, enforce compliance via Kyverno, and use ArgoCD for GitOps-driven deployments.
Workflow: From Template to Deployment
1. Start with a Helm chart
- Use an internal, versioned Helm chart as the base for all services.
- Chart includes defaults for:
- Resource limits/requests
- Liveness/readiness probes
- Ingress templates (with TLS enforcement)
- Common labels (team, env, app)
- Example:
helm create my-service --starter internal-base-chart
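The base chart's defaults might look like the following values.yaml sketch. All names, paths, and thresholds here are illustrative, not the actual internal chart:

```yaml
# values.yaml -- illustrative defaults for an internal base chart
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi

livenessProbe:
  httpGet:
    path: /healthz
    port: http
  initialDelaySeconds: 10
  periodSeconds: 15

readinessProbe:
  httpGet:
    path: /readyz
    port: http
  periodSeconds: 5

ingress:
  enabled: true
  tls: true            # TLS enforced by default

commonLabels:
  team: platform       # overridden per service
  env: dev
  app: my-service
```

Services override only what they must; everything else inherits from the chart.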
2. Customize with Kustomize
- For non-chart-supported cases, overlay with Kustomize patches.
- Example structure:
kustomize/
├── kustomization.yaml
└── patches/
    └── 001-add-cronjob.yaml
- Apply with:
kubectl apply -k ./kustomize
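A minimal overlay for that structure could look like the sketch below. Note that a new CronJob is an added resource rather than a patch to an existing object, so despite living under patches/ it is listed under resources; the image, schedule, and names are illustrative:

```yaml
# kustomize/kustomization.yaml -- minimal sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base                       # base manifests (e.g. rendered chart output)
  - patches/001-add-cronjob.yaml  # new CronJob the chart doesn't provide
```

```yaml
# kustomize/patches/001-add-cronjob.yaml -- illustrative
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-service-cleanup
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: registry.example.com/my-service:latest
              args: ["cleanup"]
```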
3. Enforce policies with Kyverno
- Automatically mutate or block non-compliant manifests.
- Example: Enforce rootless containers:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-rootless
spec:
  validationFailureAction: enforce
  rules:
    - name: check-rootless
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Containers must run as non-root"
        pattern:
          spec:
            containers:
              - securityContext:
                  runAsNonRoot: true
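Kyverno can also mutate rather than block. A companion rule might inject a default securityContext only when one is missing, using Kyverno's conditional `(name)` and add-if-absent `+()` anchors; the policy name and default are illustrative, not one of our shipped policies:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default-security-context
spec:
  rules:
    - name: add-runasnonroot
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              - (name): "*"              # anchor: match every container
                securityContext:
                  +(runAsNonRoot): true  # add only if not already set
```

Mutation keeps teams unblocked; validation still backstops anything the mutation can't fix.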
4. Deploy via ArgoCD
- Sync manifests from Git to cluster.
- Require PR approvals and automated checks before sync.
- Use argocd app sync to manually reconcile drift if needed.
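On the ArgoCD side, each service is typically a declarative Application pointing at its Git path. The repo URL, paths, and namespaces below are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/deployments.git
    targetRevision: main
    path: my-service/kustomize
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true
      selfHeal: true   # auto-reconcile drift back to Git state
```

With selfHeal enabled, manual syncs are rarely needed; without it, drift stays visible until someone runs argocd app sync.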
Tooling Stack
- Helm: Versioned, opinionated base charts.
- Kustomize: Lightweight overlays for edge cases.
- ArgoCD: GitOps-driven deployment visibility.
- Kyverno: Policy enforcement at scale.
- k6: Load-test services before manifests reach prod.
Tradeoff: Flexibility vs. Maintenance
Helm’s flexibility is a double-edged sword. Over-customization leads to snowflake charts that break during upgrades. We limit overrides to 5% of use cases, forcing teams to upstream common needs into the base chart.
Common Failures & Fixes
- Image pull errors:
  - Check imagePullPolicy in Helm values (default to IfNotPresent).
  - Verify the image exists in the private registry: skopeo inspect docker://registry.example.com/my-image:tag
- Missing labels:
  - ArgoCD sync fails? Check Kyverno validation results: kubectl get clusterpolicies -o wide
- Policy violations:
  - Debug with: kubectl describe pod <name> | grep -i securityContext
Prevention: Policy as Code
Embed compliance checks into CI:
# Lint manifests
kube-linter lint --config k8s-linter.yaml .
# Validate rendered manifests against policies
kyverno apply ./policies --resource ./manifests
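In CI, those checks can be wired into the pipeline; a GitHub Actions sketch is below (workflow, job, and path names are illustrative, and it assumes kube-linter and kyverno are available on the runner):

```yaml
# .github/workflows/manifest-checks.yaml -- illustrative
name: manifest-checks
on: [pull_request]
jobs:
  lint-and-validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint manifests
        run: kube-linter lint --config k8s-linter.yaml .
      - name: Validate policies against manifests
        run: kyverno apply ./policies --resource ./manifests
```

Running the same commands locally and in CI keeps feedback consistent between a dev's laptop and the PR gate.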
This catches 80% of issues before they hit the cluster.
In my experience, this workflow reduces deployment failures by ~60% while keeping teams autonomous. The key is balancing guardrails with flexibility—let devs move fast, but not break things.
Source thread: Writing K8s manifests for a new microservice — what’s your team’s actual process?
