Automating Namespace-per-Customer Provisioning with GitLab CI

JR

A practical guide to automating namespace-per-customer provisioning in Kubernetes using GitLab CI, including workflow steps, policy examples, and troubleshooting tips.

Workflow Overview

  1. Repository Structure:

    • Use a dedicated Git repo for customer onboarding manifests (e.g., customer-infra).
    • Include namespace.yaml, resourcequotas.yaml, and limitranges.yaml per customer.
  2. Customer Onboarding Trigger:

    • Integrate with your SaaS billing/authentication system (e.g., Stripe webhook or internal API).
    • Store customer metadata (e.g., tier, region) in a database or GitLab CI variables.
  3. GitLab CI Pipeline:

    • On merge to main, trigger the provisioning job:
      provision-customer:
        script:
          - kubectl apply -f manifests/$CUSTOMER_NAME/namespace.yaml
          - kubectl apply -f manifests/$CUSTOMER_NAME/resourcequotas.yaml
          - kubectl apply -f manifests/$CUSTOMER_NAME/limitranges.yaml
    • Use GitLab CI/CD variables for CUSTOMER_NAME, NAMESPACE, and CLUSTER_ENV.
  4. Policy Application:

    • Apply default quotas/limits and network policies per namespace.
    • Sync with external systems (e.g., monitoring, logging) via service accounts.
  5. Post-Provisioning:

    • Notify customer via email or in-app notification.
    • Update internal service discovery (e.g., Consul, Etcd).
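The pipeline step above can be fleshed out into a minimal .gitlab-ci.yml. This is a sketch: the runner image, the rules: clause, and the environment block are illustrative assumptions, not prescribed by the thread:

```yaml
# .gitlab-ci.yml — minimal sketch; image and rules are assumptions
stages:
  - provision

provision-customer:
  stage: provision
  image: bitnami/kubectl:latest        # any image with kubectl on PATH works
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'  # run only after merge to main
  script:
    - kubectl apply -f manifests/$CUSTOMER_NAME/namespace.yaml
    - kubectl apply -f manifests/$CUSTOMER_NAME/resourcequotas.yaml
    - kubectl apply -f manifests/$CUSTOMER_NAME/limitranges.yaml
  environment:
    name: $CLUSTER_ENV
```

CUSTOMER_NAME and CLUSTER_ENV are expected to arrive as GitLab CI/CD variables, per step 3 above.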

Policy Example

ResourceQuota for Freemium Tier:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: freemium-quota
spec:
  hard:
    cpu: "2"
    memory: "4Gi"
    pods: "5"

Apply this to each customer namespace using labels or a templating tool (e.g., Helm, Kustomize).
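Taking the Kustomize route as a concrete example, each customer directory can carry a kustomization.yaml that stamps the target namespace and shared labels onto every manifest. The customer name acme-corp and the label keys here are hypothetical:

```yaml
# manifests/acme-corp/kustomization.yaml — "acme-corp" is a placeholder customer
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: acme-corp
resources:
  - namespace.yaml
  - resourcequotas.yaml
  - limitranges.yaml
commonLabels:
  customer-tier: freemium
```

The pipeline (or an operator) can then apply the whole set with kubectl apply -k manifests/acme-corp.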

Tooling

  • GitLab CI: For pipeline orchestration.
  • kubectl: For validating and applying manifests.
  • ArgoCD/Flux: Optional for GitOps-driven sync (if using declarative repos).
  • Customer DB: Store metadata (e.g., PostgreSQL, Redis).

Tradeoffs

  • Namespace Sprawl: Frequent provisioning increases cluster management overhead. Mitigate with automated cleanup jobs for inactive customers.
  • CI Rate Limits: GitLab CI has pipeline minute limits; consider self-managed runners for scale.
  • Policy Drift: Manual overrides in namespaces can break automation. Use admission controllers (e.g., OPA Gatekeeper) to enforce policies.
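To make the anti-drift point concrete: with Gatekeeper, a constraint can require the labels the automation depends on, rejecting namespaces created or edited outside the pipeline. This sketch assumes the K8sRequiredLabels ConstraintTemplate from the Gatekeeper policy library is installed, and the customer-tier label key is a hypothetical example:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: customer-ns-must-carry-tier
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["customer-tier"]   # whatever label your automation sets
```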

Troubleshooting

  • Permission Issues:
    • Check GitLab runner service account RBAC permissions.
    • Example fix: grant the runner’s service account a Role and RoleBinding in <customer-ns> (Roles are namespaced, so each customer namespace needs its own).
  • Manifest Failures:
    • Run kubectl apply --dry-run=server -f <manifest> locally to catch validation errors before CI.
    • Use kubectl explain to debug schema errors.
  • Quota Exhaustion:
    • Monitor with kubectl describe resourcequota -n <namespace>.
    • Alert on requests.memory or limits.cpu nearing hard limits.
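For the permission issues above, a namespaced Role plus RoleBinding usually suffices for quota and limit objects; note that creating the Namespace itself is a cluster-scoped operation and needs a ClusterRole instead. The names gitlab-runner, gitlab, and acme-corp are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: customer-provisioner
  namespace: acme-corp              # one Role per customer namespace
rules:
  - apiGroups: [""]
    resources: ["resourcequotas", "limitranges"]
    verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: customer-provisioner
  namespace: acme-corp
subjects:
  - kind: ServiceAccount
    name: gitlab-runner             # the CI runner's service account
    namespace: gitlab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: customer-provisioner
```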

This approach balances automation with control, but requires ongoing monitoring to handle scale and policy compliance.

Source thread: Freemium SaaS on K8s: Automating namespace-per-customer provisioning with GitLab CI, who’s doing this?
