Secure Cluster Access via VPN and OIDC
Use a VPN with OIDC authentication for secure, role-based access to Kubernetes clusters without relying on jumphosts or local YAML files.
Workflow: Connect via VPN + OIDC
- Establish VPN connection
  - Use OpenVPN, WireGuard, or similar to tunnel traffic into the cluster’s VPC.
  - Validate connectivity: `ping <cluster-api-server-ip>` or `curl -v https://api-server`.
- Configure OIDC authentication
  - Deploy an OIDC provider (e.g., Keycloak, Auth0) integrated with your identity source (LDAP, SAML, etc.).
  - Configure the Kubernetes API server with OIDC parameters in `/etc/kubernetes/manifests/kube-apiserver.yaml`:

    ```yaml
    - --oidc-issuer-url=https://oidc-provider.example.com
    - --oidc-client-id=kubernetes
    - --oidc-groups-claim=groups
    ```
- Map groups to RBAC roles
  - Create ClusterRoles and ClusterRoleBindings that reference OIDC groups:

    ```yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: cluster-admins
    subjects:
      - kind: Group
        name: cluster-admins@example.com
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    ```
- Configure kubectl
  - Use `kubectl config set-credentials` with the OIDC token:

    ```shell
    kubectl config set-credentials user@example.com --token=$(oidc-cli token -c client_id -c client_secret)
    ```

  - Point kubeconfig to the cluster API server:

    ```shell
    kubectl config set-cluster production --server=https://api-server:6443
    ```
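Static tokens expire, so in practice the kubeconfig step is often wired up as an exec credential plugin that refreshes tokens on demand. A sketch, assuming the community kubelogin plugin (`kubectl oidc-login`) is installed and the issuer/client values match the API server flags configured earlier:

```shell
# Register an exec-based OIDC credential so kubectl fetches and refreshes
# tokens automatically via the kubelogin plugin (an assumption here, not
# part of the base workflow above).
kubectl config set-credentials user@example.com \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=https://oidc-provider.example.com \
  --exec-arg=--oidc-client-id=kubernetes

# Point a context at the cluster and switch to it.
kubectl config set-cluster production --server=https://api-server:6443
kubectl config set-context production --cluster=production --user=user@example.com
kubectl config use-context production
```

With this in place, `kubectl` opens a browser login on first use and caches the token until it expires.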
Policy Example: Restrict Access to OIDC Groups
Grant permissions only to users in the developers OIDC group. (Note that RBAC governs authorization, not authentication: any user with a valid OIDC token can authenticate, but only bound groups can do anything.)
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developers
subjects:
  - kind: Group
    name: developers@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```
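You can check a binding like this without performing a real OIDC login by impersonating the group. A quick verification sketch (user and group names are the example values from above):

```shell
# Impersonate a member of the developers group and probe permissions.
# The built-in "edit" role grants namespaced write access (e.g., deployments)
# but no access to cluster-scoped resources such as nodes.
kubectl auth can-i create deployments -n default \
  --as=user@example.com --as-group=developers@example.com

kubectl auth can-i list nodes \
  --as=user@example.com --as-group=developers@example.com
```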
Tooling
- VPN: OpenVPN (stable), WireGuard (lightweight), Tailscale (easy but proprietary).
- OIDC: Keycloak (self-hosted), Auth0 (managed), Dex (Kubernetes-native).
- CLI: `oidc-cli` for token acquisition, `kubectl` for cluster interaction.
Tradeoffs
- Pros: Eliminates jumphost management, centralizes access control, enforces MFA via OIDC.
- Cons: Adds latency if VPN gateway is geographically distant; requires maintaining OIDC provider and certificate rotation.
Troubleshooting
- Authentication failures:
  - Check token expiration: `oidc-cli token --verbose`.
  - Validate group claims: `kubectl config view --minify | grep -A 3 users`.
- Network issues:
  - Test API server reachability: `curl -v https://api-server/healthz`.
  - Check VPN logs for IP conflicts or routing misconfigurations.
- RBAC misconfigurations:
  - Verify group-to-role bindings: `kubectl get clusterrolebindings -o yaml`.
  - Use `kubectl auth can-i` to test permissions.
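When group claims look wrong, it helps to decode the ID token itself rather than trusting the kubeconfig view. A small sketch (`decode_jwt_payload` is a hypothetical helper, not part of oidc-cli; it only base64url-decodes the JWT's payload segment):

```shell
# Decode the payload (middle segment) of a JWT to inspect its claims,
# e.g. the "groups" claim the API server matches against RBAC bindings.
decode_jwt_payload() {
  payload=$(printf '%s' "$1" | cut -d '.' -f 2 | tr '_-' '/+')
  # base64url drops padding; restore it to a multiple of 4 characters
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
  printf '%s' "$payload" | base64 -d
}

# Usage (ID_TOKEN holds the raw JWT from your OIDC provider):
#   decode_jwt_payload "$ID_TOKEN"
```

If the decoded `groups` array doesn't contain the group named in your ClusterRoleBinding, the problem is on the OIDC provider side, not in RBAC.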
This approach balances security and usability while avoiding the fragility of jumphosts or local YAML files. Adjust based on your org’s identity provider maturity and network topology.
Source thread: How do you connect to your clusters?
