Migrating k3s from Baremetal to AWS EKS: A Pragmatic Approach
Migrating k3s from baremetal to AWS EKS requires careful planning, data transfer, and validation to ensure minimal downtime and data integrity.
Workflow: Baremetal to EKS Migration
1. Assess and Inventory
   - List all workloads, persistent volumes, and ingress configurations:
     kubectl get pods,svc,ingress,pv,pvc --all-namespaces
   - Document network dependencies (e.g., CNI plugins, MTU settings).
   - Identify stateful components requiring data migration (e.g., databases, NFS mounts).
2. Prepare AWS Environment
   - Use eksctl to provision an EKS cluster matching your k3s Kubernetes version:
     eksctl create cluster --name my-cluster --region us-west-2 --nodegroup-name worker-nodes --node-type t3.medium --nodes 3
   - Configure IAM roles for service accounts (IRSA) to match baremetal permissions.
3. Transfer Data and Configurations
   - Use Velero for backup/restore:
     velero backup create baremetal-backup --include-namespaces '*'
     velero backup download baremetal-backup
   - Sync persistent data to AWS using rsync, S3, or direct disk transfers.
4. Deploy and Validate
   - Restore workloads and configs to EKS:
     velero restore create --from-backup baremetal-backup
   - Test connectivity and application health:
     kubectl exec -it <pod-name> -- curl -v http://<service-name>
5. Cutover and Monitor
   - Update DNS records to point to EKS ingress.
   - Monitor logs and metrics for anomalies post-migration.
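The one-line eksctl command in step 2 can also be expressed as a declarative config file (applied with eksctl create cluster -f cluster.yaml), which is easier to review and version-control. A sketch using the same illustrative names as above:

```yaml
# cluster.yaml - declarative equivalent of the eksctl flags in step 2
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-west-2
nodeGroups:
  - name: worker-nodes
    instanceType: t3.medium
    desiredCapacity: 3
```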
Policy Example: Backup Strategy
apiVersion: v1
kind: ConfigMap
metadata:
  name: velero-backup-policy
data:
  backup-schedule: "0 2 * * *"   # Daily at 2 AM
  backup-retention: "30"         # Days to retain backups
Note: Ensure backups are stored in a cross-region S3 bucket for disaster recovery.
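Note that Velero does not consume an arbitrary ConfigMap like the one above; its native scheduling object is the Schedule custom resource. A sketch of the equivalent policy (the name is illustrative):

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup          # illustrative name
  namespace: velero
spec:
  schedule: "0 2 * * *"       # Daily at 2 AM, same cron expression as above
  template:
    ttl: 720h                 # 30-day retention
    storageLocation: default
```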
Tooling
- Velero: For cluster backups and restores (supports k3s and EKS).
- eksctl: For streamlined EKS cluster provisioning.
- Terraform: For infrastructure-as-code to replicate networking/storage.
- kubectl: For direct cluster interaction and debugging.
Tradeoffs and Caveats
- Cost vs. Downtime: Migrating large datasets to AWS may incur egress fees; consider incremental backups to reduce costs.
- Stateful Workloads: Databases or apps with local disk dependencies may require re-architecture (e.g., moving to RDS or EBS).
- Network Latency: Baremetal-to-AWS peering can introduce latency; test performance early.
Troubleshooting Common Issues
- Permission Errors: Verify IAM roles match k3s service account permissions:
  aws iam get-role --role-name <role-name>
- Network Misconfigurations: Check VPC subnets, security groups, and route tables for ingress/egress rules.
- Data Transfer Failures: Velero backups are compressed tarballs by default; for very large datasets, prefer incremental backups or volume snapshots over one monolithic backup.
- CNI Differences: k3s ships with Flannel by default, while EKS uses the AWS VPC CNI. Verify pod networking, DNS (CoreDNS), and NetworkPolicy behavior after migration to avoid networking glitches.
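For the permission errors above, the usual fix on EKS is annotating the workload's service account with an IAM role via IRSA. A minimal sketch (the account ID, role name, and service account name are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa                # placeholder service account name
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/app-role  # placeholder ARN
```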
Final Tip: Test the migration process in a staging environment first. I've seen migrations fail due to overlooked persistent volumes or DNS TTL settings; always have a rollback plan.
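One way to catch overlooked resources during a staging rehearsal is to diff inventories captured from both clusters. Assuming you saved sorted resource lists with something like kubectl get deploy,sts,pvc --all-namespaces -o name | sort > baremetal.txt (and likewise eks.txt; both file names are hypothetical), a minimal sketch:

```shell
#!/bin/sh
# diff_inventory FILE_OLD FILE_NEW: report drift between two sorted
# inventory files (one resource name per line, as produced by
# `kubectl get ... -o name | sort`).
diff_inventory() {
  echo "Missing on EKS:"
  comm -23 "$1" "$2"     # lines only in the baremetal list
  echo "Only on EKS:"
  comm -13 "$1" "$2"     # lines only in the EKS list
}
```

Run it as diff_inventory baremetal.txt eks.txt after capturing both snapshots; empty output under both headings means the inventories match.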
Source thread: baremetal k3s migration to AWS EKS?
