Selective OpenShift IPI on AWS GovCloud without Route 53

Bring your own DNS to an OpenShift IPI deployment on AWS

JR Morgan



If you’ve deployed OpenShift on AWS using the IPI (Installer Provisioned Infrastructure) method, you’re aware of the hard requirement for Route 53 public and/or private zones, depending on the publish method set in your OpenShift install-config.yaml. This typically doesn’t present a problem, but select companies disallow use of Route 53 in favor of their own managed DNS (e.g. Infoblox). Unfortunately this limitation forces most of those customers to pursue a UPI (User Provisioned Infrastructure) deployment, which may (read: should) require custom Terraform/Ansible automation to stand up all the other prerequisites: subnets, security groups, load balancers, EC2 instances, S3 buckets, etc. There’s good news, though: an alternative to UPI, or to IPI with Route 53, exists, and it involves simple customizations to the Terraform configs embedded in the openshift-install binary. Over UPI, this has the added benefit of pre-creating all MachineSets, cloud provider integrations, etc., and sets you up to immediately take advantage of cluster autoscaling.

The details below focus primarily on an AWS GovCloud deployment without Route 53, but the approach should apply to any situation where you’d like to pursue an IPI deployment and the options/parameters for your limitation aren’t baked in. I’m still following the official steps to remove ingress operator management of DNS for ingress load balancers, but I’m also completely eliminating the need for public/private zone provisioning, and even record creation in a pre-existing Route 53 zone, since my customers do not allow any Route 53 use.

Customize openshift-install

Install go

curl -o go1.16.3.linux-amd64.tar.gz https://golang.org/dl/go1.16.3.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.16.3.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
go version

Clone repo and customize

You can clone my fork with a pre-commented/omitted dns module, or clone the official upstream repo and incorporate your own changes:

Ensure you check out the appropriate branch matching your pre-existing or future Red Hat CoreOS AMIs.

## Clone the upstream repo (or a fork with the route53 provider commented out):

git clone https://github.com/openshift/installer.git
cd installer

## Optional but recommended to checkout a specific branch which should correlate to the AMI in your VPC
git checkout release-4.7

## Make edits as needed to provider files in ./data/data/aws/ or if you're using my repo continue to build

## Build the binary (output lands in ./bin/openshift-install)
hack/build.sh
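For reference, the core change amounts to commenting out the DNS module wiring in the embedded Terraform so no Route 53 resources are ever created. A sketch, assuming the 4.7-era layout where data/data/aws/main.tf references a route53 module (the exact module name and variables may differ in your branch, so treat this as illustrative, not verbatim):

## data/data/aws/main.tf -- comment out the dns module block entirely
#
# module "dns" {
#   source = "./route53"
#
#   api_external_lb_dns_name = module.vpc.aws_lb_api_external_dns_name
#   api_internal_lb_dns_name = module.vpc.aws_lb_api_internal_dns_name
#   base_domain              = var.base_domain
#   cluster_domain           = var.cluster_domain
#   cluster_id               = var.cluster_id
#   ...
# }

With the module commented out, Terraform still provisions the VPC, load balancers, and EC2 instances, but leaves all DNS record creation to you.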

Deploy your cluster via IPI

NOTE: This is not an officially supported method of deployment/customization.

Prepare install-config.yaml

My install-config.yaml matches customer environments as much as possible. See the sample install-config.yaml I’m using, or create your own. It’s best to create a new working directory for your installation; this file should reside in your current working directory for the remaining commands. You may need to reference your customized openshift-install binary via an absolute path, as shown below.
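If you’re creating your own, a minimal sketch for an internal GovCloud deployment into existing subnets might look like the following. Every value here is a placeholder for your environment; note that GovCloud deployments typically also require you to supply your own RHCOS AMI via platform.aws.amiID, since official AMIs aren’t published there:

apiVersion: v1
baseDomain: domain.tld
metadata:
  name: cluster
platform:
  aws:
    region: us-gov-west-1
    amiID: ami-0exampleid
    subnets:
    - subnet-0example1
    - subnet-0example2
publish: Internal
pullSecret: '...'
sshKey: |
  ssh-rsa AAAA...

publish: Internal is what makes only the privateZone appear in the generated DNS manifest, which matters for the purge step below.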

Prepare Manifests

Prior to openshift-install execution, make sure you set OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE for your environment. Failure to set this environment variable may result in your deployment attempting to consume invalid images from OpenShift CI builds.

## Set an alternative release image, since a custom build defaults to invalid/incorrect CI images
## (substitute the release image for your target version):
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=<release-image>

## Create Manifests
/home/ec2-user/installer/bin/openshift-install create manifests --log-level debug

Clean out DNS from manifests

Due to my install-config.yaml value for publish, I’m only required to purge the privateZone, but I’m including both public and private zones here in the event your manifests have both (this snippet is adapted from the official OpenShift documentation):

python -c '
import yaml
path = "manifests/cluster-dns-02-config.yml"
data = yaml.safe_load(open(path))
# pop() rather than del, so a missing zone key is not an error
data["spec"].pop("publicZone", None)
data["spec"].pop("privateZone", None)
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
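After the purge, the DNS config manifest should contain only the base domain, along the lines of the following (the baseDomain value is illustrative):

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster
spec:
  baseDomain: cluster.domain.tld
status: {}

With no publicZone or privateZone present, the ingress operator will not attempt any Route 53 record management.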

Prepare Ignition Configs

/home/ec2-user/installer/bin/openshift-install create ignition-configs --log-level debug

Deploy cluster

This is the easy part, but mind the notes below on timing and install/provisioning resumption if failures are encountered:

/home/ec2-user/installer/bin/openshift-install create cluster --log-level debug

During installation you’ll need to obtain the AWS load balancer DNS names and establish CNAME records for api.cluster.domain.tld, api-int.cluster.domain.tld, and, later in the install process, *.apps.cluster.domain.tld pointing to the separate load balancer for your ingress controller. I’m using CloudFlare for my DNS and actually pre-create these records before installation with a short two-minute TTL. It’s not critical if your installation fails due to DNS resolution timeouts, as you can easily resume via the wait-for subcommands. Should you encounter a failure, ensure your DNS records are properly associated with the IPI-created load balancers, then resume bootstrapping and cluster install via:
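In zone-file terms, the records end up looking roughly like this (the load balancer DNS names below are made up; substitute the DNS names of the load balancers the installer actually creates in your account):

api.cluster.domain.tld.     120 IN CNAME internal-cluster-abc123.elb.us-gov-west-1.amazonaws.com.
api-int.cluster.domain.tld. 120 IN CNAME internal-cluster-abc123.elb.us-gov-west-1.amazonaws.com.
*.apps.cluster.domain.tld.  120 IN CNAME ingress-def456.elb.us-gov-west-1.amazonaws.com.

With publish: Internal, api and api-int both point at the internal API load balancer; the *.apps wildcard points at the ingress controller’s load balancer, which appears later in the install.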

/home/ec2-user/installer/bin/openshift-install wait-for bootstrap-complete

## Assuming success from bootstrap completion:
/home/ec2-user/installer/bin/openshift-install wait-for install-complete
