Recently I had to work with one of my colleagues (David) on something that was new to me: OpenShift. I had never really looked at OpenShift but knew the basic concepts, at least on OKD 3.x.
With 4.x, OCP is completely different: instead of deploying a "normal" Linux distro (like CentOS in our case), it now uses RHCOS (so CoreOS) as its foundation. The goal of this blog post is not to dive into all the technical steps required to deploy/bootstrap the OpenShift cluster, but to discuss one particular 'issue' that I found annoying while deploying: how to disable DHCP on the RHCOS provisioned nodes.
To cut a long story short, you can read the basic steps needed to deploy OpenShift on bare-metal in the official doc.
Have you read it? Good, now we can move forward :)
After we had configured our install-config.yaml (with our needed values) and also generated the manifests with openshift-install create manifests --dir=/path/, we thought it would just be a matter of deploying with the ignition files built by the openshift-install create ignition-configs --dir=/path step (see the above doc for all the details).
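For reference, a minimal sketch of that command flow (paths are placeholders): be aware that openshift-install consumes the install-config.yaml and the manifests while generating the ignition configs, so any customization has to happen in between:

openshift-install create manifests --dir=/path/
# ... customize the generated manifests here (see below) ...
openshift-install create ignition-configs --dir=/path/
# => bootstrap.ign, master.ign and worker.ign now sit under /path/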
Sure enough, we ended up with the expected ignition files:
- bootstrap.ign
- worker.ign
- master.ign
Those ignition files are (more or less) like traditional kickstart files that let you automate the RHCOS deployment on bare-metal. The other part is really easy, as it's just a matter (with Ansible in our case) of configuring the TFTP boot arguments and calling an ad-hoc task to remotely force a physical reinstall of the machine (through IPMI):
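We won't paste our actual Ansible tasks here, but as a minimal sketch (every hostname, credential and URL below is a hypothetical placeholder), the PXE append line carrying the fixed-IP kernel parameters and the ad-hoc IPMI calls look roughly like this:

# pxelinux.cfg fragment: install RHCOS with a fixed IP (dracut syntax:
#   ip=<client-ip>::<gateway>:<netmask>:<hostname>:<interface>:none)
APPEND ip=192.168.1.10::192.168.1.254:255.255.255.0:worker-0:eno1:none nameserver=192.168.1.1 coreos.inst=yes coreos.inst.install_dev=sda coreos.inst.image_url=http://pxe.example.com/rhcos-metal.raw.gz coreos.inst.ignition_url=http://pxe.example.com/worker.ign
# then force PXE on next boot and power-cycle the node through its BMC
ipmitool -I lanplus -H bmc-worker-0.example.com -U admin -P secret chassis bootdev pxe
ipmitool -I lanplus -H bmc-worker-0.example.com -U admin -P secret chassis power cycle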
So we first kicked off the bootstrap node (an ephemeral node used as a temporary master, from which the real masters forming the etcd cluster will get their initial config), but then we realized that, while RHCOS was installed and responding on the fixed IP we had set through pxeboot kernel parameters (and correctly applied on reboot), each RHCOS node was also trying, by default, to activate all the NICs present on the machine.
That suddenly got "interesting", as we don't fully control the network those machines are in, and each physical node has 4 NICs, all in the same VLAN, in which we also have a small DHCP range for other deployments. Do you see the problem with etcd and members in the same subnet with multiple IP addresses? Yeah, it wasn't working: we saw some requests coming from the DHCP-configured interfaces instead of the first, properly configured NIC on each system.
The "good" thing is that you can still ssh into each deployed RHCOS (even if not adviced to) , to troubleshoot this. We discovered that RHCOS still uses NetworkManager but that default settings would be to enable all NICs with DHCP if nothing else declared which is what we need to disable.
After some research and help from Colin Walters, we were pointed to this bug report for coreos
With my traditional "CentOS Linux" sysadmin mindset, I thought: "good, we can just automate this with Ansible, ssh'ing into each provisioned RHCOS node to disable it", but there had to be a cleverer way to deal with this, as it was also impacting our initial bootstrap and master nodes (so no way to get the cluster up).
That's when we found this: Customizing deployment with Day0 config, which includes a simple example for chrony.
That's how I understood the concept of MachineConfig and how it's supposed to work, both for a provisioned cluster and for the bootstrap process. So let's use that information to create what we need and start a fresh deploy.
Assuming that we want to create our manifests in /<path>/:
openshift-install create manifests --dir=/<path>/
And now that we have manifests, let's inject our machine configs. You'll see that, because it's YAML all over the place, injecting YAML in YAML would be "interesting", so the concept here is to inject file content as a base64-encoded string, everywhere.
Let's suppose that we want the file /etc/NetworkManager/conf.d/disabledhcp.conf to have this content on each provisioned node (master and worker), to tell NetworkManager not to default to auto/DHCP:
[main]
no-auto-default=*
Let's first encode it to base64:
cat << EOF | base64
[main]
no-auto-default=*
EOF
Our base64 value is W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==
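You can quickly double-check it by decoding the string back:

echo 'W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==' | base64 -d
# [main]
# no-auto-default=*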
So now that we have the content, let's create the manifests that will automatically create that file at provisioning time:
pushd <path>
# Ensure provisioned masters act as dedicated masters (not schedulable for regular workloads)
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml
pushd openshift
for variant in master worker; do
cat << EOF > ./99_openshift-machineconfig_99-${variant}-nm-nodhcp.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${variant}
  name: nm-${variant}-nodhcp
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==
          verification: {}
        filesystem: root
        mode: 0644
        path: /etc/NetworkManager/conf.d/disabledhcp.conf
  osImageURL: ""
EOF
done
popd
popd
I think this snippet is pretty straightforward, and you can see in the source field how we "inject" the content of the file itself (the base64 value we got in the previous step).
Now that we have added our customizations, we can just run the openshift-install create ignition-configs --dir=/<path> command again, retrieve our .ign files, and call Ansible again to redeploy the nodes. This time they were deployed correctly, with only the IP coming from the Ansible inventory and no other NIC activated through DHCP.
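Once the cluster was reachable, a quick way to confirm the customization was applied (the MachineConfig names come from the loop above; the node name is a placeholder) is through oc, or by looking directly at a node:

oc get machineconfig nm-master-nodhcp nm-worker-nodhcp
# and verify the rendered file on one of the nodes
ssh core@master-0.example.com cat /etc/NetworkManager/conf.d/disabledhcp.conf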
And now that it all works, deploying/adding more worker nodes to the OCP cluster is just a matter of calling Ansible, and physical nodes are deployed in a matter of ~5 minutes (as RHCOS just extracts its own archive on disk and reboots).
I don't know if I'll have to take multiple deep dives into OpenShift in the future, but at least I learned multiple things. And yes: you always learn more when you have to deploy something for the first time and it doesn't work straight away... so while you try to learn the basics from the official doc, you also have to find other resources/docs elsewhere :-)
Hope this can help people in the same situation, having to deploy OpenShift on-premises/bare-metal.