
Creating a Private Hybrid Kubernetes Cluster Pt. 1

This is the first of a two-part post detailing my work to create a private hybrid Kubernetes cluster.

I was asked recently to set up a hybrid Kubernetes (K8s) cluster so that it could run both Windows and Linux workloads. Full disclosure: I am firmly of the opinion that anything Windows can do, Linux can do better; however, despite my opinions, customers are allowed to make choices with which I disagree, so I got to work.

When undertaking a project like this, I generally try to do it once in a development environment, which helps me identify and resolve issues ahead of time, before working on customer resources.

And so we begin.

The customer wanted to run all of this on VMware-virtualized hardware. Fortunately, I had an ESXi machine, which is extremely similar to VMware vSphere in all relevant aspects. I created two Linux (Ubuntu 16.04) nodes with 3GB (for the master) and 2GB (node/ingress) of memory. That is far too little for any real work, but this was just a proof of concept, so no problem. I also created a Windows Server 2016 VM and gave it 4GB (later 6GB) of memory; the increase was due to Docker failing to start with a mere 4GB, while 6GB worked better.

The first plan was to use Rancher to configure a cluster consisting of 2 Linux nodes (1 master, 1 worker) and 1 Windows node (using Windows Server 2016). I wanted to deploy Rancher 2.1 so that I’d get all of the benefits of HA and the newest features. Rancher 2.1 is the first release that “experimentally” supports Windows.

Rancher makes setting up Kubernetes on Linux nodes trivially easy. Once I had finished creating the cluster, I “edited” it to add a node and was presented with the following command to run on my Windows host:

PowerShell -Sta -NoLogo -NonInteractive -Command "& {docker run --rm -v C:/:C:/host rancher/rancher-agent:v2.1.0-nanoserver-1803 -server https://[master-ip] -token [token] -caChecksum b4ba0dab664049f8d1cac6ae90c66c43abc1a8cf2109612d4d4fdb25e7e0703e; if($?){& c:/etc/rancher/run.ps1;}}"

Executing the command yielded an error about Windows version compatibility. It also produced an error about “True” not being a valid command, which sent me down the wrong road for a bit. (The command MUST be executed in cmd, not PowerShell.)

Eventually, I looked for help on the Kubernetes Slack channel (which I highly recommend to anyone interested in debugging their K8s troubles). I was informed that Windows Server 2016 (version 1607 / build 14393) is too old to work with Rancher 2.1, which requires Windows Server version 1803 or newer.

So, the Rancher plan was discarded. What next?

To the Internet!

After some googling, I found a few links that appeared to be perfect for my task.

Because there were so many guides on this, I assumed it’d take just another day or so to go through them and get everything working. To facilitate that, I started from scratch. This time, using two Xen-hosted Linux nodes along with my Windows Server 2016 instance, I had to do all of the Kubernetes setup that Rancher had taken care of for me.

First, I created the two Linux nodes, using Ubuntu Xenial again. This time around I gave them 3GB of memory each (my Xen Dom0 has more resources available than my ESXi host).

Then I followed a few different guides; there were likely a few others (my life is a sea of tabs).
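Whatever the exact set of guides, the Linux-side preparation they all converge on looks roughly like the sketch below, run on both nodes. (The repository and package names come from the upstream kubeadm install instructions of that era, not from any one of those guides.)

$ sudo apt-get update && sudo apt-get install -y docker.io apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl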

I knew that Windows nodes depend on the Flannel pod network, but only after my Flannel pod failed to start with a rather ambiguous error involving a pod CIDR did I find the problem: when using Flannel, one must append --pod-network-cidr=10.244.0.0/16 to the kubeadm init command.

After executing:

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 

on my master node, I was thrilled to see a simple way to add other (Linux) nodes to the cluster. I copied and pasted the command (after saving /etc/kubernetes/admin.conf as my kubectl config) and the second node joined up easily. I now had a working, albeit underpowered, K8s cluster.
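For the record, the bookkeeping around that step looked roughly like this; the token and hash are placeholders, and the Flannel manifest URL is the coreos-hosted one that was current at the time:

$ # on the master: make kubeadm's admin config the kubectl default
$ mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ # deploy the Flannel overlay for the Linux side
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ # on the second node: the join command that kubeadm init printed
$ sudo kubeadm join [master-ip]:6443 --token [token] --discovery-token-ca-cert-hash sha256:[hash]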

Now to the fun part…

As I mentioned above, I’m generally no fan of Windows.

I thought, “How hard could it be? I hear great things about the advances in Powershell, and with such good guides success should be mere hours away.”

I was wrong.

First I followed Kubernetes on Windows, which was a mistake because it assumes Windows Server 1709, which astute readers will note is not the same as 1607. But seriously, how different could they be, right?

It turns out that they are different “enough”. The helper scripts in the linked repo rely on built-in functions, such as New-HnsNetwork and Get-HnsNetwork, that don’t exist in Windows Server 2016 (1607).

Okay, no problem, right? I mean according to Kubernetes on Windows, it is possible to run Kubernetes on Windows Server 2016.

I then tried running the scripts anyway (after correcting the IP addresses to match my configuration) and, while I saw many errors, I was eventually greeted with this response on the master and client sides:

$ kubectl get nodes
NAME              STATUS     ROLES    AGE    VERSION
win-node-vmware   NotReady   worker   119s   v1.9.11
xen-lin-node-1    Ready      master   20m    v1.12.1
xen-lin-node-2    Ready      worker   15m    v1.12.1

So close! Now I just had to get the node to “Ready”.

So… to identify the problem with the “NotReady” state, I issued:

$ kubectl describe node win-node-vmware

and learned that the kubelet process had “stopped posting node status,” which means that the master knows nothing about the state of the node, which will obviously prevent pods from being scheduled to it.
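Incidentally, a more compact way to pull just the heartbeat data out of that wall of describe output, using the standard node condition fields:

$ kubectl get node win-node-vmware -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.lastHeartbeatTime}{"\n"}{end}'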

Okay, no problem, right? Just got to get the Windows node to start posting its status.

I thought, “Maybe it has to do with network connectivity; the start-kubelet.ps1 helper script throws a bunch of errors.”

After digging into it, I found that the problem was with line 102, where the script identifies its management subnet. So, I hardcoded that value.

Now, when I execute c:\k\start-kubelet.ps1 on the Windows host and query for information on the cluster, I see this:

$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
win-node-vmware   Ready    <none>   27m   v1.9.11
xen-lin-node-1    Ready    master   24h   v1.12.1
xen-lin-node-2    Ready    worker   24h   v1.12.1

Next, it is important to set the role on the Windows node so that it can handle pods. To do that, I executed:

$ kubectl label node win-node-vmware node-role.kubernetes.io/worker=worker

Then everything looked good:

$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
win-node-vmware   Ready    worker   27m   v1.9.11
xen-lin-node-1    Ready    master   24h   v1.12.1
xen-lin-node-2    Ready    worker   24h   v1.12.1

I was extremely suspicious of this. Generally, the “right” practice is not to hack values into a script and assume there will be no repercussions. To test it, I ran some pods to see whether containers could run on the Windows node:

$ wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/WebServer.yaml -O win-webserver.yaml
$ kubectl apply -f win-webserver.yaml
$ watch kubectl get pods -o wide

After waiting for a minute or two…

$ kubectl get pods
NAME                             READY   STATUS              RESTARTS   AGE
win-webserver-844d48bd48-784wt   0/1     ContainerCreating   0          2m23s

Things weren’t working correctly, so I dug in and saw: “Failed create pod sandbox”

“Failed create pod sandbox” can mean a couple of things. In this case, the more illuminating error messages were on the Windows host, where the log was full of errors about the CNI network plugin failing to set up the pod. This can be caused by DNS failing to resolve, so I updated the script again to hardcode the correct DNS server, as identified by creating a busybox pod and cat’ing /etc/resolv.conf.
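In case it’s useful, those two checks spelled out as commands; the pod name is a placeholder for whatever kubectl get pods reports, and busybox is just a convenient throwaway image:

$ kubectl describe pod [pod-name]   # the Events section carries the CNI / sandbox errors
$ kubectl run busybox --rm -it --image=busybox --restart=Never -- cat /etc/resolv.conf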

Unfortunately, hardcoding the DNS server made no difference. I continued to believe that the most likely cause of my trouble was the HNS-related calls from above. Most notably, the helper scripts rely on the Get-HnsNetwork and New-HnsNetwork PowerShell functions, which were mysteriously absent from the Windows Server build that I was using. I posted the error to a forum and, within a few days, a friendly forum member responded with a solution:

“For Windows Server 2016 version 10.0.14393 (1607), after installing Docker make sure that the Windows Containers feature is installed, then add the HostNetworkingService folder to the PowerShell modules folder (C:\Windows\System32\WindowsPowerShell\v1.0\Modules).”

I tried this, but ultimately was unsuccessful.

Later, I realized that many of the errors that I was seeing on the Windows node were likely due to the fact that I had neglected to set up and configure the Flannel network overlay. However, I was then fortunate enough to obtain two critical pieces:

  1. A Windows Server (version 1709) ISO
  2. This guide

The above guide does almost exactly what I was trying to do, albeit with a different version of Windows Server (I should stop thinking that Windows Server versions will be compatible with one another).

I ran through the guide above. Unfortunately, this time I was stymied by a different error regarding Windows versions:

“Error response from daemon: container [container-id] encountered an error during CreateContainer: failure in a Windows system call: The operating system of the container does not match the operating system of the host.”

This issue is fairly well known, but I thought that it was resolved by the following steps (documented in several of the guides):

C:\> docker pull microsoft/nanoserver:[ltsc2016|1709|1803]
C:\> docker tag microsoft/nanoserver:[ltsc2016|1709|1803] microsoft/nanoserver:latest

These steps are supposed to ensure compatibility between the version of the Windows node and any containers that will be deployed to it.
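A sanity check worth knowing here: Windows images record the build they target in their metadata, so you can compare image and host builds directly, before Kubernetes ever gets involved. (The tags shown are illustrative, and the OsVersion field is only populated for Windows images.)

C:\> docker inspect --format "{{.OsVersion}}" microsoft/nanoserver:ltsc2016
C:\> docker inspect --format "{{.OsVersion}}" microsoft/nanoserver:1709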

Theoretically, to test the pull-and-tag fix, one should be able to execute:

docker run microsoft/nanoserver

and see the non-PowerShell C:\> prompt for several seconds:

PS C:\k> docker run microsoft/nanoserver
Microsoft Windows [Version 10.0.16299.726]
(c) 2017 Microsoft Corporation. All rights reserved.

C:\>
PS C:\k>

This test worked for me, so I was (and continue to be) stymied as to why running the YAML below is any different:

apiVersion: v1
kind: Service
metadata:
  name: win-webserver
  labels:
    app: win-webserver
spec:
  ports:
    # the port that this service should serve on
  - port: 80
    targetPort: 80
  selector:
    app: win-webserver
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: win-webserver
  name: win-webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: win-webserver
      name: win-webserver
    spec:
      containers:
      - name: windowswebserver
        image: microsoft/nanoserver
        imagePullPolicy: IfNotPresent
        command:
        - powershell.exe
        - -command
        - "<#code used from https://gist.github.com/wagnerandrade/5424431#> ; $listener = New-Object System.Net.HttpListener ; $listener.Prefixes.Add('http://*:80/') ; $listener.Start() ; $callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($listener.IsListening) { ;$context = $listener.GetContext() ;$requestUrl = $context.Request.Url ;$clientIP = $context.Request.RemoteEndPoint.Address ;$response = $context.Response ;Write-Host '' ;Write-Host('> {0}' -f $requestUrl) ; ;$count = 1 ;$k=$callerCounts.Get_Item($clientIP) ;if ($k -ne $null) { $count += $k } ;$callerCounts.Set_Item($clientIP, $count) ;$ip=(Get-NetAdapter | Get-NetIpAddress); $header='<html><body><H1>Windows Container Web Server</H1>' ;$callerCountsString='' ;$callerCounts.Keys | % { $callerCountsString+='<p>IP {0} callerCount {1} ' -f $ip[1].IPAddress,$callerCounts.Item($_) } ;$footer='</body></html>' ;$content='{0}{1}{2}' -f $header,$callerCountsString,$footer ;Write-Output $content ;$buffer = [System.Text.Encoding]::UTF8.GetBytes($content) ;$response.ContentLength64 = $buffer.Length ;$response.OutputStream.Write($buffer, 0, $buffer.Length) ;$response.Close() ;$responseStatus = $response.StatusCode ;Write-Host('< {0}' -f $responseStatus) } ; "
      nodeSelector:
        beta.kubernetes.io/os: windows
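And for completeness, the two commands I kept re-running against that manifest (the pod name is again a placeholder):

$ kubectl get pods -o wide          # confirms the pod was scheduled to win-node-vmware
$ kubectl describe pod [pod-name]   # the Events section carries the CreateContainer failure above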