Continuing from the last post, where we deployed the VMware HAProxy appliance, we will now enable a vSphere cluster for Workload Management by configuring it as a Supervisor Cluster.
Part 1 – Getting Started with Tanzu Basic
What is Workload Management?
With Workload Management we can deploy and operate the compute, networking, and storage infrastructure for vSphere with Kubernetes. vSphere with Kubernetes transforms vSphere into a platform for running Kubernetes workloads natively on the hypervisor layer. When enabled on a vSphere cluster, vSphere with Kubernetes provides the capability to run Kubernetes workloads directly on ESXi hosts and to create upstream Kubernetes clusters within dedicated resource pools.
Since we chose to create a Supervisor Cluster with the vSphere networking stack in the previous post, vSphere Native Pods will not be available, but we can still create Tanzu Kubernetes clusters.
During our HAProxy deployment, we chose the HAProxy VM configuration with three virtual NICs, which connects HAProxy to a Frontend network. DevOps users and external services can access HAProxy through virtual IPs on the Frontend network. Below are the prerequisites to enable Workload Management:
- DRS and HA must be enabled on the vSphere cluster, and DRS must be in fully automated mode.
- Configure shared storage for the cluster. Shared storage is required for vSphere DRS, HA, and storing persistent volumes of containers.
- Storage Policy: Create a storage policy for the placement of Kubernetes control plane VMs.
- I have created two policies named “basic” & “TanzuBasic”
- NOTE: Create the policy with a lower-case policy name
- This policy has been created with Tag based placement rules
- Content Library: Create a subscribed content library using the URL https://wp-content.vmware.com/v2/latest/lib.json on the vCenter Server to download the VM images used for creating the nodes of Tanzu Kubernetes clusters. The library will contain the latest distributions of Kubernetes.
- Add all hosts from the cluster to a vSphere Distributed Switch and create port groups for Workload Networks.
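The lower-case note above matters because the storage policy name is carried over into Kubernetes object names, which must follow DNS-1123 label rules (lowercase alphanumerics and hyphens only). A minimal sketch of that check; the function name is mine, not a VMware or Kubernetes API:

```python
import re

def is_valid_storage_policy_name(name: str) -> bool:
    """Check a storage policy name against DNS-1123 label rules
    (lowercase alphanumerics and hyphens, starting and ending with
    an alphanumeric), which Kubernetes applies to derived names."""
    return re.fullmatch(r"[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?", name) is not None

print(is_valid_storage_policy_name("basic"))       # True
print(is_valid_storage_policy_name("TanzuBasic"))  # False: contains upper case
```

By this rule, of the two policies above only “basic” is safe to use as-is for Workload Management.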
Deploy Workload Management
With the release of vSphere 7 Update 1, a free 60-day trial of Tanzu is available. Enter your details to receive communication from VMware and get started with Tanzu.
The next screen takes you to the networking options available with vCenter. Make sure:
- You choose the correct vCenter
- For networking there are two networking stacks; since we haven’t installed NSX-T, that option will be greyed out and unavailable. Choose “vCenter Server Network” and click “Next”
On the next screen you will be presented with the vSphere clusters that are compatible with Tanzu. In case you don’t see any cluster, go to the “Incompatible” section and click on the cluster; this will give you guidance on the reason for the incompatibility. Go back, fix the issue, and try again.
Select the size of the resource allocation you need for the control plane. For an evaluation, Tiny or Small should be enough; click Next.
Storage: Select the storage policy which we created as part of the prerequisites and click Next.
Load Balancer: This section is very important and we need to ensure that we provide correct values:
- Enter a DNS-compliant name; don’t use underscores in the name
- Select the type of Load Balancer: “HA Proxy”
- Enter the Management data plane IP address. This is the management IP and port assigned to the VMware HAProxy management interface; in our case it is 192.168.115.10:5556.
- Enter the username and password used during the HAProxy deployment
- Enter the IP Address Ranges for Virtual Servers. These are the IP addresses we defined on the Frontend network, the exact same range we used during the HAProxy deployment, but this time written as a full range instead of CIDR format. In this case I am using 192.168.117.33-192.168.117.62.
- Finally, enter the Server CA certificate. If you added a certificate during deployment, use that. If you used a self-signed certificate, you can retrieve it from the VM at /etc/haproxy/ca.crt.
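The wizard wants a literal first-last range rather than the CIDR notation used during the HAProxy deployment. A small sketch of the conversion, assuming the frontend block was 192.168.117.32/27 as in the previous post:

```python
import ipaddress

def cidr_to_virtual_ip_range(cidr: str) -> str:
    """Expand a CIDR block into the first-last usable-host range
    string that the Workload Management wizard expects."""
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())  # excludes network and broadcast addresses
    return f"{hosts[0]}-{hosts[-1]}"

# The /27 frontend block from the HAProxy deployment post:
print(cidr_to_virtual_ip_range("192.168.117.32/27"))
# 192.168.117.33-192.168.117.62
```

This reproduces exactly the range entered above, which is a quick way to double-check that the wizard value matches what HAProxy was deployed with.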
Management Network: The next section configures IP addresses for the Tanzu Supervisor control plane VMs; these come from the management IP range.
- We will need 5 consecutive free IPs from the management IP range. The Starting IP Address is the first IP in a range of five IPs to assign to the Supervisor control plane VMs’ management network interfaces.
- One IP is assigned to each of the three Supervisor control plane VMs in the cluster
- One IP is used as a floating IP, which we will use to connect to the management plane
- One IP is reserved for use during the upgrade process
- These IPs will be on the management port group
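To see how the five consecutive IPs are consumed (three control plane VMs, one floating IP, one reserved for upgrades), a quick sketch; the starting address 192.168.115.50 is a made-up example from the management subnet, not a value taken from the wizard:

```python
import ipaddress

def supervisor_management_ips(start: str, count: int = 5) -> list[str]:
    """List the consecutive management IPs consumed by the Supervisor,
    beginning at the 'Starting IP Address' entered in the wizard."""
    first = ipaddress.ip_address(start)
    return [str(first + i) for i in range(count)]

# Hypothetical starting IP from the 192.168.115.0/24 management range:
print(supervisor_management_ips("192.168.115.50"))
# ['192.168.115.50', '192.168.115.51', '192.168.115.52',
#  '192.168.115.53', '192.168.115.54']
```

The first three go to the control plane VMs, the fourth is the floating IP, and the fifth is held back for upgrades, so make sure none of the five is already in use.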
Service IP Addresses: we can take the default subnet for “IP Address for Services”. Change this only if you are already using that subnet elsewhere. This subnet is for internal communication and is not routed.
In the last network section we define the Kubernetes node IP range; this applies to both the Supervisor Cluster and the guest TKG clusters. This range comes from the workload IP range which we created in the last post on VLAN 116.
- Port Group – workload
- IP Address Range – 192.168.116.32-192.168.116.63
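Since the services subnet must not collide with the workload range, a quick overlap check can save a failed enablement. A sketch under the assumption that the default services subnet is 10.96.0.0/23 (verify the default shown in your wizard); the helper name is mine:

```python
import ipaddress

def range_overlaps_subnet(range_str: str, subnet: str) -> bool:
    """Return True if any address in a 'first-last' workload range
    falls inside the given services subnet."""
    first_s, last_s = range_str.split("-")
    first = ipaddress.ip_address(first_s)
    last = ipaddress.ip_address(last_s)
    net = ipaddress.ip_network(subnet)
    # Overlap if either endpoint is inside the subnet, or the range spans it.
    return first in net or last in net or (first < net[0] and last > net[-1])

# Workload range from this post vs. the assumed default services subnet:
print(range_overlaps_subnet("192.168.116.32-192.168.116.63", "10.96.0.0/23"))
# False
```

A False result here means the defaults are safe to keep; a True would mean you need to pick a different services subnet.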
Finally, choose the content library which we created as part of the prerequisites.
If you have provided the right information with the correct configuration, it will take around 20 minutes to install and configure the entire TKG management plane. You might see a few errors while the management plane is being configured, but you can ignore them; those operations are retried automatically, and the errors will clear once the corresponding task succeeds.
NOTE: The above screenshot shows a different cluster name because it was taken from a different environment, but the IP schema is the same.
I hope this article helps you enable your first “Workload Management” vSphere cluster without NSX-T. In the next blog post I will cover deployment of TKG clusters and related topics…