As part of modernizing your data center to run VMs and containers side by side, you can run Kubernetes as part of vSphere with Tanzu Basic. Tanzu Basic embeds Kubernetes into the vSphere control plane for the best administrative control and user experience, letting you provision clusters directly from vCenter and run containerized workloads with ease. Tanzu Basic is the most affordable edition of Tanzu.
To install and configure Tanzu Basic without NSX-T, at a high level there are four steps, which I will cover across three blog posts:
- vSphere 7 with a cluster that has HA and DRS enabled should already be configured
- Installation of VMware HA Proxy Load Balancer – Part1
- Tanzu Basic – Enable Workload Management – Part2
- Tanzu Basic – Building TKG Cluster – Part3
Deploy VMware HAProxy
There are a few topologies for setting up Tanzu Basic with vSphere-based networking. For this blog we will deploy the HAProxy VM with three virtual NICs: one on a “Management” network, one on a “Workload” network, and one on a “Frontend” network. DevOps users and external services access HAProxy through virtual IPs on the Frontend network.
|Network|Purpose|
|---|---|
|Management|Communication with vCenter and HAProxy|
|Workload|IPs assigned to Kubernetes nodes|
|Frontend|DevOps users and external services|
For this blog, I have created three VLAN-based networks with the IP ranges shown below:
Here is the topology diagram. HAProxy is configured with three NICs, and each NIC is connected to one of the VLANs we created above.
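As a quick sanity check, the three lab subnets can be modeled with Python's `ipaddress` module to confirm they do not overlap (the ranges below are this lab's values; adjust them for your environment):

```python
import ipaddress

# Lab networks used in this deployment (lab values; adjust to your environment)
networks = {
    "Management (VLAN 115)": ipaddress.ip_network("192.168.115.0/24"),
    "Workload (VLAN 116)": ipaddress.ip_network("192.168.116.0/24"),
    "Frontend (VLAN 117)": ipaddress.ip_network("192.168.117.0/25"),
}

# The three networks must be distinct; overlapping subnets break routing
names = list(networks)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not networks[a].overlaps(networks[b]), f"{a} overlaps {b}"
print("No overlapping subnets")
```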
NOTE – if you want a deep dive into this networking, refer here. That blog post describes it very nicely, and I have used the same networking schema in this lab deployment.
This is not a standard HAProxy build; it is a customized one whose Data Plane API is designed to enable Kubernetes workload management with Project Pacific on vSphere 7. The VMware HAProxy deployment is very simple: you can download the OVA directly from here and follow the same procedure as for any other OVA deployment on vCenter. A few important steps are covered below:
On the Configuration screen, choose “Frontend Network” for the three-NIC deployment topology.
Next is the Networking section, which is the heart of the solution. Here we map the port groups created above to the Management, Workload and Frontend networks.
The Management network is on VLAN 115; this is the network where the vSphere with Tanzu Supervisor control plane VMs/nodes are deployed.
The Workload network is on VLAN 116; this is where the Tanzu Kubernetes cluster VMs/nodes will be deployed.
The Frontend network is on VLAN 117; this is where the load balancer VIPs (Supervisor API server, TKG API servers, TKG LB Services) are provisioned. The Frontend network and the Workload network must be routable to each other for successful WCP (Workload Control Plane) enablement.
The next page is the most important one: the VMware HAProxy appliance configuration. Provide a root password and tick/untick the root login option as you prefer. The TLS fields will be automatically generated if left blank.
In the “network config” section, provide the VMware HAProxy network details for the management network, the workload network and the frontend/load balancer network. All of these require static IP addresses in CIDR format, and the prefix length must match the subnet mask of your networks.
- Management IP: 192.168.115.5/24, GW: 192.168.115.1
- Workload IP: 192.168.116.5/24, GW: 192.168.116.1
- Frontend IP: 192.168.117.5/25, GW: 192.168.117.1 (not optional if you selected the Frontend network in the “Configuration” section)
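Each static IP/gateway pair can be verified with a short `ipaddress` check: the interface address (in CIDR form) and its gateway must share the same subnet. The addresses below are this lab's values; substitute your own:

```python
import ipaddress

# HAProxy appliance addresses from this lab (replace with yours)
pairs = [
    ("192.168.115.5/24", "192.168.115.1"),  # Management
    ("192.168.116.5/24", "192.168.116.1"),  # Workload
    ("192.168.117.5/25", "192.168.117.1"),  # Frontend
]

for cidr, gw in pairs:
    iface = ipaddress.ip_interface(cidr)
    # The gateway must live inside the interface's subnet
    assert ipaddress.ip_address(gw) in iface.network, f"{gw} not in {iface.network}"
print("All gateways are inside their subnets")
```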
In the Load Balancing section, enter the load balancer IP range. These addresses will be used as virtual IPs (VIPs) by the load balancer, and they must come from the Frontend network's IP range.
Here I am specifying 192.168.117.32/27; this segment gives me 30 usable addresses for VIPs, covering Tanzu management plane access and applications exposed for external consumption. Ignore “192.168.117.30” in the image background.
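The VIP range math can be checked the same way: a /27 carved out of the Frontend /25 must sit entirely inside the Frontend subnet, and it yields 30 usable addresses (32 minus the network and broadcast addresses):

```python
import ipaddress

frontend = ipaddress.ip_network("192.168.117.0/25")    # Frontend network (lab value)
vip_range = ipaddress.ip_network("192.168.117.32/27")  # Load balancer VIP range

# The VIP block must be fully contained in the Frontend network
assert vip_range.subnet_of(frontend)

# A /27 holds 32 addresses; hosts() excludes network and broadcast
usable = list(vip_range.hosts())
print(len(usable))            # 30
print(usable[0], usable[-1])  # 192.168.117.33 192.168.117.62
```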
Enter the Data Plane API management port (5556), along with a username and password for the load balancer Data Plane API.
Finally, review the summary and click Finish. This deploys the VMware HAProxy load balancer appliance.
Once the deployment completes, power on the appliance, SSH into the VM using the management IP, and check that all the interfaces have the correct IPs:
Also check that you can ping the Frontend IP range and the other IP ranges. Stay tuned for Part2.