With the release of vSphere 7.0 Update 2, VMware adds a new load balancer option for vSphere with Tanzu that provides a production-ready load balancer for your vSphere with Tanzu deployments. This load balancer is called NSX Advanced Load Balancer (NSX ALB), also known as the AVI Load Balancer. It provides virtual IP addresses for the Supervisor control plane API server, the TKG guest cluster API servers, and any Kubernetes application that requires a service of type LoadBalancer. In this post, I will go through a step-by-step deployment of the new NSX ALB along with vSphere with Tanzu.
VLAN & IP Address Planning
There are many ways to plan IP addressing; in this lab I will place management, VIP, and workload nodes on three different networks. For this deployment I will use three VLANs: one for Tanzu management, one for the frontend (VIP) network, and one for the Supervisor cluster and TKG clusters. Here is my IP planning sheet:
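As a quick sanity check on a plan like this, the three subnets can be verified as non-overlapping before carving out VLANs. The sketch below uses the workload and VIP subnets that appear later in this post; the management subnet shown is an assumption for illustration only.

```python
import ipaddress

# Lab plan: workload and VIP subnets match the ones used later in this
# post; the management subnet is a hypothetical placeholder.
plan = {
    "management": ipaddress.ip_network("192.168.115.0/24"),   # assumed
    "workload": ipaddress.ip_network("192.168.116.0/24"),
    "frontend-vip": ipaddress.ip_network("192.168.117.0/24"),
}

# Verify that no two subnets overlap.
nets = list(plan.values())
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

for name, net in plan.items():
    print(f"{name}: {net} ({net.num_addresses - 2} usable hosts)")
```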
Deploying & Configuring NSX ALB (AVI)
Now let's deploy the NSX ALB controller (AVI LB), following a process very similar to deploying any other OVA. I will assign the NSX ALB management IP address from the management network range. The NSX ALB is available as an OVA; in this deployment I am using version 20.1.4. The only information required at deployment time is:
- A static IP Address
- A subnet mask
- A default gateway
- A sysadmin login authentication key
I have deployed a single controller appliance for this lab, but for production deployments it is recommended to create a three-node controller cluster for high availability and better performance.
Once the OVA deployment completes, power on the VM and wait a few minutes before browsing to the NSX ALB URL using the IP address provided during deployment. Log in to the controller and then:
- Enter DNS Server Details and Backup Passphrase
- Add NTP Server IP address
- Provide Email/SMTP details (optional)
Next, choose VMware vCenter as your “Orchestrator Integration”. This creates a new cloud configuration in NSX ALB called Default-Cloud. Enter the following details on the next screen:
- vCenter IP address
- vCenter credentials
- Permission – Write
- SDN Integration – None
- The appropriate vCenter “Data Center”
- Default Network IP Address Management – Static
On the next screen, we define the IP address pool for the Service Engines.
- Select the Management Network (the Service Engines' management interfaces will connect to this network)
- Enter the IP subnet
- Enter free IPs into the IP Address Pool section
- Enter the default gateway
Select No for configuring multiple tenants. Now we're ready to get into the NSX ALB configuration.
Create IPAM Profile
IPAM will be used to assign VIPs to virtual services, Kubernetes control planes, and applications running inside pods. To create the IPAM profile, go to: Templates -> Profiles -> IPAM/DNS Profiles
- Assign a name to the profile; this IPAM profile will serve the “frontend” network
- Select Type – “Avi Vantage IPAM“
- Cloud for Usable Network – choose “Default-Cloud“
- Usable Network – choose the port group, in my case “frontend” (all vCenter port groups are populated automatically by vCenter discovery)
Create and configure a DNS profile as below (this is optional):
Go to “Infrastructure”, click “Cloud”, edit “Default-Cloud”, and update the IPAM Profile and DNS Profile fields with the IPAM and DNS profiles we created above.
Configure the VIP Network
On the NSX ALB console, go to “Infrastructure” and then “Networks”; this displays all the networks discovered by NSX ALB. Select the “frontend” network and click Edit.
- Click “Add Subnet“
- Enter the subnet, in my case 192.168.117.0/24
- Click Static IP Address pool
- Ensure “Use Static IP Address for VIPs and SE” is selected
- Enter the IP address pool, in my case 192.168.117.100-192.168.117.200
- Click Save
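The steps above can be sanity-checked in a few lines: the static pool must be correctly ordered and both ends must fall inside the VIP subnet, or the Service Engines and VIPs won't get addresses. The values below are the ones used in this lab.

```python
import ipaddress

# VIP network and static pool from the "frontend" configuration above.
vip_net = ipaddress.ip_network("192.168.117.0/24")
pool_start = ipaddress.ip_address("192.168.117.100")
pool_end = ipaddress.ip_address("192.168.117.200")

# Both ends of the pool must be inside the VIP subnet, in order.
assert pool_start in vip_net and pool_end in vip_net
assert pool_start < pool_end

pool_size = int(pool_end) - int(pool_start) + 1
print(f"{pool_size} addresses available for VIPs and SEs")  # 101
```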
Create New Controller Certificate
The default AVI certificate doesn't contain an IP SAN and can't be used by vCenter/Tanzu to connect to AVI, so we need to create a custom controller certificate and use it during Tanzu management plane deployment. Let's create the controller certificate by going to Templates -> Security -> SSL/TLS Certificates -> Create -> Controller Certificate
Complete the page with the required information and make sure the “Subject Alternative Name (SAN)” field contains the NSX ALB controller IP/cluster IP or hostname.
Then go to Administration -> Settings -> Access Settings and edit System Access Settings:
Delete all the certificates in the SSL/TLS Certificate field and choose the certificate we created in the section above.
Go to Templates -> Security -> SSL/TLS Certificates and copy the certificate we created; we will need it while enabling the Tanzu management plane.
Since the workload network (192.168.116.0/24) is on a different subnet from the VIP network (192.168.117.0/24), we need to add a static route in the NSX ALB controller. Go to the Infrastructure page, navigate to Routing and then to Static Route. Click the Create button and create the static routes accordingly.
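The reasoning behind this route can be sketched as follows: a static route is only needed when the destination subnet is not on-link for the Service Engines. This sketch assumes the 192.168.117.0/24 frontend subnet configured earlier; the gateway address is a hypothetical next hop on that network.

```python
import ipaddress

# Service Engines sit on the VIP network and must reach workload nodes.
workload_net = ipaddress.ip_network("192.168.116.0/24")
vip_net = ipaddress.ip_network("192.168.117.0/24")
# Assumed next-hop gateway on the VIP network (hypothetical address).
vip_gateway = ipaddress.ip_address("192.168.117.1")

# A static route is only required when the destination is off-link.
needs_route = not workload_net.subnet_of(vip_net)
if needs_route:
    print(f"route add {workload_net} via {vip_gateway}")
```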
Enable Tanzu Control Plane (Workload Management)
I am not going to go through the full deployment of Workload Management; the steps are similar to those detailed HERE. However, a few steps are different:
- On page 6, choose Type = AVI as your load balancer type.
- No load balancer IP address range is required; this is now provided by NSX ALB.
- The certificate to provide is the NSX ALB certificate we created in the previous step.
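Once Workload Management is enabled, a service of type LoadBalancer receives its external IP from the frontend VIP pool via the NSX ALB IPAM profile. A minimal manifest to test this looks like the following (the service name and selector are hypothetical):

```yaml
# Hypothetical test service; the external IP is assigned by NSX ALB's
# IPAM from the "frontend" VIP pool (192.168.117.100-192.168.117.200).
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: nginx          # hypothetical selector
  ports:
  - port: 80
    targetPort: 80
```

After applying it, `kubectl get svc nginx-lb` should show an EXTERNAL-IP from the VIP range, and the corresponding virtual service appears in the NSX ALB dashboard.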
The new NSX Advanced Load Balancer is far superior to HA-Proxy, especially in provider environments. Providers can deploy, offer, and manage Kubernetes clusters with a VMware-supported load balancer type, and although the configuration requires a few additional steps, it is very simple to set up. The visibility provided into the health and usage of the virtual services will be extremely beneficial for day-2 operations, and should provide great insights for providers responsible for provisioning and managing Kubernetes distributions running on vSphere. Feel free to share any feedback…