In this blog post I will deploy a load balancer into a tenant organization VDC Kubernetes cluster that was deployed by the Cloud Director Container Service Extension (CSE).
What is a Load Balancer in Kubernetes?
To understand load balancing on Kubernetes, we first need to understand some Kubernetes basics:
- A “pod” in Kubernetes is a set of containers that are related in terms of their function, and a “service” is a set of related pods that provide the same function. This level of abstraction insulates the client from the containers themselves. Pods can be created and destroyed by Kubernetes automatically, and since every new pod is assigned a new IP address, pod IP addresses are not stable; direct communication with pods is therefore not generally possible. Services, however, have their own relatively stable IP addresses, so a request from an external resource is made to a service rather than to a pod, and the service then dispatches the request to an available pod.
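As a minimal sketch of this abstraction (the `web` name, label, and port numbers here are hypothetical, not taken from the cluster in this post), a Service selects pods by label and gives them one stable virtual IP:

```yaml
# Hypothetical example: a Service fronting a set of web pods.
# Clients talk to the stable Service IP; traffic is forwarded to
# any ready pod whose labels match the selector below.
apiVersion: v1
kind: Service
metadata:
  name: web          # assumed name, for illustration only
spec:
  selector:
    app: web         # matches pods labelled app=web
  ports:
  - port: 80         # port exposed on the Service IP
    targetPort: 8080 # port the containers actually listen on
```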
An external load balancer applies logic that ensures the optimal distribution of these requests. To create a load balancer, your cluster must be deployed in a Cloud Director based cloud; then follow the steps below to configure a load balancer for your Kubernetes cluster:
MetalLB Load Balancer
MetalLB hooks into your Kubernetes cluster and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters. For more information, refer here
Prerequisites
MetalLB requires the following prerequisites to function:
- A CSE Kubernetes cluster, running Kubernetes 1.13.0 or later.
- A cluster network configuration that can coexist with MetalLB.
- Some IPv4 addresses for MetalLB to hand out.
- Here is my CSE cluster info; this is the cluster I will be using for this demo:
MetalLB Load Balancer Deployment
MetalLB deployment is a simple three-step process:
- Create a new namespace:

#kubectl create ns metallb-system

- The below command will deploy MetalLB to your cluster, under the metallb-system namespace:

#kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml

This will create the following components:

- The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
- The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.
- Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.

- On the first install only, create the memberlist secret, which the speaker pods use to secure their communication:

#kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
NOTE – I am accessing my CSE cluster using NAT; that is the reason I am using “--insecure-skip-tls-verify”
MetalLB Layer 2 Configuration
The installation manifest does not include a configuration file. MetalLB’s components will still start, but will remain idle until you define and deploy a configmap. The specific configuration depends on the protocol you want to use to announce service IPs. Layer 2 mode is the simplest to configure; in many cases you don’t need any protocol-specific configuration, only IP addresses.
- The following configuration map gives MetalLB control over IPs from 192.168.98.220 to 192.168.98.250, and configures Layer 2 mode:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.98.220-192.168.98.250
This completes the installation and configuration of the load balancer. Let’s go ahead and publish an application using the Kubernetes service type “LoadBalancer”; CSE and MetalLB will take care of the rest.
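As a sketch of what that looks like (the service and app names here are hypothetical), a Service of type LoadBalancer gets an external IP from the pool configured above; MetalLB 0.9.x also honours an explicit loadBalancerIP if you request a specific address from the configured range:

```yaml
# Hypothetical Service: MetalLB assigns it an external IP from
# the 192.168.98.220-192.168.98.250 pool defined in the configmap.
apiVersion: v1
kind: Service
metadata:
  name: my-app          # assumed name, for illustration only
spec:
  type: LoadBalancer
  # Optional: pin a specific address from the pool instead of
  # letting MetalLB pick one automatically.
  loadBalancerIP: 192.168.98.221
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```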
Deploy an Application
Before deploying the application, I want to show my Cloud Director network topology, where these container workloads are deployed and Kubernetes services are created. Here we have one red segment (192.168.98.0/24) for container workloads, where CSE has deployed the Kubernetes worker nodes, and on the same network we have deployed our MetalLB load balancer.
Kubernetes pods will be created on the Weave network, which is the internal software-defined networking for CSE, and services will be exposed using the load balancer, which is configured with the OrgVDC network.
Let’s get started. We are going to use the “guestbook” application, which uses Redis to store its data. It writes its data to a Redis master instance and reads data from multiple Redis worker (slave) instances. Let’s go ahead and deploy it:
- The first step is to deploy the Redis master:
- #kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
- The guestbook application needs to communicate with the Redis master to write its data. You need to apply a Service to proxy the traffic to the Redis master Pod. A Service defines a policy to access the Pods.
- #kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml
- Create and run the Redis slave deployment:
- #kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml
- The guestbook application needs to communicate with the Redis slaves to read data. To make the Redis slaves discoverable, you need to set up a Service. A Service provides transparent load balancing to a set of Pods.
- #kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-service.yaml
- The guestbook application has a web frontend, written in PHP, serving the HTTP requests. It is configured to connect to the redis-master Service for write requests and the redis-slave Service for read requests.
- #kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
- Since we want guests to be able to access this guestbook, we must configure the frontend Service to be externally visible, so a client can request the Service from outside the container cluster. In this case we are going to expose the Service through LoadBalancer.
- Inside the YAML, change type: NodePort to type: LoadBalancer.
- #kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
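After that edit, the frontend Service looks roughly like this (a minimal sketch; the selector labels are assumed to match the guestbook frontend pods and the exact published manifest may differ):

```yaml
# Frontend Service after changing the type (sketch, not the
# verbatim upstream manifest).
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer   # was: NodePort
  selector:
    app: guestbook     # assumed labels on the frontend pods
    tier: frontend
  ports:
  - port: 80
```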
NOTE – All of the above steps are covered in detail on the Kubernetes.io website – here is the link
Accessing Application
To access the guestbook Service, you need to find the external IP of the Service you just set up by running the command:
- #kubectl get services
- Go back to your organization VDC and create a NAT rule, so that the service can be accessed using a routable/public IP.
- Copy the IP address in EXTERNAL-IP column, and load the page in your browser:
It is a very easy and simple process to deploy and access your containerised applications running on the secure and easy-to-use VMware Cloud Director. Go ahead and start running containerised apps with upstream Kubernetes on Cloud Director.
Stay tuned! In the next post I will be deploying Ingress on a CSE cluster.