Ingress on Cloud Director Container Service Extension

In this blog post I will be deploying an Ingress controller, along with the load balancer that was deployed in the previous post, into a tenant organization VDC Kubernetes cluster which has been deployed by Cloud Director Container Service Extension.

What is Ingress in Kubernetes

“NodePort” and “LoadBalancer” let you expose a service by specifying that value in the service’s type. Ingress, on the other hand, is a completely independent resource from your service. You declare, create and destroy it separately from your services.

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.
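
To make this concrete, here is a minimal, illustrative Ingress resource; the hostname, service name and port are placeholder values rather than anything deployed in this post (clusters on Kubernetes 1.19 or later should use the networking.k8s.io/v1 schema instead):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com          # name-based virtual hosting
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service   # Service that receives the routed traffic
          servicePort: 80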


Pre-requisite

Before we begin we’ll need to have a few pieces already in place:

  • A Kubernetes cluster (See Deployment Options for provider specific details)
  • kubectl configured with admin access to your cluster
  • RBAC must be enabled on your cluster

Install Contour

To install Contour, run:

  • #kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

This command creates:

  • A new namespace projectcontour
  • A Kubernetes Daemonset running Envoy on each node in the cluster listening on host ports 80/443

Now we need to retrieve the external address assigned to Contour’s Envoy service by the load balancer that we deployed in the previous post. The “External IP” comes from the range of IP addresses that we configured for the load balancer; we will NAT this IP on the VDC Edge Gateway so that the service can be reached from outside/the internet. To get the LB IP, run the command shown below.
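
A minimal sketch of that lookup, assuming the quickstart manifest exposes Envoy through a Service named envoy in the projectcontour namespace (as the current Contour quickstart does):

#kubectl get -n projectcontour service envoy -o wide   # EXTERNAL-IP column shows the LB address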

Deploy an Application

Next we need to deploy at least one Ingress object before Contour can serve traffic. Note that as a security feature, Contour does not expose a port to the internet unless there’s a reason it should. A great way to test your Contour installation is to deploy a sample application behind an Ingress, which is what we will do next.

In this example we will deploy a simple web application, configure load balancing for that application using the Ingress resource, and access it using the load balancer IP/FQDN. This application is hosted on GitHub and can be downloaded from Here. Once downloaded:

  • Create the coffee and the tea deployments and services (see the example commands after this list)
  • Create a secret with an SSL certificate and a key
  • Create an Ingress resource
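
A sketch of those three steps, assuming the downloaded manifests keep the file names used by the upstream “cafe” example (cafe.yaml, cafe-secret.yaml, cafe-ingress.yaml); adjust them to whatever your download actually contains:

#kubectl create -f cafe.yaml          # coffee and tea deployments and services
#kubectl create -f cafe-secret.yaml   # secret holding the SSL certificate and key
#kubectl create -f cafe-ingress.yaml  # Ingress resource with the routing rules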

This completes the deployment of the application.

Test the Application

To access the application, browse to the coffee and the tea services from a desktop that has access to the service network. You will also need to add the hostname/IP to your /etc/hosts file or your DNS server.

  • To get Coffee, or if you prefer Tea, browse to the corresponding service URL; example requests are shown below.
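
An illustrative test from that desktop, assuming the sample Ingress uses the host cafe.example.com with /coffee and /tea paths (as the upstream example does) and that <LB-IP> is the NATed load balancer address:

#curl --resolve cafe.example.com:443:<LB-IP> https://cafe.example.com/coffee --insecure
#curl --resolve cafe.example.com:443:<LB-IP> https://cafe.example.com/tea --insecure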

This completes the installation and configuration of Ingress on VMware Cloud Director Container Service Extension. Contour is VMware’s open-source Ingress controller; it offers a rich feature set, and more details can be found Here.


Load Balancer for Cloud Director Container Service Extension

In this blog post I will be deploying a load balancer into a tenant organization VDC Kubernetes cluster which has been deployed by Cloud Director Container Service Extension.

What is LB in Kubernetes?

To understand load balancing on Kubernetes, we first need to understand some Kubernetes basics:

  • A “pod” in Kubernetes is a set of containers that are related in terms of their function, and a “service” is a set of related pods that have the same set of functions. This level of abstraction insulates the client from the containers themselves. Pods can be created and destroyed by Kubernetes automatically, and since every new pod is assigned a new IP address, pod IP addresses are not stable; therefore, direct communication with pods is not generally possible. However, services have their own IP addresses which are relatively stable; thus, a request from an external resource is made to a service rather than a pod, and the service then dispatches the request to an available pod.

An external load balancer applies logic that ensures the optimal distribution of these requests. To create a load balancer, your clusters must be deployed into a Cloud Director based cloud; follow the steps below to configure a load balancer for your Kubernetes cluster:


MetalLB Load Balancer

MetalLB hooks into your Kubernetes cluster and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in such clusters. For more information, refer here.

Pre-requisite

MetalLB requires the following pre-requisites to function:

  • A CSE Kubernetes cluster, running Kubernetes 1.13.0 or later.
  • A cluster network configuration that can coexist with MetalLB.
  • Some IPv4 addresses for MetalLB to hand out.
  • Here is my CSE cluster info; this is the cluster I will be using for this demo (see the command below):
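
A minimal way to pull up that cluster info, assuming your kubeconfig already points at the CSE cluster:

#kubectl get nodes -o wide   # lists master/worker nodes, their internal IPs and Kubernetes version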

MetalLB Load Balancer Deployment

MetalLB deployment is a simple three-step process; follow the steps below:

  • Create a new namespace as below:
    • #kubectl create ns metallb-system
  • The command below deploys MetalLB to your cluster under the metallb-system namespace:
    • #kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
    • This creates the metallb-system/controller deployment (the cluster-wide controller that handles IP address assignments), the metallb-system/speaker daemonset (the component that speaks the protocol(s) of your choice to make the services reachable), and the service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.
  • Finally, create the memberlist secret required by the speakers:
    • #kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

NOTE – I am accessing my CSE cluster using NAT; that is the reason I am using “--insecure-skip-tls-verify”.

MetalLB Layer 2 Configuration

The installation manifest does not include a configuration file. MetalLB’s components will still start, but will remain idle until you define and deploy a configmap. The specific configuration depends on the protocol you want to use to announce service IPs. Layer 2 mode is the simplest to configure and in many cases, you don’t need any protocol-specific configuration, only IP addresses.

  • The following ConfigMap gives MetalLB control over IPs from 192.168.98.220 to 192.168.98.250, and configures Layer 2 mode:
    • apiVersion: v1
      kind: ConfigMap
      metadata:
        namespace: metallb-system
        name: config
      data:
        config: |
          address-pools:
          - name: default
            protocol: layer2
            addresses:
            - 192.168.98.220-192.168.98.250
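
To apply it, save the manifest to a file (the file name below is just an example) and create it in the cluster:

#kubectl apply -f metallb-config.yaml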

This completes the installation and configuration of the load balancer. Let’s go ahead and publish an application using a Kubernetes service of “type=LoadBalancer”; CSE and MetalLB will take care of the rest.
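
For reference, a service of that type looks like the illustrative manifest below; the names and ports are placeholders, and MetalLB will assign the external IP from the address pool configured above:

apiVersion: v1
kind: Service
metadata:
  name: example-frontend
spec:
  type: LoadBalancer        # MetalLB hands out an IP from 192.168.98.220-250
  selector:
    app: example-frontend
  ports:
  - port: 80
    targetPort: 8080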

Deploy an Application

Before deploying the application, I want to show my Cloud Director network topology where these container workloads are deployed and Kubernetes services are created. Here we have one Red segment (192.168.98.0/24) for container workloads, where CSE has deployed the Kubernetes worker nodes, and on the same network we have deployed our “MetalLB” load balancer.

Kubernetes pods will be created on the Weave network, which is the internal software-defined networking for CSE, and services will be exposed using the load balancer, which is configured with the “OrgVDC” network.


Let’s get started. We are going to use the “guestbook” application, which uses Redis to store its data: it writes its data to a Redis master instance and reads data from multiple Redis worker (slave) instances. Let’s go ahead and deploy it (a sketch of the commands is shown below):
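
A minimal sketch of the deployment, assuming you have downloaded the guestbook manifests from the linked tutorial and kept its file names; the frontend Service must be of type LoadBalancer so that MetalLB assigns it an external IP:

#kubectl apply -f redis-master-deployment.yaml
#kubectl apply -f redis-master-service.yaml
#kubectl apply -f redis-slave-deployment.yaml
#kubectl apply -f redis-slave-service.yaml
#kubectl apply -f frontend-deployment.yaml
#kubectl apply -f frontend-service.yaml   # Service with type: LoadBalancer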

NOTE – All of the above steps are covered in detail on the Kubernetes.io website – here is the Link.

Accessing Application

To access the guestbook Service, you need to find the external IP of the Service you just set up by running the command:

  • #kubectl get services
  • Go back to your organization VDC and create a NAT rule so that the service can be accessed using a routable/public IP.
  • Copy the IP address in the EXTERNAL-IP column and load the page in your browser.

It is a very easy and simple process to deploy and access your containerised applications running on the most secure and easy-to-use VMware Cloud Director. Go ahead and start running containerised apps with upstream Kubernetes on Cloud Director.

Stay tuned! In the next post I will be deploying Ingress on a CSE cluster.

Installing Tanzu Kubernetes Grid

This blog post helps you to create Tanzu Kubernetes Grid clusters running on either VMware Cloud on AWS or vSphere 6.7 Update 3 infrastructure.

NOTE – Tanzu Kubernetes Grid Plus is the only supported version on VMware Cloud on AWS. You can deploy Kubernetes clusters on your VMC clusters using Tanzu Kubernetes Grid Plus. Please refer to KB 78173 for the detailed support matrix.

Pre-requisite

On your vSphere/VMware Cloud on AWS instance ensure that you have the following objects in place:

  •  A resource pool in which to deploy the Tanzu Kubernetes Grid Instance (TKG)
  • A VM folder in which to collect the Tanzu Kubernetes Grid VMs (TKG)
  • Create a network segment with DHCP enabled
  • Firewall rules on compute segment.
  • This is not a must, but I prefer to have a Linux-based virtual machine called “cli-vm”, which we will use as a “bootstrap environment” to install the Tanzu Kubernetes Grid CLI binaries for Linux

Installing Kubectl

kubectl is a command-line tool for controlling Kubernetes clusters that we will use for managing the clusters deployed by TKG. To install the latest version of kubectl, follow the steps below on “cli-vm”:

#curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
#chmod +x kubectl
#mv kubectl /usr/local/bin

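To confirm the binary is on the PATH and working, you can check the client version:

#kubectl version --client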

Installing Docker

Docker is a daemon-based container engine which allows us to deploy applications inside containers. Since my VM runs CentOS, I only had to start the Docker service; to start it, follow the steps below on cli-vm:

#systemctl start docker
#systemctl enable docker

To view the status of the daemon, run the following command:

#systemctl status docker

Install Tanzu Kubernetes Grid CLI

To use Tanzu Kubernetes Grid, you download and run the Tanzu Kubernetes Grid CLI on a local system; in our case we will install it on our “cli-vm”:

  • Get the tkg CLI binary from the GA build page:
    • For Linux platforms, download tkg-linux-amd64-v1.0.0_vmware.1.gz.
    • For Mac OS platforms, download tkg-darwin-amd64-v1.0.0_vmware.1.gz.
  • Unzip the file:
    • gunzip ./tkg-linux-amd64-v1.0.0_vmware.1.gz
  • Make the unzipped binary executable and copy it to /usr/local/bin:
    • chmod +x ./tkg-linux-amd64-v1.0.0_vmware.1
    • mv ./tkg-linux-amd64-v1.0.0_vmware.1 /usr/local/bin/tkg
  • Check tkg env is ready:
    • # tkg version
      Client:
      Version: v1.0.0
      Git commit: 60f6fd5f40101d6b78e95a33334498ecca86176e
  • The /root/.tkg folder will be auto-created for the tkg config file


Create an SSH Key Pair

In order for Tanzu Kubernetes Grid VMs to run tasks in vSphere, you must provide the public key part of an SSH key pair to Tanzu Kubernetes Grid when you deploy the management cluster. You can use a tool such as ssh-keygen to generate a key pair.

  • On the machine on which you will run the Tanzu Kubernetes Grid CLI, run the following ssh-keygen command.
    • #ssh-keygen -t rsa -b 4096 -C "email@example.com"
  • At the prompt “Enter file in which to save the key (/root/.ssh/id_rsa):”, press Enter to accept the default.
  • Enter and repeat a password for the key pair.
  • Open the file .ssh/id_rsa.pub in a text editor, and copy its contents so that you can paste them when you deploy the management cluster (see the example after this list).
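
Equivalently, you can print the public key to the terminal and copy it from there:

#cat /root/.ssh/id_rsa.pub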

Import OVA & Create Template in VC

Before we can deploy a Tanzu Kubernetes Grid management cluster or Tanzu Kubernetes clusters to vSphere, we must provide a base OS image template to vSphere. Tanzu Kubernetes Grid creates the management cluster and Tanzu Kubernetes cluster node VMs from this template. Tanzu Kubernetes Grid provides a base OS image template in OVA format for you to import into vSphere. After importing the OVA, you must convert the VM into a vSphere VM template.

  • TKG needs two OVAs: photon-3-v1.17.3+vmware.2.ova and photon-3-capv-haproxy-v0.6.3+vmware.1.ova.
  • Convert both of the above OVA VMs to templates and put them in the VM folder.
  • Here are the high-level steps:
    • In the vSphere Client, right-click on the cluster and select Deploy OVF Template.
    • Choose Local file, click the button to upload files, and navigate to the photon-3-v1.17.3_vmware.2.ova file on your local machine.
    • Follow the on-screen instructions to deploy a VM from the OVA template:
      • Choose the appliance name
      • Choose the destination datacenter or folder
      • Choose the destination host, cluster, or resource pool
      • Accept the EULA
      • Select the disk format and datastore
      • Select the network for the VM to connect to
      • Click Finish to deploy the VM
      • Right-click on the VM, select Template and click Convert to Template

    • Follow the same steps for the HA Proxy OVA – photon-3-capv-haproxy-v0.6.3+vmware.1.ova

Installing TKG Management Cluster

Once the pre-work is done, follow the steps below to create the Tanzu management cluster:

  • On the machine on which you downloaded and installed the Tanzu Kubernetes Grid CLI, run the following command:
    • #tkg init --ui
  • If your cli-vm is running an X11 desktop, this will open a browser on the loopback IP; if not, you can set up SSH port forwarding using PuTTY on your Windows desktop (see the sketch after this list).
  • Once you have successfully opened the connection, open your web browser and navigate to http://127.0.0.1:8081, where you should see the TKG installer page.
  • Enter your IaaS provider details where TKG can create the K8s cluster; in this case we need to enter the VMC vCenter Server information and then click the “Connect” button. Accept the notifications that appear, and the “Connect” button becomes “Connected”. From here you just need to select where you want to deploy TKG: fill in the Datacenter and the SSH key which we created in the previous steps.
  • Select the Development or Production flavour and specify an instance type, then give the K8s management cluster a name and select the API server load balancer (specify the HA Proxy VM template which we uploaded in the previous step).
  • Select the Resource Pool (TKG), VM Folder (TKG) and WorkloadDatastore
  • Select the Network Name and leave the others as default
  • Select the K8s PhotonOS template; this is the VM template that we uploaded in the previous steps.
  • Review all settings to ensure they match your selections and then click “Deploy Management Cluster”, which will begin the deployment process.
  • The entire process of setting up the TKG management cluster takes around 5 to 6 minutes. Once the management cluster has been deployed, you can close the web browser, go back to your SSH session and stop the TKG UI.
  • You can verify that all pods are up and running with the command:
    • #kubectl get pods -A
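
If you are not using PuTTY, an equivalent command-line sketch of the port forwarding is shown below; it assumes the installer UI is listening on port 8081 on the cli-vm (as used above) and that you can SSH to the VM as root:

#ssh -L 8081:127.0.0.1:8081 root@cli-vm   # then browse to http://127.0.0.1:8081 on your desktop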

Deploy Tanzu Kubernetes workload Cluster

So let’s deploy a Tanzu Kubernetes cluster. Now that we have completed the Tanzu Kubernetes Grid management cluster, we can use the “tkg” CLI to deploy Tanzu Kubernetes clusters from the management cluster to vSphere/VMware Cloud on AWS. Run the following command to deploy a TKG cluster called “avnish” or any other name you wish to use:

  • #tkg create cluster avnish --plan=dev

The above command will deploy a Tanzu Kubernetes cluster with the minimum default configuration. I am also deploying another cluster by specifying a few more parameters.

Deploy a Highly Available Kubernetes Cluster

This command will deploy a highly available Kubernetes cluster:

  • #tkg create cluster my-cluster --plan dev --controlplane-machine-count 3 --worker-machine-count 5
  • Once the TKG cluster is up and running, we can run the following commands to get cluster information and credentials:

    • #tkg get clusters
    • #tkg get credential <clustername>
  • To get and switch the kubectl context for the TKG clusters, run the normal Kubernetes commands:
    • #kubectl config get-contexts
    • #kubectl config use-context avnish-admin@avnish
  • To get the Kubernetes node details, run the usual Kubernetes commands (see the example after this list).
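
For example, once the cluster context is active, a quick node check might look like this:

#kubectl get nodes -o wide   # shows control plane and worker nodes with their IPs and versions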

Scale TKG Clusters

After we create a Tanzu Kubernetes cluster, we can scale it up or down by increasing or reducing the number of node VMs that it contains.

  • Scale up or down – to scale a cluster, use the #tkg scale cluster command (an example follows this list):
    • to change the number of control plane nodes, use --controlplane-machine-count
    • to change the number of worker nodes, use --worker-machine-count
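
A sketch of a scale operation, reusing the “my-cluster” name from the earlier deployment; the node counts are arbitrary values chosen for illustration:

#tkg scale cluster my-cluster --controlplane-machine-count 3 --worker-machine-count 7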

Installing Octant on TKG Clusters

Octant is an open-source, developer-centric web interface for Kubernetes that lets you inspect a Kubernetes cluster and its applications; it is simple to install and easy to use.

  • Here is more information on installation and configuration
  • You need to install Octant on your cli-vm and then proxy it so that you can open its web interface from your desktop (a sketch is shown after this list).
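
A minimal sketch of that proxying, assuming Octant is already installed on the cli-vm and listens on its default local address of 127.0.0.1:7777; the SSH tunnel makes it reachable from your desktop browser:

#octant                                    # on the cli-vm, against the current kubeconfig context
#ssh -L 7777:127.0.0.1:7777 root@cli-vm    # on your desktop, then browse to http://127.0.0.1:7777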

This completes the installation and configuration of Tanzu Kubernetes Grid. Once you have the management cluster ready, go ahead and deploy containerised applications on these clusters. TKG gives you a lot of flexibility in deploying, scaling and managing multiple TKG workload clusters, which can be assigned per department/project.