Category: PKS

  • Cloud Native Runtimes for Tanzu

    Dynamic Infrastructure

    This is an IT concept whereby the underlying hardware and software can respond dynamically and more efficiently to changing levels of demand. Modern cloud infrastructure built on VMs and containers requires automated:

    • Provisioning, Orchestration, Scheduling
    • Service Configuration, Discovery and Registry
    • Network Automation, Segmentation, Traffic Shaping and Observability

    What is Cloud Native Runtimes for Tanzu?

    Cloud Native Runtimes for VMware Tanzu is a Kubernetes-based platform for deploying and managing modern serverless workloads. It is based on Knative and runs on a single Kubernetes cluster, automating all aspects of the dynamic infrastructure requirements described above.

    Serverless ≠ FaaS

    Serverless                   FaaS
    Multi-threaded (server)      Cloud provider specific
    Cloud provider agnostic      Single-threaded functions
    Long-lived (days)            Short-lived (minutes)
    Offers more flexibility      Managing a large number of functions can be tricky

    Cloud Native Runtime Installation

    Command Line Tools Required for Cloud Native Runtimes for Tanzu

    The following command line tools must be downloaded and installed on the client workstation from which you will connect to and manage the Tanzu Kubernetes cluster and Tanzu Serverless.

    kubectl (Version 1.18 or newer)

    • Using a browser, navigate to the Kubernetes CLI Tools (available in vCenter Namespace) download URL for your environment.
    • Select the operating system and download the vsphere-plugin.zip file.
    • Extract the contents of the ZIP file to a working directory. The vsphere-plugin.zip package contains two executables: kubectl and the vSphere Plugin for kubectl. kubectl is the standard Kubernetes CLI; kubectl-vsphere is the vSphere Plugin for kubectl, which helps you authenticate with the Supervisor Cluster and Tanzu Kubernetes clusters using your vCenter Single Sign-On credentials.
    • Add the location of both executables to your system’s PATH variable.
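If you script this setup, the PATH step can be sketched as follows. The extraction path ~/vsphere-plugin is a hypothetical example; the login command is shown commented, with placeholder server and username:

```shell
# Example only: assumes the ZIP was extracted to ~/vsphere-plugin (hypothetical path).
export PATH="$HOME/vsphere-plugin/bin:$PATH"

# Then authenticate with your vCenter Single Sign-On credentials
# (server address and username below are placeholders):
#   kubectl vsphere login --server=<supervisor-cluster-ip> \
#     --vsphere-username administrator@vsphere.local
```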

    kapp (Version 0.34.0 or newer)

    kapp is a lightweight, application-centric tool for deploying resources on Kubernetes. Being both explicit and application-centric, it provides an easier way to deploy and view all resources created together, regardless of which namespace they are in. Download and install as below:

    ytt (Version 0.30.0 or newer)

    ytt is a templating tool that understands YAML structure. Download, rename, and install as below:

    kbld (Version 0.28.0 or newer)

    kbld orchestrates image builds (delegating to tools such as Docker, pack, and kubectl-buildkit) and registry pushes; it works with the local Docker daemon and remote registries, for both development and production use cases.

    kn

    The Knative client kn is your door to the Knative world. It allows you to create Knative resources interactively from the command line or from within scripts. Download, rename, and install as below:
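All four CLIs install the same way on Linux: fetch the binary, make it executable, and move it onto your PATH. A minimal sketch follows; the GitHub release URLs and version tags are my assumptions, so verify them against each project's release page (or use the binaries from VMware Tanzu Network):

```shell
# Sketch: fetch the Carvel tools; URLs and versions are assumptions, verify
# them against the project release pages before use.
BIN_DIR=${BIN_DIR:-/usr/local/bin}
for spec in kapp:v0.34.0 ytt:v0.30.0 kbld:v0.28.0; do
  name=${spec%%:*}; ver=${spec##*:}
  url="https://github.com/vmware-tanzu/carvel-${name}/releases/download/${ver}/${name}-linux-amd64"
  curl -fsSL -o "/tmp/${name}" "$url" \
    && chmod +x "/tmp/${name}" \
    && mv "/tmp/${name}" "${BIN_DIR}/${name}" \
    || echo "could not fetch ${name}; download it manually from ${url}"
done
# kn lives in the knative org (the version tag is an assumption as well):
curl -fsSL -o /tmp/kn \
  "https://github.com/knative/client/releases/download/v0.21.0/kn-linux-amd64" \
  && chmod +x /tmp/kn && mv /tmp/kn "${BIN_DIR}/kn" \
  || echo "could not fetch kn; download it manually"
```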

    Download Cloud Native Runtimes for Tanzu (Beta)

    To install Cloud Native Runtimes for Tanzu, you must first download the installation package from VMware Tanzu Network:

    1. Log into VMware Tanzu Network.
    2. Navigate to the Cloud Native Runtimes for Tanzu release page.
    3. Download the serverless.tgz archive and release.lock
    4. Create a directory named tanzu-serverless.
    5. Extract the contents of serverless.tgz into your tanzu-serverless directory:
    #tar xvf serverless.tgz -C tanzu-serverless

    Install Cloud Native Runtimes for Tanzu on Tanzu Kubernetes Grid Cluster

    For this installation I am using a TKG cluster deployed on vSphere 7 with Tanzu. To install Cloud Native Runtimes for Tanzu on Tanzu Kubernetes Grid, first target the cluster you want to use and verify that you are targeting the correct Kubernetes cluster by running:

    #kubectl cluster-info

    Run the installation script from the tanzu-serverless directory and wait for it to complete:

    #./bin/install-serverless.sh

    During my installation I faced a couple of issues; simply rerunning the installation script automatically fixed them.

    Verify Installation

    To verify that your Serving installation was successful, create an example Knative service. For information about Knative example services, see Hello World – Go in the Knative documentation. Let’s deploy a sample web application using the kn CLI. Run:

    #kn service create hello --image gcr.io/knative-samples/helloworld-go -n default

    Take the external URL from the output above and either add the Contour IP with the host name to your local hosts file or add a DNS entry, then browse to it; if everything is done correctly, your first application is running successfully.

    You can list and describe the service by running command:

    #kn service list -A
    #kn service describe hello -n default

    It looks like everything is up and ready, as we configured it. Other things you can do with the Knative CLI include describing and listing the routes of the app:

    #kn route describe hello -n default

    Create your own app

    This demo used an existing Knative example; why not make our own app from an image? Let’s do it using the YAML below:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: helloworld
      namespace: default
    spec:
      template:
        spec:
          containers:
            - image: gcr.io/knative-samples/helloworld-go
              ports:
                - containerPort: 8080
              env:
                - name: TARGET
                  value: "This is my app"

    Save this as k2.yaml (or any name you like); now let’s deploy this new service using the kubectl apply command:

    #kubectl apply -f k2.yaml

    Next, we can list the service and describe the new deployment, using the name provided in the YAML file:
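For example, with a working cluster context (service name helloworld from the YAML above; the `|| true` guards just let the sketch degrade gracefully without a cluster):

```shell
# Assumes a reachable cluster context; each check is skipped if the CLI is absent.
NS=default
if command -v kn >/dev/null 2>&1; then
  kn service list -n "$NS" || true              # lists helloworld with its URL
  kn service describe helloworld -n "$NS" || true
fi
if command -v kubectl >/dev/null 2>&1; then
  kubectl get ksvc,deployment -n "$NS" || true  # the Knative Service and its deployment
fi
```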

    Finally, browse the URL by going to http://helloworld.default.example.com (you will need to add an entry in DNS or your hosts file).

    This proves your application is running successfully. Cloud Native Runtimes for Tanzu is a great way for developers to move quickly on serverless development, with networking, autoscaling (even to zero), and revision tracking that let users see changes in apps immediately. Go ahead and try this in your lab, and in production once it is GA.

  • Getting Started with Tanzu Basic

    As you modernize your data center to run VMs and containers side by side, you can run Kubernetes as part of vSphere with Tanzu Basic. Tanzu Basic embeds Kubernetes into the vSphere control plane for the best administrative control and user experience: provision clusters directly from vCenter and run containerized workloads with ease. Tanzu Basic is the most affordable edition and includes the components below:

    To install and configure Tanzu Basic without NSX-T, at a high level there are four steps, which I will cover across three blog posts:

    1. vSphere7 with a cluster with HA and DRS enabled should have been already configured
    2. Installation of VMware HA Proxy Load Balancer – Part1
    3. Tanzu Basic – Enable Workload Management – Part2
    4. Tanzu Basic – Building TKG Cluster – Part3

    Deploy VMware HAProxy

    There are a few topologies for setting up Tanzu Basic with vSphere-based networking. For this blog we will deploy the HAProxy VM with three virtual NICs: one on the “Management” network, one on the “Workload” network, and one on the “Frontend” network, which is used by DevOps users; external services will also access HAProxy through virtual IPs on this frontend network.

    Network       Use
    Management    Communicating with vCenter and HAProxy
    Workload      IPs assigned to Kubernetes nodes
    Frontend      DevOps users and external services

    For this blog, I have created three VLAN-based networks with the IP ranges below:

    Network     IP Range            VLAN
    tkgmgmt     192.168.115.0/24    115
    Workload    192.168.116.0/24    116
    Frontend    192.168.117.0/24    117

    Here is the topology diagram; HAProxy has been configured with three NICs, and each NIC is connected to one of the VLANs we created above.

    NOTE – If you want to deep-dive into this networking, refer Here; that blog post describes it very nicely, and I have used the same networking schema in this lab deployment.

    Deploy VMware HA Proxy

    This is not a stock HAProxy; it is a customized build whose Data Plane API is designed to enable Kubernetes workload management with Project Pacific on vSphere 7. VMware HAProxy deployment is very simple: download the OVA directly from Here and follow the same procedure as for any other OVA deployment on vCenter. A few important points are covered below:

    On the Configuration screen, choose “Frontend Network” for the three-NIC deployment topology.

    Next is the Networking section, which is the heart of the solution; here we map the port groups created above to the Management, Workload, and Frontend networks.

    The management network is on VLAN 115; this is the network where the vSphere with Tanzu Supervisor control plane VMs/nodes are deployed.

    The workload network is on VLAN 116; this is where the Tanzu Kubernetes cluster VMs/nodes will be deployed.

    The frontend network is on VLAN 117; this is where the load balancers (Supervisor API server, TKG API servers, TKG LB services) are provisioned. The frontend and workload networks must be routable to each other for successful WCP enablement.

    The next page is the most important: the VMware HAProxy appliance configuration. Provide a root password and enable or disable root login based on your preference. The TLS fields will be automatically generated if left blank.

    In the “network config” section, provide the network details of the VMware HAProxy for the management, workload, and frontend/load balancer networks. These all require static IP addresses in CIDR format; specify a prefix length that matches the subnet mask of each network.

    For Management IP: 192.168.115.5/24 and GW:192.168.115.1

    For Workload IP: 192.168.116.5/24 and GW:192.168.116.1

    For Frontend IP: 192.168.117.5/24 and GW: 192.168.117.1. This is not optional if you selected Frontend in the “Configuration” section.

    In the Load Balancing section, enter the load balancer IP ranges. These IP addresses will be used as virtual IPs by the load balancer, and they come from the frontend network IP range.

    Here I am specifying 192.168.117.32/27; this segment gives me 30 addresses for VIPs for Tanzu management plane access and for applications exposed for external consumption.
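The 30-address figure follows directly from the prefix length; a quick arithmetic check:

```shell
PREFIX=27
# A /27 contains 2^(32-27) = 32 addresses; minus network and broadcast = 30 VIPs.
USABLE=$(( (1 << (32 - PREFIX)) - 2 ))
echo "$USABLE"   # prints 30
```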

    Enter the Data Plane API management port (5556), and enter a username and password for the load balancer Data Plane API.

    Finally, review the summary and click Finish. This deploys the VMware HAProxy LB appliance.

    Once the deployment completes, power on the appliance, SSH into the VM using the management plane IP, and check that all the interfaces have the correct IPs:

    Also check that you can ping the frontend IP range and the other IP ranges. Stay tuned for Part 2.

  • Upgrade Tanzu Kubernetes Grid

    Tanzu Kubernetes Grid makes it very simple to upgrade Kubernetes clusters without impacting control plane availability, and it ensures rolling updates for worker nodes. We just need to run two CLI commands, “tkg upgrade management-cluster” and “tkg upgrade cluster”, to upgrade clusters that we deployed with Tanzu Kubernetes Grid 1.0.0. In this blog post we will upgrade Tanzu Kubernetes Grid from version 1.0.0 to 1.1.0.

    Pre-Requisite

    In this post we are going to upgrade TKG from version 1.0.0 to version 1.1.0; to start, we need to download the new versions of the “tkg” client CLI, the base OS image template, and the API server load balancer image.

    1. Download and Install new version of “TKG” cli on your client VM
      1. For Linux, download VMware Tanzu Kubernetes Grid CLI 1.1.0 Linux and upload it to your client VM.
      2. Unzip using
        1. #gunzip tkg-linux-amd64-v1.1.0-vmware.1.gz
      3. The unzipped file will be named tkg-linux-amd64-v1.1.0-vmware.1
      4. move and rename to “tkg” using
        1. #mv ./tkg-linux-amd64-v1.1.0-vmware.1 /usr/local/bin/tkg
      5. Make it executable using
        1. #chmod +x /usr/local/bin/tkg
      6. Run #tkg version , this should show updated version of “tkg cli” client command line
    2. Download the new version
      1. Download OVA for Node VMs: photon-3-kube-v1.18.2-vmware.1.ova
      2. Download OVA for LB VMs: photon-3-haproxy-v1.2.4-vmware.1.ova
      3. Once downloaded, deploy these OVAs using “Deploy OVF Template” in vSphere.
      4. When the OVA deployment completes, right-click the VM, select “Template”, and click “Convert to Template”.

    Upgrade TKG Management Cluster

    As you know, the management cluster is purpose-built for operating the Tanzu platform and managing the lifecycle of Tanzu Kubernetes clusters, so we need to upgrade the Tanzu management cluster before we upgrade our Kubernetes clusters. This is the most seamless Kubernetes cluster upgrade I have ever done, so let’s get into it:

    1. First, list the TKG management clusters running in the environment; this command displays information about the tkg management clusters:
      1. #tkg get management-cluster
    2. Once you have the name of the management cluster, run the command below to proceed with the upgrade:
      1. #tkg upgrade management-cluster <management-cluster-name>
      3. The upgrade process first upgrades the Cluster API providers for vSphere, then upgrades the Kubernetes version on the control plane and worker nodes of the management cluster.
    3. If everything goes fine, it should take less than 30 minutes to complete the upgrade of the management plane.
    4. Now if you run #tkg get cluster --include-management-cluster, it should show the upgraded version of the management cluster.

    Upgrade Tanzu Kubernetes Cluster

    Now that our management plane is upgraded, let’s go ahead and upgrade the Tanzu Kubernetes clusters.

    1. The process here is the same as for the management cluster; run the command below to proceed with the upgrade:
      1. #tkg upgrade cluster <cluster-name>
      2. If the cluster is not running in the “default” namespace, specify the “--namespace” option (when TKG is part of vSphere 7 with Kubernetes).
      3. The upgrade process upgrades the Kubernetes version across your control plane and worker virtual machines.
      4. Once done, you should see a successful upgrade of the Kubernetes clusters.
    2. Now log in with your credentials and check the Kubernetes version your cluster is running.

    This completes the upgrade process: such an easy way of upgrading Kubernetes clusters without impacting cluster availability.

  • Configuring Ingress Controller on Tanzu Kubernetes Grid

    Contour is an open source Kubernetes ingress controller that provides the control plane for the Envoy edge and service proxy. Contour supports dynamic configuration updates and multi-team ingress delegation out of the box while maintaining a lightweight profile. In this blog post I will deploy the ingress controller along with a load balancer (the LB was deployed in this post). You can also expose the Envoy proxy as a NodePort, which allows you to access your service on each Kubernetes node.

    What is Ingress in Kubernetes

    “NodePort” and “LoadBalancer”  let you expose a service by specifying that value in the service’s type. Ingress, on the other hand, is a completely independent resource to your service. You declare, create and destroy it separately to your services.

    Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give services externally reachable URLs, load-balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.


    Pre-requisite

    Before we begin we’ll need to have a few pieces already in place:

    • A Kubernetes cluster (See Here on How to Deploy TKG)
    • kubectl configured with admin access to your cluster
    • You have downloaded and unpacked the bundle of Tanzu Kubernetes Grid extensions which can be downloaded from here

    Install Contour Ingress Controller

    Contour is an ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. To install Contour, follow the steps below:

    • Move the VMware Tanzu Kubernetes Grid Extensions Manifest 1.1.0, downloaded in the prerequisites, to your client VM and unzip it.
    • You deploy Contour and Envoy directly on Tanzu Kubernetes clusters. You do not need to deploy Contour on management clusters.
    • Set the context of kubectl to the Tanzu Kubernetes cluster on which to deploy Contour.
      • #kubectl config use-context avnish-admin@avnish
    • First, install cert-manager on the Kubernetes cluster:
      • #kubectl apply -f tkg-extensions-v1.1.0/cert-manager/
    • Deploy Contour and Envoy on the cluster using:
      • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/vsphere/

    This completes the installation of the Contour ingress controller on the Tanzu Kubernetes cluster. Let’s deploy an application and test the functionality.

    Deploy a Sample Application

    Next we need to deploy at least one Ingress object before Contour can serve traffic. Note that as a security feature, Contour does not expose a port to the internet unless there’s a reason it should. A great way to test your Contour installation is to deploy an application.

    In this example we will deploy a simple web application and then configure load balancing for that application using the Ingress resource and will access it using load balancer IP/FQDN.This application is available within the same folder which we have downloaded from VMware inside example folder. Let’s deploy the application:

    • Run the command below to deploy the application; it creates a new namespace named “test-ingress”, two services, and one deployment.
      • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/examples/common

    A very simple way of installing the application; now let’s create the Ingress resource.

    Create Ingress Resource

    Let’s imagine a scenario where the “foo” team owns http://foo.bar.com/foo and the “bar” team owns http://foo.bar.com/bar. Considering this scenario:

    • Here is Ingress Resource Definition for our example application:
      • apiVersion: extensions/v1beta1
        kind: Ingress
        metadata:
          name: https-ingress
          namespace: test-ingress
          labels:
            app: hello
        spec:
          tls:
          - secretName: https-secret
            hosts:
              - foo.bar.com
          rules:
          - host: foo.bar.com
            http:
              paths:
              - path: /foo
                backend:
                  serviceName: s1
                  servicePort: 80
              - path: /bar
                backend:
                  serviceName: s2
                  servicePort: 80
    • Let’s deploy it using the command below:
      • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/examples/https-ingress
      • Check the status and grab the external IP address of the Contour “envoy” proxy.
      • Add an /etc/hosts entry mapping that IP address to foo.bar.com.
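Grabbing the external IP can be scripted; a sketch, assuming the TKG extension placed the envoy service in the tanzu-system-ingress namespace (confirm with kubectl get svc -A if yours differs):

```shell
# Assumption: Envoy's LoadBalancer service lives in tanzu-system-ingress.
HOST=foo.bar.com
if command -v kubectl >/dev/null 2>&1; then
  EXTERNAL_IP=$(kubectl get svc envoy -n tanzu-system-ingress \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}') || true
  echo "${EXTERNAL_IP} ${HOST}"   # append this line to /etc/hosts
fi
```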

    Test the Application

    To access the application, browse to the foo and bar services from a desktop that has access to the service network.

    • If you browse /bar, the bar service responds.
    • If you browse /foo, the foo service responds.

    This completes the installation and configuration of Ingress on a VMware Tanzu Kubernetes Grid cluster. Contour is VMware’s open source ingress controller and offers a rich feature set; more details can be found Here. When customers choose the Tanzu portfolio, they get Contour as a VMware-supported component.

  • Ingress on Cloud Director Container Service Extension

    In this blog post I will deploy an ingress controller along with a load balancer (the LB was deployed in the previous post) into a tenant organization VDC Kubernetes cluster that was deployed by Cloud Director Container Service Extension.

    What is Ingress in Kubernetes

    “NodePort” and “LoadBalancer”  let you expose a service by specifying that value in the service’s type. Ingress, on the other hand, is a completely independent resource to your service. You declare, create and destroy it separately to your services.

    Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give services externally reachable URLs, load-balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.


    Pre-requisite

    Before we begin we’ll need to have a few pieces already in place:

    • A Kubernetes cluster (See Deployment Options for provider specific details)
    • kubectl configured with admin access to your cluster
    • RBAC must be enabled on your cluster

    Install Contour

    To install Contour, run:

    • #kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

    This command creates:

    • A new namespace projectcontour
    • A Kubernetes Daemonset running Envoy on each node in the cluster listening on host ports 80/443
    • Now we need to retrieve the external address assigned to Contour by the load balancer that we deployed in the previous post. To get the LB IP, run:
      • #kubectl get service envoy -n projectcontour
    • The “External IP” is from the range of IP addresses that we specified in the LB config; we will NAT this IP on the VDC edge gateway to access it from outside/the internet.

    Deploy an Application

    Next we need to deploy at least one Ingress object before Contour can serve traffic. Note that as a security feature, Contour does not expose a port to the internet unless there’s a reason it should. A great way to test your Contour installation is to deploy an application.

    In this example we will deploy a simple web application, configure load balancing for it using the Ingress resource, and access it using the load balancer IP/FQDN. This application is hosted on GitHub and can be downloaded from Here. Once downloaded:

    • Create the coffee and the tea deployments and services using
    • Create a secret with an SSL certificate and a key
    • Create an Ingress resource
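The three steps above can be sketched as kubectl commands; the manifest file names below are my assumption about the downloaded example, so substitute the ones in your copy:

```shell
# Hypothetical file names; substitute the ones from your downloaded copy.
MANIFESTS="cafe.yaml cafe-secret.yaml cafe-ingress.yaml"
for f in $MANIFESTS; do
  # apply each manifest only when kubectl and the file are both present
  if command -v kubectl >/dev/null 2>&1 && [ -f "$f" ]; then
    kubectl apply -f "$f" || true
  fi
done
```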

    This completes the deployment of the application.

    Test the Application

    To access the application, browse to the coffee and tea services from a desktop that has access to the service network. You will also need to add the hostname/IP to your /etc/hosts file or your DNS server.

    • To get coffee, browse the coffee path.
    • If you prefer tea, browse the tea path.

    This completes the installation and configuration of Ingress on VMware Cloud Director Container Service Extension. Contour is VMware’s open source ingress controller and offers a rich feature set; more details can be found Here.

  • Load Balancer for Cloud Director Container Service Extension

    In this blog post I will deploy a load balancer into a tenant organization VDC Kubernetes cluster that was deployed by Cloud Director Container Service Extension.

    What is a Load Balancer in Kubernetes?

    To understand load balancing on Kubernetes, we first need to understand some Kubernetes basics:

    • A “pod” in Kubernetes is a set of containers that are related in terms of their function, and a “service” is a set of related pods that have the same set of functions. This level of abstraction insulates the client from the containers themselves. Pods can be created and destroyed by Kubernetes automatically, and since every new pod is assigned a new IP address, pod IP addresses are not stable; direct communication between pods is therefore not generally possible. However, services have their own relatively stable IP addresses; thus, a request from an external resource is made to a service rather than a pod, and the service then dispatches the request to an available pod.

    An external load balancer applies logic that ensures the optimal distribution of these requests. To create a load balancer, your clusters must be deployed in a Cloud Director based cloud; follow the steps below to configure a load balancer for your Kubernetes cluster:


    MetalLB Load Balancer

    MetalLB hooks into your Kubernetes cluster and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in your clusters. For more information, refer here.

    Pre-requisite

    MetalLB requires the following prerequisites to function:

    • A CSE Kubernetes cluster, running Kubernetes 1.13.0 or later.
    • A cluster network configuration that can coexist with MetalLB.
    • Some IPv4 addresses for MetalLB to hand out.
    • Here is my CSE cluster info; this is the cluster I will use for this demo:

    MetalLB Load Balancer Deployment

    MetalLB deployment is a very simple three-step process; follow the steps below:

    • Create a new namespace as below:
      • #kubectl create ns metallb-system
    • The command below deploys MetalLB to your cluster, under the metallb-system namespace:
      • #kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
      • This creates the metallb-system/controller deployment (the cluster-wide controller that handles IP address assignments), the metallb-system/speaker daemonset (the component that speaks the protocol(s) of your choice to make the services reachable), and service accounts for the controller and speaker, along with the RBAC permissions the components need to function.
    • Create the memberlist secret used by the speakers:
      • #kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

    NOTE – I am accessing my CSE cluster through NAT; that is why I am using “--insecure-skip-tls-verify”.

    MetalLB LB Layer 2 Configuration

    The installation manifest does not include a configuration file. MetalLB’s components will still start, but will remain idle until you define and deploy a ConfigMap. The specific configuration depends on the protocol you want to use to announce service IPs. Layer 2 mode is the simplest to configure: in many cases you don’t need any protocol-specific configuration, only IP addresses.

    • The following ConfigMap gives MetalLB control over IPs from 192.168.98.220 to 192.168.98.250 and configures Layer 2 mode:
      • apiVersion: v1
        kind: ConfigMap
        metadata:
          namespace: metallb-system
          name: config
        data:
          config: |
            address-pools:
            - name: default
              protocol: layer2
              addresses:
              - 192.168.98.220-192.168.98.250
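As a sanity check, the pool above spans an inclusive range of 31 assignable VIPs, and once the ConfigMap is applied any Service of type LoadBalancer will automatically claim one. A sketch (the deployment name frontend and the local file name are hypothetical examples):

```shell
# Pool size check: 192.168.98.220-250 is an inclusive range.
START=220; END=250
POOL=$(( END - START + 1 ))
echo "$POOL"   # 31 assignable VIPs

# Hypothetical: publish a deployment through MetalLB (skipped without kubectl).
if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f metallb-config.yaml || true   # the ConfigMap above, saved locally
  kubectl expose deployment frontend --type=LoadBalancer --port=80 || true
fi
```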

    This completes the installation and configuration of the load balancer. Let’s go ahead and publish an application using the Kubernetes service “type=LoadBalancer”; CSE and MetalLB will take care of the rest.

    Deploy an Application

    Before deploying the application, I want to show the Cloud Director network topology where the container workload is deployed and the Kubernetes services are created. Here we have one red segment (192.168.98.0/24) for the container workload, where CSE has deployed the Kubernetes worker nodes, and on the same network we have deployed our MetalLB load balancer.

    Kubernetes pods are created on Weave networks, the internal software-defined networking for CSE, and services are exposed using the load balancer, which is configured with the “OrgVDC” network.


    Let’s get started. We are going to use the “guestbook” application, which uses Redis to store its data: it writes data to a Redis master instance and reads data from multiple Redis worker (slave) instances. Let’s go ahead and deploy it:

    NOTE – All of the above steps are covered in detail on the Kubernetes.io website – here is the Link.

    Accessing Application

    To access the guestbook Service, you need to find the external IP of the Service you just set up by running the command:

    • #kubectl get services
    • Go back to your organization VDC and create a NAT rule so that the service can be accessed using a routable/public IP.
    • Copy the IP address in EXTERNAL-IP column, and load the page in your browser:

    A very easy and simple process to deploy and access your containerized applications running on the secure and easy-to-use VMware Cloud Director. Go ahead and start running containerized apps with upstream Kubernetes on Cloud Director.

    Stay tuned! In the next post I will deploy Ingress on a CSE cluster.

  • Installing Tanzu Kubernetes Grid

    This blog post helps you create Tanzu Kubernetes Grid clusters running on either VMware Cloud on AWS or vSphere 6.7 Update 3 infrastructure.

    NOTE – Tanzu Kubernetes Grid Plus is the only supported version on VMware Cloud on AWS; you can deploy Kubernetes clusters on your VMC clusters using Tanzu Kubernetes Grid Plus. Please refer to KB 78173 for the detailed support matrix.

    Pre-requisite

    On your vSphere/VMware Cloud on AWS instance ensure that you have the following objects in place:

    • A resource pool in which to deploy the Tanzu Kubernetes Grid instance (TKG)
    • A VM folder in which to collect the Tanzu Kubernetes Grid VMs (TKG)
    • Create a network segment with DHCP enabled
    • Firewall rules on compute segment.
    • This is not a must, but I prefer a Linux-based virtual machine called “cli-vm”, which we will use as a bootstrap environment to install the Tanzu Kubernetes Grid CLI binaries for Linux.

    Installing Kubectl

    kubectl is a command line tool for controlling Kubernetes clusters; we will use it to manage and control the clusters deployed by TKG. To install the latest version of kubectl, follow the steps below on “cli-vm”:

    #curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
    #chmod +x kubectl
    #mv kubectl /usr/local/bin


    Installing Docker

    Docker is a daemon-based container engine that allows us to deploy applications inside containers. Since my VM runs CentOS, I started the Docker service; to start it, run the following on cli-vm:

    #systemctl start docker
    #systemctl enable docker

    To view the status of the daemon, run the following command:

    #systemctl status docker

    Install Tanzu Kubernetes Grid CLI

    To use Tanzu Kubernetes Grid, you download and run the Tanzu Kubernetes Grid CLI on a local system; in our case we will install it on our “cli-vm”.

    • Get the tkg CLI binary from the GA build page:
      • For Linux platforms, download tkg-linux-amd64-v1.0.0_vmware.1.gz.
      • For Mac OS platforms, download tkg-darwin-amd64-v1.0.0_vmware.1.gz.
    • Unzip the file:
      • gunzip ./tkg-linux-amd64-v1.0.0_vmware.1.gz
    • Make the binary executable and move it to /usr/local/bin:
      • chmod +x ./tkg-linux-amd64-v1.0.0_vmware.1
      • mv ./tkg-linux-amd64-v1.0.0_vmware.1 /usr/local/bin/tkg
    • Check tkg env is ready:
      • # tkg version
        Client:
        Version: v1.0.0
        Git commit: 60f6fd5f40101d6b78e95a33334498ecca86176e
    • The /root/.tkg folder will be auto-created for the tkg config file.


    Create an SSH Key Pair

    In order for Tanzu Kubernetes Grid VMs to run tasks in vSphere, you must provide the public key part of an SSH key pair to Tanzu Kubernetes Grid when you deploy the management cluster. You can use a tool such as ssh-keygen to generate a key pair.

    • On the machine on which you will run the Tanzu Kubernetes Grid CLI, run the following ssh-keygen command:
      • #ssh-keygen -t rsa -b 4096 -C "email@example.com"
    • At the prompt
      • Enter file in which to save the key (/root/.ssh/id_rsa):
      • press Enter to accept the default.
    • Enter and repeat a passphrase for the key pair.
    • Open the file .ssh/id_rsa.pub in a text editor, copy its contents, and paste it in when you deploy the management cluster.
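The interactive prompts above can also be skipped entirely. A non-interactive sketch (the file path and comment are illustrative; use a real passphrase instead of `-N ""` outside a lab):

```shell
# Clean up any previous illustrative key, then generate a 4096-bit RSA pair
# without prompting (empty passphrase for brevity only)
rm -f /tmp/tkg_rsa /tmp/tkg_rsa.pub
ssh-keygen -t rsa -b 4096 -C "email@example.com" -f /tmp/tkg_rsa -N ""
# The .pub half is what you paste into the management cluster deployment wizard
cat /tmp/tkg_rsa.pub
```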

    Import OVA & Create Template in VC

    Before we can deploy a Tanzu Kubernetes Grid management cluster or Tanzu Kubernetes clusters to vSphere, we must provide a base OS image template to vSphere. Tanzu Kubernetes Grid creates the management cluster and Tanzu Kubernetes cluster node VMs from this template. Tanzu Kubernetes Grid provides a base OS image template in OVA format for you to import into vSphere. After importing the OVA, you must convert the VM into a vSphere VM template.

    • TKG needs two OVAs: photon-3-v1.17.3+vmware.2.ova and photon-3-capv-haproxy-v0.6.3+vmware.1.ova.
    • Convert both OVA VMs to templates and put them in the VM folder.
    • Here are the high-level steps:
      • In the vSphere Client, right-click the cluster and select Deploy OVF Template.
      • Choose Local file, click the button to upload files, and navigate to the photon-3-v1.17.3+vmware.2.ova file on your local machine.
      • Follow the on-screen instructions to deploy a VM from the OVA template:
        • choose the appliance name
        • choose the destination datacenter or folder
        • choose the destination host, cluster, or resource pool
        • Accept EULA
        • Select the disk format and datastore
        • Select the network for the VM to connect to
        • Click Finish to deploy the VM
        • Right-click on the VM, select Template and click on Convert to Template

      • Follow the same steps for the HAProxy OVA – photon-3-capv-haproxy-v0.6.3+vmware.1.ova
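If you prefer scripting the import rather than clicking through the vSphere Client wizard, the same result can be sketched with govc (an assumption on my part; the post uses the UI, and this assumes govc is installed and the GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables point at your vCenter):

```shell
# Import each OVA, then mark the resulting VM as a template
govc import.ova -name photon-3-v1.17.3+vmware.2 ./photon-3-v1.17.3+vmware.2.ova
govc vm.markastemplate photon-3-v1.17.3+vmware.2

govc import.ova -name photon-3-capv-haproxy-v0.6.3+vmware.1 ./photon-3-capv-haproxy-v0.6.3+vmware.1.ova
govc vm.markastemplate photon-3-capv-haproxy-v0.6.3+vmware.1
```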

    Installing TKG Management Cluster

    Once the pre-work is done, follow the steps below to create the Tanzu management cluster:

    • On the machine on which you downloaded and installed the Tanzu Kubernetes Grid CLI, run the following command:
      • #tkg init --ui
    • If your cli-vm runs an X11 desktop, a browser opens against the loopback IP. If not, you can set up SSH port forwarding using PuTTY on your Windows desktop, like this:
      • 27
    • Once you have successfully opened the connection, open your web browser and navigate to http://127.0.0.1:8081; you should see the page below.
      • 5
    • Enter your IaaS provider details where TKG can create the Kubernetes cluster. In this case, enter the VMC vCenter Server information and click the “Connect” button; accept the notifications that appear, and the button changes to “Connected”. From here, select where you want to deploy TKG by filling in the Datacenter and the SSH key which we created in the previous steps.
      • 67
    • Select the Development or Production flavour and specify an instance type, then give the Kubernetes management cluster a name and select the API server load balancer (specify the HAProxy VM template which we uploaded in the previous step)
      • 8
    • Select Resource Pool(TKG), VM Folder(TKG) and WorkloadDatastore
      • 9
    • Select the Network Name and leave the others at their defaults
      • 10
    • Select the Kubernetes PhotonOS template; this is the VM template that we uploaded in the previous steps.
      • 11
    • Review all settings to ensure they match your selections, then click “Deploy Management Cluster” to begin the deployment process.
      • 12
    • The entire process of setting up the TKG management cluster takes around 5 to 6 minutes. Once the management cluster has been deployed, you can close the web browser, go back to your SSH session, and stop the TKG UI rendering.
      • 1314
    • You can verify that all pods are up and running with the command:
      • #kubectl get pods -A
      • 15

    Deploy Tanzu Kubernetes Workload Cluster

    Now that the Tanzu Kubernetes Grid management cluster is complete, let’s deploy a Tanzu Kubernetes cluster. We can use the tkg CLI to deploy Tanzu Kubernetes clusters from the management cluster to vSphere/VMware Cloud on AWS. Run the following command to deploy a TKG cluster called “avnish” (or any other name you wish to use):

    • #tkg create cluster avnish --plan=dev

    The command above deploys a Tanzu Kubernetes cluster with the minimum default configuration. I am also deploying another cluster, specifying a few more parameters.

    Deploy a Highly Available Kubernetes Cluster

    This command deploys a highly available Kubernetes cluster:

    • #tkg create cluster my-cluster --plan dev --controlplane-machine-count 3 --worker-machine-count 5
      • 16
    • Once the TKG cluster is up and running, we can run the following commands to get cluster information and credentials:

      • #tkg get clusters
      • #tkg get credentials <clustername>
      • 17
    • To switch context between TKG clusters, run the normal Kubernetes commands:
      • #kubectl config get-contexts
      • #kubectl config use-context avnish-admin@avnish
      • 1819
    • To get the Kubernetes node details, run the usual kubectl commands.
      • 20

    Scale TKG Clusters

    After we create a Tanzu Kubernetes cluster, we can scale it up or down by increasing or reducing the number of node VMs that it contains.

    • To scale a cluster, use
      • #tkg scale cluster
      • with --controlplane-machine-count to change the number of control plane nodes
      • and --worker-machine-count to change the number of worker nodes
      • 21
      • 22
      • 23

    Installing Octant on TKG Clusters

    Octant is an open source, developer-centric web interface for Kubernetes that lets you inspect a Kubernetes cluster and its applications. It is simple to install and easy to use.

    • More information on installation and configuration is available in the Octant documentation.
    • You need to install Octant on your cli-vm and then proxy it so that you can open its web interface.
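A sketch of what that looks like on a Linux cli-vm (the release version and URL are illustrative, not from this post; check the Octant releases page for the current one):

```shell
# Download and install a released .deb (version is an assumption)
wget https://github.com/vmware-tanzu/octant/releases/download/v0.16.3/octant_0.16.3_Linux-64bit.deb
dpkg -i octant_0.16.3_Linux-64bit.deb

# Listen on all interfaces so the dashboard is reachable from your desktop,
# pointing Octant at the kubeconfig of the cluster you want to inspect
OCTANT_LISTENER_ADDR=0.0.0.0:8900 octant --kubeconfig ~/.kube/config
```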

    This completes the installation and configuration of Tanzu Kubernetes Grid. Once you have the management cluster ready, go ahead and deploy containerised applications on these clusters. TKG gives you lots of flexibility in deploying, scaling and managing multiple TKG workload clusters, which can be assigned per department/project.

  • Deploy VMware PKS – Part3

    In continuation of my PKS installation series, we are now going to install and configure the PKS Control Plane, which provides a frontend API that Cloud Operators and Platform Operators use to easily interact with PKS for provisioning and managing (create, delete, list, scale up/down) Kubernetes clusters.

    Once a Kubernetes cluster has been successfully provisioned through PKS, the operators need to provide the external hostname of the cluster and the kubectl configuration file to their developers, who can then start consuming the newly provisioned clusters and deploying applications without needing to know the underlying details of PKS/NSX-T.

    The previous posts in this series are here:

    Download PKS

    First of all, download PKS from the Pivotal Network; the file will have the extension .pivotal.


    Installation Procedure

    To import the PKS tile, go to the home page of Ops Manager, click “Import a Product” and select the PKS package to begin the import process into Ops Manager. This takes some time, since it is a 4+ GB appliance.


    Once the PKS tile has been successfully imported, click on the “plus” sign to add the PKS tile, which makes it available for configuration. After that, click the orange Pivotal Container Service tile to start the configuration process.


    Assign AZ and Networks

    • Here we place the PKS API VM in the Management AZ and on the PKS Management Network that we created on the dvs in previous posts.
    • Choose the network which the PKS API VM will use to connect; in our case it is the management network.
    • A first-time installation of PKS does not use the “Service Network”, but we still need to choose one. For this installation I created an NSX-T logical switch called “k8s” to use as the service network in future; you can also specify “pks-mgmt”, since this setting does not apply to a new installation.
    • 3

    Configure PKS API

    • Generate a wildcard certificate for the PKS API by selecting Generate RSA Certificate, and create a DNS record.
    • Worker VM Max in Flight: This controls how many instances of a component (non-canary worker) can start simultaneously when a cluster is created or resized. The variable defaults to 1, which means that only one component starts at a time.
    • 4

    Create Plans

    A plan defines a set of resource types used for deploying clusters. You can configure up to three plans in the GUI. You must configure Plan 1.

    • Create multiple plans based on your needs; for example, a plan can have either 1 or 3 master nodes.
    • You can choose the number of worker VMs deployed for each cluster. As per the documentation, up to 200 worker nodes have been tested; this number can go beyond 200, but sizing needs to be planned based on other factors (such as the applications and their requirements).
    • Availability Zone – Select one or more AZs for the Kubernetes clusters deployed by PKS for the master nodes, and configure the same setting for the worker nodes. If you choose multiple AZs, an equal number of worker nodes is deployed across the AZs.
    • Errand VM Type – Select the size of the VM that runs the errands. The smallest instance may be sufficient, as the only errand running on this VM is the one that applies the Default Cluster App YAML configuration.
    • To allow users to create pods with privileged containers, select Enable Privileged Containers. Use this with caution: a privileged container essentially disables the security mechanisms provided by Docker and allows code to run directly on the underlying system.
    • Disable DenyEscalatingExec – This disables the DenyEscalatingExec admission control plugin.
      • 56

    Create Plan 2 and Plan 3, or mark them Inactive and create them later, but remember that PKS does not support changing the number of master/etcd nodes for plans with existing deployed clusters.

    Configure Kubernetes Cloud Provider (IAAS Provider)

    Here you configure the IaaS where all these VMs will be deployed; in my case this is a vSphere-based cloud, but PKS now also supports AWS, GCP and Azure.

    • Enter the vCenter details: name, credentials, datastore names, etc.


    Configure PKS Logging

    • Logging is optional and can be configured with vRealize Log Insight; for my lab I am leaving it at the default.
    • To enable clusters to drain app logs to sinks using SYSLOG://, select the Enable Sink Resources checkbox.
    • 9

    Configure Networking for Kubernetes Clusters

    NSX Manager Super User Principal Identity Certificate – As per the NSX-T documentation, a principal can be an NSX-T component or a third-party application such as OpenStack or PKS. With a principal identity, a principal can use the identity name to create an object and ensure that only an entity with the same identity name can modify or delete it (except an Enterprise Admin). A principal identity can only be created or deleted using the NSX-T API; however, you can view principal identities through the NSX Manager UI.

    We will have to create such a user ID: the PKS API uses the NSX Manager superuser principal identity to communicate with NSX-T to create, delete, and modify networking resources for Kubernetes cluster nodes. Follow the steps here to create it.

    • Choose NSX-T as  Networking Interface
    • Specify NSX Manager hostname and generate the certificate as per above step.
    • 10
    • Pods IP Block ID – Enter the UUID of the IP block to be used for Kubernetes pods. PKS allocates IP addresses for the pods when they are created in Kubernetes. Every time a namespace is created in Kubernetes, a subnet from this IP block is allocated.
    • Nodes IP Block ID – Here enter the UUID of the IP block to be used for Kubernetes nodes. PKS allocates IP addresses for the nodes when they are created in Kubernetes. The node networks are created on a separate IP address space from the pod networks.
    • T0 Router ID – Here enter the  T0 router UUID.
    • Floating IP Pool ID – Enter the ID of the floating IP pool that you created for load balancer VIPs. PKS uses this pool to allocate IP addresses to the load balancers created for each of the clusters. The load balancer routes API requests to the master nodes and the data plane.
    • Node DNS – Specify the node DNS server name, and ensure the nodes can reach the DNS servers.
    • vSphere Cluster Names – Enter a comma-separated list of the vSphere clusters where you will deploy Kubernetes clusters. The NSX-T pre-check errand uses this field to verify that the hosts from the specified clusters are available in NSX-T.
    • HTTP/HTTPS Proxy – Optional.

    Configure User Account and Authentication (UAA)

    Before users can log in and use the PKS CLI, you must configure PKS API access with UAA. You use the UAA Command Line Interface (UAAC) to target the UAA server and request an access token for the UAA admin user. If your request is successful, the UAA server returns the access token. The UAA admin access token authorizes you to make requests to the PKS API using the PKS CLI and grant cluster access to new or existing users.
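For context, once PKS is deployed, the UAAC flow described above typically looks like the following (the API hostname, user name and secrets are placeholders, not values from this post):

```shell
# Point UAAC at the UAA server on the PKS API VM (port 8443 is the default)
uaac target https://pks.corp.local:8443 --skip-ssl-validation

# Fetch an admin token using the UAA management client secret from the tile
uaac token client get admin -s <UAA-ADMIN-SECRET>

# Create a user and grant them the ability to administer clusters
uaac user add k8s-dev --emails k8s-dev@corp.local -p <PASSWORD>
uaac member add pks.clusters.admin k8s-dev
```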

    • I am leaving the settings at their defaults, with some timer changes.
    • 15

    Monitoring

    You can monitor Kubernetes clusters and pods using VMware Wavefront, which I will cover in a separate post.

    • For now leave it default.

    Usage Data

    Here you can opt in to VMware’s Customer Experience Improvement Program (CEIP) and the Pivotal Telemetry Program (Telemetry).

    • Choose based on your preference.

    Errands

    Errands are scripts that run at designated points during an installation.

    • Since we are running PKS with NSX-T, we must verify our NSX-T configuration.

    Resource Config for PKS VM

    Edit the resources used by the Pivotal Container Service job. If there are timeouts while accessing the PKS API VM, use a larger VM type; for this lab I am going with the default.

    • Leave it at the default.

    Missing Stemcell

    A stemcell is a versioned operating system image wrapped with IaaS-specific packaging. A typical stemcell contains a bare-minimum OS skeleton with a few common utilities pre-installed, a BOSH agent, and a few configuration files to securely configure the OS by default. For example, with vSphere, the official stemcell for Ubuntu Trusty is an approximately 500 MB VMDK file.

    Click on the missing stemcell link, which takes you to the stemcell library. Here you can see that PKS requires stemcell 170.15; since I had already downloaded it, the deployed section shows 170.25, but on a new installation it will show none deployed. Click IMPORT STEMCELL and choose a stemcell, which can be downloaded from the Pivotal Network, to import.


    Apply Changes

    Return to the Ops Manager Installation Dashboard, click “Review Pending Changes” and finally “Apply Changes”. This will deploy the PKS API VM to the IaaS location you chose while configuring the PKS tile.


    If the configuration of the tile is correct, after around 30 minutes you will see a success message that the deployment has completed, which gives a very nice feeling that your hard work and dedication resulted in success (for me it failed a couple of times because of storage/network and resource issues).

    To identify which VM has been deployed, you can check the custom attributes, or go back to the Installation Dashboard, click the PKS tile and go to the Status tab. Here we can see the IP address of our PKS API VM; also notice the CID, which is the VM name in the vCenter inventory, as well as the health of the PKS VM.


    This completes the PKS VM deployment procedure. In the next post we will deploy a Kubernetes cluster.

  • Deploy VMware PKS – Part2

    In this part I will begin the PKS installation by deploying Pivotal Ops Manager, which provides a management interface (UI/API) for Platform Operators to manage the complete lifecycle of both BOSH and PKS, from installation through patching and upgrades.

    The other posts in this series are here:

    Getting Started with VMware PKS & NSX-T

    Deploy VMware PKS – Part1

    In addition, you can also deploy new application services using Ops Manager Tiles like adding an Enterprise-class Container Registry like VMware Harbor which can then be configured to work with PKS.

    Installing OpsManager


    • Once downloaded, log into vCenter using the vSphere Web Client or HTML5 Client to deploy the Ops Manager OVA.
    • Choose your management cluster, an appropriate network and the other OVA deployment options; I am not going to cover the OVA deployment procedure here. At the Customize template step, enter the details below:
      • Admin Password: A default password for the user “ubuntu”.
        • If you do not enter a password, Ops Manager will not boot up.
      • Custom hostname: The hostname for the Ops Manager VM; in my example, opsmgr.corp.local.
      • DNS: One or more DNS servers for the Ops Manager VM.
      • Default Gateway: The default gateway for Ops Manager.
      • IP Address: The IP address of the Ops Manager network interface.
      • NTP Server: The IP address of one or more NTP servers for Ops Manager.
      • Netmask: The network mask for Ops Manager.
    • Create a DNS entry for the IP address that you used for Ops Manager, which we will use in the next steps. Browse to this DNS name/IP address; you will be taken to the authentication system for initial setup. For this lab I will use “Internal Authentication”; click on “Internal Authentication”.
      • 2
    • Next, you will be prompted to create a new admin user, which we will use to manage BOSH. Once you have successfully created the user, go ahead and log in with the new account.
    • Once you are logged into Ops Manager, you can see that the BOSH tile is already there but shows as unconfigured (orange denotes unconfigured), meaning BOSH has not yet been deployed. Click on the tile to begin the configuration and deploy BOSH.

    Before starting the BOSH tile configuration, we need to prepare the NSX Manager. The procedure is listed below.

    Generating and Registering the NSX Manager Certificate for PKS

    The NSX Manager CA certificate is used to authenticate PKS with NSX Manager. You create an IP-based, self-signed certificate and register it with the NSX Manager. By default, the NSX Manager includes a self-signed API certificate with its hostname as the subject and issuer. PKS Ops Manager requires strict certificate validation and expects the subject and issuer of the self-signed certificate to be either the IP address or the fully qualified domain name (FQDN) of the NSX Manager. For that reason, we need to regenerate the self-signed certificate using the FQDN of the NSX Manager in the subject and issuer fields, and then register the certificate with the NSX Manager using the NSX API.

    • Create a file for the certificate request parameters named “nsx-cert.cnf” on a Linux VM where the openssl tool is installed, and populate it with the request parameters (shown in a screenshot in the original post).
    • Export the NSX_MANAGER_IP_ADDRESS and NSX_MANAGER_COMMONNAME environment variables, using the IP address of your NSX Manager and the FQDN of the NSX Manager host.
    • Using the openssl tool, generate the certificate by running the command below:
      • 56
    • Verify the certificate by running the command below, and ensure the SAN contains the DNS name and IP addresses:
      • ~$ openssl x509 -in nsx.crt -text -noout
    • Import this certificate into NSX Manager: go to System -> Trust -> Certificates and click Import -> Import Certificate.
    • Ensure that Certificate looks like this in your NSX Manager.
    • Next, register the certificate with NSX Manager using the procedure below; first, get the ID of the certificate from the GUI.
    • Run the command below (from your API client) to register the certificate, replacing “CERTIFICATE-ID” with your certificate ID.
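The original post shows this call in a screenshot; a hedged reconstruction of the documented pattern follows (the hostname and credentials are placeholders, and CERTIFICATE-ID stays as the placeholder the text describes):

```shell
# Apply the imported certificate to the NSX Manager API service
curl -k -u 'admin:<password>' -X POST \
  "https://nsxmgr.corp.local/api/v1/node/services/http?action=apply_certificate&certificate_id=CERTIFICATE-ID"
```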

    Now let’s configure BOSH tile , which will deploy BOSH based on our input parameters.

    Configure BOSH Tile to Deploy BOSH Director

    Click on the tile. It will open the tile’s setting tab with the vCenter Config parameters page.

    • vCenter Config 
      • Name: A unique, meaningful name.
      • vCenter Host: The hostname of the vCenter.
      • vCenter Username: A username for the above vCenter with create and delete privileges for virtual machines (VMs) and folders.
      • vCenter Password: The password for the above vCenter.
      • Datacenter Name: The exact name of the datacenter object in vCenter.
      • Virtual Disk Type: Select “Thin” or “Thick”.
      • Ephemeral Datastore Names: The names of the datastores that store ephemeral VM disks deployed by Ops Manager; you can specify multiple datastores separated by commas.
      • Persistent Datastore Names (comma delimited): The names of the datastores that store persistent VM disks deployed by Ops Manager.
      • To deploy BOSH as well as the PKS Control Plane VMs, Ops Manager uploads a stemcell VM (a VM template) and clones from that image for both the PKS management VMs and the base Kubernetes VMs.
      • 11
    • NSX-T Config 
      • Choose NSX Networking and select NSX-T.
      • NSX Address: The IP address or DNS name of the NSX-T Manager.
      • NSX Username and NSX Password: The NSX-T credentials.
      • NSX CA Cert: Open the NSX CA cert that you generated in the section above and copy/paste its content into this field.
    •  Other Config
      • VM Folder: The vSphere datacenter folder where Ops Manager places VMs.
      • Template Folder: The vSphere folder where Ops Manager places stemcells(templates).
      • Disk path Folder: The vSphere datastore folder where Ops Manager creates attached disk images. You must not nest this folder.
      • Click “Save”.
      • 13
    • Director Config
      • For the Director config, I have put in a few details:
        • NTP Server Details
        • Enable VM Resurrector Plugin
        • Enable Post Deploy Scripts
        • Recreate all VMs (optional)
      • 14
    • Availability Zone    
      • Availability zones are defined at the vSphere cluster level. These AZs are used by the BOSH Director to determine where to deploy the PKS management VMs as well as the Kubernetes VMs. Multiple availability zones allow you to provide high availability across datacenters. For this demonstration I have created two AZs, one for management and one for compute.
    • Create Network
      • Since I am using a dvs for my PKS management components, we specify those details in this section. Make sure you select the Management AZ, which is the vSphere cluster where the BOSH and PKS Control Plane VMs will be deployed.


    • Assign AZs and Networks 
      • In this section, define the AZ and networking placement settings for the PKS management VM. Singleton Availability Zone – the Ops Manager Director installs in this availability zone.


    • Security & Syslog
      • I am leaving this section at the defaults; if required for your deployment, please refer to the documentation.
    • Resource Config
      • As per the sizing in Part 1, the BOSH Director VM by default allocates 2 vCPUs, 8 GB memory and a 64 GB disk, plus a 50 GB persistent disk; each of the four compilation VMs uses 4 vCPUs, 4 GB memory and a 16 GB disk. For my lab deployment I changed these to suit my lab resources. The BOSH Director needs a minimum of 8 GB memory to run, so choose options accordingly.


    • Review Pending Changes and Apply Changes 
      • With all the input completed, return to the Installation Dashboard view by clicking Installation Dashboard at the top of the window. The BOSH Director tile now has a green bar indicating all the required parameters have been entered. Next, click REVIEW PENDING CHANGES and then Apply Changes.


    • Monitor Installation and Finish
      • If all the inputs are right, you will see that your installation is successful.


    After you log in to your vCenter, you will see a new powered-on VM in your inventory whose name starts with vm-UUID; this is the BOSH VM. Ops Manager uses vSphere custom attributes to add metadata fields identifying the various VMs it can deploy; you can check what type of VM this is by simply looking at the deployment, instance_group or job property. In this case, we can see it is noted as p-bosh.


    And this completes the Ops Manager and BOSH deployment; in the next post we will install the PKS tile.


  • Deploy VMware PKS – Part1

    In continuation of my previous blog post here, where I explained the PKS components and sizing details, in this post I will be covering the PKS component deployment.

    Previous Post in this Series:

    Getting Started with VMware PKS & NSX-T

    Pre-requisite:

    • Install a new server, or use an existing one, with the DNS role installed and configured; we will use it in this deployment.
    • Install vCenter and ESXi. For this lab I have created two vSphere clusters:
      • Management Cluster + Edge Cluster – Three Nodes
      • Compute Cluster – Two Nodes
    • Create an Ubuntu server, where you will need to install client utilities such as:
      • PKS CLI
        • The PKS CLI is used to create, manage, and delete Kubernetes clusters.
      • KUBECTL
        • To deploy workloads/applications to a Kubernetes cluster created using the PKS CLI, use the Kubernetes CLI called “kubectl“.
      • UAAC
        • To manage users in Pivotal Container Service (PKS) with User Account and Authentication (UAA). Create and manage users in UAA with the UAA Command Line Interface (UAAC).

      • BOSH
        • The BOSH CLI is used to manage the PKS management component deployments and provides information about the VMs using its Cloud Provider Interface (CPI), which is vSphere in my lab but could also be Azure, AWS or GCP.
      • OM
        • The Ops Manager (om) command line interface.
    • Prepare NSX-T

      For this deployment, make sure NSX-T is deployed and configured. The high-level steps are as below:

      • Install NSX Manager
      • Deploy NSX Controllers
      • Register the controllers with the manager, and the other controllers with the master controller.
      • Deploy NSX Edge Nodes

      • Register NSX Edge Nodes with NSX Manager

      • Enable Repository Service on NSX Manager

      • Create TEP IP Pool

      • Create Overlay Transport Zone

      • Create VLAN Transport Zone

      • Create Edge Transport Nodes

      • Create Edge Cluster

      • Create T0 Logical Router and configure BGP routing with physical device

      • Configure Edge Nodes for HA

      • Prepare ESXi Servers for the PKS Compute Cluster

    My PKS deployment topology looks like this:


    • PKS Deployment Topology – PKS management stack running out of NSX-T
      • PKS VMs (Ops Manager, BOSH, PKS Control Plane, Harbor) are deployed to a VDS backed portgroup
      • Connectivity between PKS VMs, K8S Cluster Management and T0 Router is through a physical router
      • NAT is only configured on T0 to provide POD networks access to associated K8S Cluster Namespace
    • Create an IP Pool
      • Create a new IP pool which will be used to allocate virtual IPs for the exposed Kubernetes services. The pool also provides IP addresses for Kubernetes API access. Go to Inventory -> Groups -> IP Pool and enter the following configuration:
        • Name: PKS-FLOATING-POOL
        • IP Range: 172.26.0.100 – 172.26.255.254
        • CIDR: 172.26.0.0/16
    • Create POD-IP-BLOCK
      • We need to create a new POD IP block which PKS uses on demand to create smaller /24 networks, assigning one to each Kubernetes namespace. This IP block should be sized sufficiently to ensure that you do not run out of addresses. To create the POD-IP-BLOCK, go to NETWORKING -> IPAM and enter the following:
      • 7
    • Create NODEs-IP-BLOCK
      • We need to create a new NODEs IP block which PKS uses to assign IP addresses to Kubernetes master and worker nodes. Each Kubernetes cluster owns a /24 subnet, so to deploy multiple Kubernetes clusters, plan for a subnet larger than /24 (the recommendation is /16).
      • 6

    Prepare Client VM

    • Create and install a small Ubuntu VM with the default configuration. You can use the latest server version; ensure that the VM has internet connectivity, either via proxy or direct.
    • Once the Ubuntu VM is ready, download the PKS CLI and kubectl from https://network.pivotal.io/products/pivotal-container-service


    and copy both the PKS (pks-linux-amd64-1.3.0-build.126 or latest) and kubectl (kubectl-linux-amd64-1.12.4 or latest) CLIs to the VM.

    • Now SSH to the Ubuntu VM and run the following commands to make the binaries executable and rename/relocate them to the /usr/local/bin directory:
      • chmod +x pks-linux-amd64-1.3.0-build.126
      • chmod +x kubectl-linux-amd64-1.12.4
      • mv pks-linux-amd64-1.3.0-build.126 /usr/local/bin/pks
      • mv kubectl-linux-amd64-1.12.4 /usr/local/bin/kubectl
      • Check the versions with pks -v and kubectl version
    • Next, install the Cloud Foundry UAAC by running:
      • apt -y install ruby ruby-dev gcc build-essential g++
      • gem install cf-uaac
      • Check the version with uaac -v
    • Next, install the bosh and om CLIs listed in the prerequisites.

    This completes this part; in the next part, we will start deploying the PKS management VMs and their configuration.