Tanzu on Azure Native

VMware Tanzu Kubernetes Grid provides organizations with a consistent, upstream-compatible, regional Kubernetes substrate that is ready for end-user workloads and ecosystem integrations. You can deploy Tanzu Kubernetes Grid across software-defined datacenters (SDDC) and public cloud environments, including vSphere, Microsoft Azure, and Amazon EC2. In this blog post we will deploy Tanzu Kubernetes Grid 1.4 to Azure native VMs.

Prerequisites

  • Deploy a client VM (in my case an Ubuntu VM). This VM will be the bootstrap machine for Tanzu, from which we will deploy the management cluster in Azure; a native Azure VM works fine for this.
  • TKG uses a local Docker installation to set up a small, ephemeral kind-based Kubernetes cluster that builds the TKG management cluster in Azure, so you need Docker on this VM to run the kind cluster.
  • Download and unpack the “Tanzu CLI” and “Kubectl” bundles from HERE onto the above VM, in a new directory named "tkg" or "tanzu":
    • unpack using #> tar -xvf tanzu-cli-bundle-v1.4.0-linux-amd64.tar
    • After you unpack the bundle file, in your folder, you will see a cli folder with multiple subfolders and files
    • Install Tanzu CLI using #> sudo install core/v1.4.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
  • Unpack the kubectl binary using:
    • #> tar -xvf kubectl-linux-v1.21.2+vmware.1.gz
    • Install kubectl using #> sudo install kubectl-linux-v1.21.2+vmware.1 /usr/local/bin/kubectl
  • Run the following command from the tanzu directory to install all the Tanzu plugins:
    • #> tanzu plugin install --local cli all
    • #> tanzu plugin list

Configure Azure resources

In this section we will prepare Microsoft Azure for running Tanzu Kubernetes Grid. For networking, I have prepared the Azure network as below:

  • Every Tanzu Kubernetes Grid cluster requires 2 Public IP addresses
  • For each Kubernetes Service object of type LoadBalancer, 1 Public IP address is required.
  • A VNET with: (only required if using existing VNET else TKG can create all automatically)
    • A subnet for the management cluster control plane node
    • A Network Security Group on the control plane subnet with Allow TCP over port 22 and 6443 for any source and destination inbound security rules, to enable SSH and Kubernetes API server connections
    • One additional subnet and Network Security Group for the management cluster worker nodes.
  • Below is what the high-level network topology will look like when we deploy the Tanzu management cluster:
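If you prefer to pre-create these resources with the Azure CLI rather than the portal, a minimal sketch could look like the following (the resource group, names, and address ranges are illustrative, not taken from this lab):

#> az network vnet create --resource-group tkg-rg --name tkg-vnet --address-prefix 10.0.0.0/16
#> az network nsg create --resource-group tkg-rg --name tkg-cp-nsg
#> az network nsg rule create --resource-group tkg-rg --nsg-name tkg-cp-nsg --name allow-ssh-and-k8s-api --priority 100 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 22 6443
#> az network vnet subnet create --resource-group tkg-rg --vnet-name tkg-vnet --name tkg-cp-subnet --address-prefixes 10.0.1.0/24 --network-security-group tkg-cp-nsg
#> az network nsg create --resource-group tkg-rg --name tkg-node-nsg
#> az network vnet subnet create --resource-group tkg-rg --vnet-name tkg-vnet --name tkg-node-subnet --address-prefixes 10.0.2.0/24 --network-security-group tkg-node-nsg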

Get Tenant ID

Make a note of the Tenant ID value, as it will be used later. You can find it by hovering over your account name at the upper-right, or by browsing to Azure Active Directory > <Your Org> > Properties > Tenant ID.

Register Tanzu Kubernetes Grid as an Azure Client App & Get Application (Client) ID

Tanzu Kubernetes Grid manages Azure resources as a registered client application that accesses Azure through a service principal account. Below steps register your Tanzu Kubernetes Grid application with Azure Active Directory, create its account, create a client secret for authenticating communications, and record information needed later to deploy a management cluster.

  • Go to Azure Active Directory > App registrations and click on + New registration.
  • Enter a name and select who else can use it. Leave the Redirect URI (optional) field blank.
  • Click Register. This registers the application with an Azure service principal account.

Make a note of the Application (client) ID value; we will use it later.

Get Subscription ID

From the Azure Portal top level, browse to Subscriptions. At the bottom of the pane, select one of the subscriptions you have access to, and make a note of Subscription ID.

Add a Role, Create and Record Secret ID

Click the subscription listing to open its overview pane, select Access control (IAM), and click Add a role assignment.

  • In the Add role assignment pane
    • Select the Owner role
    • For Assign access to, select “User, group, or service principal”
    • Under Select, enter the name of your app, in my case “avnish-tkg”. It appears under Selected Members.
  • Click Save. A popup appears confirming that your app was added as an owner for your subscription. You can also verify this by going to the “Owned Applications” section.
  • On the Azure Portal go to Azure Active Directory, click on App Registrations, select your “avnish-tkg” app under Owned applications.
  • Go to Certificates & secrets then in Client secrets click on New client secret.
  • In the Add a client secret popup, enter a Description, choose an expiration period, and click Add.
  • Azure lists the new secret with its generated value under Client Secrets. Take a note of the “Client Secret” value, which we will use later.

With all the above steps done, we will have four recorded values:

Subscription ID – XXXXXXX-XXX-4853-9cff-3d2d25758b70
Application Client ID – XXXXXXXX-6134-xxxx-b1a9-8fcbfd3ea189
Secret Value – XXXXX-xxxxxxxxxxxxxxxkB3VdcBF.c_C.
Tenant ID – XXXXXXXX-3cee-4b4a-a4d6-xxxxxdd62f0

We will use these values when we create the management cluster.
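If you prefer the Azure CLI over the portal, the same four values can be produced with a couple of commands; this is only a sketch and assumes a recent Azure CLI that is already logged in:

#> az account show --query id -o tsv
#> az ad sp create-for-rbac --name "avnish-tkg" --role Owner --scopes /subscriptions/<subscription-id>

The second command prints the Application (client) ID as "appId", the client secret as "password", and the Tenant ID as "tenant".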

Create an SSH Key-Pair

To connect to the management cluster machines on Azure, we must provide the public key part of an SSH key pair. You can use the "ssh-keygen" tool to generate one:

  • #> ssh-keygen -t rsa -b 4096 -C "email@example.com"
  • At the prompt Enter file in which to save the key (/root/.ssh/id_rsa): press Enter to accept the default
  • Enter and repeat a password for the key pair

Copy the content of .ssh/id_rsa.pub; we will use it in the next section.

Accept Base VM image license

To run management cluster VMs on Azure, accept the license for their base Kubernetes version and machine OS by logging in with the Azure CLI and running the commands below:

#> az login --service-principal --username AZURE_CLIENT_ID --password AZURE_CLIENT_SECRET --tenant AZURE_TENANT_ID

AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID are your avnish-tkg app's client ID and secret and your tenant ID, as recorded above

#> az vm image terms accept --publisher vmware-inc --offer tkg-capi --plan k8s-1dot21dot2-ubuntu-2004 --subscription AZURE_SUBSCRIPTION_ID

In Tanzu Kubernetes Grid v1.4.0, the default cluster image --plan value is k8s-1dot21dot2-ubuntu-2004.

Start the Installer Interface

On the machine on which you downloaded and installed the Tanzu CLI, run:

#> tanzu management-cluster create -b "IP of this machine:port" -u

-b = bind the installer interface to the given "IP:port" of this machine
-u = launch the browser-based installer user interface

On your local machine, browse to the above machine’s IP address to access the installer interface, then choose “Microsoft Azure” and click on “DEPLOY”.

In the IaaS Provider section, enter the Tenant ID, Client ID, Client Secret, and Subscription ID values for your Azure account. We recorded these values in the prerequisites section above.

  • Click Connect. The installer verifies the connection and changes the button label to Connected.
  • Select the Azure region in which to deploy the management cluster.
  • Paste the contents of your SSH public key, ".ssh/id_rsa.pub", into the text box.
  • Under Resource Group, select either the Select an existing resource group or the Create a new resource group radio button.

In the VNET for Azure section, select either the Create a new VNET on Azure or the Select an existing VNET radio button; in my case I am using an existing VNET and subnets which I had provisioned already.

To make the management cluster private, enable the Private Azure Cluster checkbox; otherwise leave it unticked. By default, Azure management and workload clusters are public, but you can also configure them to be private, which means their API server uses an Azure internal load balancer (ILB) and is therefore only accessible from within the cluster’s own VNET or peered VNETs.

In the Management Cluster Settings section, choose the Development or Production tile, and use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs.

Under Worker Node Instance Type, select the configuration for the worker node VM

In the optional Metadata section, optionally provide descriptive information about this management cluster.

In the Kubernetes Network section, check the default Cluster Service CIDR and Cluster Pod CIDR ranges. If the default ranges of 100.64.0.0/13 and 100.96.0.0/11 are not available, change the values under Cluster Service CIDR and Cluster Pod CIDR.

In the Identity Management section, enable or disable Enable Identity Management Settings based on your use case.

In OS Image section, select the OS and Kubernetes version image template to use for deploying Tanzu Kubernetes Grid VMs, and click Next

In the Registration URL field, copy and paste the registration URL you obtained from Tanzu Mission Control; if you don’t have TMC access or a URL, move on, as you can register the cluster later if you want.

In the CEIP Participation section, optionally deselect the check box to opt out of the VMware Customer Experience Improvement Program, and then click Review Configuration to see the details of the management cluster that we have configured.

Finally, click on Deploy Management Cluster.

Deployment of the management cluster can take several minutes; in my case it took around 12 minutes.

And finally, once everything is deployed and configured, you will see “Management Cluster Created”.

The installer saves the configuration of the management cluster to ~/.config/tanzu/tkg/clusterconfigs with a generated filename of the form UNIQUE-ID.yaml. After the deployment has completed, you can rename the configuration file to something memorable.
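The same file format can also be used to deploy a management cluster non-interactively with the --file flag. As a rough sketch (variable names follow the TKG documentation; all values below are placeholders), the Azure-related part of such a config file looks roughly like:

INFRASTRUCTURE_PROVIDER: azure
CLUSTER_NAME: tkg-mgmt-azure
CLUSTER_PLAN: dev
AZURE_LOCATION: westeurope
AZURE_TENANT_ID: <tenant-id>
AZURE_CLIENT_ID: <client-id>
AZURE_CLIENT_SECRET: <client-secret>
AZURE_SUBSCRIPTION_ID: <subscription-id>
AZURE_SSH_PUBLIC_KEY_B64: <base64-encoded contents of .ssh/id_rsa.pub>
AZURE_CONTROL_PLANE_MACHINE_TYPE: Standard_D2s_v3
AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3

#> tanzu management-cluster create --file ~/.config/tanzu/tkg/clusterconfigs/UNIQUE-ID.yaml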

If you go to the Azure portal, you should see three control plane VMs with names similar to CLUSTER-NAME-control-plane-abcde, one or three worker node VMs with names similar to CLUSTER-NAME-md-0-rh7xv, and Disk and Network Interface resources for the control plane and worker node VMs with names based on the same patterns.

At this point the management cluster is deployed successfully, and we can go ahead and easily create workload clusters based on our requirements. Tanzu allows you to deploy and manage Kubernetes clusters on vSphere, on VMware-based public clouds, and natively on the AWS and Azure public clouds; in the next post I will deploy Tanzu on AWS.

Code to Container with Tanzu Build Service

Tanzu Build Service uses the open-source Cloud Native Buildpacks project to turn application source code into container images. Build Service executes reproducible builds that align with modern container standards, and additionally keeps images up-to-date. It does so by leveraging Kubernetes infrastructure with kpack, a Cloud Native Buildpacks Platform, to orchestrate the image lifecycle. Tanzu Build Service helps customers develop and automate containerized software workflows securely and at scale.

In this post, Tanzu Build Service will monitor a git branch and automatically build containers with every push. It will then upload each container to your image registry for you to pull down and run locally or on a Kubernetes cluster.

Tanzu Build Service Installation Prerequisites

  • Before we install Tanzu Build Service, ensure you have a Kubernetes cluster, that all worker nodes have at least 50 GB of ephemeral storage allocated to them, and that the cluster is configured with a default StorageClass.

To do this on Tanzu Kubernetes Grid Service, mount a 60 GB volume at /var/lib on the worker nodes in the TanzuKubernetesCluster resource that corresponds to your TKGs cluster. I have used the below YAML content for mounting volumes while creating the Tanzu Kubernetes cluster.

  settings:
    storage:
      classes:
      - tanzu01
      - tkgontkgs
      defaultClass: tkgontkgs
  topology:
    controlPlane:
      class: best-effort-medium
      count: 1
      storageClass: tkgontkgs
      volumes:
      - capacity:
          storage: 60Gi
        mountPath: /var/lib

    workers:
      class: best-effort-medium
      count: 3
      storageClass: tkgontkgs
      volumes:
      - capacity:
          storage: 60Gi
        mountPath: /var/lib
  • Ensure you have access to an existing container registry or install one using this guidance; it will be used to install Tanzu Build Service and to store the application images created by the build process.
  • Also, on a client VM from where you will connect to your Kubernetes cluster, install the "docker cli" as well as the tools below; if you need guidance on installing these tools, check Here
  • Install kapp, this is a deployment tool that allows users to manage Kubernetes resources in bulk.
  • Install ytt, this is a templating tool that understands YAML structure.
# wget -O ytt https://github.com/vmware-tanzu/carvel-ytt/releases/download/v0.35.1/ytt-linux-amd64
#chmod +x ytt
#mv ytt /usr/local/bin/ytt
  • Install kbld, a tool that builds, pushes, and relocates container images.
#wget -O kbld https://github.com/vmware-tanzu/carvel-kbld/releases/download/v0.30.0/kbld-linux-amd64
#mv kbld /usr/local/bin/kbld
#chmod +x /usr/local/bin/kbld
  • Install kp, which controls the kpack installation on Kubernetes. Download the kp CLI for your operating system from the Tanzu Build Service page on Tanzu Network.
#mv kp-linux-0.3.1 /usr/local/bin/kp
#chmod +x /usr/local/bin/kp
  • Install imgpkg, a tool that relocates container images and pulls the release configuration files.
#wget -O imgpkg https://github.com/vmware-tanzu/carvel-imgpkg/releases/download/v0.17.0/imgpkg-linux-amd64
#mv imgpkg /usr/local/bin/imgpkg
#chmod +x /usr/local/bin/imgpkg
  • And finally, target the Kubernetes cluster on which you want to install “Tanzu Build Service” using:
#kubectl config use-context <context-name> 

Relocate Images to private Registry

First we need to relocate images from the Tanzu Network registry to an internal image registry. To do that, log in to the image registry where you want to store the images by running:

#>docker login harbor.tanzu.zpod.io

<harbor.tanzu.zpod.io> - this is my private registry host name 

Now login to the Tanzu Network registry with your Tanzu Network credentials:

#>docker login registry.pivotal.io

Now let's relocate the images to your local registry using the “imgpkg” command:

#>imgpkg copy -b "registry.pivotal.io/build-service/bundle:1.2.2" --to-repo harbor.tanzu.zpod.io/tbs/build-service

This completes the image relocation process; now let's move on to the installation.

Tanzu Build Service Installation

Pull the Tanzu Build Service bundle image onto your client VM from your internal registry using imgpkg:

#>imgpkg pull -b  "harbor.tanzu.zpod.io/library/build-service:1.2.2" -o /tmp/bundle

Use the Carvel tools kapp, ytt, and kbld (which we installed in the prerequisites section) to install Build Service and define the required Build Service parameters by running:

#>ytt -f /tmp/bundle/values.yaml -f /tmp/bundle/config/ -f /tmp/ca.crt -v docker_repository='harbor.tanzu.zpod.io/tbs/build-service'     -v docker_username='admin' -v docker_password='<password>' -v tanzunet_username='tripathiavni@vmware.com' -v tanzunet_password='<password>'| kbld -f /tmp/bundle/.imgpkg/images.yml -f- | kapp deploy -a tanzu-build-service -f- -y




#/tmp/ca.crt - path to the registry CA certificate
#docker_repository - image repository where the TBS images exist

/tmp/ca.crt is the CA certificate of my registry. If all the above steps are correct, you should see "succeeded" as below.

That means the TBS installation is complete; let's move on to the next step.

Import Tanzu Build Service Dependency Resources

The Tanzu Build Service dependencies, such as stacks, buildpacks, and builders, are used to build applications and keep them patched. These must be imported with the kp CLI and the Dependency Descriptor (descriptor-<version>.yaml) file from the Tanzu Build Service Dependencies page.

Now run the below command to import all the dependencies:

#>kp import -f /tmp/descriptor-100.0.146.yaml --registry-ca-cert-path /tmp/CA.cer

Verify Installation

To verify the Build Service installation, let's target the Kubernetes cluster where Tanzu Build Service has been installed and run the “kp” command, which we installed as part of the prerequisites.

List the cluster builders available in your installation using:

#> kp clusterbuilder list

You should see output like below.

A few additional commands you can also run:

#> kp clusterstack list
#> kp clusterstore list

This completes the installation of Tanzu build Service.

Build and Deploy a Sample App

First let's create a “secret” for GitLab, as I have installed GitLab in my vSphere lab environment:

#> kp secret create github-creds --git http://10.96.63.48 --git-user demoadmin -n tbs-demo

Let's also create a secret for the private registry; in my case the registry is hosted on “harbor.tanzu.zpod.io” in the vSphere environment:

#> kp secret create my-registry-creds --registry harbor.tanzu.zpod.io --registry-user admin --namespace tbs-demo

With the next command, we tell Tanzu Build Service where to retrieve the source code. Tanzu Build Service watches the master branch by default, but you can configure it to watch your own development branch for whatever feature or bug you happen to be working on. Finally, the tag is where the image will be pushed in your registry. Let's create an image using the source code from my git repository by running:

#>kp image create spring-petclinic --tag harbor.tanzu.zpod.io/library/spring-petclinic:latest --namespace tbs-demo --git http://10.96.63.48/demoadmin/wwcp.get

It will download its dependencies and start building the image.

Once completed, Tanzu Build Service will put a copy of the image into Harbor registry, as well as onto your local Kubernetes cluster within the default namespace.

You can check the image status by running the commands below.
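For instance, these kp CLI commands (a quick sketch, using the namespace from above) list the image and its builds:

#> kp image list -n tbs-demo
#> kp build list spring-petclinic -n tbs-demo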

We can now deploy this image either locally or on a Kubernetes environment, or you can set up continuous deployment to deploy built images on any Kubernetes platform. In the below example I am installing it on my Tanzu Kubernetes cluster from the private registry where Build Service pushed the container image.
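As a minimal sketch (assuming the cluster can pull from the Harbor project used above and that the app listens on port 8080, the Spring Petclinic default), the deployment could be as simple as:

#> kubectl create deployment spring-petclinic --image=harbor.tanzu.zpod.io/library/spring-petclinic:latest
#> kubectl expose deployment spring-petclinic --type=LoadBalancer --port=80 --target-port=8080
#> kubectl get svc spring-petclinic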

As you can see, once the image is deployed, I can access it from my browser successfully.

This completes the Step-by-Step procedure to install and use Tanzu Build Service. If you would like to dive deeper into VMware Tanzu Build Service, check out the documentation section.

Deploy Tanzu Kubernetes Clusters using Tanzu Mission Control

VMware Tanzu Mission Control is a centralized management platform for consistently operating and securing your Kubernetes infrastructure and modern applications across multiple teams and clouds.

Available through VMware Cloud services, Tanzu Mission Control provides operators with a single control point to give developers the independence they need to drive business forward, while ensuring consistent management and operations across environments for increased security and governance.

Use Tanzu Mission Control to manage your entire Kubernetes footprint, regardless of where your clusters reside.

Getting Started with Tanzu Mission Control

To get Started with Tanzu Mission Control, use VMware Cloud Services tools to gain access to VMware Tanzu Mission Control

Launch the TMC Console – Log in to the Tanzu Mission Control console to start managing clusters

Create a Cluster Group – Create a cluster group to help organize and manage your clusters

Register TKGs Management Cluster – When a customer registers a Tanzu Kubernetes Grid management cluster, all of its workload clusters can be brought under the management of Tanzu Mission Control. This helps the customer facilitate consistent management using all of the capabilities of Tanzu Mission Control, as well as provisioning resources and creating new clusters directly from Tanzu Mission Control.

Once the customer has access to Tanzu Mission Control, has created a cluster group, and has registered the management cluster, follow the below video to deploy Kubernetes clusters on vCenter using the management cluster. The video has step-by-step instructions to help customers with their TMC journey.

Tanzu Mission Control helps customers bring all their Kubernetes clusters together; once together, you can manage the policies and configuration of these clusters and enable developer self-service.

Tanzu Mission Control is now available to VMware Cloud Provider Partners as a Software-as-a-Service (SaaS) offering through VMware Cloud Partner Navigator. This unlocks new opportunities for cloud providers to offer Kubernetes (K8s) managed services for multi-cloud and multi-team environments. For more details check HERE or reach out to your Business Development Manager.

Deploy Harbor Registry on TKG Clusters

Tanzu Kubernetes Grid Service, informally known as TKGS, lets you create and operate Tanzu Kubernetes clusters natively in vSphere with Tanzu. You use the Kubernetes CLI to invoke the Tanzu Kubernetes Grid Service and provision and manage Tanzu Kubernetes clusters. The Kubernetes clusters provisioned by the service are fully conformant, so you can deploy all types of Kubernetes workloads you would expect. vSphere with Tanzu leverages many reliable vSphere features to improve the Kubernetes experience, including vCenter SSO, the Content Library for Kubernetes software distributions, vSphere networking, vSphere storage, vSphere HA and DRS, and vSphere security.

Harbor is an open source, trusted, cloud native container registry that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding functionality usually required by users, such as security, identity control, and management. So let's go ahead and deploy Harbor. I have already provisioned a TKG cluster, and you can log in to the TKG cluster by using the below command:

#kubectl vsphere login --server=<supervisor-cluster-ip> --tanzu-kubernetes-cluster-namespace=<namespace-name> --tanzu-kubernetes-cluster-name=<cluster-name>

Set the correct context as you might have many clusters by using below command:

#kubectl config use-context <cluster-name01>

Add Harbor Helm repository

Now let's install Harbor. You can use one of several Helm repositories:

Harbor –  https://github.com/goharbor/harbor-helm, or the one from

Bitnami –  https://github.com/bitnami/charts/tree/master/bitnami/harbor, which I’m going to use.

Add the repository of your choice to your client…

#helm repo add harbor https://helm.goharbor.io
#helm repo add bitnami https://charts.bitnami.com/bitnami

…and update Helm subsequently.

#helm repo update

Installing Harbor

We will deploy Harbor in a new Kubernetes namespace named “tanzu-system-registry”. Create the namespace with kubectl create ns tanzu-system-registry and start the deployment process by executing the following helm command with the corresponding options:

helm install harbor bitnami/harbor \
--set harborAdminPassword=admin \
--set global.storageClass=tkgontkgs \
--set service.type=LoadBalancer \
--set externalURL=harbor.tanzu.zpod.io \
--set service.tls.commonName=harbor.tanzu.zpod.io \
-n tanzu-system-registry

Go and check the pods status by using this command:

#kubectl get pods -n tanzu-system-registry

Let's check the services running inside the “tanzu-system-registry” namespace; this will give us the external IP of the service.

#kubectl get svc -n tanzu-system-registry

The above command gives us an “External IP” which was auto-configured in NSX-T. Let's browse to the external IP and log in using the username “admin” and the password we set in the helm command.

Now we can successfully browse and access the registry.

You can push images to the Harbor registry to make them available to all clusters running in the Tanzu Kubernetes Grid instance. In my case I deployed this registry for my “Tanzu Build Service” installation, as TBS needs a registry as a prerequisite.
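For reference, pushing an image by hand follows the usual Docker workflow; the project name "library" and the image name below are just examples:

#>docker login harbor.tanzu.zpod.io
#>docker tag myapp:1.0 harbor.tanzu.zpod.io/library/myapp:1.0
#>docker push harbor.tanzu.zpod.io/library/myapp:1.0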

Cloud Native Runtimes for Tanzu

Dynamic Infrastructure

This is an IT concept whereby underlying hardware and software can respond dynamically and more efficiently to changing levels of demand. Modern cloud infrastructure built on VMs and containers requires automated:

  • Provisioning, Orchestration, Scheduling
  • Service Configuration, Discovery and Registry
  • Network Automation, Segmentation, Traffic Shaping and Observability

What is Cloud Native Runtimes for Tanzu?

Cloud Native Runtimes for VMware Tanzu is a Kubernetes-based platform to deploy and manage modern serverless workloads. Cloud Native Runtimes for Tanzu is based on Knative and runs on a single Kubernetes cluster. Cloud Native Runtimes automates all aspects of the dynamic infrastructure requirements.

Serverless ≠ FaaS

Serverless                 | FaaS
Multi-Threaded (Server)    | Cloud Provider Specific
Cloud Provider Agnostic    | Single-Threaded Functions
Long-lived (days)          | Short-lived (minutes)
Offers more flexibility    | Managing a large number of functions can be tricky

Cloud Native Runtime Installation

Command line Tools Required For Cloud Native Runtime of Tanzu

The following command line tools need to be downloaded and installed on a client workstation, from which you will connect to and manage the Tanzu Kubernetes cluster and Tanzu Serverless.

kubectl (Version 1.18 or newer)

  • Using a browser, navigate to the Kubernetes CLI Tools (available in vCenter Namespace) download URL for your environment.
  • Select the operating system and download the vsphere-plugin.zip file.
  • Extract the contents of the ZIP file to a working directory. The vsphere-plugin.zip package contains two executable files: kubectl and the vSphere Plugin for kubectl. kubectl is the standard Kubernetes CLI; kubectl-vsphere is the vSphere Plugin for kubectl that helps you authenticate with the Supervisor Cluster and Tanzu Kubernetes clusters using your vCenter Single Sign-On credentials.
  • Add the location of both executables to your system’s PATH variable.

kapp (Version 0.34.0 or newer)

kapp is a lightweight application-centric tool for deploying resources on Kubernetes. Being both explicit and application-centric it provides an easier way to deploy and view all resources created together regardless of what namespace they’re in. Download and Install as below:
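As a sketch, the install follows the same pattern as the other Carvel tools; the release version below is just an example, so grab the latest from the carvel-kapp GitHub releases page:

#wget -O kapp https://github.com/vmware-tanzu/carvel-kapp/releases/download/v0.37.0/kapp-linux-amd64
#chmod +x kapp
#mv kapp /usr/local/bin/kapp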

ytt (Version 0.30.0 or newer)

ytt is a templating tool that understands YAML structure. Download, Rename and Install as below:
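The same pattern used in the Build Service post earlier works here as well (the version shown is an example):

#wget -O ytt https://github.com/vmware-tanzu/carvel-ytt/releases/download/v0.35.1/ytt-linux-amd64
#chmod +x ytt
#mv ytt /usr/local/bin/ytt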

kbld (Version 0.28.0 or newer)

kbld orchestrates image builds (delegating to tools like Docker, pack, and kubectl-buildkit) and registry pushes; it works with the local Docker daemon and remote registries, for development and production use cases.

kn

The Knative client kn is your door to the Knative world. It allows you to create Knative resources interactively from the command line or from within scripts. Download, Rename and Install as below:
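A sketch of installing kn on Linux; the release version is an example taken from the knative/client GitHub releases page:

#wget -O kn https://github.com/knative/client/releases/download/v0.22.0/kn-linux-amd64
#chmod +x kn
#mv kn /usr/local/bin/kn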

Download Cloud Native Runtimes for Tanzu (Beta)

To install Cloud Native Runtimes for Tanzu, you must first download the installation package from VMware Tanzu Network:

  1. Log into VMware Tanzu Network.
  2. Navigate to the Cloud Native Runtimes for Tanzu release page.
  3. Download the serverless.tgz archive and release.lock
  4. Create a directory named tanzu-serverless.
  5. Extract the contents of serverless.tgz into your tanzu-serverless directory:
#tar xvf serverless.tgz

Install Cloud Native Runtimes for Tanzu on Tanzu Kubernetes Grid Cluster

For this installation I am using a TKG cluster deployed on vSphere 7 with Tanzu. To install Cloud Native Runtimes for Tanzu on Tanzu Kubernetes Grid, first target the cluster you want to use and verify that you are targeting the correct Kubernetes cluster by running:

#kubectl cluster-info

Run the installation script from the tanzu-serverless directory and wait for it to complete:

#./bin/install-serverless.sh

During my installation, I faced a couple of issues like this:

I just re-ran the installation, which automatically fixed these issues.

Verify Installation

To verify that your Serving installation was successful, create an example Knative service. For information about Knative example services, see Hello World – Go in the Knative documentation. Let's deploy a sample web application using the kn CLI. Run:

#kn service create hello --image gcr.io/knative-samples/helloworld-go -n default

Take the external URL from above and either add the Contour IP with the hostname to your local hosts file or add a DNS entry, then browse to it. If everything is done correctly, your first application is running successfully.

You can list and describe the service by running command:

#kn service list -A
#kn service describe hello -n default

It looks like everything is up and ready as we configured it. Some other things you can do with the Knative CLI are describing and listing the routes for the app:

#kn route describe hello -n default

Create your own app

This demo used an existing Knative example; why not make our own app from an image? Let's do it using the below YAML:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        ports:
        - containerPort: 8080
        env:
        - name: TARGET
          value: "This is my app"

Save this to k2.yaml or another name you like, and now let's deploy this new service using the kubectl apply command:

#kubectl apply -f k2.yaml

Next, we can list the services and describe the new deployment, as per the name provided in the YAML file:

And now finally browse the URL by going to http://helloworld.default.example.com (you will need to add an entry in DNS or your hosts file).
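To find the address to point that hostname at, look up the external IP of the Envoy/Contour LoadBalancer service and add a hosts entry; a rough sketch (the namespace of the Envoy service depends on how Cloud Native Runtimes installed Contour):

#kubectl get svc -A | grep -i envoy
#echo "<EXTERNAL-IP>  helloworld.default.example.com" | sudo tee -a /etc/hosts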

This proves your application is running successfully. Cloud Native Runtimes for Tanzu is a great way for developers to move quickly on serverless development, with networking, autoscaling (even to zero), and revision tracking that let users see changes in apps immediately. Go ahead and try this in your lab, and once it is GA, in production.

Quick Tip – Delete Stale Entries on Cloud Director CSE

Featured

Container Service Extension (CSE) is a VMware vCloud Director (VCD) extension that helps tenants create and work with Kubernetes clusters. CSE brings Kubernetes as a Service to VCD by creating customized VM templates (Kubernetes templates) and enabling tenant users to deploy fully functional Kubernetes clusters as self-contained vApps.

If, for any reason, a tenant's cluster creation gets stuck and continues to show “CREATE:IN_PROGRESS” or “Creating” for many hours, it means that the cluster creation has failed for an unknown reason and the representing defined entity has not transitioned to the ERROR state.

Solution

To fix this, the provider admin needs to use the API to delete these stale entries; there are just a few simple steps to clean them up.

First – Let’s get the “X-VMWARE-VCLOUD-ACCESS-TOKEN” for API calls by calling below API call:

  • https://<vcd url>/cloudapi/1.0.0/sessions/provider
  • Authentication Type: Basic
  • Username/password – <adminid@system>/<password>

The above API call returns “X-VMWARE-VCLOUD-ACCESS-TOKEN” inside the header section of the response window. Copy this token and use it as the “Bearer” token in subsequent API calls.

Second – we need the “Cluster ID” of the stale cluster which we want to delete. To get the “Cluster ID”, go into the Cloud Director Kubernetes Container Extension, click on the cluster which is stuck, and get the Cluster ID in URN format.

Third (Optional) – Get the cluster details using the below API call, authenticating with the Bearer token we obtained in the first step:

Get  https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>/

Fourth – Delete the stale cluster using the below API call, providing the “Cluster ID” which we captured in the second step and using authentication type “Bearer Token”:

Delete https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>/

The above API call should respond with “204 No Content”, which means the API call has been executed successfully.

Now if you log in to the Cloud Director “Kubernetes Container Cluster” extension, the above API call should have deleted the stale/stuck cluster entry.

Now you can go to the Cloud Director vApp section and check whether any vApp/VM is still running for that cluster; shut that VM down and delete it from Cloud Director. Three simple API calls complete the process.
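If you prefer curl over a REST client, the same calls can be sketched roughly as below; the API version in the Accept header is an example, so adjust it to your Cloud Director release:

#>curl -sik -X POST -u 'administrator@system:<password>' -H 'Accept: application/json;version=36.0' https://<vcd-fqdn>/cloudapi/1.0.0/sessions/provider
#>curl -sk -H 'Authorization: Bearer <token>' -H 'Accept: application/json;version=36.0' https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>
#>curl -sik -X DELETE -H 'Authorization: Bearer <token>' -H 'Accept: application/json;version=36.0' https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>

The first call returns X-VMWARE-VCLOUD-ACCESS-TOKEN in the response headers (that is why -i is used), the second optionally shows the entity, and the third deletes it.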

vSphere Tanzu with AVI Load Balancer

With the release of vSphere 7.0 Update 2, VMware adds a new load balancer option for vSphere with Tanzu, which provides a production-ready load balancer for your vSphere with Tanzu deployments. This load balancer is called NSX Advanced Load Balancer (NSX ALB, also known as the AVI load balancer). It provides virtual IP addresses for the Supervisor control plane API server, the TKG guest cluster API servers, and any Kubernetes applications that require a service of type LoadBalancer. In this post, I will go through a step-by-step deployment of the new NSX ALB along with vSphere with Tanzu.

VLAN & IP address Planning

There are many ways to plan IP addressing; in this lab I will place management, VIP, and workload nodes on three different networks. For this deployment I will be using three VLANs: one for Tanzu management, one for frontend/VIPs, and one for the Supervisor cluster and TKG clusters. Here is my IP planning sheet:

Deploying & Configuring NSX ALB (AVI)

Now let's deploy the NSX ALB controller (AVI LB) by following a process very similar to deploying any other OVA, and I will assign the NSX ALB management IP address from the management network address range. The NSX ALB is available as an OVA; in this deployment I am using version 20.1.4. The only information required at deployment time is:

  • A static IP Address
  • A subnet mask
  • A default gateway
  • A sysadmin login authentication key

I have deployed one controller appliance for this lab, but if you are doing a production deployment, it is recommended to create a three-node controller cluster for high availability and better performance.

Once the OVA deployment completes, power on the VM, wait for some time, then browse to the NSX ALB URL using the IP address provided during deployment, log in to the controller, and then:

  • Enter DNS Server Details and Backup Passphrase
  • Add NTP Server IP address
  • Provide Email/SMTP details ( not mandatory)

Next, choose VMware vCenter as your “Orchestrator Integration”. This creates a new cloud configuration in the NSX ALB called Default-Cloud. Enter the below details on the next screen:

  • Insert IP of your vCenter,
  • vCenter Credential
  • Permission – Write Permission
  • SDN Integration – None
  • Select appropriate vCenter “Data Center”
  • For Default Network IP Address Management – Static

On next screen, we define the IP address pool for the Service Engines.

  • Select the Management Network (the “management interface” of the “service engine” will be connected to this network)
  • Enter IP Subnet
  • Enter Free IP’s in to IP Address Pool section
  • Enter Default Gateway

Select No for configuring multiple Tenants. Now we’re ready to get into the NSX ALB configuration.

Create IPAM Profile

IPAM will be used to assign VIPs to virtual services, Kubernetes control planes, and applications running inside pods. To create the IPAM profile, go to: Templates -> Profiles -> IPAM/DNS Profiles

  • Assign Name to the Profile , This IPAM will be for “frontend” Network
  • Select Type – “Avi Vantage IPAM
  • Cloud for Usable Network – Choose “Default-Cloud
  • Usable Network – Choose Port group, in my case “frontend” ( all vCenter port groups will get populated automatically by vCenter discovery)

Create and configure a DNS profile as below (this is optional):

Go to “Infrastructure”, click on “Cloud”, edit “Default Cloud”, and update the IPAM Profile and DNS Profile fields with the IPAM and DNS profiles that we created above.

 Configure the VIP Network

On the NSX ALB console, go to “Infrastructure” and then “Networks”; this will display all the networks discovered by NSX ALB. Select the “frontend” network and click Edit.

  • Click on “Add Subnet
  • Enter subnet , in my case – 192.168.117.0/24
  • Click on Static IP Address pool:
  • Ensure “Use Static IP Address for VIPs and SE” is selected
    • and enter IP Segment Pool , in my case 192.168.117.100-192.168.117.200
    • Click on Save

Create New Controller Certificate

The default AVI certificate doesn't contain an IP SAN and can't be used by vCenter/Tanzu to connect to AVI, so we need to create a custom controller certificate and use it during the Tanzu management plane deployment. Let's create the controller certificate by going to Templates -> Security -> SSL/TLS Certificates -> Create -> Controller Certificate

Complete the page with the required information and make sure the “Subject Alternative Name (SAN)” is the NSX ALB controller IP/cluster IP or hostname.

Then go to Administration -> Settings -> Access Settings and edit System Access Settings:

Delete all the certificates in the SSL/TLS certificate field and choose the certificate that we created in the above section.

Go to Templates -> Security -> SSL/TLS Certificates and copy the certificate we created, to use while enabling the Tanzu management plane.

Configure Routing

Since the workload network (192.168.116.0/24) is on a different subnet from the VIP network (192.168.117.0/24), we need to add a static route in the NSX ALB controller. Go to the Infrastructure page, navigate to Routing and then to Static Route, click the Create button, and create the static routes accordingly.

Enable Tanzu Control Plane (Workload Management)

I am not going to go through the full deployment of workload management; the steps are similar to those detailed HERE. However, there are a few steps that are different:

  • On page 6, choose Type = AVI as your load balancer type.
  • There is no load balancer IP address range required; this is now provided by the NSX ALB.
  • The certificate we need to provide should be the NSX ALB certificate which we created in the previous step.

The new NSX Advanced Load Balancer is far superior to HA-Proxy, especially in provider environments. Providers can deploy, offer, and manage Kubernetes clusters with a VMware-supported load balancer type, and even though the configuration requires a few additional steps, it is very simple to set up. The visibility provided into the health and usage of the virtual services will be extremely beneficial for day-2 operations and should provide great insights for those providers who are responsible for provisioning and managing Kubernetes distributions running on vSphere. Feel free to share any feedback…

Tanzu Basic – Enable Workload Management

In continuation of the last post, where we deployed the VMware HA proxy, we will now enable a vSphere cluster for Workload Management by configuring it as a Supervisor Cluster.

Part 1 – Getting Started with Tanzu Basic

What is Workload Management

With Workload Management we can deploy and operate the compute, networking, and storage infrastructure for vSphere with Kubernetes. vSphere with Kubernetes transforms vSphere into a platform for running Kubernetes workloads natively on the hypervisor layer. When enabled on a vSphere cluster, vSphere with Kubernetes provides the capability to run Kubernetes workloads directly on ESXi hosts and to create upstream Kubernetes clusters within dedicated resource pools.

Since we selected creating a Supervisor Cluster with the vSphere networking stack in the previous post, vSphere Native Pods will not be available, but we can create Tanzu Kubernetes clusters.

Prerequisites

As per our HA proxy deployment, we chose an HAProxy VM with three virtual NICs, thus connecting HAProxy to a frontend network. DevOps users and external services can access HAProxy through virtual IPs on the frontend network. Below are the prerequisites to enable Workload Management:

  • DRS and HA should be enabled on the vSphere cluster, and ensure DRS is in the fully automated mode.
  • Configure shared storage for the cluster. Shared storage is required for vSphere DRS, HA, and storing persistent volumes of containers.
  • Storage Policy: Create a storage policy for the placement of Kubernetes control plane VMs.
    • I have created two policies named “basic” & “TanzuBasic”
    • NOTE: You should create the policy with a lower-case policy name
    • This policy has been created with tag-based placement rules
  • Content Library: Create a subscribed content library using URL: https://wp-content.vmware.com/v2/latest/lib.json on the vCenter Server to download VM image that is used for creating nodes of Tanzu Kubernetes clusters. The library will contain the latest distributions of Kubernetes.
  • Add all hosts from the cluster to a vSphere Distributed Switch and create port groups for Workload Networks

Deploy Workload Management

With the release of vSphere 7 Update 1, a free trial of Tanzu is available for a 60-day evaluation. Enter your details to receive communication from VMware and get started with Tanzu.

The next screen takes you to the networking options available with vCenter. Make sure:

  • You choose correct vCenter
  • For networking there are two networking stacks; since we haven’t installed NSX-T it will be greyed out and unavailable, so choose “vCenter Server Network” and move to “Next”

On the next screen you will be presented with the vSphere clusters which are compatible with Tanzu. In case you don't see any cluster, go to the “Incompatible” section and click on the cluster, which will give you guidance on the reason for the incompatibility; go back, fix the reason, and try again.

Select the size of the resource allocation you need for the control plane. For the evaluation, Tiny or Small should be enough; click on Next.

Storage: Select the storage policy which we created as part of the prerequisites and click on Next.

Load Balancer: This section is very important and we need to ensure that we provide correct values:

  • Enter a DNS-compliant name; don't use underscores in the name
  • Select the type of Load Balancer: “HA Proxy”
  • Enter the Management data plane IP Address. This is the management IP and port number assigned to the VMware HA proxy management interface. In our case it is 192.168.115.10:5556.
  • Enter the username and password used during deployment for the HA Proxy
  • Enter the IP Address Ranges for Virtual Server. We need to provide the IP ranges for the virtual servers; these are the IP addresses which we defined in the frontend network. It's the exact same range which we used during the HA-proxy configuration process, but this time we have to write the full range instead of using CIDR format; in this case I am using: 192.168.117.33-192.168.117.62
  • Finally, enter the Server CA cert. If you added a cert during deployment, use that. If you used a self-signed cert, you can retrieve it from the VM at /etc/haproxy/ca.crt.

Management Network: The next portion is to configure IP addresses for the Tanzu Supervisor control plane VMs; these will be from the management IP range.

  • We will need 5 consecutive free IPs from the management IP range. The Starting IP Address is the first IP in a range of five IPs to assign to the Supervisor control plane VMs' management network interfaces.
  • One IP is assigned to each of the three Supervisor control plane VMs in the cluster
  • One IP is used as a floating IP which we will use to connect to the management plane
  • One IP is reserved for use during the upgrade process
  • This will be on the mgmt port group

Workload Network:

Service IP Address: we can take the default network subnet for “IP Address for Services”. Change this if you are using this subnet anywhere else. This subnet is for internal communication and is not routed.

And for the last network, we will define the Kubernetes node IP range; this applies to both the Supervisor cluster as well as the guest TKG clusters. This range will be from the workload IP range which we created in the last post with VLAN 116.

  • Port Group – workload
  • IP Address Range – 192.168.116.32-192.168.116.63

Finally, choose the content library which we created as part of the prerequisites.

If you have provided the right information with the correct configuration, it will take around 20 minutes to install and configure the entire TKG management plane for consumption. You might see a few errors while the management plane is being configured, but you can ignore them, as those operations are retried automatically and the errors clear when the particular task succeeds.

NOTE – The above screenshot has a different cluster name as I have taken it from a different environment, but the IP schema is the same.

I hope this article helps you enable your first “Workload Management” vSphere cluster without NSX-T. In the next blog post I will cover the deployment of TKG clusters and other things around that…

Getting Started with Tanzu Basic

In the process of modernizing your data center to run VMs and containers side by side, run Kubernetes as part of vSphere with Tanzu Basic. Tanzu Basic embeds Kubernetes into the vSphere control plane for the best administrative control and user experience. Provision clusters directly from vCenter and run containerized workloads with ease. Tanzu Basic is the most affordable edition and includes the below components:

To install and configure Tanzu Basic without NSX-T, at a high level there are four steps, which I will cover in three blog posts:

  1. vSphere7 with a cluster with HA and DRS enabled should have been already configured
  2. Installation of VMware HA Proxy Load Balancer – Part1
  3. Tanzu Basic – Enable Workload Management – Part2
  4. Tanzu Basic – Building TKG Cluster – Part3

Deploy VMware HAProxy

There are a few topologies to set up Tanzu Basic with vSphere-based networking. For this blog we will deploy the HAProxy VM with three virtual NICs, which means there will be one “Management” network, one “Workload” network, and one “Frontend” network; the frontend network will be used by DevOps users, and external services will also access HAProxy through virtual IPs on this network.

Network    | Use
Management | Communicating with vCenter and HAProxy
Workload   | IPs assigned to Kubernetes nodes
Front End  | DevOps users and external services

For this blog, I have created three VLAN-based networks with the below IP ranges:

Network  | IP Range         | VLAN
tkgmgmt  | 192.168.115.0/24 | 115
Workload | 192.168.116.0/24 | 116
Frontend | 192.168.117.0/24 | 117

Here is the topology diagram; HAProxy has been configured with three NICs, and each NIC is connected to a VLAN that we created above.

NOTE – if you want to deep dive into this networking, refer Here; that blog post describes it very nicely, and I have used the same networking schema in this lab deployment.

Deploy VMware HA Proxy

This is not the common HA Proxy; it is a customized one whose Data Plane API is designed to enable Kubernetes workload management with Project Pacific on vSphere 7. The VMware HAProxy deployment is very simple: you can directly access/download the OVA from Here and follow the same procedure as for any other OVA deployment on vCenter. There are a few important things which I am covering below:

On the Configuration screen, choose “Frontend Network” for the three-NIC deployment topology.

Now for the Networking section, which is the heart of the solution; here we map the port groups created above to the Management, Workload, and Frontend networks.

Management network is on VLAN 115, this is the network where the vSphere with Tanzu Supervisor control plane VMs / nodes are deployed.

Workload network is on VLAN 116, where the Tanzu Kubernetes Cluster VMs / nodes will be deployed.

The frontend network, which is on VLAN 117, is where the load balancers (Supervisor API server, TKG API servers, TKG LB services) are provisioned. The frontend network and the workload network must be able to route to each other for successful WCP enablement.

The next page is the most important; here we provide the VMware HAProxy appliance configuration. Provide a root password and tick/untick the root login option based on your choice. The TLS fields will be automatically generated if left blank.

In the “network config” section, provide network details about the VMware HAProxy for the management network, the workload network, and the frontend/load balancer network. These all require static IP addresses in CIDR format; you will need to specify a prefix length that matches the subnet mask of your networks.

For Management IP: 192.168.115.5/24 and GW:192.168.115.1

For Workload IP: 192.168.116.5/24 and GW:192.168.116.1

For Frontend IP: 192.168.117.5/25 and GW: 192.168.117.1. This is not optional if you selected Frontend in the “configuration” section.

In the Load Balancing section, enter the load balancer IP ranges. These IP addresses will be used as virtual IPs by the load balancer, and they will come from the frontend network IP range.

Here I am specifying 192.168.117.32/27; this segment will give me 30 addresses for VIPs for Tanzu management plane access and applications exposed for external consumption. Ignore “192.168.117.30” in the image background.

Enter the Data plane API management port: 5556, and also enter a username and password for the load balancer data plane API.

Finally, review the summary and click Finish. This will deploy the VMware HAProxy LB appliance.

Once the deployment has completed, power on the appliance, SSH into the VM using the management plane IP, and check that all the interfaces have the correct IPs:

Also check that you can ping the frontend IP range and the other IP ranges. Stay tuned for Part 2.
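A quick way to verify this from inside the appliance is a short set of commands like the following (the gateway IPs are taken from this lab's plan):

# ip -br addr show
# ping -c 3 192.168.116.1
# ping -c 3 192.168.117.1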